Jaron Maene

email - scholar - github - bsky - linkedin


Hey there, I'm Jaron, a PhD candidate in the DTAI research group at KU Leuven, supervised by Luc De Raedt. My research sits at the intersection of probabilistic reasoning and deep learning (neurosymbolic AI).


news

This summer I'll be on a research internship at Basis, working on a probabilistic programming library called Weighted. Jun 25
Next semester I'll be visiting Prof. Guy Van den Broeck at UCLA. Feb 25
"Extracting Finite State Machines from Transformers" was accepted at the Workshop on Mechanistic Interpretability at ICML2024.paper Jun 24
Attended the AI winter school at Paderborn University. Feb 24
Attended the DeepLearn 2023 Winter school. Jan 23
I received a prize from IBM at the NLC2CMD competition at NeurIPS 2020 for my work on program synthesis with LLMs during a research internship at Bell Labs. Dec 20

selected papers

Embeddings as Probabilistic Equivalence in Logic Programs
Proposes a distribution semantics of logic programming using probabilistic equivalence instead of probabilistic facts, leading to an end-to-end differentiable prover without local minima.
NeurIPS25
KLay: Accelerating Arithmetic Circuits for Neurosymbolic AI
Introduces knowledge layers (KLay), a new GPU-optimized library for evaluating sparse arithmetic circuits, achieving speedups of multiple orders of magnitude over existing methods.
paper - video - code
ICLR25
The Gradient of Algebraic Model Counting (Oral, top 4.6%)
Extends algebraic model counting from inference to learning by generalizing gradients and backpropagation to different semirings, unifying various learning algorithms in neurosymbolic AI.
paper - video - code
AAAI25
On the Hardness of Probabilistic Neurosymbolic Learning
Studies the computational complexity of differentiating probabilistic reasoning and introduces WeightME, an unbiased gradient estimator that provides probabilistic guarantees.
paper - video - code
ICML24
Soft-Unification in Deep Probabilistic Programming
Introduces DeepSoftLog, a principled probabilistic framework for soft-unification that addresses limitations in previous neural theorem provers and enables end-to-end differentiable logic rule learning.
paper - video - code
NeurIPS23

appearances

I gave a talk at the StarAI lab at UCLA. Nov 24
Presented two posters at the first NeSy conference. Sep 24
I gave an invited seminar at the TU Wien Institute of Logic and Computation. Jul 24
"Soft-Unification in Deep Probabilistic Logic", talk at the Generative NeSy Workshop. May 24
"Neurosymbolic Learning, a Probabilistic Journey", DTAI seminar.slides
"AI and your Research", Invited talk (non-technical) at the LC&Y institute on the use of AI in research.video Mar 24
Presented a poster at the Flanders AI research day 2023. Nov 23

master students

Sam McManagan. "A Neurosymbolic Approach to Solving Referring Expression Comprehension Tasks" 2025
Andrei-Bogdan Florea. "Think before you Learn: Image Segmentation with Weak Supervision" 2025
Andres Van Schel. "An Evaluation of Current Mechanistic Interpretability Techniques on an Entailment Prediction Task in Propositional Logic with BERT" 2025
Wout Seynaeve. "Learning from Logical Constraints: A Unifying Approach for Weakly Supervised Semantic Segmentation" 2025
Rik Adriaensen. "Extracting Finite State Machines from Transformers" 2024

service

I have reviewed for JMLR, JAIR, ICLR25, and NeurIPS25.


Last updated: September 2025.