Research interests
I am interested in formal (symbolic/distributional/
probabilistic) models of linguistic meaning and how they
can be applied and evaluated, using cognitive modeling,
behavioral experiments, and computational methods in
natural language understanding. Below, I list my current
research topics. For more information about my current
and past research, see my list of
publications.
Distributional Formal Semantics (DFS)
DFS offers probabilistic distributed
representations that are also inherently
compositional. When used as part of a recurrent
neural network, these representations capture
incremental meaning construction and support
probabilistic inference.
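The core idea can be sketched in a few lines. In this toy illustration (my own hypothetical example, not the actual DFS software), a proposition's meaning is a binary vector over sampled situations: its probability is the proportion of situations in which it holds, and conjunction is elementwise, which is what makes the representations both probabilistic and compositional.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy world: 10,000 sampled situations. A proposition's
# meaning vector has a True wherever the proposition holds.
n = 10_000
rain = rng.random(n) < 0.3                                  # P(rain) ~ 0.3
wet = np.where(rain, rng.random(n) < 0.9,                   # usually wet if raining
                     rng.random(n) < 0.1)                   # rarely wet otherwise

def prob(v):
    """Probability = proportion of situations where the proposition holds."""
    return v.mean()

def conj(a, b):
    """Conjunction is elementwise: true only where both conjuncts hold."""
    return a & b

def cond(a, b):
    """Conditional probability P(a | b) = P(a AND b) / P(b)."""
    return prob(conj(a, b)) / prob(b)

print(f"P(rain)       = {prob(rain):.2f}")
print(f"P(wet | rain) = {cond(wet, rain):.2f}")
```

Because meanings are just vectors, they can also serve as training targets for a recurrent network that builds up such a representation word by word, which is the incremental aspect mentioned above.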
Article
(2022) Software
Neurocomputational modeling of ERPs
We aim to arrive at an explicit
neurocomputational model of the electrophysiology
of language comprehension, focusing on the N400 and
P600 components of the event-related brain
potential (ERP) signal.
Article
(2021)
Rational encoding and decoding
In Project C3 within SFB 1102, we
investigate information-theoretic explanations of
encoding and decoding behaviour, with an emphasis
on the mechanisms that underlie the linearization
of referring expressions.
Article
(2021)