About

I am a post-doc in quantitative English linguistics at the Universität Zürich, with a knack for all things data-related.

The goal of my research is to understand more about the relationship between our use of language and our knowledge of it — the ‘grammar in our heads’. To this end, I extract, process, and analyse large amounts of (very messy!) textual data and then test hypotheses about linguistic patterns in experiments, mainly via reading times — see here for a sample experiment. I have also worked with time-series and geo-spatial data sets.

I primarily use R, and I am the author of the {collostructions} package. More recently, I have been diving into the wonderful world of Python and the opportunities it offers for tackling 'big data' modelling challenges. For analysis, I rely mainly on exploratory data analysis and visualization techniques, as well as predictive mixed-effects modelling and classification, to make sense of complex, multivariate datasets.
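
To give a flavour of the kind of predictive mixed-effects modelling mentioned above, a reading-time analysis might be set up in R with {lme4} roughly as follows. This is only a schematic sketch with hypothetical variable names (rt_data, log_rt, condition, participant, item), not actual analysis code:

```r
library(lme4)

# rt_data is assumed to hold log-transformed reading times (log_rt),
# a predictor of interest (condition), and grouping factors for
# participants and items.
m <- lmer(log_rt ~ condition + (1 | participant) + (1 | item),
          data = rt_data)

summary(m)
```

The random intercepts for participants and items account for the fact that some readers are simply faster than others and some items are simply harder, so that the effect of condition is not confounded with those sources of variation.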

Lac Neuchâtel, October 2017.