Mapping Word Meanings With AI
Tiny Spaces, Big Ideas - Katrin Erk
Video URL
Additional Notes:
- Katrin Erk and Gabriella Chronis (2023). Word embeddings are word story embeddings (and that’s fine). In: Shalom Lappin and Jean-Philippe Bernardy (eds.), Algebraic Structures in Natural Language, Taylor and Francis, Boca Raton and Oxford.
This paper looks at word meaning through the lens of language models. It finds that language model representations of word meaning contain what we call “story traces”, and argues that this is not a bug but exactly how it should be.
- Ray S. Jackendoff and Katrin Erk (2025), Toward a Deeper Lexical Semantics. Top. Cogn. Sci., https://doi.org/10.1111/tops.70013
This is a non-computational paper that lays out our view of word meaning as a complex and heterogeneous system.
- Gabriella Chronis, Kyle Mahowald and Katrin Erk (2023). A Method for Studying Semantic Construal in Grammatical Constructions with Interpretable Contextual Embedding Spaces. Proceedings of ACL.
This is a computational paper that analyzes internal representations of language models for what they can tell us about particular questions in word meaning.
- Adam Kilgarriff (1997), I Don’t Believe in Word Senses. Computers and the Humanities 31:2
A paper by a lexicographer, mentioned in the talk, that argues that word senses don’t exist.