EHU CRG events are held via MS Teams and are free to attend. The MS Teams link is sent out the day before each event. If you have problems registering, or have any questions, please contact the organiser, Costas Gabrielatos ([email protected]).
—————————————————————————————————————————————————————
MEETING #15 Thursday 6 or Friday 7 March 2025
Topic: LLMs
Yannis Korkontzelos (Edge Hill University)
Title TBC
—————————————————————————————————————————————————————
MEETING #16 Friday 2 May 2025, 2-3 pm
Topic: LLMs and Lexical Priming Theory
Michael Pace-Sigge (University of Eastern Finland)
Large-Language-Model Tools and the Theory of Lexical Priming: Where technology and human cognition meet and diverge
Abstract
This paper revisits Michael Hoey’s Lexical Priming Theory (2005) in the light of recent discussions of Large Language Models (LLMs) as forms of machine learning (commonly referred to as AI), which have attracted considerable publicity in the wake of tools like OpenAI’s ChatGPT and Google’s Bard/Gemini. Historically, theories of language have faced inherent difficulties, given language’s exclusive use by humans and the complexities involved in studying language acquisition and processing. Several researchers have highlighted the intersection between Hoey’s theory and machine-learning tools, particularly those employing LLMs. Hoey’s theory relies on the psychological concept of priming, aligning with approaches dating back to M. Ross Quillian’s 1960s proposal for a “Teachable Language Comprehender.” The theory posits that every word is primed for discourse through cumulative effects, a concept mirrored in how LLMs are trained on vast corpora of text data.
This paper tests LLM-produced samples against naturally (human-)produced material across a range of language-use situations, examines results from AI research, and compares these results with Hoey’s description of his theory. While LLMs can display a high degree of structural integrity and coherence, they still appear to fall short of meeting human-language criteria, which include grounding and the aim of meeting a communicative need.
References
Hoey, M. (2005). Lexical Priming: A New Theory of Words and Language. London: Routledge.
Hoey, M. (2009). Corpus-driven approaches to grammar. In U. Römer & R. Schulze (Eds.), Exploring the Lexis-Grammar Interface (pp. 33-47). Amsterdam/Philadelphia: John Benjamins.
Pace-Sigge, M. & Sumakul, T. (2022). What teaching an algorithm teaches when teaching students how to write academic texts. In J. H. Jantunen et al. (Eds.), Diversity of Methods and Materials in Digital Human Sciences: Proceedings of the Digital Research Data and Human Sciences DRDHum Conference 2022.
Quillian, M. R. (1967). Word concepts: A theory and simulation of some basic semantic capabilities. Behavioral Science, 12(5), 410-430. https://doi.org/10.1002/bs.3830120511
Tools
Brezina, V. & Platt, W. (2023). #LancsBox X. Lancaster University. http://lancsbox.lancs.ac.uk
Google [2023] (2024). Bard/Gemini. https://bard.google.com/chat
OpenAI [2022] (2024). ChatGPT (GPT-3.5). https://chat.openai.com/
Scott, M. (2023). WordSmith Tools version 8. Stroud: Lexical Analysis Software.