Fina will present her study on the efficacy of five prompting methods combined with task-demonstration strategies, evaluated across 17 different prompt templates. She will also present the applied evaluation framework, which is grounded in Wikidata's ontology.
The capabilities of Large Language Models (LLMs) such as Mistral 7B, Llama 3, and GPT-4 present a significant opportunity for knowledge extraction (KE) from text. However, LLMs are sensitive to prompt context, which can hinder precise, task-aligned outputs and therefore calls for careful prompt engineering.
Fina's research reveals that LLMs can extract a wide array of facts, with significant performance gains when prompts include simple instructions and task demonstrations whose examples are selected via retrieval. The findings challenge the need to frame extraction as a reasoning task, since alternative prompting strategies prove just as effective.
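To give a flavour of what retrieval-based task demonstration looks like in practice, here is a minimal sketch in Python. It is not the speaker's actual pipeline: the embedding model, demonstration pool, and prompt format are illustrative assumptions, and only the general idea (picking the most similar annotated examples and placing them in the prompt) reflects the technique described above.

```python
# Minimal sketch of retrieval-based demonstration selection for a
# knowledge-extraction prompt. The embedding model, example pool, and
# prompt wording are illustrative assumptions, not the study's setup.
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical pool of annotated demonstrations: (text, extracted triples).
DEMO_POOL = [
    ("Marie Curie was born in Warsaw.",
     "(Marie Curie, place of birth, Warsaw)"),
    ("Amsterdam is the capital of the Netherlands.",
     "(Netherlands, capital, Amsterdam)"),
    ("Einstein received the Nobel Prize in Physics in 1921.",
     "(Albert Einstein, award received, Nobel Prize in Physics)"),
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
pool_emb = model.encode([t for t, _ in DEMO_POOL], normalize_embeddings=True)

def build_prompt(input_text: str, k: int = 2) -> str:
    """Select the k most similar demonstrations and assemble a simple prompt."""
    query_emb = model.encode([input_text], normalize_embeddings=True)
    scores = (pool_emb @ query_emb.T).ravel()   # cosine similarity on unit vectors
    top = np.argsort(-scores)[:k]               # indices of the best demonstrations
    demos = "\n\n".join(
        f"Text: {DEMO_POOL[i][0]}\nTriples: {DEMO_POOL[i][1]}" for i in top
    )
    return (
        "Extract (subject, relation, object) triples from the text.\n\n"
        f"{demos}\n\nText: {input_text}\nTriples:"
    )

print(build_prompt("Rosalind Franklin worked at King's College London."))
```

The assembled prompt would then be sent to the LLM of choice; in this sketch only the example selection and prompt assembly are shown.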
Date & Time: October 1st, 13.00 - 14.00
Location: Room L1.10, Lab42, Science Park
About the Speaker:
Fina Yilmaz Polat is a PhD Candidate at the Intelligent Data Engineering Lab (INDElab), part of the Informatics Institute (Faculty of Science). She is a key contributor to the ENEXA project, which focuses on developing human-centered and explainable machine learning approaches, and she serves on the board of Inclusive AI.