HAJJI, M. Elyes (2025) Exploring Uncertainty Quantification Methods for LLM Hallucination Detection PFE - Project Graduation, ENSTA.
Abstract
During my internship, I worked on the challenge of hallucination detection in Large Language Models, a key obstacle to their reliable deployment in real-world applications. My work focused on designing an evaluation framework that distinguishes between extrinsic and intrinsic hallucinations and on extending an attention-based uncertainty quantification algorithm with new attention aggregation strategies. I evaluated these methods across several open-source models and benchmarks, comparing them against state-of-the-art baselines. The results showed that sampling-based methods are more effective for detecting extrinsic hallucinations, while the proposed attention-based approaches perform better for intrinsic hallucinations. This internship allowed me to gain experience in large-scale experimentation, uncertainty quantification, and evaluation of LLMs in collaboration with both academic and industrial teams. A major outcome of this work was the acceptance of a research paper based on these contributions.
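The record does not include implementation details, but to illustrate the kind of attention aggregation strategy the abstract alludes to, here is a minimal, hypothetical sketch: it averages an attention tensor over layers and heads and uses the mean entropy of each token's attention distribution as an uncertainty proxy. The function name, tensor shape, and entropy heuristic are all assumptions for illustration, not the thesis's actual algorithm.

```python
import numpy as np

def attention_entropy_score(attn: np.ndarray) -> float:
    """Hypothetical attention-based uncertainty proxy.

    Aggregates a (layers, heads, seq, seq) attention tensor by averaging
    over layers and heads, then returns the mean entropy of each query
    token's attention distribution. The (assumed) intuition: more diffuse
    attention -> higher entropy -> higher hallucination risk.
    """
    # Mean over layers and heads -> a single (seq, seq) attention map.
    avg = attn.mean(axis=(0, 1))
    # Row-wise entropy of each query token's attention distribution.
    eps = 1e-12  # guard against log(0)
    row_entropy = -(avg * np.log(avg + eps)).sum(axis=-1)
    return float(row_entropy.mean())

# Demo on a synthetic attention tensor (rows normalized to sum to 1).
rng = np.random.default_rng(0)
raw = rng.random((4, 8, 16, 16))          # 4 layers, 8 heads, 16 tokens
attn = raw / raw.sum(axis=-1, keepdims=True)
print(f"aggregated attention-entropy score: {attention_entropy_score(attn):.4f}")
```

Other aggregation choices (max over heads, last layer only, weighting by head importance) would fit the same interface; the thesis evaluates several such strategies against sampling-based baselines.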
| Item Type: | Thesis (PFE - Project Graduation) |
|---|---|
| Additional Information: | CEA supervisor contact: Fabio ARNEZ - fabio.arnez@cea.fr |
| Uncontrolled Keywords: | Large Language Models, Hallucination, Detection, Intrinsic, Extrinsic, Uncertainty quantification, Attention mechanism, Deep learning, Question answering |
| Subjects: | Information and Communication Sciences and Technologies |
| ID Code: | 10865 |
| Deposited By: | Elyes HAJJI |
| Deposited On: | 20 Oct 2025 17:27 |
| Last Modified: | 20 Oct 2025 17:27 |