Ben Salah, Malek (2024) Uncertainty quantification in machine learning: A study of conformal prediction with conditional guarantees. PFE - Projet de fin d'études, ENSTA.

Associated file: PDF (1428 KB), available under a Creative Commons Attribution Non-Commercial No Derivatives license.

Abstract

In this work, we address the critical importance of uncertainty quantification in machine learning, particularly in the context of causal inference. We begin with a short survey of uncertainty quantification methods, with an emphasis on conformal prediction, comparing its performance across various scenarios. We then turn to the technical aspects of conformal prediction, focusing on the conditional conformal prediction framework of Gibbs et al. We propose an extension of their coverage theorem to non-exchangeable data and derive a new lower bound on coverage, showing that non-exchangeability theoretically worsens coverage performance. To demonstrate the practical relevance of our findings, we apply conformal prediction to the causal dynamical variational auto-encoder (CDVAE) model, highlighting its effectiveness in producing accurate and trustworthy predictions. We conclude with an array of experiments that showcase the limitations of exact conditional coverage.
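For readers unfamiliar with the baseline method the abstract builds on, the following is a minimal sketch of *split* conformal prediction for regression, the standard marginal-coverage procedure that the conditional framework of Gibbs et al. refines. This is a generic illustration, not the thesis's method: the function name and the use of absolute residuals as nonconformity scores are assumptions, and the coverage guarantee it illustrates holds only under exchangeability, the assumption the work's extension relaxes.

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_targets, test_preds, alpha=0.1):
    """Split conformal prediction intervals for regression.

    Uses absolute residuals on a held-out calibration set as nonconformity
    scores and returns intervals with marginal coverage >= 1 - alpha,
    assuming calibration and test points are exchangeable.
    """
    n = len(cal_targets)
    # Nonconformity scores: absolute residuals on the calibration set.
    scores = np.abs(np.asarray(cal_targets) - np.asarray(cal_preds))
    # Finite-sample-corrected quantile level ceil((n+1)(1-alpha)) / n.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(scores, level, method="higher")
    # Symmetric interval around each test prediction.
    return test_preds - qhat, test_preds + qhat

# Illustrative usage on synthetic data (a trivial constant predictor):
rng = np.random.default_rng(0)
cal_y = rng.normal(size=1000)
test_y = rng.normal(size=1000)
lo, hi = split_conformal_interval(np.zeros(1000), cal_y,
                                  np.zeros(1000), alpha=0.1)
coverage = np.mean((test_y >= lo) & (test_y <= hi))  # close to 0.90
```

Note that this guarantee is *marginal* (averaged over the data distribution); the abstract's experiments concern the stronger, and in general unattainable, notion of exact conditional coverage.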

Document type: Report or dissertation (PFE - Projet de fin d'études)
Subjects: Mathematics and its applications
ID code: 10496
Deposited by: Malek BEN SALAH
Deposited on: 11 Dec 2024 11:59
Last modified: 11 Dec 2024 11:59
