Guidara, Mr. Houssem (2020) Fairness, discrimination and explicability of data processing. PRE - Research Project, ENSTA.
Abstract
Algorithmic decision-making (ADM) systems are now very widely used to make decisions in various contexts, some of which may considerably affect individuals' lives and professional careers, such as university admission, credit lending, and criminal justice. This adoption is driven by the idea that machine learning algorithms are fully objective and unaffected by human cognitive biases and discriminatory behaviors. However, many research papers argue that these ADM systems are not as fair and objective as we think they are. Accordingly, it is of the utmost importance to evaluate, investigate, and improve the extent to which machine learning algorithms are fair and immune to discriminatory tendencies. To do this, we need to define the notion of fairness in the context of machine learning mathematically, so that it can be implemented in and enforced by machine learning algorithms. This step of defining and enforcing fair decisions is crucial because ADM systems are fueled by data gathered from individuals: the moment those individuals feel that their personal data might be used to justify unfair decisions that greatly impact their lives, they will simply refuse to share their data, rendering these machine learning algorithms useless. Nevertheless, ensuring that ADM systems respect the value of fairness is not enough for people to agree to share their data. Individuals also need guarantees that their personal data is secure and that the process of gathering and using it is immune to malicious attacks. This aligns with the notion of differential privacy, which provides strong guarantees for the privacy of those who choose to share their data to fuel data-driven algorithms. Differential privacy is based on the idea that adding or removing one individual's data has almost no impact on the overall outcomes.
In other words, one particular individual's data does not perturb the outcomes of a differentially private algorithm enough to allow that individual to be identified. However, we need to investigate the extent to which the use of differentially private algorithms may impact fairness measures.
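The abstract calls for defining fairness mathematically so it can be checked by an algorithm. One widely used formalization (not necessarily the one adopted in the thesis) is demographic parity: a decision rule is fair in this sense when the rate of positive decisions is roughly equal across protected groups. A minimal sketch, with illustrative function names and toy data:

```python
# Illustrative sketch of the demographic-parity fairness criterion.
# All names and data here are hypothetical examples, not from the thesis.

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between two groups.

    decisions: list of 0/1 outcomes (e.g. loan denied/approved).
    groups:    list of group labels, aligned with `decisions`.
    """
    rates = {}
    for g in set(groups):
        member_decisions = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(member_decisions) / len(member_decisions)
    a, b = rates.values()
    return abs(a - b)

# Toy example: group "a" is approved 3/4 of the time, group "b" only 1/4.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(decisions, groups)   # 0.75 - 0.25 = 0.5
```

A gap near zero indicates parity; an "enforcing" approach would constrain or regularize training so that this gap stays below a chosen threshold.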
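The differential-privacy idea described above, that one individual's presence or absence barely changes the output, is classically achieved by adding calibrated Laplace noise to a query answer. A minimal sketch of the standard Laplace mechanism, assuming a simple counting query (the values and parameter choices are illustrative):

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with Laplace(sensitivity/epsilon) noise,
    the classic mechanism for epsilon-differential privacy."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-transform sampling of the Laplace distribution.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# A counting query ("how many applicants were approved?") has sensitivity 1:
# adding or removing one person changes the true answer by at most 1, so the
# noisy answers on two neighboring datasets (100 vs. 99) are statistically
# nearly indistinguishable, hiding any single individual's contribution.
noisy_with    = laplace_mechanism(100, sensitivity=1, epsilon=0.5)
noisy_without = laplace_mechanism(99,  sensitivity=1, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; the tension the abstract raises is that this added noise can also distort the quantities on which fairness measures are computed.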
| Item Type: | Thesis (PRE - Research Project) |
|---|---|
| Uncontrolled Keywords: | Machine learning fairness |
| Subjects: | Information and Communication Sciences and Technologies; Mathematics and Applications |
| ID Code: | 8352 |
| Deposited By: | Houcem GUIDARA |
| Deposited On: | 22 March 2021 14:37 |
| Last Modified: | 22 March 2021 14:37 |