No Metric Is an Island: How Algorithmic Fairness Interacts with Other AI Properties

BUSELLI, IRENE
2025-12-11

Abstract

The rapid integration of Artificial Intelligence (AI) across diverse societal domains has intensified concerns about the trustworthiness of automated systems. Beyond accuracy and efficiency, AI must now also satisfy broader ethical and technical desiderata, such as safety, reliability, equity, and transparency. Yet these properties are not independent: their intersections often involve trade-offs or interplays that challenge both theoretical analysis and practical deployment. Stemming from this context, this thesis explores the interaction of algorithmic fairness with two critical dimensions of trustworthy AI: robustness and regression. The first novel contribution is the identification, characterization, and mitigation of unfair regression, a phenomenon whereby model updates, despite improving overall performance, disproportionately harm specific demographic subgroups. The second contribution is the formulation of Robust Fair Empirical Risk Minimization (RFERM), a theoretical framework designed to account for robustness bias, i.e., the heightened vulnerability of disadvantaged groups to adversarial perturbations. Taken together, these contributions advance the understanding of fairness as an inherently interdependent property of AI, highlighting the need for joint optimization strategies that move beyond siloed approaches. In doing so, the thesis provides both conceptual insights and practical methodologies for developing AI systems that are not only effective but also equitable and trustworthy.
Files in this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1277579
