
Rethinking Holistic AI Development Through Social Diversity, Interdisciplinary Collaboration and Integrative Knowledge Production

Cinzia Leone; Angela Celeste Taramasso; Anna Siri
2026-01-01

Abstract

The rapid deployment of AI reveals persistent socio-technical and data-driven biases. These biases are not accidental but symptomatic of deeper epistemic limitations in the way AI knowledge is produced, often by homogeneous teams working within technocentric paradigms that exclude alternative perspectives. This paper argues that the underrepresentation of diverse social actors in AI development not only perpetuates inequality but also severely limits the epistemic and ethical robustness of AI systems. The paper draws in particular on preliminary findings from the Horizon Europe project STEP, which highlight the potential of the proposed framework to improve the inclusivity and trustworthiness of AI. The central thesis is that social diversity must be treated as an epistemic condition, not merely an ethical or demographic ideal. Drawing on sociology, psychology and educational science, the authors show how integrating plural forms of knowledge, lived experiences and cultural perspectives into the design and development process can lead to AI systems that are more context-sensitive, equitable and trustworthy. Rather than proposing inclusion as an external corrective, the paper advocates a paradigm shift in AI development: one that embeds diversity in the infrastructure of knowledge production itself. The contribution is twofold. First, the paper proposes a theoretical model of integrative knowledge production that identifies mechanisms through which interdisciplinary collaboration can challenge dominant epistemologies and promote systemic reflexivity. Second, it outlines a participatory design framework that operationalises this model through concrete methodological tools, including dialogic co-design workshops, ethnographic participation in data selection and cross-functional team structuring. These practices aim to break through technocratic compartmentalisation by creating space for social critique and situated intelligence within AI development cycles. Finally, the authors reflect on the transformative potential of this approach, suggesting that rethinking who participates in AI knowledge production will change not only the outcomes of AI systems but also the normative foundations of our technological future. From this perspective, ethical AI is not merely explainable or compliant: it is structurally inclusive, responsive to different lifeworlds and open to critical reinvention.

Use this identifier to cite or link to this document: https://hdl.handle.net/11567/1269736