by Beatrice Bonami
Can inclusion be insidious? Can it be proclaimed openly while remaining hollow in practice? According to Ruha Benjamin, the answer is yes – and technology may be one of the most effective vehicles for this kind of insidiousness. As Prof. Benjamin argues, technological discourse often presents itself as universally relevant, while technological systems primarily benefit a privileged few. Digital technology is seen by many as a scientific advancement that serves humanity at large. Artificial Intelligence (AI) systems, for example, are widely celebrated as the technology of the future. Yet their limitations are becoming increasingly apparent: the industry that produces and maintains them is neither ethical nor equitable. Scholars began identifying these concerns as early as the 1970s, and they have become more visible today, especially in the field of science and technology studies (STS).
The recent Italian STS Conference (10th edition, 2025) in Milan addressed the complex relationships between society and technology, with a specific focus on AI and its societal impacts. In her keynote, Ruha Benjamin pointed out how AI advances the priorities of private corporations – priorities that do not account for the marginalised, vulnerable and racialised bodies experiencing digital systems. She therefore proposed that, instead of thinking of AI in ways that keep Big Tech corporations in the spotlight, we scholars think of AI across its spectrum of imagination: a form of artificial agency that is respectful of and responsive to humanity (Artificial Imagination), and thus a form of inclusion in AI systems that might not be insidious. In this context, adopting a decolonial approach is not only relevant for the Global South but essential for understanding AI globally. The question of what a truly decolonial AI might look like remains unresolved, but one crucial praxis towards decoloniality is confronting data poverty, one of the main topics addressed at the 10th STS Italia. Unless we acknowledge that the AI industry deliberately conceals its data extraction strategies, which lie at the core of its profitability, the discussion on decoloniality risks remaining superficial. This was the central theme of my presentation at STS Italia, where I introduced the paper Tackling Data Poverty and Scarcity: A Review from the Perspective of African Data and AI Policies.
My paper examined ten policy documents from eight African countries to analyse how governments address data poverty and scarcity while attempting both to align with global AI trajectories and to emancipate their systems from the dominance of Global North corporations. The paper argued that data trusts and pooled governance models could enable more equitable data sharing, yet their feasibility remains limited: notably, the UN AI Advisory Board's proposal for data trusts (the Global AI Data Framework) was the only one of its recommendations not endorsed by the UN General Assembly. Building on the UN's idea of a Global AI Data Framework, I proposed pathways through which African countries can secure fair access to – and stewardship over – their own data ecologies, rather than continuing to supply raw data to extractive infrastructures. The paper contributed to two panels on e-governance and AI equity in the Global South, which joined discussions with the panel led by Stefania Milan and Iginio Gagliardone on up-cycling new forms of data economy and democracy and the panel led by Stefano Calzatti on Quantum Ecologies. The debates underscored the need for policy frameworks that prevent data inclusion from becoming another form of dispossession, echoing concerns raised in Prof. Ruha Benjamin's opening lecture.