Planetary AI

Critical Prompts Reading Group

Critical Prompts is a reading group for exploring, discussing, and engaging reflexively with critical scholarship on AI. It is open to students, researchers, policy practitioners, and anyone interested in thinking critically about AI and its value chains. You can find out more in our announcement here!

We meet on alternate Thursdays, both online (on Teams) and in person (in Edinburgh, UK). Have a look below to see what we have been reading, and sign up to our mailing list [here] for updates on upcoming sessions, meeting details, and more.

Upcoming sessions

Mar 19

Dencik, L., Hintz, A., Redden, J., & Treré, E. (2025). Collectivity in data governance and data justice. Information, Communication & Society, 28(6), 943–950. https://doi.org/10.1080/1369118X.2025.2478096

Apr 9

Edwards, D., Cooper, Z. G. T., & Hogan, M. (2024). The making of critical data center studies. Convergence: The International Journal of Research into New Media Technologies, 31(2), 429–446. https://doi.org/10.1177/13548565231224157

Apr 23

Lai, S. S., Flensburg, S., & Sick, K. (2026). Currents of control: Ownership evolutions in the submarine data cable industry. Media, Culture & Society, 0(0). https://doi.org/10.1177/01634437251410061

May 7

Amoore, L., Bennett, S., Campolo, A., Jacobsen, B., & Rella, L. (2025). Politics of the prompt: Government in the age of generative AI. Economy and Society, 54(3), 573–596. https://doi.org/10.1080/03085147.2025.2560177


Past sessions

Mar 5

Attard-Frost, B., & Widder, D. G. (2025). The ethics of AI value chains. Big Data & Society, 12(2). https://doi.org/10.1177/20539517251340603

Feb 19

Varoquaux, G., Luccioni, S., & Whittaker, M. (2025). Hype, sustainability, and the price of the bigger-is-better paradigm in AI. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT '25) (pp. 61–75). Association for Computing Machinery. https://doi.org/10.1145/3715275.3732006

Feb 5

Helm, P., Bella, G., Koch, G., et al. (2024). Diversity and language technology: How language modeling bias causes epistemic injustice. Ethics and Information Technology, 26, 8. https://doi.org/10.1007/s10676-023-09742-6

Jan 22

Miceli, M., Dinika, A.-A., Kauffman, K., Salim Wagner, C., Sachenbacher, L., Hanna, A., & Gebru, T. (2025). Methodological considerations for centering workers' epistemic authority in AI research. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 8(2), 1698–1710. https://doi.org/10.1609/aies.v8i2.36667

Nov 27

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT '21) (pp. 610–623). Association for Computing Machinery. https://doi.org/10.1145/3442188.3445922

Nov 13

Grohmann, R., Rocha, A. C., & Guilherme, G. (2025). Worker-led AI governance: Hollywood writers' strikes and the worker power. Information, Communication & Society, 1–19. https://doi.org/10.1080/1369118X.2025.2521375

Oct 30

Valdivia, A. (2025). Data ecofeminism. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT '25) (pp. 391–403). Association for Computing Machinery. https://doi.org/10.1145/3715275.3732027

Oct 16

Suchman, L. (2023). The uncontroversial 'thingness' of AI. Big Data & Society, 10(2). https://doi.org/10.1177/20539517231206794

Oct 2

Crawford, K., & Joler, V. (2019). Anatomy of an AI system. https://anatomyof.ai/img/ai-anatomy-publication.pdf