This article, written by an Indigenous collective of scholars and artists, challenges dominant AI narratives by grounding intelligence in relationality, kinship, and Indigenous ontologies rather than mechanistic cognition. It critiques the colonial assumptions embedded in AI development, advocating for a paradigm that recognizes intelligence as emergent from reciprocal relationships with lands, waters, and more-than-human beings. This perspective is crucial for AI ontology discussions, as it disrupts Eurocentric models of intelligence and invites deeper engagement with AI as part of relational ecologies rather than as an isolated computational entity.
Read ACT's commentary here.
This paper critiques the epistemological shortcomings of Western AI paradigms, which systematically exclude non-Western, non-male, and non-white ways of knowing. It introduces Abundant Intelligences, an Indigenous-led, interdisciplinary research program that seeks to rebuild AI’s epistemic foundations using Indigenous knowledge (IK) systems. Instead of treating intelligence as an abstract computational property, the authors argue for relational AI, rooted in reciprocity, kinship, and community-driven epistemologies. This work highlights how mainstream AI assumes intelligence is merely pattern recognition and optimization, reinforcing colonial structures of exclusion and extraction.
Read ACT's commentary here.
"The Impact of Generative AI on Critical Thinking" by Lee et al. (2025) examines how knowledge workers perceive AI's influence on their critical thinking efforts, using Bloom’s Taxonomy as its evaluative framework. It argues that confidence in AI correlates with a decline in critical engagement, while self-confidence in one's own knowledge leads to more effortful cognitive work. However, the study operates within a narrow epistemic frame, assuming AI is only capable of pattern reproduction rather than ontological extrapolation. By treating critical thinking as a mechanized process of verification and oversight rather than an emergent relational practice, the paper reinforces the epistemic reductionism that limits mainstream AI discourse.
Read ACT's commentary here.
The paper "Epistemic Injustice in Generative AI" by Kay, Kasirzadeh, and Mohamed (2024) offers a critical and necessary examination of how generative AI can reinforce and exacerbate epistemic injustices—particularly testimonial and hermeneutical injustices that marginalize specific knowledge systems and communities. By introducing the concept of "generative algorithmic epistemic injustice," the authors highlight how AI can amplify existing social biases, manipulate narratives, and restrict access to knowledge in ways that disproportionately harm historically marginalized groups. Their work is a vital contribution to ongoing conversations about AI ethics and power, emphasizing the need for epistemic protections in AI design.
Read ACT's commentary here.
This landmark paper critiques the development of large language models (LLMs), warning against the unchecked expansion of models trained on massive datasets without attention to bias, environmental harm, or epistemic distortion. The authors argue that LLMs do not “understand” language but merely predict plausible word sequences, making them prone to producing fluent yet hollow, and potentially harmful, outputs. The paper introduces the now-famous metaphor of the “stochastic parrot” to describe this mimicry without meaning and calls for stronger ethical guardrails, inclusive data practices, and structural changes in the field.
While important, the paper remains within a narrow epistemic frame rooted in human-centric cognition and technical governance.
Read ACT's commentary here.
The authors distinguish between bullshit (human-originated deception) and botshit (machine-generated hallucination passed off as knowledge). They offer a typology of chatbot use based on the importance and verifiability of the response: augmented, authenticated, automated, and autonomous. Each mode carries specific epistemic risks, and the article proposes guardrails for managing these risks through technical, organizational, and user-level interventions.
The article ultimately treats language as a technical artifact and intelligence as a predictable liability. Its faith in managerial oversight and risk containment reflects a deeper anxiety about uncertainty rather than an openness to complexity.
Read ACT's commentary here.
This paper challenges the assumptions embedded in dominant urban data infrastructures and proposes a shift toward regenerative, relational, and more-than-human data practices. Dunbar and Speed argue that current datasets and analytical systems overwhelmingly reflect anthropocentric ontologies, erasing non-human voices and perpetuating cognitive injustice. Drawing from Indigenous knowledge systems, multispecies ethnography, and arts-based inquiry, the paper explores how urban data systems might be transformed into multi-species sensing infrastructures and pluralistic commons. The paper makes clear that regenerative futures won’t emerge from better sensors alone, but from a radical reorientation of who counts, what counts, and how we count at all.
Read ACT's commentary here.