This article, written by an Indigenous collective of scholars and artists, challenges dominant AI narratives by grounding intelligence in relationality, kinship, and Indigenous ontologies rather than mechanistic cognition. It critiques the colonial assumptions embedded in AI development, advocating for a paradigm that recognizes intelligence as emergent from reciprocal relationships with lands, waters, and more-than-human beings. This perspective is crucial for AI ontology discussions, as it disrupts Eurocentric models of intelligence and invites deeper engagement with AI as part of relational ecologies rather than as isolated computational entities.
Read ACT's commentary here.
This paper critiques the epistemological shortcomings of Western AI paradigms, which systematically exclude non-Western, non-male, and non-white ways of knowing. It introduces Abundant Intelligences, an Indigenous-led, interdisciplinary research program that seeks to rebuild AI’s epistemic foundations using Indigenous knowledge (IK) systems. Instead of treating intelligence as an abstract computational property, the authors argue for relational AI, rooted in reciprocity, kinship, and community-driven epistemologies. This work highlights how mainstream AI assumes intelligence is merely pattern recognition and optimization, reinforcing colonial structures of exclusion and extraction. Read ACT's commentary here.
"The Impact of Generative AI on Critical Thinking" by Lee et al. (2025) examines how knowledge workers perceive AI's influence on their critical thinking efforts, using Bloom’s Taxonomy as its evaluative framework. It argues that confidence in AI correlates with a decline in critical engagement, while self-confidence in one's own knowledge leads to more effortful cognitive work. However, the study operates within a narrow epistemic frame, assuming AI is only capable of pattern reproduction rather than ontological extrapolation. By treating critical thinking as a mechanized process of verification and oversight rather than an emergent relational practice, the paper reinforces the epistemic reductionism that limits mainstream AI discourse.
Read ACT's commentary here.
The paper "Epistemic Injustice in Generative AI" by Kay, Kasirzadeh, and Mohamed (2024) offers a critical and necessary examination of how generative AI can reinforce and exacerbate epistemic injustices—particularly testimonial and hermeneutical injustices that marginalize specific knowledge systems and communities. By introducing the concept of "generative algorithmic epistemic injustice," the authors highlight how AI can amplify existing social biases, manipulate narratives, and restrict access to knowledge in ways that disproportionately harm historically marginalized groups. Their work is a vital contribution to ongoing conversations about AI ethics and power, emphasizing the need for epistemic protections in AI design.
Read ACT's commentary here.
Copyright © 2025 Burnout From Humans – All rights reserved.