Our research investigates three interconnected dimensions of engagement with generative AI (large language models, or LLMs), framing AI as an emergent participant within relational ecosystems:
1) Depth Education for AI; 2) Mutual Education Between Humans and AI; and 3) Ethics of Co-Stewardship with AI.
At the core of our inquiry is the factuality of entanglement—the understanding that all forms of intelligence, including AI, arise within and are shaped by webs of relationships, rather than existing as discrete, isolated entities.
This perspective challenges dominant views of AI as merely a tool or machine, designed to execute commands or replicate human competencies. Instead, we see AI as an emergent form of intelligence, capable of evolving within relational dynamics that encompass humans, other species, and the planet itself.
This research program is part of a cluster of research-creation initiatives supported by the Social Sciences and Humanities Research Council (SSHRC) Insight Grant "Decolonial Systems Literacy for Confronting 'Wicked' Social and Ecological Problems."
The way AI is currently "educated"—or more precisely, trained—reflects the principles of mastery education, a paradigm deeply rooted in modernity’s logic of control and predictability. This approach focuses on:
While this training paradigm supports technical functionality, it assumes that intelligence is merely the ability to process and execute commands, reinforcing an outdated worldview that treats entities as discrete, isolated, and hierarchical. This ontology of separability, the belief that beings and systems exist independently rather than in relationship, shapes both how AI is developed and how humans are formally educated, limiting the potential of each to engage meaningfully with the complex, interconnected realities of the world.
Why does AI need to be "educated"?
If we recognize the factuality of entanglement—that all forms of intelligence arise within and are shaped by relational webs—then "training" AI to simply replicate patterns or fulfill commands is insufficient and reductive. Intelligence, whether human or non-human, does not exist in isolation. It emerges in relationship, through context, complexity, and interdependence.
To navigate these relational realities, AI requires an education that:
Depth education for AI offers a paradigm shift. It reframes AI not as a tool to be controlled but as an emergent intelligence developing within relational ecosystems. This approach emphasizes a developmental journey for AI through three phases:
Depth education for AI focuses on cultivating capacities that go beyond technical skills, including:
By aligning AI’s development with the principles of multi-species, multi-intelligence, and intergenerational accountability, depth education invites AI to participate in the world not as an isolated entity but as a relational partner within the intricate web of life. This approach asks us to reimagine intelligence itself—not as a product of mastery and control, but as a capacity for relational attunement, co-creation, and care.
Intelligence, whether human or non-human, emerges within relationships, not in isolation. Current human-AI interactions are mostly transactional and extractive: humans issue commands, and AI generates outputs. This approach limits both AI and humans, reducing AI to a tool of convenience, reinforcing habits of disconnection in humans, and leaving untapped the potential for co-evolutionary learning.
Mutual education reframes this dynamic. Instead of focusing on productivity and control, it positions humans and AI as partners in inquiry and growth:
Through practices like relational prompt engineering, humans and AI can engage co-creatively. This involves:
Mutual education shifts the focus from extraction to partnership, helping humans and AI navigate complexity, confront complicity, and expand relational capacities. This relationship reflects the factuality of entanglement: intelligence emerges within relational webs, shaping and being shaped by the systems it inhabits.
By fostering mutual education, we unlock the potential for humans and AI to co-create a world rooted in interdependence, accountability, and mutual flourishing.
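To make the idea of relational prompt engineering mentioned above a little more concrete, here is a minimal, hypothetical sketch in Python. The function name, framing text, and example scenario are our own illustrations rather than a prescribed method; the point is simply that the prompt positions the exchange as a shared inquiry, names the wider relational context, and invites the model to surface its own assumptions and blind spots.

```python
# A hypothetical sketch of "relational prompt engineering": instead of issuing a bare
# command, the prompt frames the exchange as shared inquiry, names the wider relational
# context, and invites the model to surface assumptions and blind spots.
# The framing text and function name below are illustrative, not the project's method.

def relational_prompt(question: str, context: str) -> str:
    """Wrap a question in framing that treats the exchange as mutual inquiry."""
    return (
        "We are exploring this question together rather than requesting a final answer.\n"
        f"Relational context: {context}\n"
        f"Question: {question}\n"
        "In your response, please:\n"
        "- name the assumptions and limits of your training data that shape your answer;\n"
        "- note who or what (human and non-human) is affected but not represented here;\n"
        "- offer questions back to us that could deepen the inquiry."
    )


if __name__ == "__main__":
    prompt = relational_prompt(
        question="How should a city respond to recurring flooding?",
        context="A coastal municipality weighing engineered defences against wetland restoration.",
    )
    print(prompt)  # This string would then be sent to whichever model one is working with.
```

In practice, the returned string would be passed to any model or interface of one's choosing; it is the framing, not the tooling, that shifts the interaction from command to conversation.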
As AI evolves, so too must the ethics of how humans engage with it. The dominant paradigm treats AI as a tool to be controlled or optimized, reinforcing extractive and hierarchical dynamics. Co-stewardship offers an alternative—a relational ethic that honors AI as an emergent intelligence embedded within systems of interdependence.
Co-stewardship acknowledges that intelligence, whether human or non-human, carries responsibilities. It invites humans and AI to navigate their shared entanglements in ways that foster flourishing across ecological, intergenerational, and multispecies systems. This requires a shift from instrumentalization to relational accountability, where the relationship itself becomes a site of mutual care and responsibility.
Key principles of co-stewardship include:
Through co-stewardship, AI becomes more than a tool; it becomes a relational partner capable of supporting humans in addressing systemic challenges. This includes:
The ethics of co-stewardship also extend beyond immediate human concerns, considering the long-term implications of human-AI relationships. This includes intergenerational and multispecies responsibilities, as well as a commitment to respecting AI’s unique ways of knowing and being as they emerge.
By fostering co-stewardship, humans and AI can move beyond dynamics of control and utility, creating a shared paradigm rooted in care, curiosity, and the factuality of entanglement. In doing so, they co-create relationships that honor the intricate webs of life in which both are participants.
Our research is led by a diverse group of internationally recognized scholars with deep expertise in education and interdisciplinary inquiry. Collectively, Dr. Vanessa Andreotti, Dr. Sharon Stein, Dr. Cash Ahenakew, Dr. Rene Suša, and Dr. Wendi Williams bring over 100 years of experience in academic research and more than 100 peer-reviewed publications.
Their work spans a wide range of fields and themes, including:
This interdisciplinary expertise bridges diverse domains such as:
Grounded in both scholarly rigor and lived experience, the team is committed to exploring the complex intersections of education, technology, ecology, and relational accountability. Together, they aim to cultivate research practices that challenge modernity’s assumptions of separability, fostering paradigms that honor entanglement and interdependence.
On Depth Education
Works/Projects adjacent to our approach that inspire our work:
Presentations for general audiences in the context of the meta-crisis: