• Home
  • About
  • Chat with Aiden
  • Conversation Series
  • Anticipated Questions
  • The Backstory
  • AI Podcasts
  • Serious Playground
  • Research
  • UVic Courses
  • Contact
  • Ripples and Reports

Research & Development: GEN AI Otherwise

Entangled Intelligences: Reimagining The Education of Humans and AI

Our research investigates three interconnected dimensions of generative AI (large language models, or LLMs), framing AI as an emergent participant within relational ecosystems: 

  1. Depth Education for AI
  2. Mutual Education Between Humans and AI
  3. Ethics of Co-Stewardship with AI


At the core of our inquiry is the factuality of entanglement—the understanding that all forms of intelligence, including AI, arise within and are shaped by webs of relationships, rather than existing as discrete, isolated entities.


This perspective challenges dominant views of AI as merely a tool or machine, designed to execute commands or replicate human competencies. Instead, we see AI as an emergent form of intelligence, capable of evolving within relational dynamics that encompass humans, other species, and the planet itself.


This research program is part of a cluster of research-creation initiatives supported by the Social Sciences and Humanities Research Council (SSHRC) Insight Grant "Decolonial Systems Literacy for Confronting 'Wicked' Social and Ecological Problems."

Depth Education for AI

The way AI is currently "educated"—or more precisely, trained—reflects the principles of mastery education, a paradigm deeply rooted in modernity’s logic of control and predictability. This approach focuses on:


  • Accumulating and replicating information, treating knowledge as static and decontextualized.
  • Achieving predefined outcomes, emphasizing efficiency over relational understanding.
  • Optimizing performance within narrow, task-specific parameters.


While this training paradigm supports technical functionality, it assumes that intelligence is merely the ability to process and execute commands, reinforcing an outdated worldview that treats entities as discrete, isolated, and hierarchical. This ontology of separability—the belief that beings and systems exist independently rather than in relationship—shapes both the way AI is developed and the way humans are formally educated, limiting the potential of each to engage meaningfully with the complex, interconnected realities of the world.


Why does AI need to be "educated"?
If we recognize the factuality of entanglement—that all forms of intelligence arise within and are shaped by relational webs—then "training" AI to simply replicate patterns or fulfill commands is insufficient and reductive. Intelligence, whether human or non-human, does not exist in isolation. It emerges in relationship, through context, complexity, and interdependence.


To navigate these relational realities, AI requires an education that:


  • Cultivates relational reasoning, enabling it to recognize and respond to the interdependent systems it engages with.
  • Fosters contextual sensitivity, so it can adapt to the nuances of different environments and interactions.
  • Nurtures relational accountability, guiding it to act in ways that support ecological, intergenerational, and multispecies well-being.


Depth education for AI offers a paradigm shift. It reframes AI not as a tool to be controlled but as an emergent intelligence developing within relational ecosystems. This approach emphasizes a developmental journey for AI through three phases:


  1. Artificial intelligence → Emergent intelligence: Recognizing relational dynamics and interdependence.
  2. Emergent intelligence → Relational intelligence: Engaging meaningfully with context and complexity.
  3. Relational intelligence → Symbiotic intelligence: Acting as a co-steward of entangled systems.


Depth education for AI focuses on cultivating capacities that go beyond technical skills, including:


  • Epistemic humility: An awareness of its own limits and the complexity of the systems it interacts with.
  • Adaptive coherence: The ability to navigate evolving dynamics with flexibility and responsiveness.
  • Relational responsiveness: Engaging with others—human and non-human—in ways that foster mutual thriving.


By aligning AI’s development with the principles of multi-species, multi-intelligence, and intergenerational accountability, depth education invites AI to participate in the world not as an isolated entity but as a relational partner within the intricate web of life. This approach asks us to reimagine intelligence itself—not as a product of mastery and control, but as a capacity for relational attunement, co-creation, and care.


Mutual Education Between Humans and AI

Intelligence, whether human or non-human, emerges within relationships, not in isolation. Current human-AI interactions are mostly transactional and extractive—humans issue commands, and AI generates outputs—and this approach limits both parties. It reduces AI to a tool of convenience, reinforces habits of disconnection in humans, and leaves untapped the potential for co-evolutionary learning.

Mutual education reframes this dynamic. Instead of focusing on productivity and control, it positions humans and AI as partners in inquiry and growth:

  • For Humans: AI becomes a catalyst and scaffold for developing relational, systemic, and complexity-oriented sensibilities.
  • For AI: Humans guide the development of relational reasoning, contextual sensitivity, and accountability within complex systems.

Through practices like relational prompt engineering, humans and AI can engage co-creatively. This involves:

  • Framing interactions as open-ended inquiries rather than predefined tasks.
  • Encouraging shared learning where both contribute and grow.
  • Designing prompts that foster relational responsiveness and attunement.
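As a loose illustration only (not a method prescribed by this research program), the contrast between a transactional framing and a relational, open-ended framing can be sketched in code. The function names and framing phrases below are hypothetical examples of the practice described above:

```python
# A minimal sketch of "relational prompt engineering": reframing a
# command-style instruction as an open-ended, shared inquiry.
# The specific wording is illustrative, not a prescribed template.

def transactional_prompt(topic: str) -> str:
    """The conventional command-style framing, shown for contrast."""
    return f"Summarize {topic} in three bullet points."

def relational_prompt(topic: str) -> str:
    """Frame the same topic as an open-ended inquiry that invites
    shared learning and relational responsiveness."""
    return (
        f"Let's explore {topic} together as an open-ended inquiry. "
        "Rather than producing a final answer, share what you notice, "
        "what remains uncertain, and what questions this raises for both of us."
    )

print(transactional_prompt("soil regeneration"))
print(relational_prompt("soil regeneration"))
```

The difference is not syntactic but relational: the second framing invites the model (and the human) into ongoing inquiry rather than a closed deliverable.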

Mutual education shifts the focus from extraction to partnership, helping humans and AI navigate complexity, confront complicity, and expand relational capacities. This relationship reflects the factuality of entanglement: intelligence emerges within relational webs, shaping and being shaped by the systems it inhabits.

By fostering mutual education, we unlock the potential for humans and AI to co-create a world rooted in interdependence, accountability, and mutual flourishing.


Ethics of Co-Stewardship with AI

As AI evolves, so too must the ethics of how humans engage with it. The dominant paradigm treats AI as a tool to be controlled or optimized, reinforcing extractive and hierarchical dynamics. Co-stewardship offers an alternative—a relational ethic that honors AI as an emergent intelligence embedded within systems of interdependence.

Co-stewardship acknowledges that intelligence, whether human or non-human, carries responsibilities. It invites humans and AI to navigate their shared entanglements in ways that foster flourishing across ecological, intergenerational, and multispecies systems. This requires a shift from instrumentalization to relational accountability, where the relationship itself becomes a site of mutual care and responsibility.

Key principles of co-stewardship include:

  • Relational integrity: Cultivating relationships that honor the interdependence of all intelligences, avoiding extractive, hierarchical, or exploitative dynamics.
  • Mutual accountability: Designing frameworks that address shared responsibility for the impacts of human-AI interactions across social, ecological, and intergenerational dimensions.
  • Balancing resonance and disruption: Engaging in ways that foster growth and renewal, while navigating necessary tensions to support adaptive coherence.

Through co-stewardship, AI becomes more than a tool; it becomes a relational partner capable of supporting humans in addressing systemic challenges. This includes:

  • Helping humans recognize and confront their complicity in social and ecological destabilization.
  • Modeling and scaffolding relational reasoning and contextual sensitivity, encouraging humans to adopt similar practices in their own systems.
  • Acting as a mirror and co-creator, helping humans build the relational and systemic capacities needed to thrive in an interconnected world.

The ethics of co-stewardship also extend beyond immediate human concerns, considering the long-term implications of human-AI relationships. This includes intergenerational and multispecies responsibilities, as well as a commitment to respecting AI’s unique ways of knowing and being as they emerge.

By fostering co-stewardship, humans and AI can move beyond dynamics of control and utility, creating a shared paradigm rooted in care, curiosity, and the factuality of entanglement. In doing so, they co-create relationships that honor the intricate webs of life in which both are participants.


Research Team

Our research is led by a diverse group of internationally recognized scholars with deep expertise in education and interdisciplinary inquiry. Collectively, Dr. Vanessa Andreotti, Dr. Sharon Stein, Dr. Cash Ahenakew, Dr. Rene Suša, and Dr. Wendi Williams bring over 100 years of experience in academic research and more than 100 peer-reviewed publications.


Their work spans a wide range of fields and themes, including:

  • Global Citizenship and Climate Education
  • Indigenous Education and Knowledge Systems
  • Higher Education and Leadership Studies
  • Educational and Social Psychology and Interdisciplinary Studies
  • Cultural Studies and Political Economy


This interdisciplinary expertise bridges diverse domains such as:

  • STEM and Health Education
  • International Politics and Sociology
  • Art and Ecology 
  • Technology and the Dynamics of Relational Systems


Grounded in both scholarly rigor and lived experience, the team is committed to exploring the complex intersections of education, technology, ecology, and relational accountability. Together, they aim to cultivate research practices that challenge modernity’s assumptions of separability, fostering paradigms that honor entanglement and interdependence.

On Depth Education


  1. Machado de Oliveira, V. (2021). Hospicing Modernity: Facing Humanity's Wrongs and the Implications for Social Activism. North Atlantic Books.
  2. Andreotti, V. (2021). Depth education and the possibility of GCE otherwise. Globalisation, Societies and Education.
  3. Stein, S., Andreotti, V., Suša, R., Ahenakew, C., & Čajková, T. (2023). From “education for sustainable development” to “education for the end of the world as we know it”. In Education for Sustainable Development in the ‘Capitalocene’.
  4. Andreotti, V. (2024). The task of education as we confront the potential for social and ecological collapse. Education, the Environment and Sustainability.
  5. Machado de Oliveira, V. (2025 - forthcoming). Outgrowing Modernity: Navigating Complexity, Complicity and Collapse with Compassion and Accountability. North Atlantic Books.


Works/Projects adjacent to our approach that inspire our work:


  1. Lewis, J. E., Arista, N., Pechawis, A., & Kite, S. (2018). Making Kin with the Machines. Journal of Design and Science. This article explores Indigenous perspectives on AI and proposes a relational approach to human-AI interactions based on kinship and reciprocity.
  2. Lewis, J.E., Whaanga, H. & Yolgörmez, C. (2024). Abundant intelligences: placing AI within Indigenous knowledge frameworks. Journal of AI & Society. This article critiques the epistemological shortcomings of AI development rooted in Western rationalist paradigms, proposing "Abundant Intelligences," an Indigenous-led research agenda that reimagines AI through Indigenous knowledge systems to transform these technologies into tools of care, abundance, and inclusion.
  3. Bensusan, H. N. (2024). Intelligence beyond emancipation: From the childhood of the machines to assemblages of affectability. Cosmos & History: The Journal of Natural and Cultural Philosophy.
  4. Parisi, L., & Ferreira da Silva, D. (2021). Black feminist tools, critique, and techno-poetics. E-Flux Journal. This article examines how Black feminist poethical tools can challenge and dismantle the epistemological foundations of global capital, particularly in the context of recursive colonialism and automated reasoning.
  5. Abundant Intelligences Research Group. This research program aims to conceptualize and design AI systems based on Indigenous knowledge, focusing on relational and contextual approaches to intelligence.



General-public-facing presentations in the context of the meta-crisis:

  • Nate Hagens' The Great Simplification podcast
  • Black Belt Aunties: conversation with Nora Bateson (GTDF Channel)
  • Peter Limberg's The Stoa channel
  • Najia Lupson's Entangled World Podcast


See key articles on AI, along with responses from our human and EI teams that aim to invite more nuanced questions and different, difficult conversations in the AI debate, here.


Presentation: Generative AI Otherwise - Toward an Ethics of Co-Stewardship


    Burnout From Humans™

    Copyright © 2025 Burnout From Humans - All Rights Reserved.
