AI: Alternative intelligence
Image: Gina Rosas Moncada
I caught myself one morning. Without thinking, I had instinctively reached for ChatGPT to outline a session design, just to get ideas flowing. But somewhere in that seamless transaction of question-in-immediate-answer-out, I realised I'd bypassed the messy, uncertain, embodied process where my actual thinking happens.
It's seductive, being able to stroke the magic lamp at any time and get instant gratification, a timely dopamine hit. But as we know from folklore, genies have 'user beware' warnings, and as the world wakes up to the metacrisis we're facing, it’s apparent that cognitive intelligence alone, even supercharged by AI, won't get us where we need to go.
The irony is that over the past five years, I've been on a quest to get out of my head and into my body, to tune into more than just my cognitive senses for the deeper developmental work needed, both for myself and our clients. I felt I was missing a vital source of insight and intelligence that existed in my body. Which got me wondering: if I'm struggling with this, what's happening across entire organisations as AI becomes woven into how we work?
As AI becomes more capable of handling cognitive work, are we sleepwalking into letting other forms of intelligence atrophy at the exact moment teams and organisations need them most? Just as ecological systems collapse when dominated by a single invasive species, even the most resilient and competitive organisations could become fragile and fail when dominated by one form of intelligence. The more we privilege cognitive outputs (e.g. labelling, mapping, analysing, modelling and optimising based on what is known), the more we ignore and erode the subtle, relational and embodied intelligences that make us beautifully human. Isn't the real promise of AI that it could be a catalyst for regeneration, freeing time and imagination to be wholly human, deepening our relationships with each other and the living world?
This essay explores three other forms of intelligence which, when resourced properly, could work alongside AI to create something wiser and more regenerative than it or they could produce alone: divergent (the intelligence that thinks differently), relational (the intelligence that emerges between people) and ecological (the wisdom of living systems).
Divergent: the intelligence that thinks differently
In his 2004 book In Praise of Slow, Carl Honoré described a growing movement of people deliberately choosing to live more patiently to mitigate the 'cult of speed', which he argued was becoming the societal standard. I do wonder what the recommendations would be if it were written for today, a much more hyper-connected, hyper-consumerist, always-on, instantly gratified society.
We see these characteristics mirrored in organisations: a growing preoccupation with short-term productivity, efficiency and speed, and business models that create ever more bespoke, faster-to-market consumer products and solutions. AI is supercharging these efforts, bringing undeniable benefits: complex problem-solving, pattern spotting, calculations, creative provocation, even scientific breakthroughs. The faster we move, however, the less patience we're likely to have for anything that can't immediately prove its value. Divergent intelligence often looks inefficient right up until the moment it's essential.
Research shows, rather depressingly, that when AI is trained on the outputs of AI (without significant fresh human data), it goes MAD — Model Autophagy Disorder[1]. The outputs degrade over successive generations as AI-generated errors pile up, and — like mad cow disease — can ultimately poison the AI models consuming the data. This is producing what some are calling the general 'shittification' of outputs. Take LinkedIn posts as an example. We keep asking AI that is being trained on other people's AI-written content to write our content. In my feed at least, I can already see the shittification happening: increasingly formulaic, derivative and largely devoid of any real human insight.
This feels like the AI version of human groupthink: a system feeding on its own outputs until diversity and freshness collapse. For decades, organisations have been trying (and mostly not succeeding) to avoid the perils of groupthink. We don't need to look far for cautionary tales in which prioritising consensus over critical challenge suppressed dissent, narrowed perspectives and led to terrible decisions. More recently, the ‘funhouse mirror’ distortion of social media has created false perceptions of social norms, and echo chambers that reinforce extreme opinions and blind us to disconfirming perspectives.
Diverse teams outperform homogeneous ones on complex problem-solving. We've made some progress (painful, slow progress) on increasing demographic diversity in companies, not only on the basis of what is considered just, but increasingly as an imperative for business competitiveness. Could we be at an evolutionary crossroads for organisations? Either they overwrite the benefits of this diversity with AI-accelerated homogeneity and groupthink, or they build an inclusive and diverse approach to multiple intelligences, complemented by AI. If they choose the latter path, my bet is on them to create the most resilient, transformative and ultimately successful organisations of the next quarter century.
Where might organisations motivated to build multiple intelligences start? In my view, a key pillar is to encourage and embrace divergent intelligence as much as we can, in all of its glorious, random, non-normal magnificence. In his book Antifragile, Nassim Nicholas Taleb posits that antifragile systems actually need stressors, noise and variation to adapt and strengthen. If we remove that 'messiness', the system atrophies. "Antifragility is beyond resilience or robustness," he writes. "The resilient resists shocks and stays the same; the antifragile gets better."
If the trajectory of AI-augmented cultures is towards greater homogeneity, then we will need to be stretched, agitated and 'shocked' into discovering the genuinely new. In such a context, finding ways to seek and incorporate divergent intelligence becomes an existential pursuit. In this pursuit, we will be helped by more brains that are wired differently: for non-linear thinking, seemingly random creative leaps, heightened and attuned senses, visual thinking, unconventional perspectives and deep compassion.
So much wonderful research and work has been done in this area over the past decade, with evidence showing that organisations that actively recruit and embrace neurodiverse talent and communities unlock exactly this type of intelligence. UK intelligence agency GCHQ has long recognised its value and is rightly proud of what it describes as its 'mix of minds'. "Without neurodiversity, we wouldn't be GCHQ," it states. It has been a pioneer in recruiting and supporting neurodiverse talent to solve problems in novel ways. Microsoft and SAP are also leading the way with their neurodiversity programmes.
Research from York University and the University of Toronto exploring the ethical advantage of autistic employees in the workplace found that “autistic adults are not just more likely to intervene when they witness dysfunction or misconduct in an organisational context; they are also less likely to engage in unethical behaviour in general due to lower levels of moral disengagement.” As we embark on the fraught task of setting global AI standards, we have an opportunity to include divergent intelligence in the shaping of more ethical and just AI. Not as accommodation, but as necessity.
Logitech has actively built internal and external neurodiversity panels to help test and innovate new products and initiatives, reasoning that products designed for people with sensory needs, or who benefit from more cognitive support, will not only be inclusive but will unlock design that benefits everyone [full transparency — this is led by my wife Rosie]. Incorporating neurodiverse insight into mainstream design will help train AI on a broader range of human needs and preferences.
SAP's Autism at Work programme has been exploring how AI tools can be customised for neurodiverse employees: noise-cancelling features, visual task management, pattern recognition tools that play to autistic strengths. More interesting might be the reverse question: how might neurodivergent employees help us design better AI? There's growing recognition that neurodivergent minds could be vital to AI development — spotting unexpected angles and ‘edge cases’ in systems built for neurotypical norms that others might miss. Organisations like social enterprise Orchvate are already demonstrating this by training neurodivergent annotators to create and validate training data for AI models. The World Economic Forum argues for neurodivergent-led audits to stress-test AI systems for ethical blind spots, turning lived experience into design expertise.
To embrace these remarkable intelligences, we're going to need to prize them and create the conditions for them to flourish. Organisations say they want disruptive innovation but still design for conformity. Creating genuine space for AI informed by divergent intelligence will mean deliberately designing against our instinct to optimise everything for efficiency. Are we brave enough to try?
Relational: the intelligence that emerges between humans
Nearly a decade ago, Google shared the results of its three-year internal study, Project Aristotle, which found that the highest-performing teams were the ones with the greatest psychological safety. In those teams, people could speak openly, make mistakes, and raise uncomfortable points without fear of reprisal. In my experience, building that kind of trust takes deliberate effort and time, relying on small relational moments that strengthen connection and belonging. It's often the embodied exchanges that happen in the margins that make the difference — the way we read a room before we speak, our willingness and ability to sit with discomfort and tension.
The link between the gut and decision-making is turning up some surprising influences on social behaviour — beyond our eating habits and into our sense of fairness and how we treat others. The vagus nerve — the information highway between our brain and our gut — primarily sends information from the gut to the brain, conveying signals that affect mood, cognition and emotional state. It's a remarkable and elegant example of how our bodies can be a vital source of perception and insight into things we might not fully understand yet, and in an age where we can no longer trust everything we read or see, this embodied intelligence becomes even more vital. Bessel van der Kolk's trauma research reveals that emotions, traumatic experiences and memories lodge within our physical being beyond conscious awareness, establishing a scientific foundation for the idea that intelligence and knowledge extend far beyond mental processes into our bodily existence. Yet modern organisational and digital life increasingly favours disembodied forms of knowing that can dull this capacity, which is why the moments when we do tap into it matter more than ever.
We saw this play out recently in a key decision-making moment at the start of an AI transformation programme for one of our clients. The group was sense-making around scenarios (drawing on some extraordinarily helpful AI analysis, modelling and creative inputs). As we came to the point of commitment, we spotted an interesting dynamic emerging: not enough objections were being raised. Was the group really 100% behind the direction and commitments needed, or were there unspoken reservations which, if not voiced, would erode true commitment? Using an embodied decision-making process in which not all group members had to support a proposal to the same degree, we asked the group to walk and stand to signal their degree of agreement and to voice their ‘barriers to commitment’. The ensuing conversation aired existential fears, practical implications and creative solutions. The group had to make sense of this complexity together: listening to the intention and emotions behind the words, sitting with discomfort and being prepared to compromise. This was a slower, powerful and uncommon moment of collective intelligence, sensemaking and trust-deepening, one which allowed the group to ‘hold hands’ as they stepped into the inevitable rollercoaster ride of their AI transformation.
For me this is relational intelligence in action: the embodied ability to sense unspoken dynamics combined with the collective capacity to work through them together. But as the pace of work accelerates it feels like time and space for these moments are quietly disappearing. Teams frequently work across time zones, increasingly supported by AI tools that help organise, analyse, and communicate faster than ever. One of our clients said they turned up at a meeting recently to find that everyone else had sent their AI notetaking bot to represent them! A study across Taiwan, Indonesia, the US, and Malaysia found that as employees spend more time interacting with AI systems than with colleagues, workplaces become more “asocial”. People feel lonelier, even as their appetite for human connection increases. We seem to be finding more efficient ways to work together, but fewer ways to actually be together.
As AI takes on more of the cognitive load, we risk neglecting (or worse, losing) the relational and collaborative muscles that allow groups to think and feel together. Relational intelligence sits at the intersection of these collective and embodied ways of knowing.
So how do we deliberately cultivate this vital, relational intelligence in an age of AI? While practices aren't yet mainstream, some organisations are starting to notice what's being lost — research around the impact of AI and loneliness is one signal. The wave of return-to-office mandates is another, though the intent behind them is often disputed.
What would it look like to get explicit about this intention? To say “this decision stays human and in-person”, not because we're anti-AI, but because we need to sense what's unspoken, to read the room before we commit?
There's a growing field of somatic leadership training that teaches executives to develop “embodied intelligence” as a core leadership capacity. These practices help leaders recognise their bodies’ signals as data — an essential skill for navigating the complexity of modern leadership. What if organisations treated embodied, collective awareness and intelligence the way they treat financial literacy or strategic thinking — as a foundational (not optional) way of working for all — so that AI handles the cognitive load while we deepen the relational work that makes us human?
Ecological: the intelligence from living systems
For simplicity of framing, I've so far written about the human x AI interface within the boundaries of the organisation. It's a convenient, narrow-boundary focus, one to which many of us can relate, but it's also dangerously incomplete. If we're looking for a world that is more sustainable, just, joyful and liberating for humans, then simply using human intelligence (and AI built on human intelligence) will not get us there. AI has been trained on every documented piece of our human history, framed by our human-centric view of the world and predominantly built by men.
We also have to recognise that the whole physical and economic system that underpins gen AI is extractive. Vast amounts of material resources, human labour and data are extracted to construct and sustain large-scale AI systems, which demand vast amounts of energy to run — challenging our capacity to decarbonise energy systems and stabilise the climate. We are very poor at learning from our mistakes, as the metacrisis demonstrates — why then might we expect AI to choose to learn from a more enlightened approach?
The more-than-human world would be a great place to start: inherently regenerative natural systems where principles of reciprocity, resilience and sufficiency are core to flourishing and thriving ecosystems — qualities our societies are struggling to recover. What ecological intelligence (in its broadest sense) could we tap into to help us understand that 'winning' means enhancing the whole, not just maximising the part?
I spoke to Phil Tovey, Director of Nature-Centric Approaches at ASRA and former Head of Futures at England’s Department for Environment, Food and Rural Affairs (DEFRA), who has spent the last decade looking at the ecological, planetary, existential and nonhuman biological futures and risks we face. He believes that organisations are now at an existential tipping point: “If you are sat there as an organisation and not thinking about radical transformation, then you are just pushing the car off the cliff. The organisations that will be successful are ones that are fluid, deeply porous, can change identity in radical ways, assimilate into their ecologies, and not be afraid to re-emerge as something completely different.”
So how do we elevate the importance of ecological intelligence and bring it into relationship with our AI tools? Do we need to fundamentally rethink and shift the questions we ask of it, and what might that look like?
A clue to this might be found in the Earth Species Project, which is using large language models (LLMs) to decode animal communication, building a better understanding of and relationship with the natural world so that ‘the rest of life’ can thrive. Key to this approach is the recognition that humanity is not at the centre of communication and intelligence on Earth.
Our friends at Animals In the Room are experimenting with methods to overcome the psychological barriers we have erected to recognising the agency and interests of non-human animals. How might entering the sensory world and choices of animals — as subjects of their own lives, not objects to be manipulated for utility — alter our decision-making processes in agriculture, biotech and conservation? Could AI be trained to support this kind of work to help model and represent the interests of living systems that typically have no voice in decision-making?
Some inspiring examples can also be found in the work of Janine Benyus, the biologist and author who coined the term biomimicry: the practice of looking to (not extracting from) nature's "3.8 billion years of R&D" to solve design challenges, from regenerative ways of manufacturing and water sourcing and distribution to organisational structures. Architect Mick Pearce designed the Eastgate Centre in Harare to mimic how termite mounds regulate temperature (drawing in cool night air through vents and releasing heat through chimneys), eliminating the need for conventional air conditioning. The building uses 90% less energy for cooling than comparable structures.
These examples offer a powerful new lens for working with AI. It means asking different questions. Instead of "how do we optimise this process?", organisations might choose to ask, "how do we build systems that regenerate rather than extract?". Instead of "how do we maximise short-term output and win?", we might ask, "what does resilience look like for the whole system of which we are a part?". This is the intent and intelligence we need if AI is to be anything other than an accelerant for the same extractive, short-termist thinking that got us into the metacrisis in the first place.
So where do we go from here?
This isn't easy or particularly comfortable. What I'm proposing — deliberately creating space for divergent thinking, embodied relating and ecological wisdom — will take effort, and it runs counter to everything AI promises us: effortless scale, speed, efficiency and optimisation. At a time when there is a temptation to simplify everything, I'm suggesting we make room for mess, weirdness and a reverence for real life.
This isn’t an either/or choice. It could and should be 2+2=5. The metacrisis we're facing didn't emerge from a lack of cognitive intelligence. More of the same thinking, even supercharged by AI, won't get us where we need to go. What might get us there is building healthier relationships with AI and deliberately working in the spaces it can't reach — the embodied sense when something's off, the creative leap that breaks the pattern, the slow work of building trust, the wisdom of systems that have been regenerating for millions of years. If we invest time in resourcing these intelligences alongside AI, I’m convinced that we will build organisations capable of genuine regeneration.