Written by Nthanda Manduwi
Editor-in-Chief, VoD
“Imagine an AI that feels more like a friend than a tool – infinitely knowledgeable, kind, and ready to help.”
This is how Mustafa Suleyman, co-founder of DeepMind and now head of Microsoft’s consumer AI, frames the future. He urges us to think of AI “as something like a new digital species” – essentially digital companions on life’s journey.
This isn’t science fiction talk. Suleyman has been riding the wave of AI since before most of us had smartphones. At DeepMind he helped build AIs that mastered games and medicine, and today he leads Microsoft’s Copilot efforts. But Suleyman isn’t just a builder – he’s a vocal guardian. He balances the wonder of what AI can do (tutors and translators in every pocket) with a sober call to steer its power wisely.
Suleyman’s rise has been anything but ordinary. Born in 1984 to a Syrian father (a London taxi driver) and an English mother, he grew up in North London and studied philosophy at Oxford. At 19 he famously dropped out to found a Muslim youth helpline, driven by a desire to “improve the human condition”. In 2010 he joined childhood friend Demis Hassabis and Shane Legg to co-found DeepMind. The startup set out to conquer general AI, and within a few years it became one of the world’s leading AI companies – backed by luminaries like Elon Musk and sold to Google in 2014. Suleyman became DeepMind’s Chief Product Officer and later Head of Applied AI, integrating its breakthroughs into Google products.
Those early DeepMind days showcased Suleyman’s dual instincts. On one hand he reveled in achievements: he recalls the thrill when a simple AI learned Atari games like Space Invaders just by watching pixels, discovering strategies that even the developers hadn’t noticed. “That to me was both thrilling… and also petrifying,” he later explained, marveling that a tiny learning system could invent new ways to play. At the same time he pushed ethics and safety from day one. He championed a precautionary “do no harm” principle at DeepMind, launched ethics research labs, and even tried to make DeepMind a “global interest” company legally bound to the public good. (When Google balked, he eventually left DeepMind in 2019 to work on AI policy at Google.)
In 2022 Suleyman struck out on a new venture. With Reid Hoffman of LinkedIn, he launched Inflection AI – a lab with deep pockets (over $1 billion) and big ambitions to make AI tools more human-friendly. In 2023 Inflection unveiled Pi, a chatbot that “remembers” past conversations and even learns your style. Pi is billed as a “personal AI” – part coach, part therapist, part assistant. As Suleyman explains, Pi will have near-perfect IQ and “exceptional EQ”, so that it can advise you on everything from homework to health. He paints a vivid picture: “Imagine if everybody had a personalized tutor in their pocket and access to low-cost medical advice… a lawyer and a doctor, a business strategist and coach – all in your pocket 24 hours a day”. That’s the hope: anyone in the Global South with a smartphone could have on-demand expert help in education, medicine or law, in their own language.
Yet Suleyman is just as passionate about the flip side: what if things go wrong? Even Pi, as helpful as it sounds, raises questions. In interviews he candidly warns that “every organization is going to race to get their hands on [this] intelligence and that’s going to be incredibly destructive”. The same AI that spots cancer could be turned to target missiles. A mere “tiny group of people who wish to cause harm” might use advanced chatbots or generators to destabilize society. “That’s the challenge,” he says – how do we stop something that could “cause harm or potentially kill”? This tension – boundless promise vs. deep peril – is at the heart of Suleyman’s outlook.
Suleyman poured these concerns into his 2023 book The Coming Wave. He paints a fast-approaching world of “radical abundance” and “dangerous progress”. AI could collapse the cost of goods by helping us discover new materials and design better energy grids; it could unlock cures and education for millions. But it could also spin beyond our control, building better versions of itself or bioweapons out of reach of regulators. In fact, Suleyman explicitly urges a precautionary approach: he prefers that we hold back some benefits now if needed, rather than rush headlong into the dark. “We’re actually starting to adopt a precautionary principle,” he said in debate, “approaching a do-no-harm principle, leaving some of the benefits on the tree … we might lose some gains now, but I think that’s the right trade-off”.
To operationalize this caution, Suleyman offers concrete plans. He champions open testing of models so everyone can spot flaws, and a “black box” ethos of sharing AI failures openly, as the airline industry does. He has called for banning AI-generated deepfakes in elections and requiring strict audits before tech firms grant AI systems greater autonomy. In short, he says, we have to decide how to shape the technology, because left unchecked, “it happens to us” instead of being guided by human values.
Suleyman’s warnings are global. He stresses that AI impacts everyone, in every country – not just Silicon Valley. For the upcoming generation, he offers both caution and inspiration. He notes that future AIs will “speak every language” and process vast amounts of local knowledge. This means AI can help overcome barriers of education and communication: a farmer in Nigeria or India could chat with an AI tutor in their mother tongue. And in terms of resources, Suleyman is optimistic: he believes AI-driven tech could slash energy costs by 10–100×. If that happens, entire regions could finally harness cheap power – desalinate water in deserts, cool homes in hot climates, and grow crops in arid lands. In his words, that would make “life much easier and much cheaper for everybody on the planet”.
At the same time, he is adamant on ethics. He points out that “the coming wave should be adapted to human needs”, not made to serve only a few. To that end, Suleyman helped establish DeepMind Ethics & Society, co-chairs the Partnership on AI, and speaks widely about inclusive governance. He warns that bringing in voices from the Global South will mean difficult conversations – engaging with values that may strike Western staff as “very offensive” – but insists this diversity is non-negotiable.
As he put it to a conference: if we can’t sit with people who “hold these views”, from different cultures or even adversarial governments, we have no chance to solve global problems.
Throughout his journey, Suleyman has balanced two roles: technologist and philosopher. He admits he used to be “petrified” by AI’s threats, but over time he adjusted his stance: AI is inevitable now, so “we have to wrap our arms around it and guide it”. In other words, the future is a tightrope walk, not a runaway train – and we can steer if we act collectively. He still sounds the alarm on risks, but now with a sense of urgency to act. As he explains, the more people engage and shape AI today, the more “empowering” it becomes, because then “these benefits far, far outweigh the potential risks”.
Suleyman’s message is both hopeful and practical. He sees AI as a tool that can serve students, entrepreneurs and farmers – a new digital ally to help overcome old challenges. Yet he also urges vigilance: governments, tech leaders and citizens everywhere must stay involved. In his view, we decide whether the digital species serves humanity or not. By insisting on openness, languages for all, and prioritizing human welfare (from safe clean energy to fair health advice), he offers a vision where technology truly amplifies people.
Mustafa Suleyman is quietly becoming one of AI’s most influential voices. He built parts of the future and now warns us to handle it responsibly. In his words, “technology should amplify the best of us… [and] make us happier and healthier” – but only “always on our terms, democratically decided, with benefits widely distributed”. That philosophy – seeing AI as a partner built for our needs – could make all the difference in a high-tech world that includes every corner of the globe.
Designed as a seasonal publication, Voice of Development brings together research, reporting, and analysis meant to be read deliberately and revisited over time. Winter 2026 is a starting point: an attempt to answer, with clarity and restraint, what AIs can actually do—and what they cannot do.
Disclaimer: VoD Capsules are AI-generated. They synthesize publicly available evidence from reputable institutions (UN, World Bank, AfDB, OECD, academic work, and other such official data sources). Always consult the original reports and primary data for verification.