A segment on CBS's weekly in-depth TV news program 60 Minutes last night (also shared on YouTube here) offered an inside look at Google's DeepMind and the vision of its co-founder and Nobel Prize-winning CEO, legendary AI researcher Demis Hassabis.
The interview traced DeepMind's rapid progress in artificial intelligence and its ambition to achieve artificial general intelligence (AGI): a machine intelligence with human-like versatility and superhuman scale.
Hassabis described today's AI trajectory as being on an "exponential curve of improvement," fueled by growing interest, talent, and resources entering the field.
Two years after a prior 60 Minutes interview heralded the chatbot era, Hassabis and DeepMind are now pursuing more capable systems designed to understand not only language, but also the physical world around them.
The interview came after Google's Cloud Next 2025 conference earlier this month, at which the search giant introduced a host of new AI models and features centered on its Gemini 2.5 multimodal AI model family. Google came out of that conference appearing to have taken a lead over other tech companies, including OpenAI, in providing powerful AI for enterprise use cases at the most affordable price points.
More details on Google DeepMind's 'Project Astra'
One of the segment's focal points was Project Astra, DeepMind's next-generation chatbot that goes beyond text. Astra is designed to interpret the visual world in real time.
In one demo, it identified paintings, inferred emotional states, and created a story around a Hopper painting with the line: "Only the flow of ideas moving onward."
When asked if it was growing bored, Astra replied thoughtfully, revealing a degree of sensitivity to tone and interpersonal nuance.
Product manager Bibbo Shu underscored Astra's unique design: an AI that can "see, hear, and chat about anything," a marked step toward embodied AI systems.
Gemini: Towards actionable AI
The broadcast also featured Gemini, DeepMind's AI system being trained not only to interpret the world but also to act in it, completing tasks such as booking tickets and shopping online.
Hassabis said Gemini is a step toward AGI: an AI with a human-like ability to navigate and operate in complex environments.
The 60 Minutes crew tried out a prototype embedded in glasses, demonstrating real-time visual recognition and audio responses. Could it also hint at a coming return of the pioneering but ultimately off-putting early augmented reality glasses known as Google Glass, which debuted in 2012 before being retired in 2015?
While specific Gemini model versions such as Gemini 2.5 Pro or Flash weren't mentioned in the segment, Google's broader AI ecosystem has recently introduced those models for enterprise use, which may reflect parallel development efforts.
Those integrations support Google's growing ambitions in applied AI, though they fall outside the scope of what was directly covered in the interview.
AGI as soon as 2030?
When asked for a timeline, Hassabis projected that AGI could arrive as soon as 2030, with systems that understand their environments "in very nuanced and deep ways." He suggested that such systems could be seamlessly embedded into everyday life, from wearables to home assistants.
The interview also addressed the possibility of self-awareness in AI. Hassabis said current systems are not conscious, but that future models could exhibit signs of self-understanding. Still, he emphasized the philosophical and biological divide: even if machines mimic conscious behavior, they are not made of the same "squishy carbon matter" as humans.
Hassabis also predicted major advances in robotics, saying breakthroughs could come within the next few years. The segment featured robots completing tasks from vague instructions, such as identifying a green block formed by mixing yellow and blue, suggesting growing reasoning abilities in physical systems.
Accomplishments and safety concerns
The segment revisited DeepMind's landmark achievement with AlphaFold, the AI model that predicted the structures of over 200 million proteins.
Hassabis and colleague John Jumper were awarded the 2024 Nobel Prize in Chemistry for this work. Hassabis emphasized that the advance could accelerate drug development, potentially shrinking timelines from a decade to just weeks. "I think one day maybe we can cure all disease with the help of AI," he said.
Despite the optimism, Hassabis voiced clear concerns. He cited two major risks: the misuse of AI by bad actors and the growing autonomy of systems beyond human control. He emphasized the importance of building in guardrails and value systems, teaching AI as one might teach a child. He also called for international cooperation, noting that AI's influence will touch every country and culture.
"One of my big worries," he said, "is that the race for AI dominance could become a race to the bottom for safety." He stressed the need for leading players and nation-states to coordinate on ethical development and oversight.
The segment ended with a meditation on the future: a world where AI tools could transform almost every human endeavor, and ultimately reshape how we think about knowledge, consciousness, and even the meaning of life. As Hassabis put it, "We need new great philosophers to come about… to understand the implications of this system."