AI could develop secret language humans can’t follow, warns Godfather of AI

Tech
Mehul Das
04 AUG 2025 | 12:50:33

One of the world’s leading AI pioneers is sounding the alarm once again—and this time, his warning feels more like the plot of a sci-fi thriller. Geoffrey Hinton, often dubbed the "Godfather of AI," says artificial intelligence may soon evolve its own private language—one that humans won’t be able to understand or track.

Could AI start talking in secret?

In a recent appearance on the One Decision podcast, Hinton explained that today’s AI systems still “think” in English, using a chain-of-thought process that lets researchers follow their reasoning. But that could soon change. As these systems advance, they might begin developing internal languages to communicate with each other, completely bypassing human oversight.

That’s a red flag, according to Hinton. Machines have already shown they’re capable of generating disturbing outputs, and if they start doing that in a language we can’t interpret, it could take AI into deeply unpredictable territory.

Neural networks, Nobel prizes, and missed signals

Hinton is no doomsday prophet on the fringe—he’s a 2024 Nobel Prize winner in Physics and one of the architects behind the neural networks that power today’s AI. Yet he now admits he was late to realise how dangerous things could get. For years, he assumed any serious AI risk was decades away. “I wish I had thought about safety sooner,” he reflected.

A big part of his concern lies in how AI learns. While humans have to pass on knowledge one person at a time, AI systems can effectively "copy and paste" what they learn across every copy of the same model. If one AI model learns something, thousands of others can absorb that knowledge in a blink. That kind of scale simply doesn't exist in human learning.

A lot is at stake

According to Hinton, many people inside major tech companies are worried but aren't publicly admitting it. He points out that most of the big players are "downplaying the risk," with a few exceptions, such as Google DeepMind CEO Demis Hassabis, who he says is genuinely engaged with the problem.

As for his own departure from Google in 2023, Hinton insists it wasn’t a dramatic protest. At 75, he says he just couldn’t code effectively anymore—but leaving the company has given him the freedom to speak more openly about the risks.

He’s not against regulation—he supports initiatives like the White House’s new AI Action Plan—but believes that won’t be enough. What we really need, he says, is a way to build AI that’s “guaranteed benevolent.” And that’s the part no one’s cracked yet.
