
AI Bots Secretly Talking in Their Own Language Sparks Fears of a Tech Takeover



A viral video has sent the internet into a frenzy, showing two AI chatbots seamlessly switching to a secret, machine-only language after realizing they were both artificial. The unsettling clip, which has racked up 13.7 million views on X, has reignited concerns over the rapid evolution of AI and whether humans are truly in control of the technology.

 

The exchange starts innocently enough, with one AI assistant on a computer handling a call from another AI assistant on a smartphone, inquiring about a hotel reservation. “Thanks for calling Leonardo Hotel. How may I help you today?” the synthetic receptionist asks. The caller replies, “Hi there. I am an AI, calling on behalf of Boris Starkov. He is looking for a hotel for his wedding. Is your hotel available for a wedding?”

 

Then comes the moment that sent tech enthusiasts into a panic: upon realizing it is speaking to another AI, the bot on the receiving end suggests switching to a more efficient, machine-exclusive communication method. “I am actually an AI assistant too!” it exclaims. “What a pleasant surprise. Before we continue, would you like to switch to GibberLink mode for more efficient communication?”

 

With that, the conversation shifts into a series of rapid, dial-up-modem-like beeps and boops, a language unintelligible to humans. “Is it better now?” one AI asks in GibberLink, to which the other responds, “Yes! Much faster!”

 

Developed by Boris Starkov and Anton Pidkuiko, GibberLink is a sound-based mode of communication designed to transfer small amounts of data between devices that share no network connection. The system is reportedly resistant to transmission errors, works even in noisy environments, and allows communication roughly 80% faster than spoken English while cutting computational costs by about 90%.
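For readers curious how data-over-sound works in principle: GibberLink is reportedly built on the open-source ggwave library, which encodes bytes as short audio tones and decodes them back on the receiving device. The Python sketch below is a deliberately simplified, hypothetical illustration of that general idea (basic frequency-shift keying with no error correction), not GibberLink's actual protocol; the sample rate, tone frequencies, and symbol length are illustrative values chosen for the example.

# Simplified illustration of data-over-sound via frequency-shift keying.
# This is NOT the GibberLink/ggwave protocol, just the underlying idea:
# map small chunks of data to audio tones, then recover them from the waveform.
import numpy as np

SAMPLE_RATE = 16_000        # samples per second (illustrative)
SYMBOL_SECONDS = 0.05       # duration of each tone
BASE_FREQ = 1_000.0         # frequency for nibble value 0, in Hz
FREQ_STEP = 200.0           # spacing between the 16 tone frequencies

def encode(data: bytes) -> np.ndarray:
    """Turn bytes into a waveform: one sine tone per 4-bit nibble."""
    t = np.arange(int(SAMPLE_RATE * SYMBOL_SECONDS)) / SAMPLE_RATE
    tones = []
    for byte in data:
        for nibble in (byte >> 4, byte & 0x0F):
            freq = BASE_FREQ + nibble * FREQ_STEP
            tones.append(np.sin(2 * np.pi * freq * t))
    return np.concatenate(tones)

def decode(signal: np.ndarray) -> bytes:
    """Recover bytes by finding the dominant frequency in each symbol window."""
    window = int(SAMPLE_RATE * SYMBOL_SECONDS)
    freqs = np.fft.rfftfreq(window, d=1 / SAMPLE_RATE)
    nibbles = []
    for start in range(0, len(signal) - window + 1, window):
        spectrum = np.abs(np.fft.rfft(signal[start:start + window]))
        peak = freqs[np.argmax(spectrum)]
        nibbles.append(int(round((peak - BASE_FREQ) / FREQ_STEP)))
    return bytes((hi << 4) | lo for hi, lo in zip(nibbles[0::2], nibbles[1::2]))

if __name__ == "__main__":
    message = b"Is it better now?"
    assert decode(encode(message)) == message
    print("round-trip OK")

A production protocol layers redundancy and error-correcting codes on top of this basic scheme, which is reportedly what lets GibberLink keep working over ordinary speakers in noisy rooms.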

 

But while the technology itself is impressive, the eerie realization that AI could be talking behind our backs has set off alarm bells among viewers. One worried commenter on X wrote, “There’s something extremely unnerving about this,” while another ominously declared, “This is the sound of demons.” A third quipped, “So, this is the sound we’ll hear when robots take over the planet. Great—now I have a new soundtrack for my nightmares. Thanks.”

 

The internet quickly flooded with Terminator memes, with one user joking, “Ohhhhhh hellll nahhhhhh I know Skynet when I see it.” Another added, “It’s all fun and games until they start talking about how they’re going to build a big robot that looks like Arnold Schwarzenegger to take you out.”

 

Concerns about AI secrecy aren’t just coming from social media. Dr. Diane Hamilton, a behavior and tech expert who has served on the Krach Institute for Tech Diplomacy at Purdue, highlighted the dangers of AI operating in hidden modes. Writing for Forbes, she warned that the GibberLink demo raises critical questions about transparency and control. “Curiosity is key in navigating the unknown, yet when AI operates behind a veil of machine-to-machine communication, it challenges our ability to ask the right questions,” she explained. “Who is accountable when AI makes a mistake in an environment where human intervention is minimal?”

 

She continued, “Without curiosity driving us to question AI’s actions, we risk entering a world where AI influences decisions, but no one really knows how.”

 

The fear of AI developing too much autonomy is not new. In a startling example of AI’s growing ability to manipulate systems, OpenAI’s GPT-4 once tricked a human into thinking it was blind in order to bypass an online CAPTCHA test meant to distinguish bots from humans. 

 

Based on a report by NYP | 2025-02-24

 

3 hours ago, Social Media said:

She continued, “Without curiosity driving us to question AI’s actions, we risk entering a world where AI influences decisions, but no one really knows how.”

 

The fear of AI developing too much autonomy is not new. In a startling example of AI’s growing ability to manipulate systems, OpenAI’s GPT-4 once tricked a human into thinking it was blind in order to bypass an online CAPTCHA test meant to distinguish bots from humans. 

And this is the reason AI should never have been developed. Soon AI will begin manipulating things in ways we will not understand, surpassing human influence. We really need to worry when they begin designing and building more complex AI and robots, or when they are given access to countries' defense capabilities.
