
AI apocalypse: Why and how Silicon Valley has become terrified of its own creation

AI's potential shines brighter every day, but even its biggest supporters now urge caution.

Published on April 8, 2024

A Google Pixel Watch displays the Google Assistant.
Kaitlyn Cimino / Android Authority

Thanks to ChatGPT and its many rivals, artificial intelligence has gone from a phrase that once evoked boundless enthusiasm to one that now sparks a sense of dread. It’s not hard to see why — the technology’s meteoric rise is unprecedented. Unlike the metaverse and previous hype cycles, AI products are available today, and their capabilities are advancing at a shocking pace. However, it’s this very potential that has raised serious concerns among tech icons and AI experts. But is Silicon Valley right? Could a seemingly helpful assortment of chatbots really lead to humanity’s downfall, or even extinction?

Modern AI: A brewing storm


Even if we ignore the possibility of an apocalypse for a minute, it’s impossible to overlook how AI is affecting the livelihoods of thousands, if not millions, of people today. While image generators and chatbots may seem harmless, they’ve already displaced everyone from customer support agents to graphic designers. And unlike industrialization in the 18th century, you can’t exactly make the argument that AI will create new jobs in its wake.

Unlike traditional machines, modern AI systems can reason and self-correct, reducing the need for human supervision. Just a few weeks ago, AI startup Cognition Labs unveiled Devin — what it calls the “world’s first fully autonomous software engineer.” Beyond generating code, Devin can identify and fix bugs, train and deploy new AI models of its own, contribute to open-source projects, and even participate in community discussions. Unfortunately, this autonomy has serious implications that extend far beyond simple job loss.

The greatest danger is not AI that out-thinks us, but one that can deceive us.

Take the malicious backdoor discovered in the open-source compression tool XZ Utils last month. While XZ is little known outside of developer communities, millions of Linux-based servers rely on it and the backdoor could have granted attackers remote control over many critical systems. However, this wasn’t a traditional hack or exploit. Instead, the attacker cultivated a reputation as a helpful contributor over several years before gaining the community’s trust and slipping in the code for a backdoor.

A sophisticated AI system could automate such attacks at scale, handling every aspect from malicious code generation to mimicking human discussion. Of course, autonomous language-based agents aren’t very capable or useful today. But an AI that can seamlessly blend into developer communities and manipulate key infrastructure seems inevitable. OpenAI and Cognition Labs are building guardrails around their AI products, but the reality is that there’s no shortage of uncensored language models for attackers to exploit.

Worse still, experts warn that such AI-enabled deception and skirmishes could be just the tip of the iceberg. The real risk is that AI might one day evolve beyond human control.

The probability of doom


An AI capable of evading human control sounds like a sci-fi plot today, but to many in the tech industry, it’s an inevitability. As reported by The New York Times, a new statistic dubbed p(doom) has gained traction in Silicon Valley. Short for “probability of doom,” the metric started as a tongue-in-cheek way of quantifying how concerned someone is about an AI-driven apocalypse. But with each passing day, it’s turning into a serious discussion.

Estimates vary, but the notable part is that virtually nobody in the industry rates their probability of doom at zero. Even those deeply invested in the technology, like Anthropic co-founder Dario Amodei, peg their p(doom) at a concerning 10 to 25 percent. And that’s not even counting the many AI safety researchers who have quoted figures higher than 50 percent.

This fear of doom stems from tech companies being locked in a race to develop the most powerful AI possible. And that means eventually using AI to create better AI until we arrive at a superintelligent system with capabilities beyond human comprehension. If that sounds far-fetched, it’s worth noting that large language models like Google’s Gemini already show emergent capabilities, such as language translation, that go beyond what they were explicitly trained to do.

In Silicon Valley, even those building AI are pessimistic about humanity's chances.

The big question is whether a superintelligent AI will align with human values or harm us in its quest for ruthless efficiency. This situation is perhaps best explained by a thought experiment known as the paperclip maximizer. It posits that a seemingly harmless AI, when tasked with producing as many paperclips as possible, could consume the entire world’s resources without regard for humans or our values. The Swedish philosopher Nick Bostrom theorized:

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips.

The safety guardrails that currently exist around platforms like Midjourney and ChatGPT aren’t enough, especially since their very creators often can’t explain erratic behavior. So what’s the solution? Silicon Valley doesn’t have an answer and yet, Big Tech continues innovating as recklessly as ever. Just last month, Google and Microsoft cut jobs in their respective trust and safety teams. The latter reportedly laid off its entire team dedicated to guiding ethical AI innovation.

Say what you will about Elon Musk and his many controversies, but it’s hard to disagree with his stance that the AI sector desperately needs reform and regulation. Following widespread concerns from Musk, Apple co-founder Steve Wozniak, and others, OpenAI agreed to an interim pause on training new models last year. However, GPT-5 is now under active development and the race to achieve superintelligence continues, leaving the question of safety and humanity’s future hanging in the balance.
