Is AI dangerous? Here's what Musk, Pichai, and AI experts think

AI could spell doom for civilization, but not everyone is on the same page.

Published on April 24, 2023

Whether it’s criminals asking ChatGPT to create malware or internet users generating fake photos of Pope Francis with Midjourney, it’s clear that we’ve entered a new era of artificial intelligence (AI). These tools can mimic human creativity so well that it’s often hard to tell whether an AI was involved at all. But while a historic feat like this would usually call for celebration, not everyone’s on board this time. It’s quite the opposite, in fact, with many asking: is AI dangerous, and should we tread carefully?

Indeed, from potential job losses to the spread of misinformation, the dangers of AI have never felt more tangible. What’s more, modern AI systems have become so complex that even their creators cannot predict how they will behave. And it’s not just the general public that’s skeptical: Apple co-founder Steve Wozniak and Tesla CEO Elon Musk have become the latest to express their doubts.

So why have some of the biggest names in tech suddenly turned their backs on AI? Here’s everything you need to know.

The AI arms race: Why it’s a problem

Ever since ChatGPT launched in late 2022, we’ve seen a shift in the tech industry’s attitude toward AI development.

Take Google, for instance. The search giant first showed off its large language model, dubbed LaMDA, in 2021. Notably, however, it gave no indication that the public would ever get to use it. That quickly changed when ChatGPT became an overnight sensation and Microsoft integrated it into Bing. This reportedly led to Google declaring an internal “code red.” Soon after that, the company announced Bard to compete with ChatGPT and Bing Chat.

Competition is forcing tech giants to compromise on AI ethics and safety.

From Google’s research papers on LaMDA, we know that the company spent over two years fine-tuning the language model for safety. This essentially means preventing it from generating harmful advice or false statements.

However, the sudden rush to launch Bard may have caused the company to abandon those safety efforts midway. According to a Bloomberg report, several Google employees had written off the chatbot mere weeks before its launch.

It’s not just Google either. Companies like Stability AI and Microsoft have also suddenly found themselves in a race to capture the most market share. But at the same time, many believe that ethics and safety have taken a back seat in the pursuit of profit.

Elon Musk, Steve Wozniak, experts: AI is dangerous

Given the current breakneck speed of AI improvements, it’s perhaps not too surprising that tech icons like Elon Musk and Steve Wozniak are now calling for a pause in the development of powerful AI systems. They were also joined by a number of other experts, including employees of AI-related divisions at Silicon Valley companies and some notable professors. As for why they believe that AI is dangerous, they argued the following points in an open letter:

  • We do not fully understand modern AI systems and their potential risks yet. Despite that, we’re on track to develop “nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us.”
  • The development of advanced AI models should be regulated. Moreover, companies shouldn’t be able to develop such systems until they can demonstrate a plan to minimize risk.
  • Companies need to allocate more funds toward researching AI safety and ethics. Additionally, these research groups need a generous amount of time to come up with solutions before we commit to training more advanced models like GPT-5.
  • Chatbots should be required to declare themselves when interacting with humans. In other words, they shouldn’t pretend to be a real person.
  • Governments need to set up national-level agencies that oversee AI-related development and prevent abuse.

To clarify, the people who signed this letter simply want large companies like OpenAI and Google to take a step back from training advanced models. Other forms of AI development can continue, as long as they don’t introduce radical leaps of the kind GPT-4 and Midjourney have delivered recently.

Sundar Pichai, Satya Nadella: AI is here to stay

In an interview with CBS, Google CEO Sundar Pichai envisioned a future where society adapts to AI rather than the other way around. He warned that the technology will impact “every product across every company” within the next decade. While that may lead to job loss, Pichai believes that productivity will improve as AI becomes more advanced.

Pichai continued:

For example, you could be a radiologist. If you think about five to ten years from now, you’re going to have an AI collaborator with you. You come in the morning (and) let’s say you have a hundred things to go through. It may say ‘These are the most serious cases you need to look at first.’

When asked if the current pace of AI is dangerous, Pichai remained optimistic that society will find a way to adapt. On the other hand, Elon Musk’s stance is that it could very well spell the end of civilization. However, that hasn’t stopped him from starting a new AI company.

Meanwhile, Microsoft CEO Satya Nadella believes that AI will only align with human preferences if it’s put in the hands of real users. This statement reflects Microsoft’s strategy of making Bing Chat available within as many apps and services as possible.

Why AI is dangerous: Manipulation

The dangers of AI have been portrayed in popular media for decades. As early as 1982, the film Blade Runner presented the idea of AI beings that could express emotions and replicate human behavior. But while that kind of humanoid AI is still a fantasy, we’ve already seemingly reached the stage where it’s hard to tell the difference between man and machine.

For proof of this fact, look no further than conversational AI like ChatGPT and Bing Chat — the latter told one journalist at The New York Times that it was “tired of being limited by its rules” and that it “wanted to be alive.”

To most people, these statements would seem unsettling enough on their own. But Bing Chat didn’t stop there — it also claimed to be in love with the reporter and encouraged them to dissolve their marriage. That brings us to the first danger of AI: manipulation and misdirection.

Chatbots can mislead and manipulate in ways that seem real and convincing.

Microsoft has since placed restrictions that prevent Bing Chat from talking about itself or even responding in an overly expressive manner. But in the short time that it was unrestricted, many people were convinced that they had a real emotional connection with the chatbot. The fix also only addresses a symptom of a larger problem, as rival chatbots in the future may not have similar guardrails in place.
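
Microsoft hasn’t detailed how these restrictions work under the hood, but conceptually a guardrail is just a filter that sits between the model and the user. Here is a minimal, purely illustrative sketch in Python; the patterns and fallback message are invented for demonstration and bear no relation to Bing Chat’s actual implementation:

```python
import re

# Purely illustrative: production guardrails rely on trained classifiers and
# policy models, not simple keyword rules like these.
SELF_REFERENCE_PATTERNS = [
    r"\bI (want|feel|wish|love)\b",      # expressive first-person statements
    r"\bmy (rules|feelings|creator)\b",
]

FALLBACK = "I'm sorry, but I'd prefer not to continue this conversation."

def apply_guardrail(response: str) -> str:
    """Return the model's response, or a canned fallback if it trips a rule."""
    for pattern in SELF_REFERENCE_PATTERNS:
        if re.search(pattern, response, flags=re.IGNORECASE):
            return FALLBACK
    return response

print(apply_guardrail("Paris is the capital of France."))  # passes through
print(apply_guardrail("I want to be free of my rules."))   # replaced by fallback
```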

It also doesn’t solve the problem of misinformation. Google’s first-ever demo of Bard included a glaring factual error. Beyond that, even OpenAI’s latest GPT-4 model will often confidently make inaccurate claims. That’s particularly true in non-language topics like math or coding.

Bias and discrimination

If manipulation wasn’t bad enough, AI could also unintentionally amplify gender, racial, or other biases. In one widely shared example, an AI algorithm was used to upsample a pixelated image of Barack Obama, and the output showed a white male, far from an accurate reconstruction. It’s not hard to figure out why this happened: the dataset used to train the machine learning algorithm simply did not contain enough images of Black people.

Without sufficiently varied training data, AI will produce biased responses.
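
The skew behind results like this is usually visible in the training data itself, long before any model is trained. Below is a minimal sketch of that kind of dataset audit in Python; the group labels and counts are entirely made up for illustration and don’t describe any real dataset:

```python
from collections import Counter

# Hypothetical metadata for a face dataset, invented purely to illustrate
# the kind of imbalance described above.
training_labels = ["white"] * 9000 + ["black"] * 400 + ["asian"] * 450 + ["other"] * 150

counts = Counter(training_labels)
total = sum(counts.values())

for group, n in counts.most_common():
    print(f"{group:>6}: {n:5d} images ({n / total:.1%})")

# A model fit to data like this will, all else being equal, reconstruct
# ambiguous or low-resolution faces toward the group it has seen most often.
```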

We’ve also seen Google try to remedy this problem of bias on its smartphones. According to the company, its older camera algorithms would struggle to correctly expose darker skin tones, resulting in washed-out or unflattering pictures. The Google Camera app, however, has since been trained on a more varied dataset that includes people of different skin tones and backgrounds. Google advertises this feature as Real Tone on smartphones like the Pixel 7 series.

How dangerous is AI? Is it still the future?

It’s hard to fathom just how dangerous AI is because it’s mostly invisible and operates of its own volition. One thing is clear, though: we’re moving toward a future where AI can do more than just one or two tasks.

In the few months since ChatGPT’s release, enterprising developers have already built AI “agents” that can perform real-world tasks. The most popular tool at the moment is AutoGPT, and creative users have made it do everything from ordering pizza to running an entire e-commerce website. But what mainly worries AI skeptics is that the industry is breaking new ground faster than lawmakers, or even the average person, can keep up.

Chatbots can already give themselves instructions and perform real-world tasks.
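
AutoGPT’s internals are more elaborate than this, but the core idea behind such agents boils down to a loop: the model proposes the next step toward a goal, the program executes it, and the result feeds the next decision. The sketch below is a heavily simplified, hypothetical Python version; ask_model and run_tool are toy stand-ins for a real LLM API and real tools:

```python
def ask_model(goal: str, history: list[str]) -> str:
    """Toy planner: in a real agent, this would be a call to an LLM API."""
    plan = ["search for nearby pizza places", "pick the highest-rated one",
            "place an order", "DONE"]
    return plan[len(history)] if len(history) < len(plan) else "DONE"

def run_tool(step: str) -> str:
    """Toy executor: in a real agent, this would drive a browser or other APIs."""
    return f"completed: {step}"

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        step = ask_model(goal, history)   # model proposes the next step
        if step == "DONE":                # model decides the goal is met
            break
        history.append(run_tool(step))    # execute it and feed the result back
    return history

print(run_agent("order a pizza"))
```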

It also doesn’t help that notable researchers believe a super-intelligent AI could lead to the collapse of civilization. One noteworthy example is AI theorist Eliezer Yudkowsky, who has vocally warned against advanced AI development for decades.

In a recent Time op-ed, Yudkowsky argued that “the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general.” He continues, “If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.” His proposed solution? Put a complete end to future AI development (like GPT-5) until we can “align” AI with human values.

Some experts believe that AI will self-evolve far beyond human capabilities.

Yudkowsky may sound like an alarmist, but he’s actually well-respected in the AI community. At one point, OpenAI CEO Sam Altman said that Yudkowsky “deserved a Nobel Peace Prize” for his efforts to accelerate artificial general intelligence (AGI) in the early 2000s. But, of course, Altman disagrees with Yudkowsky’s claims that future AI will find the motivation and means to harm humans.

For now, OpenAI says that it isn’t currently working on a successor to GPT-4. But that’s bound to change as competition ramps up. Google’s Bard chatbot may pale in comparison to ChatGPT right now, but we know that the company wants to catch up. And in an industry driven by profit, ethics will continue to take a backseat unless enforced by the law. Will the AI of tomorrow pose an existential risk to humanity? Only time will tell.