
AI insiders call for industry safety and whistleblower protections (Updated: OpenAI's response)

Current and former employees of AI companies raise concerns over industry safety in an open letter.

Published on June 4, 2024

Edgar Cervantes / Android Authority
  • AI experts, including former OpenAI employees, released an open letter calling for better safety measures and whistleblower protections in the AI industry.
  • Signatories propose eliminating non-disparagement clauses, implementing anonymous reporting systems, and fostering a culture of open criticism and transparency.
  • OpenAI has responded to the letter detailing the steps it’s taking to address these concerns.

Update, June 4, 2024 (5:29 PM ET): OpenAI has reached out to Android Authority, emphasizing the company’s commitment to providing capable and safe AI systems. Here is the official statement from the OpenAI spokesperson:

We’re proud of our track record providing the most capable and safest AI systems and believe in our scientific approach to addressing risk. We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society, and other communities around the world. This is also why we have avenues for employees to express their concerns, including an anonymous integrity hotline and a Safety and Security Committee led by members of our board and safety leaders from the company.

Regarding its safety track record, OpenAI points out that it has a history of not releasing technology until necessary safeguards are in place. The company reiterated its support for government regulation and its participation in voluntary AI safety commitments.

In response to concerns about retaliation, the spokesperson confirmed that the company has released all former employees from non-disparagement agreements and removed such clauses from standard departure paperwork.

However, OpenAI also emphasized the importance of maintaining confidentiality, stating that while it values open dialogue, discussions must not jeopardize the security of its technology. The company expressed concerns about potential disclosures to “bad actors” who could misuse the technology and harm the United States’ leading position in AI development.

Original article, June 4, 2024 (3:00 PM ET): A group of current and former employees from top AI companies like OpenAI and Google DeepMind has banded together to voice concerns about the need for stronger safety measures in the rapidly growing field of AI. The open letter, signed by more than a dozen AI insiders, points out that while AI has the potential to bring incredible benefits to humanity, it also carries some serious risks.

These risks range from widening existing inequalities to the spread of misinformation, and even the possibility of extreme outcomes like a rogue AI causing human extinction. The signatories emphasized that these concerns are shared not just by them but also by governments, other AI experts, and even the companies themselves.

In a nutshell, they’re saying that AI companies might be a little too focused on making money and not focused enough on making sure their technology is safe. They believe that the current approach of letting companies self-regulate and voluntarily share information about their AI systems isn’t enough to tackle the complex and potentially far-reaching risks involved.

To address this, the employees have suggested a few ideas. They think AI companies should promise not to punish employees who raise concerns, create anonymous channels for people to report problems, and encourage open discussion about the risks of AI. They also believe that current and former employees should be able to talk openly about their concerns, as long as they don't spill any company secrets.

This call for action comes after some recent controversies in the AI world, like the disbandment of OpenAI's safety team and the departure of several key figures who were strong advocates for safety. Notably, the letter has the backing of Geoffrey Hinton, a well-respected AI pioneer who recently left Google so he could speak more freely about the potential dangers of AI.

This open letter is a not-so-gentle reminder that AI is developing so fast that the rules and regulations haven’t quite caught up yet. As AI gets more powerful and shows up in more places, it’s becoming super important to make sure it’s safe and transparent. These AI insiders are taking a stand for accountability and protection for those who speak up, hoping to ensure that as we continue to develop AI, we’re doing it in a way that’s good for everyone.

