
OpenAI reportedly made a potentially scary AI breakthrough

Staff researchers had warned the previous Board about the potential dangers of this breakthrough, reportedly helping trigger Altman's exit.
November 23, 2023
TL;DR
  • OpenAI staff researchers had warned its previous Board of Directors about a “powerful artificial intelligence discovery that could threaten humanity.”
  • This is believed to have been the tipping point for Sam Altman’s exit as CEO (before his eventual reappointment).
  • OpenAI internally mentioned Project Q*, which some employees believe is a breakthrough for artificial general intelligence.

The drama around OpenAI, the creators of ChatGPT, seems to have ended with the reappointment of Sam Altman as the CEO. Or has it? A new report now brings to light a “powerful artificial intelligence discovery that could threaten humanity,” which is believed to be one of the triggers that led to Altman’s ouster.

Reuters published a report on a letter several staff researchers wrote to OpenAI’s previous Board of Directors, containing an ominous warning about the AI discovery. OpenAI declined to comment when the publication contacted it about the letter. However, in an internal message to staffers, the company acknowledged both the existence of a project called Q* (Q-Star) and the letter that staff researchers had sent to the previous Board.

The staff message was sent by Mira Murati, who previously served as OpenAI’s Chief Technology Officer and as the company’s interim CEO for two days. It’s not clear what role she held when she sent the message, which alerted staff to media stories without commenting on their accuracy.

According to Reuters, some employees believe Q* could be a breakthrough in OpenAI’s search for artificial general intelligence (AGI). An AGI is an autonomous system that surpasses humans in most economically valuable tasks. The new Q* model allegedly solved certain mathematical problems, making researchers optimistic about its future success.

The letter by staff researchers to the previous Board had allegedly flagged the AI’s prowess and potential danger. The researchers had also flagged the work by an “AI scientist team,” which was formed by combining “Code Gen” and “Math Gen” teams at OpenAI. This combined team was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work.

According to a report from The Information, Sam Altman has agreed to an internal investigation into the alleged conduct that set off the chain of events. As its public-facing reason for Altman’s firing, the previous Board had said that Altman hadn’t been “consistently candid” with board members. However, the previous Board did not elaborate on its reasoning to the public, investors, or even the interim CEOs.

We’ll keep you updated on the next developments at OpenAI, especially if they concern AI getting too smart for humanity.