OpenAI Researchers Warned Board of AI Breakthrough Before CEO's Ouster

Before OpenAI’s CEO Sam Altman temporarily stepped away, a group of the company’s researchers penned a letter to the board of directors. This letter raised alarms about a new artificial intelligence breakthrough they believed could pose risks to humanity. This information was shared by two individuals with knowledge of the situation, as reported by Reuters.

This letter, along with a particular AI algorithm, was pivotal in the events leading up to the board's decision to remove Altman, a renowned figure in the field of generative AI, according to the sources.

Before Altman's eventual return on Tuesday evening, more than 700 OpenAI employees had considered resigning and moving to Microsoft in a show of support for him. The letter was one of several factors the board cited in dismissing Altman, among them concerns about commercializing technological advances before fully grasping their implications.

Reuters was not able to obtain a copy of the letter for review, and the staff members who authored the letter did not reply to requests for comments.

Following inquiries from Reuters, OpenAI, which declined to comment publicly, acknowledged internally to its staff both a project named Q* and a letter sent to the board before the weekend's events, according to one of the sources.

A spokesperson for OpenAI stated that Mira Murati, a senior executive at the company, informed employees about certain media reports without confirming their veracity.

Some individuals within OpenAI view Q* (spoken as Q-Star) as a potential major advancement in the company’s quest to develop artificial general intelligence (AGI), as told to Reuters. AGI, as defined by OpenAI, refers to autonomous systems capable of outperforming humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, said one source, who asked to remain anonymous because they were not authorized to speak for the company.

Although the model currently performs math only at the level of grade-school students, its success on these tasks has made researchers very optimistic about Q*'s future prospects, the source added. Reuters, however, was not able to independently verify the claimed capabilities of Q*.

Researchers in the field of generative AI view mathematics as a key area of development. Presently, generative AI excels in tasks like writing and language translation by statistically predicting subsequent words, but answers to the same question can vary significantly.

Achieving proficiency in mathematics, where there is typically only one correct answer, would suggest that AI has developed greater reasoning skills akin to human intelligence. This could have significant applications in new scientific research, as believed by AI researchers.

Unlike a standard calculator, which can perform only a limited set of operations, AGI can generalize, learn, and comprehend. In their letter, the researchers highlighted both the capabilities and the potential risks of AI, though the specific safety concerns were not detailed by the sources.

The concept of highly intelligent machines posing risks, such as potentially deciding that the destruction of humanity is in their best interest, has been a long-standing discussion among computer scientists.

Additionally, the researchers flagged the work of an "AI scientist" team, whose existence was confirmed by multiple sources. This team, formed by merging the earlier "Code Gen" and "Math Gen" groups, is focused on enhancing existing AI models to improve their reasoning abilities and eventually carry out scientific research, one of the sources said.

Under Altman’s leadership, ChatGPT became one of the fastest-growing software applications in history. His efforts attracted significant investment and computing resources from Microsoft, advancing the company’s pursuit of AGI.

Besides revealing a range of new tools earlier in the month, Altman hinted at a summit in San Francisco that significant progress was on the horizon.

“Four times in OpenAI’s history, the most recent being just a few weeks ago, I’ve been in the room where we’ve pushed back the veil of ignorance and advanced the frontier of discovery. Experiencing this is a professional highlight of my career,” he expressed at the Asia-Pacific Economic Cooperation summit.

The day following his speech, the board decided to terminate Altman’s position.

Daily True News
