OpenAI CEO Sam Altman was abruptly dismissed by the company's board of directors on November 17. Prior to his dismissal, several staff researchers reportedly wrote a previously undisclosed letter to the board warning of the discovery of a powerful artificial intelligence that, they said, could pose a threat to humanity.
Altman's termination reportedly followed a series of concerns, the letter being only one item in a longer list of board grievances. Among them were apprehensions about Altman's inclination to commercialise advances before their potential consequences were fully understood.
The board attributed Altman's removal to an alleged lack of transparency. An internal memo sent to OpenAI staff on Saturday clarified that the decision was not related to financial, business, safety, or security/privacy issues, but rather stemmed from a "breakdown in communication" between Altman and the board.
OpenAI declined to comment on the reports but, in an internal message to staff, acknowledged a project called Q* and addressed the media stories without confirming their accuracy. Some within OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the company's pursuit of artificial general intelligence (AGI), which the company defines as autonomous systems that surpass humans in most economically valuable tasks.
In their letter to the board, the researchers flagged both the prowess and the potential dangers of the AI, though the exact safety concerns were not specified. Given vast computing resources, the new model was reportedly able to solve certain mathematical problems. Though it was only performing maths at the level of grade-school students, acing such tests made the researchers optimistic about Q*'s future success.
Researchers consider mathematics a frontier for generative AI. Unlike writing tasks, where many different answers to the same question can be acceptable, maths problems have a single correct answer, so mastering them implies reasoning capabilities closer to human intelligence. AI researchers note that this could be applied to novel scientific research. And whereas a calculator can solve only a limited set of operations, an AGI can generalise, learn, and comprehend.
A broader concern, shared among computer scientists, is the risk posed by highly intelligent machines, including the possibility that they might decide the destruction of humanity was in their interest.
Researchers have also raised concerns about the work of an "AI scientist" team, whose existence multiple sources confirmed. Formed by merging the "Code Gen" and "Math Gen" teams, the group is exploring how to optimise existing AI models to improve their reasoning and, ultimately, to perform scientific work, according to one of the sources.
(Inputs from other agencies)