Former OpenAI researcher says AGI could be achieved by 2027 but laments that shiny products get precedence over safety
Here's what to expect from the prospect of Artificial General Intelligence (AGI) in the next decade, according to a former OpenAI researcher.
Generative AI is a big deal in the tech landscape right now. We've seen artificial intelligence help make Microsoft the world's most valuable company, with a market valuation of over $3 trillion. Market analysts attribute that exponential growth to the Redmond giant's early lead in adopting the technology. Even NVIDIA is on the verge of hitting its iPhone moment with AI after recently overtaking Apple to become the second-most valuable company in the world on the back of high GPU demand for AI advances.
Microsoft and OpenAI are arguably among the top tech firms most heavily invested in AI. However, their partnership has stirred up controversy, with insiders indicating Microsoft has turned into "a glorified IT department for the hot startup." Along similar lines, billionaire Elon Musk says OpenAI has seemingly transformed into a closed-source de facto subsidiary of Microsoft.
It's no secret that the two companies have a complicated partnership, and the latest controversies at OpenAI aren't helping the situation. After the launch of GPT-4o, a handful of high-level employees left OpenAI. While details about their departures remain slim at best, Jan Leike, the former superalignment lead, indicated that he was worried about the trajectory of AI development at the company. He further stated that the firm was seemingly prioritizing the development of shiny products while safety took a backseat.
Given all this, it's impossible to tell what trajectory AI will take in the next few years, though NVIDIA CEO Jensen Huang indicates that we might be on the brink of the next AI wave. The CEO further states that robotics is the next big thing, with self-driving cars and humanoid robots dominating the category.
But we now have a bit of insight into what the future might hold, courtesy of a former OpenAI researcher who recently published a 165-page report on the rapid growth and adoption of AI, security, and more (via Business Insider).
Leopold Aschenbrenner worked as a researcher on OpenAI's superalignment team but was fired for allegedly leaking critical information about the company's preparedness for artificial general intelligence. However, Aschenbrenner states that the information he shared was "totally normal" since it was based on publicly available information. He suspects the company was simply looking for a way to get rid of him.
The researcher is among the OpenAI employees who refused to sign the letter calling for Sam Altman's reinstatement as CEO after he was fired by the board of directors last year, and Aschenbrenner believes this contributed to his dismissal. This came in the wake of former board members alleging that two OpenAI staffers had reached out to the board with claims of psychological abuse by the CEO, which contributed to a toxic atmosphere at the company. The former board members also indicated that some OpenAI staffers who didn't necessarily support Altman's return as CEO signed the letter because they "feared" retaliation.
OpenAI might get to the superintelligence benchmark sooner than we expected
According to Aschenbrenner's report, AI progress will continue on a steep upward trajectory. It's no secret that Sam Altman has a soft spot for superintelligence, based on how passionately he speaks about the topic in interviews. In January, the CEO admitted that OpenAI is actively exploring advances that could eventually help it unlock this incredible feat. However, he didn't disclose whether the company was taking a radical or an incremental path in chasing it down.
As you may know, superintelligence means a system with cognitive abilities that surpass human reasoning. However, there's growing concern around this benchmark and what it could mean for humanity. One AI researcher put his p(doom), the probability that AI ends humanity, at 99.9%, and argued that the only way to avoid this outcome is to stop building AI in the first place. Interestingly, Sam Altman has admitted there's no big red button to stop the progression of AI.
With the emergence of new flagship AI models like GPT-4o, which can reason across text, audio, and more, the progression doesn't seem likely to stop anytime soon. Trends in computational power and algorithmic efficiency suggest AI will continue to grow rapidly. However, there are critical concerns about power supply, with OpenAI looking into nuclear fusion as a plausible alternative for the foreseeable future.
Aschenbrenner says AI development could scale to greater heights by 2027 and surpass the capabilities of human AI researchers and engineers. These predictions aren't entirely far-fetched: GPT-4 (a model that has been referred to as mildly embarrassing at best) is already surpassing professional analysts and advanced AI models in forecasting future earnings trends without access to qualitative data. Microsoft CTO Kevin Scott shared similar sentiments and foresees newer AI models capable of passing PhD qualifying examinations.
The report also indicates that more corporations will join the AI fray and invest trillions of dollars in the systems needed to support AI advances, including data centers, GPUs, and more. This comes amid reports of Microsoft and OpenAI investing over $100 billion in a project dubbed Stargate to free themselves from an overreliance on NVIDIA for GPUs.
Security, privacy, and regulation remain core priorities as AI advances
Reports suggest AI will eventually become smarter than people, take over their jobs, and turn work into a hobby. There's rising concern about the implications this might have for humanity. Even OpenAI CEO Sam Altman sees a need for an independent international agency to ensure all AI advances are safe and regulated like the airline industry, to avert "catastrophic outcomes."
Perhaps more interesting, Aschenbrenner's report suggests that only a few hundred people truly understand AI's impact on the future, most of them working at AI labs in San Francisco (potentially a reference to OpenAI staffers).