What is Artificial General Intelligence (AGI)? Explained

As research advances in the AI field, the concept of AGI (Artificial General Intelligence) is starting to move from theory to reality. There is no doubt that we are living in the initial phase of the AI age, with many Generative AI applications already available. But what is the next step? Is AGI the future? To understand what Artificial General Intelligence (AGI) is, and how it could impact humanity, read on.

What is Artificial General Intelligence or AGI?

As far as the definition of AGI is concerned, there is no broad consensus among researchers and AI labs about what it means. However, the term AGI – Artificial General Intelligence – is chiefly understood as a type of AI system that can match or exceed human capabilities, especially in cognitive tasks.

Many AI labs have their own definition of AGI. In February 2023, OpenAI explained what AGI means to them: “AI systems that are generally smarter than humans“. The company wants to develop AGI that can benefit all of humanity. In the same breath, OpenAI acknowledges the “serious” risks associated with a technology like AGI, which has the potential for “misuse, drastic accidents, and societal disruption“.

Shane Legg, one of the co-founders of DeepMind (now Google DeepMind) and currently its Chief AGI Scientist, coined the term AGI along with his fellow researcher Ben Goertzel. Legg says that AGI is a very broad concept. DeepMind’s position on AGI is that such a system “must not only be able to do a range of tasks, it must also be able to learn how to do those tasks, assess its performance, and ask for assistance when needed.”

The Google DeepMind team has chalked out different levels of AGI: emerging, competent, expert, virtuoso, and superhuman. DeepMind researchers believe that current frontier AI models have shown some emergent behavior.

Characteristics of AGI

Just as there is no broad consensus on the definition of AGI, its characteristics are also not well-defined. However, AI researchers say that a human-level AGI should be able to reason like humans and make decisions even under uncertainty. It should have knowledge of just about anything, including common-sense understanding.

In addition, an AGI system should be able to plan and acquire new skills. It should solve open-ended questions and communicate in natural language. Cognitive scientists argue that AGI systems should also have traits like imagination to form novel ideas and concepts. AGI characteristics also include physical traits such as the ability to see, hear, move, and act.

There are many tests to check whether AI models have reached AGI, including the famous Turing Test. Named after the computer scientist Alan Turing, the test checks whether an AI system can mimic human conversation well enough that a person can’t tell they are talking to a machine.

Many believe that current AI chatbots have already passed the Turing Test. However, passing it only shows that a machine can hold a conversation indistinguishable from a human’s, not that it matches human intelligence overall. Other tests include The Employment Test, proposed by Nils J. Nilsson, which says that a machine should be able to perform economically important jobs as well as humans do.

Steve Wozniak, the co-founder of Apple, has proposed The Coffee Test to evaluate an intelligent AI system. He says that a sufficiently intelligent AI system should be able to find the coffee machine, add water and coffee, and complete the brewing process without any human input.

Levels of AGI

OpenAI believes that AGI won’t be achieved in one shot; instead, there will be multiple ascending levels of progress before AGI is finally reached. Bloomberg recently reported that OpenAI has come up with five levels of progress toward realizing the AGI goal.

The first one is Conversational AI, where we currently are with chatbots like ChatGPT, Claude, and Gemini. The second level is Reasoning AI, where AI models can reason like humans; we are not there yet. The third level is Autonomous AI, where AI agents can perform actions autonomously on the user’s behalf.

The fourth level is Innovating AI, where intelligent AI systems can innovate and improve themselves. And finally, the fifth level is Organizational AI, where an AI system can perform actions and accomplish the tasks of an entire organization without requiring humans in the loop. Such a system can fail, learn, improve, and work together with multiple agents to carry out tasks in parallel.

AGI Progress and Timescale: How Close Are We to Achieving It?

Sam Altman, the CEO of OpenAI, believes that we can reach the fifth level, Organizational AI, in the next ten years. Many experts have different predictions for when the AGI dream will be realized. Ben Goertzel predicts that we could achieve AGI in the next few decades, possibly in the 2030s.

Geoffrey Hinton, popularly known as the “godfather of AI”, initially expressed uncertainty about the timeline of AGI. Now, he believes that a general-purpose AI might be just 20 years away.

François Chollet, the creator of Keras and a prominent AI researcher at Google, is of the opinion that AGI won’t be achieved by scaling current technologies like LLMs. He has even developed a new benchmark called ARC-AGI and started a public competition for current AI models to solve it. Chollet argues that AGI development has stalled and that we need new ideas to make progress.

Yann LeCun, the chief AI scientist at Meta, also says that LLMs have limitations and are not sufficient to achieve AGI, as they lack intelligence and reasoning capabilities.

Existential Risk From AGI

While AI development is in full swing around the world, many experts believe that achieving AGI could imperil humanity. OpenAI itself admits the serious risks associated with the technology. Geoffrey Hinton, after quitting Google, told CBS News that “it’s not inconceivable” when asked if AI could wipe out humanity. He further stressed the need to find ways to control AI systems far more intelligent than us.

Since an AGI system could match human capabilities, it may lead to mass unemployment in many industries, which would only exacerbate economic hardship around the world. OpenAI has already published a paper detailing which jobs could be replaced by ChatGPT. Apart from that, such a powerful system poses risks of misuse or unintended consequences if it is not aligned with human values.

Elon Musk has also raised the alarm about the dangers of AGI, arguing that its development should align with human interests. Last year, Musk, along with prominent stalwarts of the industry, called for a pause on giant AI experiments.

Ilya Sutskever, OpenAI co-founder and former chief scientist, left the company to found a startup called Safe Superintelligence. He says, “AI is a double-edged sword: it has the potential to solve many of our problems, but it also creates new ones. The future is going to be good for AI regardless, but it would be nice if it were good for humans as well.“

Sutskever is now working to align powerful AI systems with human values to prevent catastrophic outcomes for humanity. Timnit Gebru, a former AI researcher at Google, was fired after publishing a paper highlighting the risks associated with large language models (LLMs). She argues that instead of asking what AGI is, we should ask “why we should build it“.

AGI could reshape the structure of society through widespread job loss, which could further entrench inequality and lead to conflict and scarcity. This begs the question: should we even build it? Many questions and ethical concerns need to be addressed before we start developing an AGI system. What are your thoughts on AGI? Let us know in the comments below.

Arjun Sha

Passionate about Windows, ChromeOS, Android, security, and privacy issues. Has a penchant for solving everyday computing problems.
