AGI-22 Highlights the Progress in Developing Artificial General Intelligence

I recently attended the 15th annual conference on Artificial General Intelligence (AGI-22), held in Seattle this August, to familiarize myself with new developments that could lead to the eventual creation of an Artificial General Intelligence (AGI).

An AGI is a type of advanced AI that can generalize across multiple domains and is not narrow in scope. Examples of narrow AI include an autonomous vehicle, a chatbot, a chess bot, or any other AI designed for a single purpose. An AGI, in comparison, would be able to flexibly alternate between any of these or any other field of expertise. It is a speculative type of AI that would take advantage of nascent algorithms such as transfer learning and evolutionary learning, while also exploiting legacy algorithms such as deep reinforcement learning.

During the opening keynote session, Ben Goertzel, an AI researcher, CEO and founder of SingularityNET, and leader of the OpenCog Foundation, spoke about the state of the industry. He seemed enthusiastic about the future direction of AGI, stating, “We are years away rather than decades away”. This would place the eventual launch of an AGI at approximately 2029, the same year that Ray Kurzweil, one of the world's leading inventors, thinkers, and futurists, famously predicted the emergence of an AI that achieves human-level intelligence.

The theory goes that once this level of intelligence is reached, the AI would immediately and continuously self-improve, rapidly surpassing human intelligence to become what is known as a superintelligence.

Another speaker, Charles J. Simon, the Founder & CEO of Future AI, stated in a separate session, “AGI emergence will be gradual” and “AGI is inevitable and will arrive sooner than most people think, it could be a couple of years”.

Even with this bullish sentiment, there are significant roadblocks in this space. Ben Goertzel also acknowledged that to achieve AGI, “We need an infusion of new ideas, not just scaling up neural networks”. This sentiment has been shared by Gary Marcus, who is known for stating that “deep learning has hit a wall”.

Some of the core challenges to creating an AGI include figuring out a reward system that can scale intelligence in a maximally informed way. Moravec's paradox reflects the difficulty of achieving AGI with current technology. This paradox states that adaptations that are intuitive to a one-year-old, such as learning how to walk or simulating reality, are far more difficult to program into an AI than the tasks humans perceive as difficult.

For humans it is the polar opposite: mastering chess or executing complex mathematical formulas can take a lifetime, yet these are two reasonably easy tasks for narrow AIs.

One of the solutions to this paradox may be evolutionary learning, also known as evolutionary algorithms. This approach enables an AI to search for complex solutions by mimicking the process of biological evolution.
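To make the idea concrete, below is a minimal sketch of a generic evolutionary algorithm in Python. The bit-string genome, the toy “OneMax” fitness function, and all parameter values are illustrative assumptions of mine rather than anything presented at the conference; the point is the overall loop of selection, crossover, and mutation.

```python
# Minimal evolutionary algorithm sketch (illustrative only).
import random

POP_SIZE = 50        # candidate solutions per generation
GENOME_LEN = 20      # length of each bit-string genome
GENERATIONS = 100
MUTATION_RATE = 0.02

def fitness(genome):
    # Toy objective ("OneMax"): maximize the number of 1s in the genome.
    return sum(genome)

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover combines two parent genomes.
    point = random.randint(1, GENOME_LEN - 1)
    return a[:point] + b[point:]

def evolve():
    # Start from a random population.
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Truncation selection: keep the fitter half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: POP_SIZE // 2]
        # Breed the next generation via crossover and mutation.
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(POP_SIZE)]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("Best genome:", best, "fitness:", fitness(best))
```

In practice, the same loop is applied to far richer genomes, such as the weights or architectures of neural networks, rather than simple bit strings.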

In a separate Q&A, Ben Goertzel stated that “AGI is not inevitable, but it is highly probable.” This is the same conclusion that I have reached, but the line between inevitability and probability blurs.

Many papers were presented during the conference. One of the notable ones was Polynomial Functors: A General Theory of Interaction by David Spivak of the Topos Institute in Berkeley, CA, and Nelson Niu of the University of Washington in Seattle, WA. The paper discusses a mathematical category called Poly that may influence the future direction of AI through its intimate relationship with dynamic processes, decision-making, and the storage and transformation of data. It remains to be seen how this will influence AGI research, but it could be one of the missing components that leads us to AGI.
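For readers curious what lives inside Poly, the sketch below gives the standard definition of a polynomial functor used in this literature; the notation is my assumption and is not quoted from Spivak and Niu's paper.

```latex
% A polynomial functor p : Set -> Set is a sum of representable functors,
% indexed by a set I of "positions", each with a set p[i] of "directions":
p(X) \;=\; \sum_{i \in I} X^{\,p[i]}
% Roughly speaking, the framework then models interacting dynamic processes
% as maps between such polynomials.
```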

Of course, other papers were more speculative, such as the Versatility-Efficiency Index (VEI): Towards a Comprehensive Definition of IQ for AGI Agents by Mohammadreza Alidoust. The idea is to construct an alternative way of measuring the intelligence level of intelligent systems, a type of IQ test that scores AGI agents computationally.

Two notable companies that may make breakthroughs in this underlying technology are OpenAI and DeepMind, both of which were notably absent. This may be because AGI is still not taken entirely seriously by the broader AI community, but they are the two companies most likely to make the first breakthrough in this field, especially since OpenAI's stated mission is to conduct fundamental, long-term research toward the creation of a safe AGI.

While no major revolutionary breakthroughs were revealed at the conference, it is clear that AGI is preoccupying many researchers, and it is something the AI community should pay more attention to. After all, an AGI might be the solution to humanity's multiple existential threats.

A founding partner of unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.