

Open-Source Auto-GPT & BabyAGI Integrate Recursion Into AI Applications


Recent developments involving Auto-GPT and BabyAGI have demonstrated the impressive potential of autonomous agents, generating considerable enthusiasm within the AI research and software development spheres. These agents, built on large language models (LLMs), can carry out intricate task sequences in response to user prompts. By drawing on resources such as internet and local file access, other APIs, and simple memory structures, they represent early progress toward integrating recursion into AI applications.

What is BabyAGI?

BabyAGI, introduced by Yohei Nakajima via Twitter on March 28, 2023, is a streamlined iteration of the original Task-Driven Autonomous Agent. Utilizing OpenAI's natural language processing (NLP) abilities and Pinecone for storing and retrieving task results in context, BabyAGI provides an efficient and user-friendly experience. With a concise 140 lines of code, BabyAGI is easy to comprehend and expand upon.
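For readers curious about the mechanics, the sketch below illustrates the kind of task-driven loop BabyAGI popularized: the model executes a task, proposes follow-up tasks, and re-prioritizes the queue against the objective. This is a simplified illustration rather than the project's actual 140 lines; the `call_llm` helper and the in-memory `results` list are stand-ins for the OpenAI and Pinecone calls the real agent makes.

```python
from collections import deque

def call_llm(prompt: str) -> str:
    """Placeholder for an OpenAI API call; returns nothing useful until wired to a real model."""
    return ""

def babyagi_loop(objective: str, first_task: str, max_iterations: int = 5) -> list:
    tasks = deque([first_task])
    results = []  # stands in for the Pinecone index the real agent queries for context

    for _ in range(max_iterations):
        if not tasks:
            break
        task = tasks.popleft()

        # 1. Execute the current task, giving the model the objective and prior results as context.
        result = call_llm(
            f"Objective: {objective}\nCompleted so far: {results}\nCurrent task: {task}"
        )
        results.append({"task": task, "result": result})

        # 2. Ask the model to propose follow-up tasks based on the new result.
        proposals = call_llm(
            f"Objective: {objective}\nLast result: {result}\n"
            f"Pending tasks: {list(tasks)}\nReturn new tasks, one per line."
        )
        tasks.extend(t.strip() for t in proposals.splitlines() if t.strip())

        # 3. Re-prioritize the remaining queue against the objective.
        reordered = call_llm(
            f"Objective: {objective}\nReorder these tasks by priority, one per line:\n"
            + "\n".join(tasks)
        )
        if reordered.strip():
            tasks = deque(t.strip() for t in reordered.splitlines() if t.strip())

    return results
```

The real project layers a vector store on top of this loop so that each new task is executed with the most relevant prior results retrieved as context, rather than the full history.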

The name BabyAGI is apt: tools like these keep pushing toward AI systems that, while still far from Artificial General Intelligence (AGI), are rapidly growing in capability. The AI ecosystem sees new advances daily, and with further breakthroughs, including the prospect of a version of GPT able to prompt itself to tackle complex problems, these systems already give users the impression of interacting with an AGI.

What is Auto-GPT?

Auto-GPT is an AI agent designed to accomplish goals expressed in natural language by dividing them into smaller sub-tasks and utilizing resources like the internet and other tools in an automated loop. This agent employs OpenAI's GPT-4 or GPT-3.5 APIs and stands out as one of the pioneering applications that use GPT-4 to carry out autonomous tasks.

Unlike interactive systems such as ChatGPT, which require a manual instruction for each task, Auto-GPT sets new goals for itself in pursuit of a larger objective, without necessarily requiring human intervention. Beyond generating responses to fulfill a given task, it can also create and revise its own prompts for recursive instances based on newly acquired information, as the sketch below illustrates.
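A rough sketch of that pattern, assuming a generic `call_llm` helper and a small, hypothetical tool registry, might look like the following; the real Auto-GPT adds a persona prompt, web and file access, code execution, and persistent memory on top of this skeleton.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a GPT-4 / GPT-3.5 call; replace with a real API request."""
    return json.dumps({"thought": "done", "command": "finish", "argument": ""})

# Hypothetical tool registry; the real agent exposes many more commands (web search, file I/O, etc.).
TOOLS = {
    "write_file": lambda arg: f"wrote {len(arg)} characters to disk (simulated)",
    "finish": lambda arg: "goal reached",
}

def autogpt_loop(goal: str, max_steps: int = 10) -> list:
    history = []
    for _ in range(max_steps):
        # Ask the model to choose its own next step toward the goal, given what it has done so far.
        reply = call_llm(
            f"Goal: {goal}\nHistory: {history}\n"
            'Respond as JSON: {"thought": ..., "command": ..., "argument": ...}'
        )
        step = json.loads(reply)

        # Dispatch the chosen command and feed the observation back into the next prompt.
        tool = TOOLS.get(step["command"])
        observation = tool(step["argument"]) if tool else f"unknown command: {step['command']}"
        history.append({"step": step, "observation": observation})

        if step["command"] == "finish":
            break
    return history
```

The recursion lies in the loop itself: each observation becomes part of the next prompt, so the agent effectively writes the instructions for its own next step.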

What this Means Moving Forward

Although still experimental and subject to real limitations, these agents stand to amplify the productivity gains already being driven by falling costs for AI hardware and software. According to ARK Invest's research, AI software could generate up to $14 trillion in revenue and $90 trillion in enterprise value by 2030. As foundational models like GPT-4 continue to progress, many companies are also opting to train their own smaller, specialized models. Foundational models have a broad range of applications, but smaller specialized models offer advantages such as lower inference costs.

Moreover, many businesses concerned about copyright issues and data governance are choosing to develop proprietary models trained on a mix of public and private data. A notable example is a 2.7-billion-parameter LLM trained on PubMed biomedical data, which achieved promising results on the US Medical Licensing Exam (USMLE) question-and-answer test. Training cost approximately $38,000 on the MosaicML platform, with a compute duration of 6.25 days. By contrast, the final training run of GPT-3 is estimated to have cost nearly $5 million in compute.

A founding partner of unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.