To make new leaps forward, artificial intelligence would, as author Jun Wu puts it in Forbes, have to “learn to learn”. What would that mean?
As Wu explains, “humans have the unique ability to learn from any situation or surrounding.” Humans can adapt their process of learning. To gain that kind of flexibility, AI needs Artificial General Intelligence: it would have to learn about the learning process itself, an approach called meta-learning.
There is a clear contrast between how humans and artificial intelligence learn. The human capacity for learning is limited: brainpower has its limits, and so does the time available to learn. AI, by comparison, has far greater resources, such as computational power. But while AI “learns from more data than the data our human brains use, processing these vast amounts of data requires immense computational power.”
Wu explains that “as the complexity of AI’s tasks grows, there’s also an exponential increase in computational power.” This means that even if the cost of computational power is low, “exponential increase is never the scenario that we want.” This is the main reason that, at the moment, “AI is designed to be specific-purpose learners,” which keeps the learning process efficient.
But as AI started to learn more, “learning to learn,” it started to “infer from data with increasing complexity.” To avoid the exponential increase in computational power, a more efficient learning path had to be devised, and AI had to remember that path.
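One concrete way researchers make an AI system “remember” an efficient learning path is a meta-learning algorithm such as Reptile, which trains a shared starting point across many related tasks so that each new task needs only a few adaptation steps. The sketch below is illustrative only: the toy regression tasks, learning rates, and step counts are assumptions for demonstration, not details from Wu’s article.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_loss_grad(w, a, xs):
    # One toy "task": fit y = a * x with a single scalar weight w.
    # Returns the gradient of the mean squared error with respect to w.
    return np.mean(2 * (w * xs - a * xs) * xs)

def adapt(w, a, xs, inner_lr=0.05, steps=5):
    # Inner loop: a few gradient-descent steps on one task,
    # starting from the shared (meta-learned) weight.
    for _ in range(steps):
        w = w - inner_lr * task_loss_grad(w, a, xs)
    return w

# Outer (meta) loop, Reptile-style: repeatedly sample a task, adapt to it,
# then nudge the shared initialization toward the adapted weight.
w_init = 0.0
meta_lr = 0.1
for _ in range(200):
    a = rng.uniform(-2, 2)            # sample a task (a slope to fit)
    xs = rng.uniform(-1, 1, size=10)  # a small batch of task data
    w_adapted = adapt(w_init, a, xs)
    w_init += meta_lr * (w_adapted - w_init)
```

After meta-training, `w_init` encodes the remembered “path”: adapting from it to a fresh task takes far fewer steps than learning each task from scratch, which is exactly the efficiency gain the paragraph above describes.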
The problem became even more complex when researchers and technologists started assigning multi-tasking problems to AI. To handle them, AI “needs to be able to evaluate independent sets of data in parallel. It also needs to relate pieces of data and infer connections on that data.” As one task is completed, AI needs to update its knowledge so that it can apply it in other situations. “Since tasks are interrelated, the evaluations for the tasks will need to be done by the whole network.”
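A common architecture for this kind of multi-task setup is a shared trunk whose weights are updated by every task, with a small task-specific head on top for each task, so that what is learned on one task flows into the representation used by the others. The sketch below shows only that structure; the task names, layer sizes, and random inputs are hypothetical, not taken from the article or from Google’s MultiModel.

```python
import numpy as np

rng = np.random.default_rng(1)

# Shared trunk: one hidden layer whose weights would be updated
# by gradients from every task, so knowledge is shared.
W_shared = rng.normal(scale=0.1, size=(4, 8))

# Task-specific heads: each task reads the same shared representation
# but produces its own kind of output (here, 2-way vs. 5-way scores).
heads = {
    "sentiment": rng.normal(scale=0.1, size=(8, 2)),
    "topic": rng.normal(scale=0.1, size=(8, 5)),
}

def forward(x, task):
    h = np.tanh(x @ W_shared)  # shared features, reused across tasks
    return h @ heads[task]     # task-specific output

x = rng.normal(size=(3, 4))    # a batch of 3 examples with 4 features
print(forward(x, "sentiment").shape)  # (3, 2)
print(forward(x, "topic").shape)      # (3, 5)
```

Because the trunk is shared, evaluating and training any one head exercises “the whole network,” which is the interdependence the quoted passage points to.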
Google developed one such model, MultiModel, an AI system that “learned to perform eight different tasks simultaneously.” MultiModel can detect objects in images, provide captions, recognize speech, translate between four pairs of languages, and perform grammatical constituency parsing.
While Google’s achievement is a big leap forward, AI still needs to make further strides to become a general-purpose learner. Achieving this would require further development of meta-reasoning and meta-learning. As Wu explains, “meta-reasoning focuses on the efficient use of cognitive resources. Meta-learning focuses on human’s unique ability to efficiently use limited cognitive resources and limited data to learn.”
Currently, studies are being conducted to map the gaps between human cognition and the way AI learns, such as awareness of internal states, the accuracy of memory, and confidence.
All this means that “becoming an artificial generalized learner requires extensive research on how humans learn as well as research on how AI can mimic the way that humans learn.” Adapting to new situations, the ability to “multitask”, and the ability to make “strategic decisions” with limited resources “are just a few of the hurdles that AI researchers will overcome along the way.”