Microsoft Invests $1 Billion in OpenAI to Develop Artificial General Intelligence (AGI)

Microsoft has announced that it is investing $1 billion in OpenAI, a San Francisco-based startup and research lab co-founded by Elon Musk and Sam Altman in 2015. Musk has since left the company.

OpenAI’s main goal is to create artificial general intelligence (AGI): systems that match humans in their ability to reason and solve unfamiliar problems. The lab focuses on the beneficial possibilities of AI, and its recent work includes robotic dexterity, game-playing bots, and AI-generated writing. One of its language models produced text comparable to human writing, but the company decided against releasing the technology because of the potential for fake news and impersonation.

Under the deal, Microsoft will provide cloud computing services to OpenAI, and the two companies will collaborate on new technologies. OpenAI will also license some of its technology to Microsoft, which will then commercialize it.

In a press release after the announcement of the investment, Greg Brockman, Chief Technology Officer of OpenAI, commented on the new collaboration. 

“We believe that the creation of beneficial AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity.”

Microsoft CEO Satya Nadella also spoke about the new partnership, saying it will keep “AI safety front and center” so that “everyone will benefit.”

OpenAI began as a nonprofit research lab in 2015, set up to compete with companies like Google and Amazon while developing AI safely and democratically. It eventually created a for-profit arm when it needed more money to continue. OpenAI has been making big claims in order to attract funding, and it is now a “capped-profit” entity in which investors’ returns are limited to 100 times their investment.

Artificial general intelligence would be able to outperform humans at many tasks. It would undoubtedly change society, and the outcome would depend on whether it was developed properly and safely. It could bring huge advancements to food production, medicine, the energy sector, and endless other fields. At the same time, it could be incredibly dangerous, and experts in artificial intelligence constantly warn of the possible risks, with AGI among their main concerns. Some of the dangers are obvious; others are not. AGI could be weaponized by states. It might not follow human instructions, or the instructions might not be specific enough, leading the AI to do something harmful. It could also be controlled by a small group of powerful individuals or companies, taking wealth inequality to an entirely new level. Because of this, many believe that artificial general intelligence needs regulation and safety features. This is one of OpenAI’s biggest concerns.

Greg Brockman spoke on this aspect of AGI during the announcement. 

“To accomplish our mission of ensuring that AGI (whether built by us or not) benefits all of humanity, we’ll need to ensure that AGI is deployed safely and securely; that society is well-prepared for its implications; and that its economic upside is widely shared.”

Nobody knows exactly when we will have AGI. Scientists, researchers, and other experts all have different predictions: some believe humans will achieve AGI within 10 years, while others say it will not arrive until 2099.

Either way, we will eventually have it. When humans finally do cross that line and achieve artificial general intelligence, it will change most of what we are familiar with and mark the next step in our development. The new partnership between Microsoft and OpenAI continues to take us down this path.

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

Facebook’s AI Takes on Hanabi Game

Facebook AI Research (FAIR) has developed a new AI that achieved extremely impressive results at the card game Hanabi. The development is a major step forward for Facebook’s AI.

Hanabi is a card game sometimes compared to Solitaire, but with an important difference: while most games used to test AI pit players directly against one another, as in chess or Go, Hanabi requires players to work with each other toward a common goal.

Facebook’s bots worked together in the game until they outperformed previously used AI systems: the best prior AI system scored 23.92 out of 25, while the new one reached 24.61.

Back in February, a Hanabi benchmark was proposed by researchers from Google, DeepMind, Carnegie Mellon University, and Oxford, along with new AI agents capable of playing the game; the researchers called Hanabi “a new frontier for AI research.”

Researchers are excited about the development because the same techniques used by the bots could be applied in other areas. One possibility is improving the way virtual assistants interact with people.

Noam Brown, a Facebook AI researcher, spoke about the new AI system. 

“One of the really exciting things about this is that the improvement we’re observing is really orthogonal to the improvements that are being observed with deep reinforcement learning: You can add this on top of any strategy, and it will make it much stronger,” Brown said in an interview with VentureBeat. “We’re seeing that the results are far beyond what we or other researchers expected. In fact, the benefits that we get from search are stronger than the benefits that have been gained through all of the deep reinforcement learning algorithms that have been used in the past.”

The new development in Facebook’s AI comes at a time when researchers continue to create software capable of taking on some of the most complex games. In 2016, Google DeepMind’s AI system beat one of the world’s best human players at the Chinese board game Go.

Hanabi is now considered one of the best games for testing AI because it is built around teamwork and strategy, abilities that remain a major milestone for AI to reach. Training in this environment pushes AI to improve and become more sophisticated.

Adam Lerer is a Facebook researcher and contributor to the paper. 

“One of the reasons we’re moving to these cooperative games is that I think we’re kind of at the point where there’s no games left at least in terms of competitive games,” he said. 

Hanabi is played by a team of two to five players who are dealt random cards. The cards come in different colors and numbers, and the team must play them to the table by color in the correct numerical order.

Players cannot see their own cards, but their teammates can. Players are permitted to give hints to others; for example, a teammate can give a hint about colors, leading the other player to play or discard a card.

One of the more complex aspects of the game is that a player has to figure out what each clue means. This is difficult for a bot to do with the limited information it has.
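
To make the hint mechanic concrete, here is a minimal sketch in Python (my own illustration, not FAIR’s actual code) of how a player might track what a color hint reveals about their own hidden hand; the five-card hand and slot indices are assumptions for the example:

```python
from dataclasses import dataclass, field

# Hanabi's standard colors and ranks.
COLORS = {"red", "green", "blue", "white", "yellow"}
RANKS = {1, 2, 3, 4, 5}

@dataclass
class SlotBelief:
    """What a player can deduce about one of their own hidden cards."""
    colors: set = field(default_factory=lambda: set(COLORS))
    ranks: set = field(default_factory=lambda: set(RANKS))

def apply_color_hint(hand, color, touched):
    """A color hint must point out every card of that color, so untouched
    slots are known NOT to be that color."""
    for i, slot in enumerate(hand):
        if i in touched:
            slot.colors &= {color}
        else:
            slot.colors.discard(color)

hand = [SlotBelief() for _ in range(5)]      # a five-card hand, all unknown
apply_color_hint(hand, "red", touched={1, 3})
print(hand[1].colors)       # {'red'}: slots 1 and 3 are now known to be red
print(len(hand[0].colors))  # 4: slot 0 is known not to be red
```

The harder problem FAIR’s bots tackle is one level up from this bookkeeping: inferring why a teammate chose that particular hint over all the others they could have given.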

The bots were able to build a strategy by combining search techniques with the reinforcement learning that Facebook used. Facebook believes this technology could be applied elsewhere, such as robotics, self-driving vehicles, and other systems.

“This is something that comes very naturally to humans, this idea of being able to put yourself in the shoes of another person and understand why they’re taking the actions they’re taking, what they’re thinking, and even if they don’t know certain things. But it’s something that AI has historically really struggled with,” he said. “There’s been this long debate about whether primates have theory of mind and at what age do human babies develop theory of mind, and I think it’s really fascinating to finally be seeing this sort of behavior in AI. And I think that that’s going to be really important if we want to deploy AI in the real world to interact with humans because humans expect this behavior.”

 

Go Champion Quits Because of AI

Lee Se-dol, the first and only human to beat Google’s algorithm at the Chinese strategy game Go, has decided to quit due to artificial intelligence (AI). According to the South Korean champion, machines “cannot be defeated.”

Back in 2016, Lee Se-dol took part in a five-match competition against Google’s artificial intelligence program AlphaGo, which caused a big publicity boom around the game. It was also during that time that fears about machines and their endless learning capacity increased.

Prior to the matchups, Lee publicly stated that he would beat AlphaGo in a “landslide.” After the losses, he issued a public apology.

“I failed,” he said. “I feel sorry that the match is over and it ended like this. I wanted it to end well.”

In those matches, Lee Se-dol defeated the AI only once. Since then, the algorithm has gotten even better: its self-taught successor, AlphaGo Zero, crushed the original 100 games to none.

Lee spoke to Yonhap news agency about his decision and the future of machines.

“Even if I become the number one, there is an entity that cannot be defeated,” he said. 

“With the debut of AI in Go games, I’ve realised that I’m not at the top even if I become the number one.”

AlphaGo Zero improved by playing against itself continuously, and it took only three days of playing at superhuman speeds to drastically surpass its predecessor. At the time, DeepMind said AlphaGo Zero was likely the strongest Go player to ever exist.
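
AlphaGo Zero’s actual training combines deep neural networks with Monte Carlo tree search, but the self-play idea itself can be shown on a toy game. Below is a minimal sketch, entirely my own construction rather than anything from DeepMind: two copies of the same agent play the take-away game Nim against each other, and tabular Q-learning improves their shared strategy from those games.

```python
import random

random.seed(0)
Q = {}  # Q[(stones, action)] -> value of the move for the player making it

def q(s, a):
    return Q.get((s, a), 0.0)

def actions(s):
    return [a for a in (1, 2, 3) if a <= s]

def choose(s, eps):
    """Epsilon-greedy: mostly play the best known move, sometimes explore."""
    if random.random() < eps:
        return random.choice(actions(s))
    return max(actions(s), key=lambda a: q(s, a))

for episode in range(20000):
    s = 10                      # start each game with 10 stones
    while s > 0:
        a = choose(s, eps=0.2)  # the same agent moves for both "players"
        s_next = s - a
        if s_next == 0:
            target = 1.0        # taking the last stone wins
        else:
            # the opponent moves next, so this position is worth the
            # negative of the opponent's best option (zero-sum game)
            target = -max(q(s_next, b) for b in actions(s_next))
        Q[(s, a)] = q(s, a) + 0.1 * (target - q(s, a))
        s = s_next

# Self-play rediscovers the known optimal strategy: leave a multiple of 4.
print(max(actions(10), key=lambda a: q(10, a)))  # -> 2
```

The key property, shared with AlphaGo Zero, is that the agent never needs a human opponent or human game records: its own play generates the training signal.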

According to a statement given to The Verge, DeepMind’s CEO Demis Hassabis praised Lee as having “true warrior spirit,” and went on to say that “On behalf of the whole AlphaGo team at DeepMind, I’d like to congratulate Lee Se-dol for his legendary decade at the top of the game, and wish him the very best for the future…I know Lee will be remembered as one of the greatest Go players of his generation.”

Lee will go on to participate in other ventures dealing with AI; in December, he will face HanDol, a South Korean AI program that has outperformed the top five players in the country.

He will be given a two-stone advantage in the first game, but he believes he will still lose. 

“Even with a two-stone advantage, I feel like I will lose the first game to HanDol. These days, I don’t follow Go news. I wanted to play comfortably against HanDol as I have already retired, though I will do my best,” he said.

Go was created in China around 3,000 years ago and has been played ever since. It is most popular in China, Japan, and South Korea. The game consists of a square board with a 19x19 grid, and players take turns placing black or white stones on it. The winner is whoever takes the most territory.

While the rules sound simple, the game is actually extremely complex. Some say there are more possible board configurations than atoms in the universe.
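
That claim is easy to sanity-check with a rough upper bound: each of the board’s 361 points is empty, black, or white (ignoring the rules that make some configurations illegal):

```python
# 3 ** 361 is an upper bound on Go board configurations; not all are legal.
board_configurations = 3 ** (19 * 19)
atoms_in_universe = 10 ** 80       # a commonly cited rough estimate
print(board_configurations > atoms_in_universe)  # True
print(len(str(board_configurations)))  # 173 digits, i.e. roughly 1.7e172
```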

Lee began playing Go when he was five and became a professional at the age of 12.

Even though he is a master player, Lee has said that his lone win against AlphaGo was the result of a bug that appeared in response to his play.

“My white 78 was not a move that should be countered straightforwardly,” he said.

 

How Can Artificial Intelligence Learn About The Learning Process?

To make new leaps in advancing artificial intelligence, AI would, as author Jun Wu puts it in Forbes, have to ‘learn to learn’. What would that mean?

As Wu explains, “humans have the unique ability to learn from any situation or surrounding.” Humans can adapt their process of learning. For AI to have such flexibility, it would need artificial general intelligence: it would have to learn about the learning process itself, which is called meta-learning.

There is one very specific contrast between the learning processes of humans and AI. Human brainpower is limited, and humans have limited time to learn; AI has far more resources, such as computational power. But while AI “learns from more data than the data our human brains use, processing these vast amounts of data requires immense computational power.”

Wu explains that “as the complexity of AI’s tasks grows, there’s also an exponential increase in computational power.” This means that even if the cost of computational power is low, “exponential increase is never the scenario that we want.” It is the main reason that, at the moment, “AI is designed to be specific-purpose learners,” which keeps the learning process efficient.

But as AI started “learning to learn,” it began to “infer from data with increasing complexity.” To avoid the exponential increase in computational power, a more efficient learning path had to be devised, and the AI had to remember that path.

The problem became even more complex when researchers and technologists started assigning multi-tasking problems to AI. To handle them, AI “needs to be able to evaluate independent sets of data in parallel” and to relate pieces of data and infer connections between them. As one task is completed, the AI needs to update its knowledge so that it can apply it in other situations. “Since tasks are interrelated, the evaluations for the tasks will need to be done by the whole network.”

Google developed one such model, MultiModel, an AI system that “learned to perform eight different tasks simultaneously. MultiModel can detect objects in images, provide captions, recognize speech, translate between four pairs of languages, and perform grammatical constituency parsing.”
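
The architectural idea behind multi-task systems like MultiModel, one shared representation feeding several task-specific output heads, can be sketched in a few lines. This is only an illustration of the general pattern, not Google’s architecture; the task names and sizes here are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared encoder: every task's training signal flows through it.
W_shared = rng.normal(scale=0.1, size=(16, 8))

# Task-specific heads with different output sizes (placeholder tasks).
heads = {
    "caption": rng.normal(scale=0.1, size=(8, 2)),
    "translate": rng.normal(scale=0.1, size=(8, 3)),
}

def forward(x, task):
    h = np.tanh(x @ W_shared)  # shared representation used by all tasks
    return h @ heads[task]     # task-specific output

x = rng.normal(size=(4, 16))           # a batch of 4 example inputs
print(forward(x, "caption").shape)     # (4, 2)
print(forward(x, "translate").shape)   # (4, 3)
```

Because the encoder is shared, training on one task changes weights that every other task relies on, which is exactly why Wu notes that evaluations “will need to be done by the whole network.”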

While Google’s achievement is a big leap forward, AI still needs to make further strides before it can become a general-purpose learner. To achieve this, it would need to further develop meta-reasoning and meta-learning. As Wu explains, “meta-reasoning focuses on the efficient use of cognitive resources. Meta-learning focuses on human’s unique ability to efficiently use limited cognitive resources and limited data to learn.”
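
To make the “learning to learn” loop concrete, here is a toy sketch of meta-learning: a first-order variant of model-agnostic meta-learning (MAML) on a made-up family of one-parameter tasks. Nothing below comes from Wu’s article; the task family and step sizes are assumptions. The inner loop adapts to a single task, and the outer loop improves the shared starting point:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    """A task is fitting y = a * x for some slope a; tasks differ only in a."""
    a = rng.uniform(0.5, 2.5)
    def sample(n=20):
        x = rng.uniform(-1.0, 1.0, n)
        return x, a * x
    return sample

def loss_and_grad(w, x, y):
    err = w * x - y
    return np.mean(err ** 2), 2.0 * np.mean(err * x)

w = 0.0                    # the shared initialization being meta-learned
alpha, beta = 0.1, 0.05    # inner (per-task) and outer (meta) step sizes

for step in range(1000):
    meta_grad = 0.0
    for _ in range(5):                     # sample a small batch of tasks
        sample = make_task()
        x, y = sample()
        _, g = loss_and_grad(w, x, y)
        w_task = w - alpha * g             # inner loop: adapt to this task
        x2, y2 = sample()                  # held-out data from the same task
        _, g2 = loss_and_grad(w_task, x2, y2)
        meta_grad += g2                    # first-order meta-gradient
    w -= beta * meta_grad / 5              # outer loop: improve the init

print(round(w, 2))  # settles near 1.5, the center of the task family
```

The payoff is that a single inner-loop step from the learned initialization already fits any task in the family reasonably well, a small-scale version of the efficient, reusable learning path Wu describes.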

Currently, studies are being conducted to identify the gaps between human cognition and the way AI learns, such as awareness of internal states, the accuracy of memory, and confidence.

All this means that “becoming an artificial generalized learner requires extensive research on how humans learn as well as research on how AI can mimic the way that humans learn.” Adapting to new situations, “multitasking,” and making “strategic decisions” with limited resources are just a few of the hurdles that AI researchers will have to overcome along the way.
