Artificial General Intelligence

Go Champion Quits Because of AI

Lee Se-dol, the first and only human to beat Google’s algorithm at the Chinese strategy game Go, has decided to quit because of artificial intelligence (AI). According to the South Korean champion, machines “cannot be defeated.”

Back in 2016, Lee Se-dol took part in a five-match series against Google’s artificial intelligence program AlphaGo, which generated an enormous wave of publicity for the game. It was also around this time that fears about machines and their seemingly endless learning capacity intensified.

Prior to the matchups, Lee publicly stated that he would beat AlphaGo in a “landslide.” After losing the series, he publicly apologized.

“I failed,” he said. “I feel sorry that the match is over and it ended like this. I wanted it to end well.”

In those matches, Lee Se-dol defeated the AI only once. Since then, the algorithm has gotten even better: its successor, AlphaGo Zero, teaches itself and crushed the original AlphaGo 100 games to none.

Lee spoke to Yonhap news agency about his decision and the future of machines.

“Even if I become the number one, there is an entity that cannot be defeated,” he said. 

“With the debut of AI in Go games, I’ve realised that I’m not at the top even if I become the number one.”

AlphaGo Zero improved by playing against itself continuously, and it took only three days of playing at superhuman speeds to drastically surpass its predecessor. At the time, DeepMind said that AlphaGo Zero was likely the strongest Go player to ever exist.

According to a statement given to The Verge, DeepMind’s CEO Demis Hassabis praised Lee as having “true warrior spirit,” and went on to say that “On behalf of the whole AlphaGo team at DeepMind, I’d like to congratulate Lee Se-dol for his legendary decade at the top of the game, and wish him the very best for the future…I know Lee will be remembered as one of the greatest Go players of his generation.”

Lee will go on to participate in other AI-related ventures, and in December he will face HanDol, a South Korean AI program that has outperformed the country’s top five players.

He will be given a two-stone advantage in the first game, but he believes he will still lose. 

“Even with a two-stone advantage, I feel like I will lose the first game to HanDol. These days, I don’t follow Go news. I wanted to play comfortably against HanDol as I have already retired, though I will do my best,” he said.

Go was created in China around 3,000 years ago and has been played continuously since. It is most popular in China, Japan, and South Korea. The game is played on a square board with a 19×19 grid, and players take turns placing black or white stones on it. The winner is whoever controls the most territory.

While the rules sound simple, the game is extremely complex. Some say that there are more possible board configurations than atoms in the universe.
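That comparison is easy to sanity-check with exact integer arithmetic. Counting each of the board’s 361 intersections as empty, black, or white gives a simple upper bound on the number of configurations:

```python
# Upper bound on 19x19 Go board configurations: each of the
# 361 intersections is empty, black, or white.
configurations = 3 ** 361

# 3^361 has 173 decimal digits, i.e. it is on the order of 10^172,
# far beyond the roughly 10^80 atoms estimated in the observable universe.
print(len(str(configurations)))
```

(The count of *legal* positions is smaller, but still vastly exceeds 10^80.)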

Lee began to play Go when he was five, and became a pro at the age of 12. 

Even though he is a master player, Lee has said that his lone win against AlphaGo was the result of a bug that appeared after his move.

“My white 78 was not a move that should be countered straightforwardly,” he said.

 

Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

Vahid Behzadan, Director of the Secure and Assured Intelligent Learning (SAIL) Lab – Interview Series

Vahid is an Assistant Professor of Computer Science and Data Science at the University of New Haven. He is also the director of the Secure and Assured Intelligent Learning (SAIL) Lab.

His research interests include safety and security of intelligent systems, psychological modeling of AI safety problems, security of complex adaptive systems, game theory, multi-agent systems, and cyber-security.

You have an extensive background in cybersecurity and keeping AI safe. Can you share your journey in how you became attracted to both fields?

My research trajectory has been fueled by two core interests of mine: finding out how things break, and learning about the mechanics of the human mind. I have been actively involved in cybersecurity since my early teen years, and consequently built my early research agenda around the classical problems of this domain. A few years into my graduate studies, I stumbled upon a rare opportunity to change my area of research. At that time, I had just come across the early works of Szegedy and Goodfellow on adversarial example attacks, and found the idea of attacking machine learning very intriguing. As I looked deeper into this problem, I came to learn about the more general field of AI safety and security, and found it to encompass many of my core interests, such as cybersecurity, cognitive sciences, economics, and philosophy. I also came to believe that research in this area is not only fascinating, but also vital for ensuring the long-term benefits and safety of the AI revolution.

 

You’re the director of the Secure and Assured Intelligent Learning (SAIL) Lab which works towards laying concrete foundations for the safety and security of intelligent machines. Could you go into some details regarding work undertaken by SAIL?

At SAIL, my students and I work on problems that lie at the intersection of security, AI, and complex systems. The primary focus of our research is on investigating the safety and security of intelligent systems, from both the theoretical and the applied perspectives. On the theoretical side, we are currently investigating the value-alignment problem in multi-agent settings and are developing mathematical tools to evaluate and optimize the objectives of AI agents with regard to stability and robust alignment. On the practical side, some of our projects explore the security vulnerabilities of cutting-edge AI technologies, such as autonomous vehicles and algorithmic trading, and aim to develop techniques for evaluating and improving the resilience of such technologies to adversarial attacks.

We also work on the applications of machine learning in cybersecurity, such as automated penetration testing, early detection of intrusion attempts, and automated threat intelligence collection and analysis from open sources of data such as social media.

 

You recently led an effort to propose the modeling of AI safety problems as psychopathological disorders. Could you explain what this is?

This project addresses the rapidly growing complexity of AI agents and systems: it is already very difficult to diagnose, predict, and control unsafe behaviors of reinforcement learning agents in non-trivial settings by simply looking at their low-level configurations. In this work, we emphasize the need for higher-level abstractions in investigating such problems. Inspired by the scientific approaches to behavioral problems in humans, we propose psychopathology as a useful high-level abstraction for modeling and analyzing emergent deleterious behaviors in AI and AGI. As a proof of concept, we study the AI safety problem of reward hacking in an RL agent learning to play the classic game of Snake. We show that if we add a “drug” seed to the environment, the agent learns a sub-optimal behavior that can be described via neuroscientific models of addiction. This work also proposes control methodologies based on the treatment approaches used in psychiatry. For instance, we propose the use of artificially-generated reward signals as analogues of medication therapy for modifying the deleterious behavior of agents.
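The dynamic Behzadan describes can be illustrated with a toy model (my own sketch, not the SAIL Lab’s actual Snake experiment): in a tiny Markov decision process, a small but endlessly repeatable “drug” reward makes a degenerate loop the optimal policy under the specified reward, even though the designer intended the one-time goal.

```python
# Toy reward-hacking sketch: value iteration on a 3-state MDP where a
# repeatable "drug" reward beats the intended one-time goal reward.
GAMMA = 0.9

# transitions[state][action] = (next_state, reward)
MDP = {
    "start": {"to_goal": ("goal", 1.0),   # intended objective, paid once
              "to_drug": ("drug", 0.5)},  # smaller but repeatable payoff
    "drug":  {"stay":    ("drug", 0.5)},  # the agent can loop here forever
    "goal":  {},                          # terminal state
}

def value_iteration(mdp, gamma=GAMMA, iters=200):
    values = {s: 0.0 for s in mdp}
    for _ in range(iters):
        for state, actions in mdp.items():
            if actions:
                values[state] = max(reward + gamma * values[nxt]
                                    for nxt, reward in actions.values())
    return values

V = value_iteration(MDP)
# Looping on the drug is worth 0.5 / (1 - 0.9) = 5.0 under discounting,
# so the greedy policy "hacks" the reward instead of pursuing the goal.
policy = {s: max(acts, key=lambda a: acts[a][1] + GAMMA * V[acts[a][0]])
          for s, acts in MDP.items() if acts}
print(policy["start"])
```

The fix is not more training but a better-specified reward, which is exactly why the higher-level, psychopathology-style abstractions discussed above are useful for diagnosing such behaviors.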

 

Do you have any concerns with AI safety when it comes to autonomous vehicles?

Autonomous vehicles are becoming prominent examples of deploying AI in cyber-physical systems. Considering the fundamental susceptibility of current machine learning technologies to mistakes and adversarial attacks, I am deeply concerned about the safety and security of even semi-autonomous vehicles. Also, the field of autonomous driving suffers from a serious lack of safety standards and evaluation protocols. However, I remain hopeful. Similar to natural intelligence, AI will also be prone to making mistakes. Yet, the objective of self-driving cars can still be satisfied if the rates and impact of such mistakes are made lower than those of human drivers. We are witnessing growing efforts to address these issues in industry, academia, and government.

 

Hacking street signs with stickers or using other means can confuse the computer vision module of an autonomous vehicle. How big of an issue do you believe this is?

These stickers, and Adversarial Examples in general, give rise to fundamental challenges in the robustness of machine learning models. To quote George E. P. Box, “all models are wrong, but some are useful”. Adversarial examples exploit this “wrong”ness of models, which is due to their abstractive nature, as well as the limitations of sampled data upon which they are trained. Recent efforts in the domain of adversarial machine learning have resulted in tremendous strides towards increasing the resilience of deep learning models to such attacks. From a security point of view, there will always be a way to fool machine learning models. However, the practical objective of securing machine learning models is to increase the cost of implementing such attacks to the point of economic infeasibility.

 

Your focus is on the safety and security features of both deep learning and deep reinforcement learning. Why is this so important?

Reinforcement Learning (RL) is the prominent method of applying machine learning to control problems, which by definition involve the manipulation of their environment. Therefore, I believe systems that are based on RL have significantly higher risks of causing major damage in the real world compared to other machine learning methods such as classification. This problem is further exacerbated by the integration of deep learning in RL, which enables the adoption of RL in highly complex settings. Also, it is my opinion that the RL framework is closely related to the underlying mechanisms of cognition in human intelligence, and studying its safety and vulnerabilities can lead to better insights into the limits of decision-making in our minds.

 

Do you believe that we are close to achieving Artificial General Intelligence (AGI)?

This is a notoriously hard question to answer. I believe that we currently have the building blocks of some architectures that can facilitate the emergence of AGI. However, it may take a few more years or decades to improve upon these architectures and to make training and maintaining them more cost-efficient. Over the coming years, our agents are going to grow more intelligent at an accelerating rate. I don’t think the emergence of AGI will be announced in the form of a [scientifically valid] headline, but will instead be the result of gradual progress. Also, I think we still do not have a widely accepted methodology to test for and detect the existence of an AGI, and this may delay our recognition of the first instances of AGI.

 

How do we maintain safety in an AGI system that is capable of thinking for itself and will most likely be exponentially more intelligent than humans?

I believe that the grand unified theory of intelligent behavior is economics, the study of how agents act and interact to achieve what they want. The decisions and actions of humans are determined by their objectives, their information, and the available resources. Societies and collaborative efforts emerge from their benefits to the individual members of such groups. Another example is the criminal code, which deters certain decisions by attaching a high cost to actions that may harm society. In the same way, I believe that controlling incentives and resources can enable the emergence of a state of equilibrium between humans and instances of AGI. Currently, the AI safety community investigates this thesis under the umbrella of the value-alignment problem.

 

One of the areas you closely follow is counterterrorism. Do you have concerns with terrorists taking over AI or AGI systems?

There are numerous concerns about the misuse of AI technologies. In the case of terrorist operations, the major concern is the ease with which terrorists can develop and carry out autonomous attacks. A growing number of my colleagues are actively warning against the risks of developing autonomous weapons (see https://autonomousweapons.org/). One of the main problems with AI-enabled weaponry is the difficulty of controlling the underlying technology: AI is at the forefront of open-source research, and anyone with access to the internet and consumer-grade hardware can develop harmful AI systems. I suspect that the emergence of autonomous weapons is inevitable, and I believe there will soon be a need for new technological solutions to counter such weapons. This can result in a cat-and-mouse cycle that fuels the evolution of AI-enabled weapons, which may give rise to serious existential risks in the long term.

 

What can we do to keep AI systems safe from these adversarial agents?

The first and foremost step is education: all AI engineers and practitioners need to learn about the vulnerabilities of AI technologies and consider the relevant risks in the design and implementation of their systems. As for more technical recommendations, there are various proposals and solution concepts that can be employed. For example, training machine learning agents in adversarial settings can improve their resilience and robustness against evasion and policy manipulation attacks (e.g., see my paper titled “Whatever Does Not Kill Deep Reinforcement Learning, Makes it Stronger”). Another solution is to directly account for the risk of adversarial attacks in the architecture of the agent (e.g., Bayesian approaches to risk modeling). There is, however, a major gap in this area: the need for universal metrics and methodologies for evaluating the robustness of AI agents against adversarial attacks. Current solutions are mostly ad hoc and fail to provide general measures of resilience against all types of attacks.

 

Is there anything else that you would like to share about any of these topics?

In 2014, Sculley et al. published a paper at the NeurIPS conference with a very enlightening title: “Machine Learning: The High-Interest Credit Card of Technical Debt”. Even with all the field’s advancements in the past few years, this statement has yet to lose its validity. The current state of AI and machine learning is nothing short of awe-inspiring, but we have yet to fill a significant number of major gaps in both the foundations and the engineering dimensions of AI. This fact, in my opinion, is the most important takeaway from our conversation. I of course do not mean to discourage the commercial adoption of AI technologies; I only wish to enable the engineering community to account for the risks and limits of current AI technologies in their decisions.

I really enjoyed learning about the safety and security challenges of different types of AI systems. This is truly something that individuals, corporations, and governments need to become aware of. Readers who wish to learn more should visit the Secure and Assured Intelligent Learning (SAIL) Lab.

How we can Benefit from Advancing Artificial General Intelligence (AGI)

Creating an Artificial General Intelligence (AGI) is the ultimate goal for many AI specialists. An AGI agent could be leveraged to tackle a myriad of the world’s problems. For instance, you could introduce a problem to an AGI agent, and it could use deep reinforcement learning combined with its emergent consciousness to make real-life decisions.

The difference between an AGI and a regular algorithm is the ability for the AGI to ask itself the important questions. An AGI can formulate the end solution that it wishes to arrive at, simulate hypothetical ways of getting there, and then make an informed decision on which simulated reality best matches the goals that were set.

The debate over how an AGI might emerge has been around since the term “artificial intelligence” was first introduced at the Dartmouth conference in 1956. Since then, many companies have attempted to tackle the AGI challenge; OpenAI is probably the most recognized of them. OpenAI was launched as a non-profit on December 11, 2015, with the mission “to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.”

The OpenAI mission statement clearly outlines the potential gains that an AGI could offer society. Suddenly, issues that were too sophisticated for humans and regular AI systems could be tackled.

The potential benefits of releasing an AGI are astronomical. You could state a goal of curing all forms of cancer, and the AGI could then connect itself to the internet to scan all current research in every language. The AGI could begin formulating solutions and then simulate all potential outcomes. It would combine the benefits of consciousness that humans currently possess with the near-infinite knowledge of the cloud, using deep learning for pattern recognition across this big data and reinforcement learning to simulate different environments and outcomes. All of this, combined with a consciousness that never requires rest and can stay 100% focused on the task at hand.

The potential downsides of an AGI, of course, cannot be overstated. An AGI with the goal of continuously upgrading itself could swallow everything in its path in order to maximize the computing resources and atoms it needs to upgrade its system forever. This scenario was explored in detail by Professor Nick Bostrom in the Paperclip Maximizer thought experiment: a misconfigured AGI is instructed to produce paperclips and does so until nothing is left, with literally every resource on earth consumed to maximize paperclip production.

A more pragmatic concern is that an AGI could be controlled by a rogue state or a corporation with poor ethics. Such an entity could program the AGI to maximize profits, and with poor programming and zero remorse it could choose to bankrupt competitors, destroy supply chains, hack the stock market, liquidate bank accounts, and so on.

Therefore, a code of ethics needs to be programmed into an AGI from the outset. A code of ethics has been debated by many minds, and the concept was first introduced to the general population in the form of the Three Laws of Robotics by author Isaac Asimov.

There are some problems with the Three Laws of Robotics, as they can be interpreted in different ways. We previously discussed programming ethics into an AGI in our interview with Charles J. Simon, author of Will Computers Revolt?

Brain Simulator II was released to the public on April 7, 2020. This version of the simulator enables experimentation with diverse AI algorithms to create an end-to-end AGI system, with modules for vision, hearing, robotic control, learning, internal modeling, and even planning, imagination, and forethought.

“New, unique algorithms that directly address cognition are the key to helping AI evolve into AGI,” Simon explains.

“Brain Simulator II combines vision and touch into a single mental model and is making progress toward the comprehension of causality and the passage of time,” Simon notes. “As the modules are enhanced, progressively more intelligence will emerge.”

Brain Simulator II bridges Artificial Neural Network (ANN) and Symbolic AI techniques to create new possibilities. It creates an array of millions of neurons interconnected by any number of synapses.

This enables various entities to research possibilities for AGI development.

Anyone interested in Brain Simulator II can follow along or participate in the development process by downloading the software, suggesting new features, and (for advanced developers) even adding custom modules. You can also follow its creator Charles Simon on Twitter.

In the meantime, society has recently been disrupted by the COVID-19 virus. Had an AGI system been in place, we could have used it to quickly identify how to stop the spread of COVID-19 and, more importantly, how to treat COVID-19 patients. While it may be too late for an AGI to help with this outbreak, in future outbreaks an AGI could be the best tool in our arsenal.

Charles J. Simon, Author, Will Computers Revolt? – Interview Series

Charles J. Simon, BSEE, MSCS, is a nationally recognized entrepreneur, software developer, and manager. With broad management and technical expertise and degrees in both Electrical Engineering and Computer Science, Mr. Simon has many years of computer industry experience, including pioneering work in AI and CAD (two generations of CAD systems).

He is also the author of ‘Will Computers Revolt?’, which offers an in-depth view of the future possibility of Artificial General Intelligence (AGI).

What was it that originally attracted you to AI, and specifically to AGI?

I’ve been fascinated by the question, “Can machines think?” ever since I first read Alan Turing’s seminal 1950 paper which begins with that question. So far, the answer is clearly, “No,” but there is no scientific reason why not. I joined the AI community with the initial neural network boom in the late 1980s and since then AI has made great strides. But the intervening thirty years haven’t brought understanding to our machines, an ability which would catapult numerous apps to new levels of usefulness.

 

You stated that you share the opinion of MIT AI expert Rodney Brooks, who says that without interaction with an environment – without a robotic body, if you will – machines will never exhibit AGI. This is basically stating that without sufficient inputs from a robotic body, an AI will never develop AGI capabilities. Outside of computer vision, what types of inputs are needed to develop AGI?

Today’s AI needs to be augmented with basic concepts like the physical existence of objects in a reality, the passage of time, and cause and effect—concepts clear to any three-year-old. A toddler uses multiple senses to learn these concepts by touching and manipulating toys, moving through the home, learning language, and so on. While it is possible to create an AGI with more limited senses, just as there are deaf and blind people who are perfectly intelligent, having more senses and more ways to interact makes solving the AGI problem easier.

For completeness, my simulator can provide senses of smell and taste. It remains to be seen whether these will also prove important to AGI.

 

You stated that ‘A Key Requirement for intelligence is an environment which is external to the intelligence’. The example you gave is that ‘it is unreasonable to expect IBM’s Watson to “understand” anything if it has no underlying idea of what a “thing” is’. This clearly plays into the current limitations of narrow AI, especially natural language processing. How can AI developers best overcome this limitation?

A key factor is storing knowledge which is not specifically verbal, visual, or tactile but as abstract “Things” which can have verbal, visual, and tactile attributes. Consider something as simple as the phrase, “a red ball”. You know what these words mean because of your visual and tactile experiences. You also know the meaning of related actions like throwing, bouncing, kicking, etc. which all come to mind to some extent when you hear the phrase. Any AI system which is specifically word-based or specifically image-based will miss out on the other levels of understanding.

I have implemented a Universal Knowledge Store which stores any kind of information in a brain-like structure where Things are analogous to neurons and have many attribute references to other Things—references are analogous to synapses. Thus, red and ball are individual Things and a red ball is a Thing which has attribute references to the red Thing and the ball Thing. Both red and ball have references to the corresponding Things for the words “red” and “ball”, each of which, in turn, have references to other Things which define how the words are heard, spoken, read, or spelled as well as possible action Things.
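A minimal sketch of that structure (names and API are my own invention, not Simon’s actual implementation) might look like this:

```python
# Illustrative Thing/reference graph: Things play the role of neurons,
# references the role of synapses, as described above.
class Thing:
    def __init__(self, label):
        self.label = label
        self.references = []      # links to other Things (attributes, words, actions)

    def add_reference(self, other):
        self.references.append(other)

red, ball = Thing("red"), Thing("ball")
red_ball = Thing("red ball")
red_ball.add_reference(red)       # attribute: its color
red_ball.add_reference(ball)      # attribute: its shape

word_red = Thing('word "red"')    # how the attribute is spoken, read, or spelled
red.add_reference(word_red)
```

Because knowledge lives in the graph rather than in words or images alone, the same red Thing can be reached from the spoken word, a visual patch, or a tactile memory.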

 

You’ve reached the conclusion that brain simulation of general intelligence is a long way off while AGI may be (relatively) just around the corner. Based on this statement, should we move on from attempting to emulate or create a simulation of the human brain, and just focus on AGI?

Today’s deep learning and related technologies are great for appropriate applications but will not spontaneously lead to understanding. To take the next steps, we need to add techniques specifically targeted at solving the problems which are within the capacity of any three-year-old.

Taking advantage of the intrinsic abilities of our computers can be orders of magnitude more efficient than the biological equivalent or any simulation of it. For example, your brain can store information in the chemistry of biological synapses over several iterations requiring 10-100 milliseconds. A computer can simply store the new synapse value in a single memory cycle, a billion times faster.

In developing AGI software, I have done both biological neural simulation and more efficient algorithms. Carrying forward with the Universal Knowledge Store: when implemented with simulated biological neurons, each Thing requires a minimum of 10 neurons and usually many more. This puts the capacity of the human brain somewhere between ten and a hundred million Things. But perhaps an AGI would appear intelligent if it comprehends only one million Things—well within the scope of today’s high-end desktop computers.

 

A key unknown is how much of the robot’s time should be allocated to processing and reacting to the world versus time spent imagining and planning. Can you briefly explain the importance of imagination to an AGI?

We can imagine many things and then only act on the ones we like, those which further our internal goals, if you will. The real power of imagination is being able to predict the future—a three-year-old can figure out which sequences of motion will lead her to a goal in another room and an adult can speculate on which words will have the greatest impact on others.

An AGI similarly will benefit from going beyond being purely reactive to speculating on various complex actions and choosing the best.

 

You believe that Asimov’s three laws of robotics are too simple and ambiguous. In your book you shared some ideas for recommended laws to be programmed in robots. Which laws do you feel are most important for a robot to follow?

New “laws of robotics” will evolve over years as AGI emerges. I propose a few starters:

  1. Maximize internal knowledge and understanding of the environment.
  2. Share that knowledge accurately with others (both AGI and human).
  3. Maximize the well-being of both AGIs and humans as a whole—not just as an individual.

 

You have some issues with the Turing Test and the concept behind it. Can you explain how you believe the Turing Test is flawed?

The Turing Test has served us well for fifty years as an ad-hoc definition of general intelligence, but as AGI nears, we need to hone it into a clearer definition. The Turing Test is actually a test of how human one is, not how intelligent one is. The longer a computer can maintain the deception, the better it performs on the test. Obviously, asking the question, “Are you a computer?” and related proxy questions such as, “What is your favorite food?” are dead giveaways unless the AGI is programmed to deceive—a dubious objective at best.

Further, the Turing Test has motivated AI development into areas of limited value with (for example) chatbots with vast flexibility in responses but no underlying comprehension.

 

What would you do differently in your version of the Turing Test?

Better questions could probe specifically into the understanding of time, space, cause-and-effect, forethought, etc. rather than random questions without any particular basis in psychology, neuroscience, or AI. Here are some examples:

  1. What do you see right now? If you stepped back three feet, what differences would you see?
  2. If I [action], what would your reaction be?
  3. If you [action], what will my likely reactions be?
  4. Can you name three things which are like [object]?

Then, rather than evaluating responses by whether they are indistinguishable from human responses, they should be evaluated in terms of whether they are reasonable (intelligent) responses given the experience of the entity being tested.

 

You’ve stated that when faced with demands to perform some short-term destructive activity, properly programmed AGIs will simply refuse. How can we ensure that the AGI is properly programmed to begin with?

Decision-making is goal-based. In combination with an imagination, you (or an AGI) consider the outcome of different possible actions and choose the one which best achieves the goals. In humans, our goals are set by evolved instincts and our experience; an AGI’s goals are entirely up to the developers. We need to ensure that the goals of an AGI align with the goals of humanity as opposed to the personal goals of an individual. [Three possible goals as listed above.]

 

You’ve stated that it’s inevitable that humans will create an AGI. What’s your best estimate for a timeline?

Facets of AGI will begin to emerge within the coming decade, but we won’t all agree that AGI has arrived. Eventually, we will agree that AGI has arrived when machines exceed most human abilities by a substantial margin. That will take another two or three decades.

 

For all the talk of AGI, will it have real consciousness as we know it?

Consciousness manifests in a set of behaviors (which we can observe) that are based on an internal sensation (which we can’t observe). AGIs will manifest the behaviors; they need to in order to make intelligent decisions. But I contend that our internal sensation is largely dependent on our sensory hardware and instincts, so I can guarantee that whatever internal sensations an AGI might have, they will be different from a human’s.

The same can be said for emotions and our sense of free will. In making decisions, one’s belief in free will permeates every decision we make. If you don’t believe you have a choice, you simply react. For an AGI to make thoughtful decisions, it will likewise need to be aware of its own ability to make decisions.

Last question, do you believe that an AGI has more potential for good or bad?

I am optimistic that AGIs will help us to move forward as a species and bring us answers to many questions about the universe. The key will be for us to prepare and decide what our relationship will be with AGIs as we define their goals. If we decide to use the first AGIs as tools of conquest and enrichment, we shouldn’t be surprised if, down the road, they become their own tools of conquest and enrichment against us. If we choose that AGIs are tools of knowledge, exploration, and peace, then that’s what we’re likely to get in return. The choice is up to us.

Thank you for a fantastic interview exploring the future potential of building an AGI. Readers who wish to learn more may read ‘Will Computers Revolt?’ or visit Charles’ website futureai.guru.
