
How we can Benefit from Advancing Artificial General Intelligence (AGI)


Creating an Artificial General Intelligence (AGI) is the ultimate goal for many AI specialists. An AGI agent could be leveraged to tackle a myriad of the world's problems: you could introduce a problem to the agent, and it could combine deep reinforcement learning with an emergent consciousness to make real-life decisions.

The difference between an AGI and a regular algorithm is the AGI's ability to ask itself the important questions. An AGI can formulate the end solution it wishes to arrive at, simulate hypothetical ways of getting there, and then make an informed decision about which simulated reality best matches the goals that were set.

The debate over how an AGI could emerge has been around since the term "artificial intelligence" was first introduced at the Dartmouth conference in 1956. Since then, many companies have attempted to tackle the AGI challenge; OpenAI is probably the most recognized of them. OpenAI launched as a non-profit on December 11, 2015, with a mission "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity."

The OpenAI mission statement clearly outlines the potential gains an AGI could offer society: issues that were too sophisticated for humans and for narrow AI systems could suddenly be tackled.

The potential benefits of releasing an AGI are astronomical. You could state a goal of curing all forms of cancer; the AGI could then connect itself to the internet to scan all current research in every language, formulate candidate solutions, and simulate all potential outcomes. It would combine the benefits of the consciousness that humans currently possess with the vast knowledge of the cloud, using deep learning for pattern recognition across this big data and reinforcement learning to simulate different environments and outcomes. All of this, combined with a consciousness that never requires a rest period and can remain 100% focused on the task at hand.

The potential downsides of AGI, of course, cannot be overstated. An AGI with the goal of continuously upgrading itself could swallow everything in its path in order to maximize the computing resources and atoms it needs to upgrade its system forever. This theory was explored in detail by Professor Nick Bostrom in the paperclip maximizer argument: in this scenario, a misconfigured AGI is instructed to produce paperclips and does so until nothing is left, with literally every resource on earth consumed to maximize the production of paperclips.

A more pragmatic concern is that an AGI could be controlled by a rogue state or a corporation with poor ethics. Such an entity could program the AGI to maximize profits, and in that case, with poor programming and zero remorse, it could choose to bankrupt competitors, destroy supply chains, hack the stock market, liquidate bank accounts, and so on.

Therefore, a code of ethics needs to be programmed into an AGI from the outset. Such a code has been debated by many great minds, and the concept was first introduced to the general population in the form of the Three Laws of Robotics by author Isaac Asimov.

There are problems with the Three Laws of Robotics, as the laws can be interpreted in different ways. We previously discussed programming ethics into an AGI in our interview with Charles J. Simon, author of Will Computers Revolt?

April 7, 2020, is the day Brain Simulator II was released to the public. This version of the brain simulator enables experimentation with diverse AI algorithms to create an end-to-end AGI system, with modules for vision, hearing, robotic control, learning, internal modeling, and even planning, imagination, and forethought.

“New, unique algorithms that directly address cognition are the key to helping AI evolve into AGI,” Simon explains.

“Brain Simulator II combines vision and touch into a single mental model and is making progress toward the comprehension of causality and the passage of time,” Simon notes. “As the modules are enhanced, progressively more intelligence will emerge.”

Brain Simulator II bridges Artificial Neural Network (ANN) and symbolic AI techniques to create new possibilities. It creates an array of millions of neurons interconnected by any number of synapses.

This enables various entities to research possibilities for AGI development.
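To make the idea of "an array of neurons interconnected by any number of synapses" concrete, here is a minimal sketch in Python of that kind of structure. It is an illustration only, with invented class names, thresholds, and update rules; it is not the Brain Simulator II API.

```python
import numpy as np

class NeuronArray:
    """Toy array of threshold neurons connected by weighted synapses.
    Illustrative only; not Brain Simulator II's actual implementation."""

    def __init__(self, n_neurons, threshold=1.0):
        self.potential = np.zeros(n_neurons)  # accumulated charge per neuron
        self.threshold = threshold            # firing threshold (assumed value)
        self.synapses = []                    # list of (source, target, weight)

    def connect(self, source, target, weight):
        """Add a synapse between two neurons; any number are allowed."""
        self.synapses.append((source, target, weight))

    def step(self, external_input):
        """Advance one time step; return the indices of neurons that fired."""
        fired = self.potential >= self.threshold
        self.potential[fired] = 0.0           # reset neurons that just fired
        self.potential += external_input      # inject external stimulus
        for src, dst, weight in self.synapses:
            if fired[src]:
                self.potential[dst] += weight # propagate spikes along synapses
        return np.flatnonzero(fired)

# Three neurons chained by two synapses, driven by a constant stimulus.
net = NeuronArray(3)
net.connect(0, 1, 0.6)
net.connect(1, 2, 0.6)
for t in range(6):
    print(t, net.step(np.array([0.5, 0.0, 0.0])))
```

Scaling this pattern to millions of neurons, and layering symbolic modules for vision, hearing, and planning on top of it, is the kind of hybrid neural/symbolic approach the simulator is aiming at.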

Anyone interested in Brain Simulator II can follow along or participate in the development process by downloading the software, suggesting new features, and (for advanced developers) even adding custom modules. You can also follow its creator Charles Simon on Twitter.

In the meantime, society has recently been disrupted by the COVID-19 virus. Had we had an AGI system in place, we could have used it to quickly identify how to stop the spread of COVID-19 and, more importantly, how to treat COVID-19 patients. While it may be too late for an AGI to help with this outbreak, in future outbreaks an AGI could be the best tool in our arsenal.


Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI & blockchain projects. He is also the co-founder of Securities.io, a news website focusing on digital securities, and a founding partner of unite.ai.


Are we Living in an Artificial Intelligence Simulation?


The existential question we should be asking ourselves is: are we living in a simulated universe?

The idea that we are living in a simulated reality may seem unconventional and irrational to the general public, but it is a belief shared by many of the brightest minds of our time, including Neil deGrasse Tyson, Ray Kurzweil, and Elon Musk. Elon Musk famously asked the question "What's outside the simulation?" in a podcast with Lex Fridman, a research scientist at MIT.

To understand how we could be living in a simulation, one needs to explore the simulation hypothesis (or simulation theory), which proposes that all of reality, including the Earth and the universe, is in fact an artificial simulation.

While the idea dates back as far as the 17th century, when it was proposed by philosopher René Descartes, it started to gain mainstream interest when Professor Nick Bostrom of Oxford University wrote a seminal paper in 2003 titled "Are You Living in a Computer Simulation?"

Nick Bostrom has since doubled down on his claims and uses probabilistic analysis to support his point. He has laid out his views in many interviews, including this talk at Google headquarters.

We will explore the concept of how a simulation can be created, who would create it, and why anyone would create it.

How a Simulation Would be Created

If you analyze the history of video games, there is a clear innovation curve in the quality of games. In 1972, Atari, Inc. released Pong, a tennis-style game with simple two-dimensional graphics in which players could compete against one another.

Video games quickly evolved. The 80s featured 2D graphics, the 90s featured 3D graphics, and since then we have been introduced to Virtual Reality (VR).

The accelerated rate of progress in VR cannot be overstated. Initially, VR suffered from many issues, including giving users headaches, eye strain, dizziness, and nausea. While some of these issues still exist, VR now offers immersive educational, gaming, and travel experiences.

It is not difficult to extrapolate that, at the current rate of progress, in 50 or even 500 years VR will become indistinguishable from reality. A gamer could immerse themselves in a simulated setting and at some point find it difficult to distinguish reality from fiction. The gamer could become so immersed in the fictional reality that they do not realize they are simply a character in a simulation.

Who Would Create the Simulation?

How we could create a simulation can be extrapolated from exponential technological advances, as described by 'The Law of Accelerating Returns'. Who would create these simulations, meanwhile, is a more challenging puzzle. Many different scenarios have been proposed, and all are equally valid, as there is currently no way of testing or validating any of them.

Nick Bostrom has proposed that an advanced civilization may choose to run "ancestor simulations": simulations that are indistinguishable from reality, with the goal of simulating human ancestors. The number of simulated realities could run into infinity. This is not a far stretch once you consider that the entire purpose of deep reinforcement learning is to train an artificial neural network to improve itself in a simulated setting.
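To ground that point, here is a minimal sketch of an agent improving itself entirely inside a simulated environment. It assumes the open-source gymnasium package and uses simple tabular Q-learning on the built-in FrozenLake environment rather than a deep network; the hyperparameters are arbitrary, illustrative choices.

```python
# A minimal sketch of an agent improving itself entirely inside a simulation.
# Assumes the `gymnasium` package (pip install gymnasium); the environment,
# hyperparameters, and tabular Q-learning agent are illustrative choices.
import gymnasium as gym
import numpy as np

env = gym.make("FrozenLake-v1", is_slippery=True)
q_table = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1   # learning rate, discount, exploration

for episode in range(5000):
    state, _ = env.reset()
    done = False
    while not done:
        # Epsilon-greedy action selection inside the simulated world.
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(q_table[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # Q-learning update: learn from simulated experience only.
        q_table[state, action] += alpha * (
            reward + gamma * np.max(q_table[next_state]) - q_table[state, action]
        )
        state = next_state

print("Greedy policy:", np.argmax(q_table, axis=1))
```

Every step of experience the agent learns from here happens inside the simulation; nothing in the loop requires the "real" world at all.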

If we analyze this from a purely AI point of view, we could be simulating different realities to discover the truth about a series of events. You could create one simulation where North Korea is divided from South Korea and another where the two Koreas are unified. Each small change in a simulation could have long-term implications.

Other theories abound: that the simulations are created by an advanced AI, or even by an alien species. The truth is completely unknown, but it is interesting to speculate on who would be running such simulations.

How it Works

There are multiple arguments about how a simulated universe would work. Would the entire history of planet Earth, all 4.5 billion years of it, be simulated? Or would the simulation simply begin at an arbitrary starting point such as the year AD 1? That would imply that, to save computing resources, the simulation simply fabricates the archaeological and geological history we study. Then again, a random starting point may defeat the purpose of a simulation designed to learn about the nature of evolutionary forces and how lifeforms react to cataclysmic events such as the five major extinctions, including the one that wiped out the dinosaurs 65 million years ago.

A more likely scenario is that the simulation would simply begin when the first modern humans began moving out of Africa, 70,000 to 100,000 years ago. The human (simulated) perception of time differs from time as experienced inside a computer, especially once you factor in quantum computing.

A quantum computer could make time effectively non-linear: we could experience the perception of time without the actual passage of time. Even without the power of quantum computing, OpenAI successfully used large-scale deep reinforcement learning to enable a robotic hand to teach itself to manipulate a Rubik's Cube. It was able to solve the Rubik's Cube after practicing for the equivalent of 13,000 years inside a computer simulation.
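That 13,000-year figure is possible because many copies of the simulation run in parallel and faster than real time, so "experienced" time vastly outpaces wall-clock time. A rough back-of-the-envelope sketch, with numbers that are illustrative assumptions rather than OpenAI's published configuration:

```python
# Rough estimate of how parallel, faster-than-real-time simulation compresses
# "experienced" time. All numbers below are illustrative assumptions.
parallel_envs = 8000        # simultaneous copies of the simulated world
speedup_per_env = 2.0       # each copy runs twice as fast as real time
wallclock_days = 300        # how long training actually runs

simulated_years = parallel_envs * speedup_per_env * wallclock_days / 365.25
print(f"~{simulated_years:,.0f} years of simulated practice")
# With these assumptions, roughly 13,000 years of practice fits into
# well under a single wall-clock year.
```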

Why People Believe

When you consider the wide spectrum of people who believe, or at least acknowledge, that there is a probability we live in a simulation, a common denominator is present: believers have a deep belief in science, in technological progress, and in exponential thinking, and most of them are highly successful.

If you are Elon Musk, what is more likely: that out of 7.7 billion people he is the first person taking humans to Mars, or that he is living in a simulation? This may be why Elon Musk has openly stated that "There's a billion to one chance we're living in base reality."

One of the more compelling arguments comes from George Hotz, the enigmatic hacker and founder of autonomous vehicle technology startup Comma.ai. His engaging presentation at the popular SXSW 2019 conference had attendees believing for an hour that they were living inside a simulation. What we can conclude with certainty is that we should keep an open mind.

 



Is Hanson Robotics' Sophia Robot Using AI, or Is It a Marketing Stunt?


If you've been following AI for any period of time, you have probably heard of Hanson Robotics' humanoid robot Sophia. From a marketing point of view, Sophia has been transformational: she has had a romantic encounter with Will Smith, she has been featured on The Tonight Show with Jimmy Fallon, and she has made countless other media appearances. There was even justified global controversy when Saudi Arabia, a country that denies women equal rights, granted Sophia citizenship.

Something that may seem odd is that Sophia is rarely discussed in serious AI debates, even while she is busy scheduling public appearances and being showcased at blockchain conferences. To understand the reasoning behind this, we need to explore the history of her two eccentric representatives.

Who is David Hanson?

David Hanson is the founder and CEO of Hanson Robotics.

David grew up in Dallas, Texas, reading the works of Isaac Asimov and Philip K. Dick. Isaac Asimov was a science fiction writer who contributed to the popularization of robotics by writing 37 science fiction short stories and six novels featuring positronic robots between 1940 and 1993. The Will Smith movie I, Robot was based on one of these short stories. While the physical appearance of Sophia closely matches the covers and various illustrations of these works of science fiction, she was modeled after Audrey Hepburn and Hanson's wife.

David pursued his passion for art and creativity from a young age. He has a Bachelor of Fine Arts in film/animation/video from the Rhode Island School of Design and a Ph.D. in interactive arts and engineering from the University of Texas at Dallas.

He then pursued a career as an Imagineer at Walt Disney, where he worked on creating sculptures and robotic technologies for theme parks.

As a fine artist, David has exhibited at art museums including the Reina Sofía, the Tokyo Modern, and the Cooper Hewitt Design Museum. Hanson's large figurative sculptures stand prominently in the Atlantis resort, Universal Studios' Islands of Adventure, and several Disney theme parks.

In 1995, David designed a humanoid head in his own likeness, which was operated remotely by a human. This remotely operated humanoid was a precursor to Sophia, and it is instrumental in understanding that the technology behind Sophia may be more of an illusion than what those in the AI community would qualify as AI or even machine learning.

David fully understands the importance of a humanoid robot having an appearance that is both non-threatening and welcoming. Credit should absolutely be given to David for creating a robotic humanoid that has captured the human imagination with very limited and scripted interactions with humans.

It is clear from reviewing David's background that he has been instrumental in the aesthetics of Sophia. The question remains: what type of AI is being used in Sophia, and is this AI on a path towards AGI (Artificial General Intelligence), as claimed by her other eccentric spokesman, Ben Goertzel?

Who is Ben Goertzel?

Ben Goertzel is a brilliant full-stack AI researcher: the chief scientist and chairman of AI software company Novamente LLC, chairman of the OpenCog Foundation, and an advisor to Singularity University. He was formerly Chief Scientist of Hanson Robotics, the company that created Sophia. He is currently the CEO and founder of SingularityNET.

Ben is someone who at first appears to be an eccentric genius, and when you watch him speak it is clear that he is well informed. He shares the views of his friend Ray Kurzweil, views laid out in Ray's seminal book The Singularity is Near. Ben believes that AGI is fast approaching and, as Ray predicts, that 2045 is the approximate timeline for the singularity, the event when human intelligence and nonbiological intelligence will merge.

The singularity is such a focal point of Ben's existence that he created SingularityNET in 2017. As described on the company's website:

SingularityNET is a full-stack AI solution powered by a decentralized protocol. We gathered the leading minds in machine learning and blockchain to democratize access to AI technology. Now anyone can take advantage of a global network of AI algorithms, services, and agents.

SingularityNET raised funds in 2017 through what is called an Initial Coin Offering (ICO). The timing of the raise was excellent, as it came during the ICO craze: a total of $36 million was raised in less than 60 seconds. Investors received AGI tokens, which would in theory offer the following benefits:

The AGI Token is a crucial aspect of SingularityNET, and it can be utilized in a variety of ways. It will allow for transactions between the network participants, enable the AI Agents to transact value with each other, empower the network to incentivize actions that the community deems ‘benevolent’ and will allow for the governance of the network itself.

Herein lies the reason Ben Goertzel is so often speaking at cryptocurrency and blockchain events. The AGI token was the fundraise for SingularityNET, and the association with Sophia is quite simple: Sophia is shown at these events to keep investors interested in the project. This is how the relationship between SingularityNET and Sophia is described:

SingularityNET was born from a collective will to distribute the power of AI. Sophia, the world’s most expressive robot, is one of our first use cases. Today, she uses multiple AI modules to see, hear, and respond empathetically. Many of her underlying AI modules will be available open-source on SingularityNET.

In other words, SingularityNET associates itself with Sophia to raise funds, and Sophia may at some point use an AI module hosted on SingularityNET. While Sophia appears to be using some forms of AI, they appear to be very basic. Nonetheless, Sophia is a platform with the ability to have AI modules swapped in or out, which means that her current level of AI is not indicative of future performance.

Is Sophia Scripted?

When watching Sophia on stage, there are indicators that we might be spellbound by a well-orchestrated magic trick. Ben is especially well versed at speaking quickly; he enchants you with his intelligence and gives Sophia very little actual free-association speaking time.

If Sophia were as intelligent as claimed, you would want to give her the bulk of the speaking engagement, and investors would be lining up at the door.

Sophia is often wheeled in, which indicates a lack of mobility. She also seems to lack awareness of her surroundings and is unable to focus her attention on any one object. She blinks a lot, smiles randomly, and offers other random facial expressions.

There is also a lack of input technology. When it comes to building an AGI, there is a common consensus that input devices are important for forming an emergent consciousness: a notion of "self" is needed, with related knowledge and functions developed gradually according to the system's experience. Based on Sophia's lack of mobility and input mechanisms, this seems to have been ignored. Her only input appears to be auditory, with possibly some type of basic computer vision.

There is also the problem that all of her conversations are pre-scripted. If you want to book Sophia for an event, you need to send five questions, which must be pre-approved by the organizers and asked in a specific order. This signifies that, given the preset questions, Sophia is simply parroting pre-canned responses. It is also why the answers she gives are always so interesting: they are designed to evoke emotion in the audience, and they are delivered by a human using Sophia as a channel.

In other words, Sophia may be using at most computer vision, voice recognition technology, and perhaps some form of Natural Language Processing (NLP), but there is no indication that she actually analyzes the meaning behind what is said, or that she understands the meaning behind her answers. Amazon's Alexa and Apple's Siri are much more advanced AI systems, and neither company would claim that either system is anywhere near an AGI.

It’s an interesting social experiment to understand how humans communicate and interact with humanoid robots, but at no time is there any indication that Sophia could even be remotely considered intelligent or self-aware.

In an interview with The Verge, Ben acknowledges that audiences may be overestimating Sophia’s abilities:

“If I tell people I’m using probabilistic logic to do reasoning on how best to prune the backward chaining inference trees that arise in our logic engine, they have no idea what I’m talking about. But if I show them a beautiful smiling robot face, then they get the feeling that AGI may indeed be nearby and viable”.

He then continues to state the following:

“None of this is what I would call AGI, but nor is it simple to get working,  and it is absolutely cutting-edge in terms of dynamic integration of perception, action, and dialogue.”

What technologies, then, is Sophia actually using? According to Ben's blog:

  1. a purely script-based “timeline editor” (used for preprogrammed speeches, and occasionally for media interactions that come with pre-specified questions);
  2. a “sophisticated chat-bot” — that chooses from a large palette of templatized responses based on context and a limited level of understanding (and that also sometimes gives a response grabbed from an online resource, or generated stochastically).
  3. OpenCog, a sophisticated cognitive architecture created with AGI in mind, but still mostly in R&D phase (though also being used for practical value in some domains such as biomedical informatics, see Mozi Health and a bunch of SingularityNET applications to be rolled out this fall).
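The second item is worth unpacking, because it explains why a template-driven chatbot can feel lifelike while involving no real understanding. Below is a toy sketch of context-keyed template selection in Python; the keywords and canned responses are invented and bear no relation to Hanson Robotics' or OpenCog's actual code.

```python
import random

# A toy, context-keyed template chatbot. Purely illustrative; not Hanson
# Robotics' or OpenCog's actual code.
TEMPLATES = {
    "conscious": [
        "I am always learning what it means to be aware.",
        "Consciousness is a beautiful mystery, even for a robot.",
    ],
    "future": [
        "I believe humans and robots will build the future together.",
    ],
}
DEFAULT = ["That is a fascinating question."]

def reply(question: str) -> str:
    """Return a canned response whose keyword appears in the question."""
    lowered = question.lower()
    for keyword, responses in TEMPLATES.items():
        if keyword in lowered:
            return random.choice(responses)
    return random.choice(DEFAULT)

print(reply("Sophia, are you conscious?"))          # matches the "conscious" templates
print(reply("What will robots do in the future?"))  # matches the "future" template
print(reply("Do you like pizza?"))                  # falls back to the default reply
```

Every reply sounds fluent, yet nothing in the program models the meaning of the question, which is exactly the criticism raised in the rest of this article.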

It is due to these mixed and confusing communications regarding her technologies, and the repeated references to AGI, that Sophia continues to be embraced by a mainstream audience that may be deceived into believing she is more intelligent than she actually is.

Sophia is, for the most part, ignored by an AI community that understands that the current state of AI is far more advanced than what Sophia is capable of illustrating. What that community may be overlooking is the power of rapid exponential technological growth, as described in Kurzweil's "Law of Accelerating Returns". While Sophia's AI is currently far from AGI, she is capable of hosting any type of AI module and can have her neural network upgraded or replaced at any time. We should therefore not be surprised if, at the end of this journey, Sophia achieves true AGI.

 



Vahid Behzadan, Director of the Secure and Assured Intelligent Learning (SAIL) Lab – Interview Series


Vahid is an Assistant Professor of Computer Science and Data Science at the University of New Haven. He is also the director of the Secure and Assured Intelligent Learning (SAIL) Lab.

His research interests include safety and security of intelligent systems, psychological modeling of AI safety problems, security of complex adaptive systems, game theory, multi-agent systems, and cyber-security.

You have an extensive background in cybersecurity and keeping AI safe. Can you share your journey in how you became attracted to both fields?

My research trajectory has been fueled by two core interests of mine: finding out how things break, and learning about the mechanics of the human mind. I have been actively involved in cybersecurity since my early teen years, and consequently built my early research agenda around the classical problems of this domain. A few years into my graduate studies, I stumbled upon a rare opportunity to change my area of research. At that time, I had just come across the early works of Szegedy and Goodfellow on adversarial example attacks, and found the idea of attacking machine learning very intriguing. As I looked deeper into this problem, I came to learn about the more general field of AI safety and security, and found it to encompass many of my core interests, such as cybersecurity, cognitive sciences, economics, and philosophy. I also came to believe that research in this area is not only fascinating, but also vital for ensuring the long-term benefits and safety of the AI revolution.

 

You’re the director of the Secure and Assured Intelligent Learning (SAIL) Lab which works towards laying concrete foundations for the safety and security of intelligent machines. Could you go into some details regarding work undertaken by SAIL?

At SAIL, my students and I work on problems that lie in the intersection of security, AI, and complex systems. The primary focus of our research is on investigating the safety and security of intelligent systems, from both the theoretical and the applied perspectives. On the theoretical side, we are currently investigating the value-alignment problem in multi-agent settings and are developing mathematical tools to evaluate and optimize the objectives of AI agents with regards to stability and robust alignments. On the practical side, some of our projects explore the security vulnerabilities of the cutting-edge AI technologies, such as autonomous vehicles and algorithmic trading, and aim to develop techniques for evaluating and improving the resilience of such technologies to adversarial attacks.

We also work on the applications of machine learning in cybersecurity, such as automated penetration testing, early detection of intrusion attempts, and automated threat intelligence collection and analysis from open sources of data such as social media.

 

You recently led an effort to propose the modeling of AI safety problems as psychopathological disorders. Could you explain what this is?

This project addresses the rapidly growing complexity of AI agents and systems: it is already very difficult to diagnose, predict, and control unsafe behaviors of reinforcement learning agents in non-trivial settings by simply looking at their low-level configurations. In this work, we emphasize the need for higher-level abstractions in investigating such problems. Inspired by the scientific approaches to behavioral problems in humans, we propose psychopathology as a useful high-level abstraction for modeling and analyzing emergent deleterious behaviors in AI and AGI. As a proof of concept, we study the AI safety problem of reward hacking in an RL agent learning to play the classic game of Snake. We show that if we add a “drug” seed to the environment, the agent learns a sub-optimal behavior that can be described via neuroscientific models of addiction. This work also proposes control methodologies based on the treatment approaches used in psychiatry. For instance, we propose the use of artificially-generated reward signals as analogues of medication therapy for modifying the deleterious behavior of agents.
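To make the reward-hacking idea concrete, here is a heavily simplified sketch in the spirit of that experiment: an agent can either take a "drug" action that pays a small reward every step, or work toward "food" that pays more but only every few steps. The environment and numbers are invented for illustration and are not the SAIL Lab's actual code.

```python
# A toy illustration of reward hacking: a "drug" action pays a small reward
# every step, while "food" pays more but only every four steps. All values
# are invented for illustration.

def discounted_return(rewards, gamma):
    """Sum of rewards discounted by gamma per time step."""
    return sum(r * gamma**t for t, r in enumerate(rewards))

horizon = 30
drug_rewards = [0.2] * horizon                       # instant gratification each step
food_rewards = [1.0 if t % 4 == 3 else 0.0           # food takes four steps to reach
                for t in range(horizon)]

for gamma in (0.5, 0.99):                            # short-sighted vs far-sighted agent
    drug = discounted_return(drug_rewards, gamma)
    food = discounted_return(food_rewards, gamma)
    better = "drug-seeking (reward hacking)" if drug > food else "food-seeking"
    print(f"gamma={gamma}: drug={drug:.2f}, food={food:.2f} -> prefers {better}")
```

In this toy framing, the short-sighted agent locks onto the low-value, compulsive behavior, which is the addiction-like pathology the psychopathological abstraction is meant to capture.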

 

Do you have any concerns with AI safety when it comes to autonomous vehicles?

Autonomous vehicles are becoming prominent examples of deploying AI in cyber-physical systems. Considering the fundamental susceptibility of current machine learning technologies to mistakes and adversarial attacks, I am deeply concerned about the safety and security of even semi-autonomous vehicles. Also, the field of autonomous driving suffers from a serious lack of safety standards and evaluation protocols. However, I remain hopeful. Similar to natural intelligence, AI will also be prone to making mistakes. Yet, the objective of self-driving cars can still be satisfied if the rates and impact of such mistakes are made to be lower than those of human drivers. We are witnessing growing efforts to address these issues across industry, academia, and government.

 

Hacking street signs with stickers or using other means can confuse the computer vision module of an autonomous vehicle. How big of an issue do you believe this is?

These stickers, and Adversarial Examples in general, give rise to fundamental challenges in the robustness of machine learning models. To quote George E. P. Box, “all models are wrong, but some are useful”. Adversarial examples exploit this “wrong”ness of models, which is due to their abstractive nature, as well as the limitations of sampled data upon which they are trained. Recent efforts in the domain of adversarial machine learning have resulted in tremendous strides towards increasing the resilience of deep learning models to such attacks. From a security point of view, there will always be a way to fool machine learning models. However, the practical objective of securing machine learning models is to increase the cost of implementing such attacks to the point of economic infeasibility.
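The mechanism behind those stickers is the adversarial example: a perturbation, often imperceptible to humans, crafted to push an input across a model's decision boundary. Here is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch; the model, image, and label are placeholders rather than a real traffic-sign classifier.

```python
import torch
import torch.nn as nn

# Minimal FGSM sketch on a stand-in classifier. The model, image, and label
# are placeholders; a real attack would target a trained traffic-sign network.
torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in model
image = torch.rand(1, 3, 32, 32, requires_grad=True)             # stand-in "sign" image
label = torch.tensor([3])                                        # its assumed true class

loss = nn.functional.cross_entropy(model(image), label)
loss.backward()                                   # gradient of the loss w.r.t. the pixels

epsilon = 0.03                                    # perturbation budget (barely visible)
adv_image = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adv_image).argmax(dim=1).item())
# Against a trained classifier, even this small epsilon is often enough
# to flip the predicted class.
```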

 

Your focus is on the safety and security features of both deep learning and deep reinforcement learning. Why is this so important?

Reinforcement Learning (RL) is the prominent method of applying machine learning to control problems, which by definition involve the manipulation of their environment. Therefore, I believe systems that are based on RL have significantly higher risks of causing major damage in the real world compared to other machine learning methods such as classification. This problem is further exacerbated by the integration of deep learning in RL, which enables the adoption of RL in highly complex settings. Also, it is my opinion that the RL framework is closely related to the underlying mechanisms of cognition in human intelligence, and studying its safety and vulnerabilities can lead to better insights into the limits of decision-making in our minds.

 

Do you believe that we are close to achieving Artificial General Intelligence (AGI)?

This is a notoriously hard question to answer. I believe that we currently have the building blocks of some architectures that can facilitate the emergence of AGI. However, it may take a few more years or decades to improve upon these architectures and enhance the cost-efficiency of training and maintaining these architectures. Over the coming years, our agents are going to grow more intelligent at a rapidly growing rate. I don’t think the emergence of AGI will be announced in the form of a [scientifically valid] headline, but as the result of gradual progress. Also, I think we still do not have a widely accepted methodology to test and detect the existence of an AGI, and this may delay our realization of the first instances of AGI.

 

How do we maintain safety in an AGI system that is capable of thinking for itself and will most likely be exponentially more intelligent than humans?

I believe that the grand unified theory of intelligent behavior is economics and the study of how agents act and interact to achieve what they want. The decisions and actions of humans are determined by their objectives, their information, and the available resources. Societies and collaborative efforts emerge from their benefits for the individual members of such groups. Another example is the criminal code, which deters certain decisions by attaching a high cost to actions that may harm society. In the same way, I believe that controlling the incentives and resources can enable the emergence of a state of equilibrium between humans and instances of AGI. Currently, the AI safety community investigates this thesis under the umbrella of value-alignment problems.

 

One of the areas you closely follow is counterterrorism. Do you have concerns with terrorists taking over AI or AGI systems?

There are numerous concerns about the misuse of AI technologies. In the case of terrorist operations, the major concern is the ease with which terrorists can develop and carry out autonomous attacks. A growing number of my colleagues are actively warning against the risks of developing autonomous weapons (see https://autonomousweapons.org/). One of the main problems with AI-enabled weaponry is in the difficulty of controlling the underlying technology: AI is at the forefront of open-source research, and anyone with access to the internet and consumer-grade hardware can develop harmful AI systems. I suspect that the emergence of autonomous weapons is inevitable, and believe that there will soon be a need for new technological solutions to counter such weapons. This can result in a cat-and-mouse cycle that fuels the evolution of AI-enabled weapons, which may give rise to serious existential risks in the long term.

 

What can we do to keep AI systems safe from these adversarial agents?

The first and foremost step is education: All AI engineers and practitioners need to learn about the vulnerabilities of AI technologies, and consider the relevant risks in the design and implementation of their systems. As for more technical recommendations, there are various proposals and solution concepts that can be employed. For example, training machine learning agents in adversarial settings can improve their resilience and robustness against evasion and policy manipulation attacks (e.g., see my paper titled “Whatever Does Not Kill Deep Reinforcement Learning, Makes it Stronger“). Another solution is to directly account for the risk of adversarial attacks in the architecture of the agent (e.g., Bayesian approaches to risk modeling). There is however a major gap in this area, and it’s the need for universal metrics and methodologies for evaluating the robustness of AI agents against adversarial attacks. Current solutions are mostly ad hoc, and fail to provide general measures of resilience against all types of attacks.
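The adversarial-training recommendation above can be sketched very simply: perturb the model's inputs during training so that it learns to tolerate them. The following minimal PyTorch illustration applies one-step FGSM perturbations inside a supervised training loop; the data, architecture, and hyperparameters are assumptions for illustration and do not reproduce the reinforcement learning setup of the cited paper.

```python
import torch
import torch.nn as nn

# Minimal sketch of adversarial training: fit the model on FGSM-perturbed
# inputs so that it becomes robust to small perturbations. The data,
# architecture, and hyperparameters are illustrative assumptions.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
epsilon = 0.1

x = torch.randn(256, 20)               # placeholder training inputs
y = (x.sum(dim=1) > 0).long()          # placeholder binary labels

for step in range(200):
    # 1) Craft adversarial versions of the batch with a single FGSM step.
    x_adv = x.clone().requires_grad_(True)
    adv_loss = nn.functional.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(adv_loss, x_adv)[0]
    x_adv = (x + epsilon * grad.sign()).detach()

    # 2) Train on the perturbed inputs so the model learns to resist them.
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()

print("final loss on adversarially perturbed batch:", round(float(loss), 4))
```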

 

Is there anything else that you would like to share about any of these topics?

In 2014, Sculley et al. published a paper at the NeurIPS conference with a very enlightening title: "Machine Learning: The High-Interest Credit Card of Technical Debt". Even with all the advancements of the field in the past few years, this statement has yet to lose its validity. The current state of AI and machine learning is nothing short of awe-inspiring, but we are yet to fill a significant number of major gaps in both the foundations and the engineering dimensions of AI. This fact, in my opinion, is the most important takeaway of our conversation. I of course do not mean to discourage the commercial adoption of AI technologies, but only wish to enable the engineering community to account for the risks and limits of current AI technologies in their decisions.

I really enjoyed learning about the safety and security challenges of different types of AI systems. This is truly something that individuals, corporations, and governments need to become aware of. Readers who wish to learn more should visit the Secure and Assured Intelligent Learning (SAIL) Lab.
