Charles J. Simon, Author, Will Computers Revolt? – Interview Series

Charles J. Simon, BSEE, MSCS, is a nationally recognized entrepreneur, software developer, and manager. With broad management and technical expertise and degrees in both Electrical Engineering and Computer Science, Mr. Simon has many years of industry computer experience, including pioneering work in AI and two generations of CAD.

He is also the author of ‘Will Computers Revolt?’, which offers an in-depth look at the future possibility of Artificial General Intelligence (AGI).

What was it that originally attracted you to AI, and specifically to AGI?

I’ve been fascinated by the question, “Can machines think?” ever since I first read Alan Turing’s seminal 1950 paper which begins with that question. So far, the answer is clearly, “No,” but there is no scientific reason why not. I joined the AI community with the initial neural network boom in the late 1980s and since then AI has made great strides. But the intervening thirty years haven’t brought understanding to our machines, an ability which would catapult numerous apps to new levels of usefulness.

 

You stated that you share the opinion of MIT AI expert Rodney Brooks, who says that ‘without interaction with an environment – without a robotic body, if you will – machines will never exhibit AGI.’ This is basically stating that without sufficient inputs from a robotic body, an AI will never develop AGI capabilities. Outside of computer vision, what types of inputs are needed to develop AGI?

Today’s AI needs to be augmented with basic concepts like the physical existence of objects in a reality, the passage of time, and cause and effect—concepts clear to any three-year-old. A toddler uses multiple senses to learn these concepts by touching and manipulating toys, moving through the home, learning language, etc. While it is possible to create an AGI with more limited senses, just as there are deaf people and blind people who are perfectly intelligent, more senses and more ways to interact make solving the AGI problem easier.

For completeness, my simulator can provide senses of smell and taste. It remains to be seen whether these will also prove important to AGI.

 

You stated that ‘A Key Requirement for intelligence is an environment which is external to the intelligence’. The example you gave is that ‘it is unreasonable to expect IBM’s Watson to “understand” anything if it has no underlying idea of what a “thing” is’. This clearly plays into the current limitations of narrow AI, especially natural language processing. How can AI developers best overcome this limitation?

A key factor is storing knowledge not as something specifically verbal, visual, or tactile, but as abstract “Things” which can have verbal, visual, and tactile attributes. Consider something as simple as the phrase “a red ball”. You know what these words mean because of your visual and tactile experiences. You also know the meaning of related actions like throwing, bouncing, kicking, etc., which all come to mind to some extent when you hear the phrase. Any AI system which is specifically word-based or specifically image-based will miss out on the other levels of understanding.

I have implemented a Universal Knowledge Store which stores any kind of information in a brain-like structure where Things are analogous to neurons and have many attribute references to other Things—references are analogous to synapses. Thus, red and ball are individual Things and a red ball is a Thing which has attribute references to the red Thing and the ball Thing. Both red and ball have references to the corresponding Things for the words “red” and “ball”, each of which, in turn, have references to other Things which define how the words are heard, spoken, read, or spelled as well as possible action Things.
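
To make the structure concrete, here is a minimal sketch of a Thing-based store in Python. The class names and methods are illustrative only; they follow the description above rather than the actual Universal Knowledge Store implementation.

```python
# Minimal illustrative sketch of a Thing-based knowledge store.
# "Thing" and the attribute references below follow the description in the
# interview; the actual Universal Knowledge Store implementation may differ.

class Thing:
    def __init__(self, label=None):
        self.label = label        # optional human-readable tag, e.g. "red"
        self.references = []      # analogous to synapses: links to other Things

    def add_reference(self, other):
        self.references.append(other)


class UniversalKnowledgeStore:
    def __init__(self):
        self.things = []

    def add_thing(self, label=None, references=()):
        thing = Thing(label)
        for ref in references:
            thing.add_reference(ref)
        self.things.append(thing)
        return thing


uks = UniversalKnowledgeStore()
red = uks.add_thing("red")
ball = uks.add_thing("ball")
word_red = uks.add_thing('word:"red"')
word_ball = uks.add_thing('word:"ball"')
red.add_reference(word_red)                          # red links to the word "red"
ball.add_reference(word_ball)                        # ball links to the word "ball"
red_ball = uks.add_thing("red ball", [red, ball])    # a red ball references both
```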

 

You’ve reached the conclusion that brain simulation of general intelligence is a long way off while AGI may be (relatively) just around the corner. Based on this statement, should we move on from attempting to emulate or create a simulation of the human brain, and just focus on AGI?

Today’s deep learning and related technologies are great for appropriate applications but will not spontaneously lead to understanding. To take the next steps, we need to add techniques specifically targeted at solving the problems which are within the capacity of any three-year-old.

Taking advantage of the intrinsic abilities of our computers can be orders of magnitude more efficient than the biological equivalent or any simulation of it. For example, your brain can store information in the chemistry of biological synapses over several iterations requiring 10-100 milliseconds. A computer can simply store the new synapse value in a single memory cycle, a billion times faster.

In developing AGI software, I have done both biological neural simulation and more efficient algorithms. Carrying forward with the Universal Knowledge Store, when implemented with simulated biological neurons, each Thing requires a minimum of 10 neurons and usually many more. This puts the capacity of the human brain somewhere between ten and a hundred million Things. But perhaps an AGI would appear intelligent if it comprehends only one million Things—well within the scope of today’s high-end desktop computers.
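
As a rough back-of-the-envelope check of that capacity estimate (the neuron count and the neurons-per-Thing figures below are illustrative assumptions, not numbers taken from the interview):

```python
# Back-of-the-envelope check of the capacity estimate above. The neuron count
# and the neurons-per-Thing figures are illustrative assumptions, not numbers
# taken from the interview.
human_neurons = 86e9                       # commonly cited estimate for the human brain

for neurons_per_thing in (1_000, 10_000):  # "a minimum of 10 and usually many more"
    things = human_neurons / neurons_per_thing
    print(f"{neurons_per_thing:>6} neurons per Thing -> ~{things:,.0f} Things")

# Roughly 1,000-10,000 neurons per Thing reproduces the ten-to-one-hundred-million
# range quoted above; a one-million-Thing AGI needs far less than that.
```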

 

A key unknown is how much of the robot’s time should be allocated to processing and reacting to the world versus time spent imagining and planning. Can you briefly explain the importance of imagination to an AGI?

We can imagine many things and then only act on the ones we like, those which further our internal goals, if you will. The real power of imagination is being able to predict the future—a three-year-old can figure out which sequences of motion will lead her to a goal in another room and an adult can speculate on which words will have the greatest impact on others.

An AGI similarly will benefit from going beyond being purely reactive to speculating on various complex actions and choosing the best.
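
A minimal sketch of that idea in Python: imagine each candidate action with a stand-in world model, score the predicted outcome against a goal, and act only on the best. The action names, the simulate() helper, and the scoring function are hypothetical illustrations, not part of any system described here.

```python
# Illustrative sketch of imagining before acting: predict the outcome of each
# candidate action, score it against a goal, and act only on the best one.
# The actions, simulate() stand-in world model, and scoring are hypothetical.

def simulate(state, action):
    """Predict the next state if `action` were taken (a stand-in world model)."""
    return state + action["effect"]

def goal_score(state, goal):
    """Higher is better: negative distance from the goal."""
    return -abs(goal - state)

def choose_action(state, actions, goal):
    # Imagine each candidate, evaluate the imagined outcome, pick the best.
    return max(actions, key=lambda a: goal_score(simulate(state, a), goal))

actions = [{"name": "step_left", "effect": -1},
           {"name": "stay", "effect": 0},
           {"name": "step_right", "effect": +1}]

print(choose_action(state=2, actions=actions, goal=5)["name"])  # -> step_right
```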

 

You believe that Asimov’s three laws of robotics are too simple and ambiguous. In your book you shared some ideas for recommended laws to be programmed in robots. Which laws do you feel are most important for a robot to follow?

New “laws of robotics” will evolve over years as AGI emerges. I propose a few starters:

  1. Maximize internal knowledge and understanding of the environment.
  2. Share that knowledge accurately with others (both AGI and human).
  3. Maximize the well-being of both AGIs and humans as a whole—not just as an individual.

 

You have some issues with the Turing Test and the concept behind it. Can you explain how you believe the Turing Test is flawed?

The Turing Test has served us well for fifty years as an ad-hoc definition of general intelligence, but as AGI nears, we need a clearer definition. The Turing Test is actually a test of how human one is, not how intelligent one is. The longer a computer can maintain the deception, the better it performs on the test. Obviously, asking the question, “Are you a computer?” and related proxy questions such as, “What is your favorite food?” are dead giveaways unless the AGI is programmed to deceive—a dubious objective at best.

Further, the Turing Test has motivated AI development into areas of limited value with (for example) chatbots with vast flexibility in responses but no underlying comprehension.

 

What would you do differently in your version of the Turing Test?

Better questions could probe specifically into the understanding of time, space, cause-and-effect, forethought, etc. rather than random questions without any particular basis in psychology, neuroscience, or AI. Here are some examples:

  1. What do you see right now? If you stepped back three feet, what differences would you see?
  2. If I [action], what would your reaction be?
  3. If you [action], what will my likely reactions be?
  4. Can you name three things which are like [object]?

Then, rather than judging whether the responses are indistinguishable from human responses, they should be evaluated in terms of whether they are reasonable (intelligent) responses given the experience of the entity being tested.

 

You’ve stated that when faced with demands to perform some short-term destructive activity, properly programmed AGIs will simply refuse. How can we ensure that the AGI is properly programmed to begin with?

Decision-making is goal-based. In combination with an imagination, you (or an AGI) consider the outcome of different possible actions and choose the one which best achieves the goals. In humans, our goals are set by evolved instincts and our experience; an AGI’s goals are entirely up to the developers. We need to ensure that the goals of an AGI align with the goals of humanity as opposed to the personal goals of an individual. [Three possible goals as listed above.]

 

You’ve stated that it’s inevitable that humans will create an AGI. What’s your best estimate for a timeline?

Facets of AGI will begin to emerge within the coming decade, but we won’t all agree that AGI has arrived. Eventually, we will agree that AGI has arrived when machines exceed most human abilities by a substantial margin. This will take two or three decades longer.

 

For all the talk of AGI, will it have real consciousness as we know it?

Consciousness manifests in a set of behaviors (which we can observe) which are based on an internal sensation (which we can’t observe).  AGIs will manifest the behaviors; they need to in order to make intelligent decisions. But I contend that our internal sensation is largely dependent on our sensory hardware and instincts and so I can guarantee that whatever internal sensations an AGI might have, they will be different from a human’s.

The same can be said for emotions and our sense of free will. In making decisions, one’s belief in free will permeates every decision we make. If you don’t believe you have a choice, you simply react. For an AGI to make thoughtful decisions, it will likewise need to be aware of its own ability to make decisions.

Last question: do you believe that an AGI has more potential for good or bad?

I am optimistic that AGIs will help us to move forward as a species and bring us answers to many questions about the universe. The key will be for us to prepare and decide what our relationship will be with AGIs as we define their goals. If we decide to use the first AGIs as tools of conquest and enrichment, we shouldn’t be surprised if, down the road, they become their own tools of conquest and enrichment against us. If we choose that AGIs are tools of knowledge, exploration, and peace, then that’s what we’re likely to get in return. The choice is up to us.

Thank you for a fantastic interview exploring the future potential of building an AGI. Readers who wish to learn more may read ‘Will Computers Revolt?’ or visit Charles’ website futureai.guru.

Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI & blockchain projects. He is also the Co-Founder of Securities.io, a news website focusing on digital securities, and a founding partner of unite.ai.

Vahid Behzadan, Director of Secured and Assured Intelligent Learning (SAIL) Lab – Interview Series

Vahid is an Assistant Professor of Computer Science and Data Science at the University of New Haven. He is also the director of the Secure and Assured Intelligent Learning (SAIL) Lab.

His research interests include safety and security of intelligent systems, psychological modeling of AI safety problems, security of complex adaptive systems, game theory, multi-agent systems, and cyber-security.

You have an extensive background in cybersecurity and keeping AI safe. Can you share your journey in how you became attracted to both fields?

My research trajectory has been fueled by two core interests of mine: finding out how things break, and learning about the mechanics of the human mind. I have been actively involved in cybersecurity since my early teen years, and consequently built my early research agenda around the classical problems of this domain. A few years into my graduate studies, I stumbled upon a rare opportunity to change my area of research. At that time, I had just come across the early works of Szegedy and Goodfellow on adversarial example attacks, and found the idea of attacking machine learning very intriguing. As I looked deeper into this problem, I came to learn about the more general field of AI safety and security, and found it to encompass many of my core interests, such as cybersecurity, cognitive sciences, economics, and philosophy. I also came to believe that research in this area is not only fascinating, but also vital for ensuring the long-term benefits and safety of the AI revolution.

 

You’re the director of the Secure and Assured Intelligent Learning (SAIL) Lab which works towards laying concrete foundations for the safety and security of intelligent machines. Could you go into some details regarding work undertaken by SAIL?

At SAIL, my students and I work on problems that lie at the intersection of security, AI, and complex systems. The primary focus of our research is on investigating the safety and security of intelligent systems, from both the theoretical and the applied perspectives. On the theoretical side, we are currently investigating the value-alignment problem in multi-agent settings and are developing mathematical tools to evaluate and optimize the objectives of AI agents with regard to stability and robust alignment. On the practical side, some of our projects explore the security vulnerabilities of cutting-edge AI technologies, such as autonomous vehicles and algorithmic trading, and aim to develop techniques for evaluating and improving the resilience of such technologies to adversarial attacks.

We also work on the applications of machine learning in cybersecurity, such as automated penetration testing, early detection of intrusion attempts, and automated threat intelligence collection and analysis from open sources of data such as social media.

 

You recently led an effort to propose the modeling of AI safety problems as psychopathological disorders. Could you explain what this is?

This project addresses the rapidly growing complexity of AI agents and systems: it is already very difficult to diagnose, predict, and control unsafe behaviors of reinforcement learning agents in non-trivial settings by simply looking at their low-level configurations. In this work, we emphasize the need for higher-level abstractions in investigating such problems. Inspired by the scientific approaches to behavioral problems in humans, we propose psychopathology as a useful high-level abstraction for modeling and analyzing emergent deleterious behaviors in AI and AGI. As a proof of concept, we study the AI safety problem of reward hacking in an RL agent learning to play the classic game of Snake. We show that if we add a “drug” seed to the environment, the agent learns a sub-optimal behavior that can be described via neuroscientific models of addiction. This work also proposes control methodologies based on the treatment approaches used in psychiatry. For instance, we propose the use of artificially-generated reward signals as analogues of medication therapy for modifying the deleterious behavior of agents.
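
A small illustration of why such “drug”-seeking behavior can be learned at all: under discounting, a modest reward that is available immediately and repeatedly can out-compete a larger but delayed reward. The numbers below are invented for illustration and are far simpler than the Snake environment studied in the paper.

```python
# Illustrative numbers only: why the "drug"-seeking policy can win. A small,
# immediately repeatable reward can out-compete a larger but delayed one once
# future rewards are discounted. The real Snake experiment is far richer.
gamma = 0.95

def discounted_return(rewards, gamma=gamma):
    return sum(r * gamma**t for t, r in enumerate(rewards))

intended = [0.0] * 9 + [1.0]   # work toward the real goal, reward after 10 steps
hacking  = [0.3] * 10          # loiter at the "drug" seed for small, immediate rewards

print(discounted_return(intended))  # ~0.63
print(discounted_return(hacking))   # ~2.41 -> the hacked policy looks better to the agent
```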

 

Do you have any concerns with AI safety when it comes to autonomous vehicles?

Autonomous vehicles are becoming prominent examples of deploying AI in cyber-physical systems. Considering the fundamental susceptibility of current machine learning technologies to mistakes and adversarial attacks, I am deeply concerned about the safety and security of even semi-autonomous vehicles. Also, the field of autonomous driving suffers from a serious lack of safety standards and evaluation protocols. However, I remain hopeful. Similar to natural intelligence, AI will also be prone to making mistakes. Yet, the objective of self-driving cars can still be satisfied if the rates and impact of such mistakes are made to be lower than those of human drivers. We are witnessing growing efforts to address these issues in the industry and academia, as well as the governments.

 

Hacking street signs with stickers or using other means can confuse the computer vision module of an autonomous vehicle. How big of an issue do you believe this is?

These stickers, and Adversarial Examples in general, give rise to fundamental challenges in the robustness of machine learning models. To quote George E. P. Box, “all models are wrong, but some are useful”. Adversarial examples exploit this “wrong”ness of models, which is due to their abstractive nature, as well as the limitations of sampled data upon which they are trained. Recent efforts in the domain of adversarial machine learning have resulted in tremendous strides towards increasing the resilience of deep learning models to such attacks. From a security point of view, there will always be a way to fool machine learning models. However, the practical objective of securing machine learning models is to increase the cost of implementing such attacks to the point of economic infeasibility.
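
The sticker attacks mentioned above rest on the same gradient-based idea as classic adversarial example methods such as the fast gradient sign method (FGSM). Below is a minimal NumPy sketch on a toy logistic-regression “model”; the weights, input, and epsilon are made up for illustration and are not drawn from any system discussed here.

```python
import numpy as np

# Minimal FGSM-style sketch against a toy logistic-regression "model".
# The weights, input, and epsilon are made up for illustration.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.5, -2.0, 0.5])   # toy model weights
b = 0.1
x = np.array([0.2, -0.4, 0.7])   # clean input whose true label is y = 1
y = 1.0

# Gradient of the cross-entropy loss with respect to the *input*:
# dL/dx = (sigmoid(w.x + b) - y) * w
grad_x = (sigmoid(w @ x + b) - y) * w

# FGSM: step a small amount in the direction that increases the loss.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print("clean prediction      :", sigmoid(w @ x + b))      # ~0.83, correct class
print("adversarial prediction:", sigmoid(w @ x_adv + b))  # ~0.39, now favors the wrong class
```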

 

Your focus is on the safety and security features of both deep learning and deep reinforcement learning. Why is this so important?

Reinforcement Learning (RL) is the prominent method of applying machine learning to control problems, which by definition involve the manipulation of their environment. Therefore, I believe systems that are based on RL have significantly higher risks of causing major damages in the real-world compared to other machine learning methods such as classification. This problem is further exacerbated with the integration of Deep learning in RL, which enables the adoption of RL in highly complex settings. Also, it is my opinion that the RL framework is closely related to the underlying mechanisms of cognition in human intelligence, and studying its safety and vulnerabilities can lead to better insights into the limits of decision-making in our minds.

 

Do you believe that we are close to achieving Artificial General Intelligence (AGI)?

This is a notoriously hard question to answer. I believe that we currently have the building blocks of some architectures that can facilitate the emergence of AGI. However, it may take a few more years or decades to improve upon these architectures and enhance the cost-efficiency of training and maintaining these architectures. Over the coming years, our agents are going to grow more intelligent at a rapidly growing rate. I don’t think the emergence of AGI will be announced in the form of a [scientifically valid] headline, but as the result of gradual progress. Also, I think we still do not have a widely accepted methodology to test and detect the existence of an AGI, and this may delay our realization of the first instances of AGI.

 

How do we maintain safety in an AGI system that is capable of thinking for itself and will most likely be exponentially more intelligent than humans?

I believe that the grand unified theory of intelligent behavior is economics, the study of how agents act and interact to achieve what they want. The decisions and actions of humans are determined by their objectives, their information, and the available resources. Societies and collaborative efforts emerge from their benefits to the individual members of such groups. Another example is the criminal code, which deters certain decisions by attaching a high cost to actions that may harm society. In the same way, I believe that controlling incentives and resources can enable the emergence of a state of equilibrium between humans and instances of AGI. Currently, the AI safety community investigates this thesis under the umbrella of value-alignment problems.

 

One of the areas you closely follow is counterterrorism. Do you have concerns with terrorists taking over AI or AGI systems?

There are numerous concerns about the misuse of AI technologies. In the case of terrorist operations, the major concern is the ease with which terrorists can develop and carry out autonomous attacks. A growing number of my colleagues are actively warning against the risks of developing autonomous weapons (see https://autonomousweapons.org/). One of the main problems with AI-enabled weaponry is the difficulty of controlling the underlying technology: AI is at the forefront of open-source research, and anyone with access to the internet and consumer-grade hardware can develop harmful AI systems. I suspect that the emergence of autonomous weapons is inevitable, and I believe that there will soon be a need for new technological solutions to counter such weapons. This can result in a cat-and-mouse cycle that fuels the evolution of AI-enabled weapons, which may give rise to serious existential risks in the long term.

 

What can we do to keep AI systems safe from these adversarial agents?

The first and foremost step is education: all AI engineers and practitioners need to learn about the vulnerabilities of AI technologies and consider the relevant risks in the design and implementation of their systems. As for more technical recommendations, there are various proposals and solution concepts that can be employed. For example, training machine learning agents in adversarial settings can improve their resilience and robustness against evasion and policy manipulation attacks (e.g., see my paper titled “Whatever Does Not Kill Deep Reinforcement Learning, Makes it Stronger”). Another solution is to directly account for the risk of adversarial attacks in the architecture of the agent (e.g., Bayesian approaches to risk modeling). There is, however, a major gap in this area: the need for universal metrics and methodologies for evaluating the robustness of AI agents against adversarial attacks. Current solutions are mostly ad hoc and fail to provide general measures of resilience against all types of attacks.
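
As a rough picture of the adversarial-training idea mentioned above, here is a generic sketch in which each gradient step also trains on FGSM-perturbed copies of the batch. The synthetic data, epsilon, and learning rate are illustrative assumptions, and this is a generic sketch rather than the method of the paper cited.

```python
import numpy as np

# Generic sketch of adversarial training on a toy logistic-regression task:
# each gradient step also trains on FGSM-perturbed copies of the batch.
# The synthetic data, epsilon, and learning rate are illustrative assumptions,
# not the method of the paper cited above.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -1.0, 0.5]) > 0).astype(float)   # synthetic labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.zeros(3), 0.0
lr, eps = 0.1, 0.2

for _ in range(200):
    # FGSM perturbation of the training set against the current model
    grad_x = (sigmoid(X @ w + b) - y)[:, None] * w
    X_adv = X + eps * np.sign(grad_x)

    # Train on clean and adversarial examples together
    X_aug = np.vstack([X, X_adv])
    y_aug = np.concatenate([y, y])
    err = sigmoid(X_aug @ w + b) - y_aug
    w -= lr * X_aug.T @ err / len(y_aug)
    b -= lr * err.mean()

print("clean accuracy after adversarial training:",
      ((sigmoid(X @ w + b) > 0.5) == y.astype(bool)).mean())
```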

 

Is there anything else that you would like to share about any of these topics?

In 2014, Sculley et al. published a paper at the NeurIPS conference with a very enlightening title: “Machine Learning: The High-Interest Credit Card of Technical Debt”. Even with all the advancements of the field in the past few years, this statement has yet to lose its validity. The current state of AI and machine learning is nothing short of awe-inspiring, but we have yet to fill a significant number of major gaps in both the foundations and the engineering dimensions of AI. This fact, in my opinion, is the most important takeaway of our conversation. I of course do not mean to discourage the commercial adoption of AI technologies, but only wish to enable the engineering community to account for the risks and limits of current AI technologies in their decisions.

I really enjoyed learning about the safety and security challenges of different types of AI systems. This is truly something that individuals, corporations, and governments need to become aware of. Readers who wish to learn more should visit the Secure and Assured Intelligent Learning (SAIL) Lab.

How we can Benefit from Advancing Artificial General Intelligence (AGI)

Creating an Artificial General Intelligence (AGI) is the ultimate endpoint for many AI specialists. An AGI agent could be leveraged to tackle a myriad of the world’s problems. For instance, you could introduce a problem to an AGI agent, and the AGI could use deep reinforcement learning combined with its emergent consciousness to make real-life decisions.

The difference between an AGI and a regular algorithm is the AGI’s ability to ask itself the important questions. An AGI can formulate the end solution it wishes to arrive at, simulate hypothetical ways of getting there, and then make an informed decision on which simulated reality best matches the goals that were set.

The debate on how an AGI might emerge has been around since the term “artificial intelligence” was first introduced at the Dartmouth conference in 1956. Since then, many companies have attempted to tackle the AGI challenge; OpenAI is probably the most recognized. OpenAI was launched as a non-profit on December 11, 2015, with the mission of ensuring “that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.”

The OpenAI mission statement clearly outlines the potential gains that an AGI can offer society. Suddenly, issues which were too sophisticated for humans and regular AI systems can be tackled.

The potential benefits of releasing an AGI are astronomical. You could state a goal of curing all forms of cancer, and the AGI could then connect itself to the internet to scan all the current research in every language. The AGI could take on the problem of formulating solutions and then simulate all potential outcomes. It would combine the benefits of consciousness that humans currently possess with the infinite knowledge of the cloud, use deep learning for pattern recognition on this big data, and use reinforcement learning to simulate different environments and outcomes. All of this combined with a consciousness that never requires a rest period and can be 100% focused on the task at hand.

The potential downsides of AGI, of course, cannot be overstated. You could have an AGI with the goal of continuously upgrading itself, which could then swallow everything in its path in order to maximize the computing resources and atoms it needs to forever upgrade its system. This theory was explored in detail by Professor Nick Bostrom in the Paperclip Maximizer argument: in this scenario, a misconfigured AGI is instructed to produce paperclips and does so until nothing is left; literally every resource on earth has been consumed to maximize the production of paperclips.

A more pragmatic viewpoint is that an AGI could be controlled by a rogue state or a corporation with poor ethics. This entity could program the AGI to maximize profits, and in that case, with poor programming and zero remorse, it could choose to bankrupt competitors, destroy supply chains, hack the stock market, liquidate bank accounts, and more.

Therefore, a code of ethics needs to be programmed into an AGI from the outset. A code of ethics has been debated by many minds, and the concept was first introduced to the general population in the form of the 3 laws of robotics by author Isaac Asimov.

There are some problems with the 3 laws of robotics, as the laws can be interpreted in different ways. We previously discussed programming ethics into an AGI in our interview with Charles J. Simon, author of Will Computers Revolt?

April 7, 2020, is the day that Brain Simulator II was released to the public. This version of the brain simulator enables experimentation into diverse AI algorithms to create an end-to-end AGI system with modules for vision, hearing, robotic control, learning, internal modeling, and even planning, imagination, and forethought.

“New, unique algorithms that directly address cognition are the key to helping AI evolve into AGI,” Simon explains.

“Brain Simulator II combines vision and touch into a single mental model and is making progress toward the comprehension of causality and the passage of time,” Simon notes. “As the modules are enhanced, progressively more intelligence will emerge.”

Brain Simulator II bridges Artificial Neural Networks (ANN) and Symbolic AI techniques to create new possibilities. It creates an array of millions of neurons interconnected by any number of synapses.

This enables various entities to research possibilities for AGI development.
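
As a way to picture “an array of neurons interconnected by synapses,” here is a generic integrate-and-fire-style sketch stepped synchronously. It is an illustration of the general idea only, scaled down to a thousand neurons; it is not the Brain Simulator II API or its actual neuron model.

```python
import numpy as np

# Generic sketch of an array of neurons connected by weighted synapses and
# stepped synchronously. This illustrates the general idea only (scaled down
# to 1,000 neurons); it is not the Brain Simulator II API or its neuron model.

n_neurons, threshold = 1_000, 1.0
rng = np.random.default_rng(1)

# Sparse random synapses: each neuron receives ~10 weighted inputs.
weights = np.zeros((n_neurons, n_neurons))
for post in range(n_neurons):
    pre = rng.choice(n_neurons, size=10, replace=False)
    weights[post, pre] = rng.uniform(0.1, 0.5, size=10)

charge = np.zeros(n_neurons)
firing = np.zeros(n_neurons, dtype=bool)

for step in range(10):
    stimulus = 0.3 * (rng.random(n_neurons) < 0.5)   # external input to random neurons
    charge += stimulus + weights @ firing            # accumulate external + synaptic input
    firing = charge >= threshold                     # neurons that reach threshold fire...
    charge[firing] = 0.0                             # ...and reset their charge
    print(f"step {step}: {int(firing.sum())} neurons firing")
```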

Anyone interested in Brain Simulator II can follow along or participate in the development process by downloading the software, suggesting new features, and (for advanced developers) even adding custom modules. You can also follow its creator Charles Simon on Twitter.

In the meantime, society has recently been disrupted by the COVID-19 virus. Had an AGI system been in place, we could have used it to quickly identify how to stop the spread of COVID-19 and, more importantly, how to treat COVID-19 patients. While it may be too late for an AGI to help with this outbreak, in future outbreaks an AGI could be the best tool in our arsenal.

Noah Schwartz, Co-Founder & CEO of Quorum AI – Interview Series

Noah is an AI systems architect. Prior to founding Quorum AI, Noah spent 12 years in academic research, first at the University of Southern California and most recently at Northwestern as the Assistant Chair of Neurobiology. His work focused on information processing in the brain and he has translated his research into products in augmented reality, brain-computer interfaces, computer vision, and embedded robotics control systems.

Your interest in AI and robotics started as a little boy. How were you first introduced to these technologies?

The initial spark came from science fiction movies and a love for electronics. I remember watching the movie Tron as an 8-year-old, followed by Electric Dreams, Short Circuit, DARYL, War Games, and others over the next few years. Although it was presented through fiction, the very idea of artificial intelligence blew me away. And even though I was only 8 years old, I felt this immediate connection and an intense pull toward AI that has never diminished in the time since.

 

How did your passions for both evolve?

My interest in AI and robotics developed in parallel with a passion for the brain. My dad was a biology teacher and would teach me about the body, how everything worked, and how it was all connected. Looking at AI and looking at the brain felt like the same problem to me – or at least, they had the same ultimate question, which was, How is that working? I was interested in both, but I didn’t get much exposure to AI or robotics in school. For that reason, I initially pursued AI on my own time and studied biology and psychology in school.

When I got to college, I discovered the Parallel Distributed Processing (PDP) books, which was huge for me. They were my first introduction to actual AI, which then led me back to the classics such as Hebb, Rosenblatt, and even McCulloch and Pitts. I started building neural networks based on neuroanatomy and what I learned from biology and psychology classes in school. After graduating, I worked as a computer network engineer, building complex, wide-area-networks, and writing software to automate and manage traffic flow on those networks – kind of like building large brains. The work reignited my passion for AI and motivated me to head to grad school to study AI and neuroscience, and the rest is history.

 

Prior to founding Quorum AI, you spent 12 years in academic research, first at the University of Southern California and most recently at Northwestern as the Assistant Chair of Neurobiology. At the time your work focused on information processing in the brain. Could you walk us through some of this research?

In a broad sense, my research was trying to understand the question: How does the brain do what it does using only what it has available? For starters, I don’t subscribe to the idea that the brain is a type of computer (in the von Neumann sense). I see it as a massive network that mostly performs stimulus-response and signal-encoding operations. Within that massive network there are clear patterns of connectivity between functionally specialized areas. As we zoom in, we see that neurons don’t care what signal they’re carrying or what part of the brain they’re in – they operate based on very predictable rules. So if we want to understand the function of these specialized areas, we need to ask a few questions: (1) As an input travels through the network, how does that input converge with other inputs to produce a decision? (2) How does the structure of those specialized areas form as a result of experience? And (3) how do they continue to change as we use our brains and learn over time? My research tried to address these questions using a mixture of experimental research combined with information theory and modeling and simulation – something that could enable us to build artificial decision systems and AI. In neurobiology terms, I studied neuroplasticity and microanatomy of specialized areas like the visual cortex.

 

You then translated your work into augmented reality, and brain-computer interfaces. What were some of the products you worked on?

Around 2008, I was working on a project that we would now call augmented reality, but back then, it was just a system for tracking and predicting eye movements, and then using those predictions to update something on the screen. To make the system work in realtime, I built a biologically-inspired model that predicted where the viewer would look based on their microsaccades – tiny eye movements that occur just before you move your eye. Using this model, I could predict where the viewer would look, then update the frame buffer in the graphics card while their eyes were still in motion. By the time their eyes reached that new location on the screen, the image was already updated. This ran on an ordinary desktop computer in 2008, without any lag. The tech was pretty amazing, but the project didn’t get through to the next round of funding, so it died.

In 2011, I made a more focused effort at product development and built a neural network that could perform feature discovery on streaming EEG data that we measured from the scalp. This is the core function of most brain-computer interface systems. The project was also an experiment in how small of a footprint could we get this running on? We had a headset that read a few channels of EEG data at 400Hz that were sent via Bluetooth to an Android phone for feature discovery and classification, then sent to an Arduino-powered controller that we retrofitted into an off-the-shelf RC car. When in use, an individual who was wearing the EEG headset could drive and steer the car by changing their thoughts from doing mental math to singing a song. The algorithm ran on the phone and created a personalized brain “fingerprint” for each user, enabling them to switch between a variety of robotic devices without having to retrain on each device. The tagline we came up with was “Brain Control Meets Plug-and-Play.”
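
For readers curious what such a pipeline can look like, here is an illustrative sketch: windowed EEG samples are reduced to frequency-band power features and classified against per-user centroids to produce a drive command. The 400 Hz rate matches the description above, but the synthetic signals, band choices, labels, and nearest-centroid classifier are assumptions for illustration, not the actual system.

```python
import numpy as np

# Illustrative sketch of this kind of pipeline: windowed EEG samples ->
# frequency-band power features -> a simple per-user classifier -> a drive
# command. The 400 Hz rate matches the description above, but the synthetic
# signals, band choices, labels, and nearest-centroid classifier are
# assumptions for illustration, not the actual system.

fs = 400                 # samples per second
win = fs                 # one-second analysis window

def band_power(window, lo, hi, fs=fs):
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1 / fs)
    return spectrum[(freqs >= lo) & (freqs < hi)].sum()

def features(window):
    # Classic EEG bands: theta, alpha, beta
    return np.array([band_power(window, 4, 8),
                     band_power(window, 8, 13),
                     band_power(window, 13, 30)])

def classify(window, centroids):
    # Nearest centroid against a per-user "fingerprint"
    feats = features(window)
    label = min(centroids, key=lambda k: np.linalg.norm(feats - centroids[k]))
    return {"mental_math": "drive_forward", "singing": "stop"}[label]

# Synthetic stand-ins for calibrated user data
t = np.arange(win) / fs
centroids = {"mental_math": features(np.sin(2 * np.pi * 20 * t)),   # beta-heavy
             "singing":     features(np.sin(2 * np.pi * 10 * t))}   # alpha-heavy

noisy = np.sin(2 * np.pi * 20 * t) + 0.1 * np.random.randn(win)
print(classify(noisy, centroids))   # -> "drive_forward"
```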

In 2012, we extended the system so it operated in a much more distributed manner on smaller hardware. We used it to control a multi-segment, multi-joint robotic arm in which each segment was controlled by an independent processor that ran an embedded version of the AI. Instead of using a centralized controller to manipulate the arm, we allowed the segments to self-organize and reach their target in a swarm-like, distributed manner. In other words, like ants forming an ant bridge, the arm segments would cooperate to reach some target in space.

We continued moving in this same direction when we first launched Quorum AI – originally known as Quorum Robotics – back in 2013. We quickly realized that the system was awesome because of the algorithm and architecture, not the hardware, so in late 2014, we pivoted completely into software. Now, 8 years later, Quorum AI is coming full-circle, back to those robotics roots by applying our framework to the NASA Space Robotics Challenge.

 

Quitting your job as a professor to launch a start-up had to have been a difficult decision. What inspired you to do this?

It was a massive leap for me in a lot of ways, but once the opportunity came up and the path became clear, it was an easy decision. When you’re a professor, you think in multi-year timeframes and you work on very long-range research goals. Launching a start-up is the exact opposite of that. However, one thing that academic life and start-up life have in common is that both require you to learn and solve problems constantly. In a start-up, that could mean trying to re-engineer a solution to reduce product development risk or maybe studying a new vertical that could benefit from our tech. Working in AI is the closest thing to a “calling” as I’ve ever felt, so despite all the challenges and the ups and downs, I feel immensely lucky to be doing the work that I do.

 

You’ve since then developed Quorum AI, which develops realtime, distributed artificial intelligence for all devices and platforms. Could you elaborate on what exactly this AI platform does?

The platform is called the Environment for Virtual Agents (EVA), and it enables users to build, train, and deploy models using our Engram AI Engine. Engram is a flexible and portable wrapper that we built around our unsupervised learning algorithms. The algorithms are so efficient that they can learn in realtime, as the model is generating predictions. Because the algorithms are task-agnostic, there is no explicit input or output to the model, so predictions can be made in a Bayesian manner for any dimension without retraining and without suffering from catastrophic forgetting. The models are also transparent and decomposable, meaning they can be examined and broken apart into individual dimensions without losing what has been learned.

Once built, the models can be deployed through EVA to any type of platform, ranging from custom embedded hardware up to the cloud. EVA (and the embeddable host software) also contains several tools to extend the functionality of each model. A few quick examples: models can be shared between systems through a publication/subscription system, enabling distributed systems to achieve federated learning over both time and space. Models can also be deployed as autonomous agents to perform arbitrary tasks, and because the model is task-agnostic, the task can be changed during runtime without retraining. Each individual agent can be extended with a private “virtual” EVA, enabling the agent to simulate models of other agents in a scale-free manner. Finally, we’ve created some wrappers for deep learning and reinforcement learning (Keras-based) systems to enable these models to operate on the platform, in concert with more flexible Engram-based systems.
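
To picture the publish/subscribe model sharing described above, here is a generic sketch of distributed learners exchanging model updates through a broker and merging them by averaging. This is not the EVA API; the broker, the one-parameter "model", and the merge rule are illustrative assumptions.

```python
from collections import defaultdict

# Generic publish/subscribe sketch of distributed learners sharing model
# updates through a broker and merging them by averaging. This is not the
# EVA API; the broker, one-parameter "model", and merge rule are illustrative.

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, agent):
        self.subscribers[topic].append(agent)

    def publish(self, topic, model, sender):
        for agent in self.subscribers[topic]:
            if agent is not sender:
                agent.receive(model)

class Agent:
    def __init__(self, name, broker, topic="shared-model"):
        self.name, self.broker, self.topic = name, broker, topic
        self.model = {"weight": 0.0}        # trivial one-parameter "model"
        broker.subscribe(topic, self)

    def learn_locally(self, delta):
        self.model["weight"] += delta       # stand-in for local training
        self.broker.publish(self.topic, dict(self.model), sender=self)

    def receive(self, remote_model):
        # Merge by averaging with the remote model (one simple federation rule)
        self.model["weight"] = (self.model["weight"] + remote_model["weight"]) / 2

broker = Broker()
a, b = Agent("a", broker), Agent("b", broker)
a.learn_locally(+1.0)
b.learn_locally(-0.5)
print(a.model, b.model)   # -> {'weight': 0.5} {'weight': 0.0}
```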

 

You’ve previously described the Quorum AI algorithms as “mathematical poetry”. What did you mean by this?

When you’re building a model, whether you’re modeling the brain or you’re modeling sales data for your enterprise, you start by taking an inventory of your data, then you try out known classes of models to try and approximate the system. In essence, you are creating rough sketches of the system to see what looks best. You don’t expect things to fit the data very well, and there’s some trial and error as you test different hypotheses about how the system works, but with some finesse, you can capture the data pretty well.

As I was modeling neuroplasticity in the brain, I started with the usual approach of mapping out all the molecular pathways, transition states, and dynamics that I thought would matter. But I found that when I reduced the system to its most basic components and arranged those components in a particular way, the model got more and more accurate until it fit the data almost perfectly. It was as if every operator and variable in the equations was exactly what it needed to be: there was nothing extra, and everything was essential to fitting the data.

When I plugged the model into larger and larger simulations, like visual system development or face recognition, for instance, it was able to form extremely complicated connectivity patterns that matched what we see in the brain. Because the model was mathematical, those brain patterns could be understood through mathematical analysis, giving new insight into what the brain is learning. Since then, we’ve solved and simplified the differential equations that make up the model, improving computational efficiency by multiple orders of magnitude. It may not be actual poetry, but it sure felt like it!

 

Quorum AI’s platform toolkit enables devices to connect to one another to learn and share data without needing to communicate through cloud-based servers. What are the advantages of doing it this way versus using the cloud?

We give users the option of putting their AI anywhere they want, without compromising the functionality of the AI. The status quo in AI development is that companies are usually forced to compromise security, privacy, or functionality because their only option is to use cloud-based AI services. If companies do try to build their own AI in-house, it often requires a lot of money and time, and the ROI is rarely worth the risk. If companies want to deploy AI to individual devices that are not cloud-connected, the project quickly becomes impossible. As a result, AI adoption becomes a fantasy.

Our platform makes AI accessible and affordable, giving companies a way to explore AI development and adoption without the technical or financial overhead. Moreover, our platform enables users to go from development to deployment in one seamless step.

Our platform also integrates with and extends the shelf-life of other “legacy” models like deep learning or reinforcement learning, helping companies repurpose and integrate existing systems into newer applications. Similarly, because our algorithms and architectures are unique, our models are not black boxes, so anything that the system learns can be explored and interpreted by humans, and then extended to other areas of business.

 

It’s believed by some that Distributed Artificial Intelligence (DAI), could lead the way to Artificial General Intelligence (AGI). Do you subscribe to this theory?

I do, and not just because that’s the path we’ve set out for ourselves! When you look at the brain, it’s not a monolithic system. It’s made up of separate, distributed systems that each specialize in a narrow range of brain functions. We may not know what a particular system is doing, but we know that its decisions depend significantly on the type of information it’s receiving and how that information changes over time. (This is why neuroscience topics like the connectome are so popular.)

In my opinion, if we want to build AI that is flexible and that behaves and performs like the brain, then it makes sense to consider distributed architectures like those that we see in the brain. One could argue that deep learning architectures like multi-layer networks or CNNs can be found in the brain, and that’s true, but those architectures are based on what we knew about the brain 50 years ago.

The alternative to DAI is to continue iterating on monolithic, inflexible architectures that are tightly coupled to a single decision space, like those that we see in deep learning or reinforcement learning (or any supervised learning method, for that matter). I would suggest that these limitations are not just a matter of parameter tweaking or adding layers or data conditioning – these issues are fundamental to deep learning and reinforcement learning, at least as we define them today, so new approaches are required if we’re going to continue innovating and building the AI of tomorrow.

 

Do you believe that achieving AGI using DAI is more likely than reinforcement learning and/or deep learning methods that are currently being pursued by companies such as OpenAI and DeepMind?

Yes, although from what they’re blogging about, I suspect OpenAI and DeepMind are using more distributed architectures than they let on. We’re starting to hear more about multi-system challenges like transfer learning or federated/distributed learning, and coincidentally, about how deep learning and reinforcement learning approaches aren’t going to work for these challenges. We’re also starting to hear from pioneers like Yoshua Bengio about how biologically-inspired architectures could bridge the gap! I’ve been working on biologically-inspired AI for almost 20 years, so I feel very good about what we’ve learned at Quorum AI and how we’re using it to build what we believe is the next generation of AI that will overcome these limitations.

 

Is there anything else that you would like to share about Quorum AI?

We will be previewing our new platform for distributed and agent-based AI at the Federated and Distributed Machine Learning Conference in June 2020. During the talk, I plan to present some recent data on several topics, including sentiment analysis as a bridge to achieving empathic AI.

I would like to give a special thank you to Noah for these amazing answers, and I recommend that you visit Quorum AI to learn more.
