

Charles Simon, Author of Brain Simulator II – Interview Series


Charles Simon is the author of Brain Simulator II, a companion book to Brain Simulator II, a free, open-source software project aimed at creating an end-to-end Artificial General Intelligence (AGI) system.

The original Brain Simulator software was released in 1988, an eternity in the software world. How much of a leap forward is Brain Simulator II compared to its predecessor?

Today’s system is over a million times faster. The original was written in FORTRAN, ran on an IBM AT clone, supported a fixed array of 1,200 neurons, and computed about two cycles per second. Today’s program can run on a network and process 2.5 billion synapses per second on a powerful desktop CPU.

This book is about Brain Simulator II, an open-source software project aimed at creating end-to-end artificial intelligence. What type of coding experience is needed to run this software?

No experience needed. If you aren’t a programmer, you can spend time with the Brain Simulator and come away with an understanding of the capabilities and limitations of neurons, a bit about knowledge representation, and even build your own limited networks. If you are a programmer, you’ll follow the more in-depth technical explanations and build your own modules to extend the system to more advanced AGI strategies.

Why is returning to the biologically inspired roots of AI important to achieving AGI?

In the 1980s the thinking was that if we could just build a big enough neural network, it would spontaneously become intelligent. Over the intervening forty years, this scenario has become increasingly implausible. So, if classic AI approaches haven’t panned out for AGI, let’s look at some different approaches, and the only working AGI model we have is the human brain.

At the same time, there’s no reason for slavish adherence to biological plausibility. For example, we know that our brains can estimate distances to objects based on slight differences in the images received by our two eyes, the basis for 3D movies. We don’t know how this works in the brain, so instead I’ve programmed this functionality in a module which estimates distances using a few lines of trigonometry. We can be pretty sure your brain doesn’t work this way, but the trig approach is likely faster and more accurate.
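The trigonometry Simon alludes to can be sketched in a few lines. The following is an illustrative Python toy, not the Brain Simulator’s actual module: it assumes a simple pinhole-camera model, where distance falls out of the camera separation (baseline), the focal length in pixels, and the pixel disparity between the two images.

```python
def estimate_distance(baseline_m, focal_px, disparity_px):
    """Estimate distance to an object from the pixel disparity
    between two camera images (pinhole-camera stereo model):
    distance = baseline * focal_length / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_px / disparity_px

# Two "eyes" 6.5 cm apart, 800 px focal length, 20 px disparity:
print(estimate_distance(0.065, 800, 20))  # 2.6 (metres)
```

Note how the estimate degrades gracefully with distance: halving the disparity doubles the computed range, which matches the intuition that stereo vision is most precise for nearby objects.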

You state in the book that an AGI requires robotics. Why is this so important?

Consider trying to explain color to a blind person or music to a deaf person. If a prospective AGI is just a program on a computer, how can it get a basic understanding of things any three-year-old knows? The child has a point of view and is surrounded by reality. The child knows that objects exist in that reality and that many of them can be manipulated. By playing with blocks a child can learn about shape, size, solidity, gravity, visual occlusion, distance, and on and on. With autonomous motion, vision, and manipulators, an AGI can learn about reality on a more fundamental level than any program which relies only on mountains of text and image data.

After a robotic AGI has acquired a fundamental grasp of objects in reality, that knowledge can be cloned into non-robotic thinking machines and the understanding will persist, just as someone who loses the sense of sight or hearing can still understand things in a way that a person who has never had those senses cannot.

One important aspect of Brain Simulator II is that it uses no backpropagation. What is the rationale for not adopting this methodology?

Your brain operates without backpropagation, so AGI must be possible without it. In fact, backpropagation is fundamentally incompatible with a biological model because it relies on being able to sense and modify synapse weights with considerable precision. After some time with the Brain Simulator, you’ll conclude that setting synapse weights with any degree of precision is very difficult and that accurately sensing what those weights are is impossible. The fundamental problem is that firing neurons modify synapse weights, but there is no way to detect a synapse weight without firing neurons, so a synapse weight cannot be sensed without modifying it.

Backpropagation has no biological analog, but I consider it to be an extremely powerful statistical method. Lots of people are working with it, some with excellent results. My point is to try out some different approaches. By using spiking neurons combined with plug-in software modules, I’m looking at the problems of AGI from a different perspective.

When the brain is probed, there appears to be disorder and randomness. Is this something that we need to introduce into a software system for true AGI to emerge?

I don’t think so. When you look at individual neurons and synapses, their function is quite deterministic, as is the transistor’s. In the brain, things look random because the noise levels are quite high and the information components aren’t in any apparent order. But consider your vision: you can read text with clarity, and there is no disorder or randomness in the reading process. So we can conclude that at least your visual cortex is reasonably reliable and repeatable. Yet, when probed, it looks just as disordered as the rest of the brain. The rest of the brain is therefore likely as reliable and repeatable as the visual cortex; we just don’t see the organization and order yet. It’s a bit like reading Chinese: to me it’s disordered, semi-random markings, but to someone who can read the language there is an absolute organization. We just can’t read the internal language of the brain yet.

You introduce a concept called the Universal Knowledge Store (UKS), could you briefly discuss what this is and why it matters?

Thinking back to the question of robotics, you can see that one facet of general intelligence is the ability to integrate knowledge from various senses. You know about a block because you can see it, touch it, and hear words about it. All of this represents information about a block. So for an AGI to have similar abilities, it must have a general storage mechanism which can handle a wide variety of disparate information and create useful relationships between the various items. The UKS is a knowledge graph built in a very general way so that it can handle ANY kind of information and ANY kind of relationship.

The UKS can store the spatial information needed for the maze application along with the decision and outcome tree used to traverse the maze to achieve a goal. The same structure is used to associate words with colors. This kind of generality is fundamental to AGI.
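The generality Simon describes, where relationship types are themselves nodes in the graph rather than hard-coded link types, can be sketched as a tiny Python toy. All names here are illustrative; the real UKS lives in the Brain Simulator II codebase and this is not its actual API.

```python
class Thing:
    """A node in a minimal UKS-style knowledge store.
    Relationship types (like "is-a") are Things too, so new
    kinds of relationships can be added at runtime."""
    def __init__(self, label):
        self.label = label
        self.relationships = []  # list of (relation Thing, target Thing)

    def add_relationship(self, relation, target):
        self.relationships.append((relation, target))

    def related_by(self, relation):
        """All targets this Thing links to via the given relation."""
        return [t for r, t in self.relationships if r is relation]

def things_that(relation, target, universe):
    """Reverse query: all Things linked to `target` via `relation`."""
    return [x for x in universe if target in x.related_by(relation)]

# "red is-a color, blue is-a color" -- the relation is just another node:
is_a = Thing("is-a")
color = Thing("color")
red, blue = Thing("red"), Thing("blue")
red.add_relationship(is_a, color)
blue.add_relationship(is_a, color)

print([t.label for t in red.related_by(is_a)])                    # ['color']
print([x.label for x in things_that(is_a, color, [red, blue])])   # ['red', 'blue']
```

Because `is_a` is an ordinary node, a system like this could in principle acquire a new relationship type (nearer/farther, before/after) simply by creating another Thing, without any change to the code.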

What is your time horizon for AGI to emerge?

It’s difficult to say. We already have the hardware necessary for AGI, and I believe a single breakthrough is all that’s needed; it could come at any time. Let me try to describe that breakthrough:

Consider that if all you know is that red is-a color and blue is-a color, I can ask you to name some colors and you can say red and blue. The question is, how can an AGI learn that the “is-a” relationship is itself a thing? I could program such a relationship easily, but then my AGI won’t be able to learn new relationships as they are encountered. A child can learn about relationships of nearer/farther, bigger/smaller, sooner/later, before/after, and so on. But these rely on even more fundamental concepts of size, distance, time, and more.

How can a head full of neurons learn all this truly fundamental stuff? This ties back to the need for robotics. How can an AGI learn the concept of distance if it can’t go anywhere or reach for anything? It also ties back to the need for universal storage. How can an AGI comprehend going somewhere which combines the concepts of location and time? Going somewhere is relatively straightforward. Understanding what it means to be going somewhere is much more difficult. I believe these truly fundamental questions are all manifestations of the same underlying problem and the solution to that problem is the necessary breakthrough.

Not very many people are working on this question, largely because it’s so difficult to pitch a project which, if truly successful, will have the capabilities of a three-year-old after three years, and the capabilities of a ten-year-old after a decade. So the solution is likely to come from smaller independent researchers who have the time and energy to devote to problems with no short-term return.

Is there anything else that you would like to share about Brain Simulator II or AGI in general?

When you try to use neurons and synapses to design circuits which address these fundamental problems, you conclude that rather than a concept being represented by a few dozen synapses, each requires a few dozen neurons. This means that instead of the brain’s capacity being many billions of things as is commonly held, it is limited to comprehending tens or hundreds of millions of things. With this in mind, a nascent AGI which could comprehend only ten million things should at least be able to grasp some of these fundamental concepts. And a computer system representing ten million things is well within the scope of today’s hardware, perhaps even today’s desktop computer.

The V1.0 release of the Brain Simulator is really its “coming of age.” It now has the capacity and the polished UI which make it far more useful for a more general research audience. It is a community project with a growing development team and a larger corps of end users. Together, we’ll try out a lot of new ideas and make progress on some of the fundamental questions of intelligence and AGI.

Thank you for the great interview; it is always insightful discussing AGI with you. Readers who wish to learn more should read the book Brain Simulator II.

A founding partner of unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.