Charles J. Simon, Author, Will Computers Revolt? – Interview Series

Charles J. Simon, BSEE, MSCS, is a nationally recognized entrepreneur, software developer, and manager. With broad management and technical expertise and degrees in both Electrical Engineering and Computer Science, Mr. Simon has many years of computer industry experience, including pioneering work in AI and two generations of CAD.

He is also the author of ‘Will Computers Revolt?', which offers an in-depth look at the future possibility of Artificial General Intelligence (AGI).

What was it that originally attracted you to AI, and specifically to AGI?

I’ve been fascinated by the question, “Can machines think?” ever since I first read Alan Turing’s seminal 1950 paper which begins with that question. So far, the answer is clearly, “No,” but there is no scientific reason why not. I joined the AI community with the initial neural network boom in the late 1980s and since then AI has made great strides. But the intervening thirty years haven’t brought understanding to our machines, an ability which would catapult numerous apps to new levels of usefulness.

 

You stated that you share the opinion of MIT AI expert Rodney Brooks, who says ‘that without interaction with an environment – without a robotic body, if you will – machines will never exhibit AGI.' This is basically stating that without sufficient inputs from a robotic body, an AI will never develop AGI capabilities. Outside of computer vision, what types of inputs are needed to develop AGI?

Today’s AI needs to be augmented with basic concepts like the physical existence of objects in reality, the passage of time, and cause and effect—concepts clear to any three-year-old. A toddler uses multiple senses to learn these concepts by touching and manipulating toys, moving through the home, learning language, etc. It is possible to create an AGI with more limited senses, just as there are deaf people and blind people who are perfectly intelligent, but more senses and more ways to interact make solving the AGI problem easier.

For completeness, my simulator can also provide senses of smell and taste. It remains to be seen whether these will also prove important to AGI.

 

You stated that ‘A Key Requirement for intelligence is an environment which is external to the intelligence'. The example you gave is that ‘it is unreasonable to expect IBM’s Watson to “understand” anything if it has no underlying idea of what a “thing” is'. This clearly plays into the current limitations of narrow AI, especially natural language processing. How can AI developers best overcome this limitation?

A key factor is storing knowledge not as specifically verbal, visual, or tactile data but as abstract “Things” which can have verbal, visual, and tactile attributes. Consider something as simple as the phrase “a red ball”. You know what these words mean because of your visual and tactile experiences. You also know the meaning of related actions like throwing, bouncing, and kicking, which all come to mind to some extent when you hear the phrase. Any AI system which is specifically word-based or specifically image-based will miss out on the other levels of understanding.

I have implemented a Universal Knowledge Store which stores any kind of information in a brain-like structure where Things are analogous to neurons and have many attribute references to other Things—references are analogous to synapses. Thus, red and ball are individual Things and a red ball is a Thing which has attribute references to the red Thing and the ball Thing. Both red and ball have references to the corresponding Things for the words “red” and “ball”, each of which, in turn, have references to other Things which define how the words are heard, spoken, read, or spelled as well as possible action Things.
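A minimal Python sketch of this structure may help make it concrete. The class and variable names here are illustrative assumptions, not Mr. Simon's actual implementation: Things are modeled as nodes (analogous to neurons) and attribute references as links between them (analogous to synapses).

```python
# Illustrative sketch of a "Universal Knowledge Store": Things are nodes and
# attribute references are links. Names and structure are hypothetical.

class Thing:
    def __init__(self, label=None):
        self.label = label          # optional human-readable tag
        self.references = []        # links to other Things (like synapses)

    def add_reference(self, other):
        self.references.append(other)

# Abstract concepts are Things...
red = Thing("red")
ball = Thing("ball")

# ...and so are the words that name them, which in turn could reference Things
# describing how each word is heard, spoken, read, or spelled.
word_red = Thing("word:red")
word_ball = Thing("word:ball")
red.add_reference(word_red)
ball.add_reference(word_ball)

# "a red ball" is itself a Thing with attribute references to red and ball.
red_ball = Thing("red ball")
red_ball.add_reference(red)
red_ball.add_reference(ball)

# Related actions (throwing, bouncing, kicking) can be linked the same way,
# so encountering the phrase can bring every associated level of understanding to mind.
bounce = Thing("action:bounce")
ball.add_reference(bounce)
```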

 

You’ve reached the conclusion that brain simulation of general intelligence is a long way off while AGI may be (relatively) just around the corner. Based on this statement, should we move on from attempting to emulate or create a simulation of the human brain, and just focus on AGI?

Today’s deep learning and related technologies are great for appropriate applications but will not spontaneously lead to understanding. To take the next steps, we need to add techniques specifically targeted at solving the problems which are within the capacity of any three-year-old.

Taking advantage of the intrinsic abilities of our computers can be orders of magnitude more efficient than the biological equivalent or any simulation of it. For example, your brain can store information in the chemistry of biological synapses over several iterations requiring 10-100 milliseconds. A computer can simply store the new synapse value in a single memory cycle, a billion times faster.

In developing AGI software, I have done both biological neural simulation and more efficient algorithms. Carrying forward with the Universal Knowledge Store, when it is implemented in simulated biological neurons, each Thing requires a minimum of 10 neurons and usually many more. This puts the capacity of the human brain somewhere between ten million and a hundred million Things. But perhaps an AGI would appear intelligent if it comprehends only one million Things—well within the scope of today’s high-end desktop computers.
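As a rough back-of-envelope illustration of those numbers, the sketch below assumes on the order of 100 billion biological neurons and, since each Thing usually takes far more than the 10-neuron minimum, somewhere in the range of 1,000 to 10,000 neurons per Thing; it also assumes an arbitrary 100 references of 16 bytes each per Thing to gauge desktop memory needs. All constants are assumptions for illustration only.

```python
# Back-of-envelope capacity estimate (all constants are illustrative assumptions).
NEURONS_IN_BRAIN = 100e9                  # rough order of magnitude
NEURONS_PER_THING = (1_000, 10_000)       # "a minimum of 10 and usually many more"

low = NEURONS_IN_BRAIN / NEURONS_PER_THING[1]    # ~10 million Things
high = NEURONS_IN_BRAIN / NEURONS_PER_THING[0]   # ~100 million Things
print(f"Brain-scale capacity: {low:,.0f} to {high:,.0f} Things")

# Memory for a one-million-Thing AGI, assuming ~100 references per Thing
# at ~16 bytes per reference (pointer plus bookkeeping).
THINGS = 1_000_000
REFS_PER_THING = 100
BYTES_PER_REF = 16
gigabytes = THINGS * REFS_PER_THING * BYTES_PER_REF / 1e9
print(f"Approximate storage: {gigabytes:.1f} GB")   # a few GB: desktop territory
```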

 

A key unknown is how much of the robot’s time should be allocated to processing and reacting to the world versus time spent imagining and planning. Can you briefly explain the importance of imagination to an AGI?

We can imagine many things and then only act on the ones we like, those which further our internal goals, if you will. The real power of imagination is being able to predict the future—a three-year-old can figure out which sequences of motion will lead her to a goal in another room and an adult can speculate on which words will have the greatest impact on others.

An AGI similarly will benefit from going beyond being purely reactive to speculating on various complex actions and choosing the best.
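One way to picture "imagination" in software, purely as a hypothetical sketch rather than anything from the book, is forward simulation: the agent tries out movement sequences on an internal map of its home, like the three-year-old above, and only acts on the imagined plan that reaches the goal in another room. The map, search strategy, and names below are assumptions for illustration.

```python
from collections import deque

# Hypothetical illustration of "imagination" as forward simulation:
# the agent tries out movement sequences on an internal map before acting.
HOME = [
    "S.#..",
    ".#...",
    "...#G",
]  # S = start, G = goal in another room, # = wall

def imagine_path(grid):
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == "S")
    frontier = deque([(start, [])])          # (position, imagined moves so far)
    seen = {start}
    moves = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}
    while frontier:
        (r, c), plan = frontier.popleft()
        if grid[r][c] == "G":
            return plan                      # act only on the imagined plan that works
        for name, (dr, dc) in moves.items():
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#" and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append(((nr, nc), plan + [name]))
    return None

print(imagine_path(HOME))   # e.g. ['down', 'down', 'right', ...]
```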

 

You believe that Asimov’s three laws of robotics are too simple and ambiguous. In your book you shared some ideas for recommended laws to be programmed in robots. Which laws do you feel are most important for a robot to follow?

New “laws of robotics” will evolve over years as AGI emerges. I propose a few starters:

  1. Maximize internal knowledge and understanding of the environment.
  2. Share that knowledge accurately with others (both AGI and human).
  3. Maximize the well-being of both AGIs and humans as a whole—not just as an individual.

 

You have some issues with the Turing Test and the concept behind it. Can you explain how you believe the Turing Test is flawed?

The Turing Test has served us well for fifty years as an ad-hoc definition of general intelligence, but as AGI nears, we need a clearer definition. The Turing Test is actually a test of how human one is, not how intelligent one is. The longer a computer can maintain the deception, the better it performs on the test. Obviously, asking the question, “Are you a computer?” and related proxy questions such as, “What is your favorite food?” are dead giveaways unless the AGI is programmed to deceive—a dubious objective at best.

Further, the Turing Test has motivated AI development into areas of limited value with (for example) chatbots with vast flexibility in responses but no underlying comprehension.

 

What would you do differently in your version of the Turing Test?

Better questions could probe specifically into the understanding of time, space, cause-and-effect, forethought, etc. rather than random questions without any particular basis in psychology, neuroscience, or AI. Here are some examples:

  1. What do you see right now? If you stepped back three feet, what differences would you see?
  2. If I [action], what would your reaction be?
  3. If you [action], what will my likely reactions be?
  4. Can you name three things which are like [object]?

Then, rather than evaluating responses on whether they are indistinguishable from human responses, they should be evaluated on whether they are reasonable (intelligent) responses given the experience of the entity being tested.

 

You’ve stated that when faced with demands to perform some short-term destructive activity, properly programmed AGIs will simply refuse. How can we ensure that the AGI is properly programmed to begin with?

Decision-making is goal-based. In combination with imagination, you (or an AGI) consider the outcomes of different possible actions and choose the one which best achieves the goals. In humans, our goals are set by evolved instincts and our experience; an AGI’s goals are entirely up to its developers. We need to ensure that the goals of an AGI align with the goals of humanity as opposed to the personal goals of an individual (the three possible goals listed above are a start).
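A hedged sketch of how such goal-based refusal could work follows, with developer-supplied goals expressed as scoring functions over imagined outcomes; the goal functions, weights, and outcome fields are hypothetical, not a published design. The AGI scores each candidate action against every goal and never selects an action whose best imagined outcome is still net-destructive.

```python
# Hypothetical sketch: decision-making as goal-based scoring of imagined outcomes.

def gain_knowledge(outcome):        # goal 1: maximize knowledge of the environment
    return outcome.get("knowledge_gained", 0.0)

def share_accurately(outcome):      # goal 2: share knowledge accurately with others
    return outcome.get("information_shared", 0.0) - outcome.get("deception", 0.0)

def collective_wellbeing(outcome):  # goal 3: well-being of AGIs and humans as a whole
    return outcome.get("wellbeing_change", 0.0)

GOALS = [(gain_knowledge, 1.0), (share_accurately, 1.0), (collective_wellbeing, 3.0)]

def choose_action(candidates):
    """candidates: {action_name: imagined_outcome_dict}; refuse if nothing scores above zero."""
    scored = {
        action: sum(weight * goal(outcome) for goal, weight in GOALS)
        for action, outcome in candidates.items()
    }
    best = max(scored, key=scored.get)
    return best if scored[best] > 0 else "refuse"

# A short-term destructive demand scores heavily negative on collective well-being,
# so the properly programmed AGI declines to comply.
print(choose_action({
    "comply_with_destructive_order": {"wellbeing_change": -5.0, "knowledge_gained": 0.5},
    "refuse_and_explain": {"wellbeing_change": 0.0, "information_shared": 1.0},
}))  # -> "refuse_and_explain"
```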

 

You’ve stated that it’s inevitable that humans will create an AGI, what’s your best estimate for a timeline?

Facets of AGI will begin to emerge within the coming decade, but we won’t all agree that AGI has arrived. Eventually, we will agree that AGI has arrived when AGIs exceed most human abilities by a substantial margin. That will take another two or three decades.

 

For all the talk of AGI, will it have real consciousness as we know it?

Consciousness manifests in a set of behaviors (which we can observe) which are based on an internal sensation (which we can’t observe).  AGIs will manifest the behaviors; they need to in order to make intelligent decisions. But I contend that our internal sensation is largely dependent on our sensory hardware and instincts and so I can guarantee that whatever internal sensations an AGI might have, they will be different from a human’s.

The same can be said for emotions and our sense of free will. Our belief in free will permeates every decision we make; if you don’t believe you have a choice, you simply react. For an AGI to make thoughtful decisions, it will likewise need to be aware of its own ability to make decisions.

Last question, do you believe that an AGI has more potential for good or bad?

I am optimistic that AGIs will help us to move forward as a species and bring us answers to many questions about the universe. The key will be for us to prepare and decide what our relationship will be with AGIs as we define their goals. If we decide to use the first AGIs as tools of conquest and enrichment, we shouldn’t be surprised if, down the road, they become their own tools of conquest and enrichment against us. If we choose that AGIs are tools of knowledge, exploration, and peace, then that’s what we’re likely to get in return. The choice is up to us.

Thank you for a fantastic interview exploring the future potential of building an AGI. Readers who wish to learn more may read ‘Will Computers Revolt?' or visit Charles’s website futureai.guru.
