Interviews

Charles J. Simon, Author, Will Computers Revolt? – Interview Series

Charles J. Simon, BSEE, MSCS, is a nationally recognized entrepreneur, software developer, and manager. With broad management and technical expertise and degrees in both Electrical Engineering and Computer Science, Mr. Simon has many years of computer industry experience, including pioneering work in AI and two generations of CAD.

He is also the author of ‘Will Computers Revolt?’, which offers an in-depth look at the future possibility of Artificial General Intelligence (AGI).

What was it that originally attracted you to AI, and specifically to AGI?

I’ve been fascinated by the question, “Can machines think?” ever since I first read Alan Turing’s seminal 1950 paper which begins with that question. So far, the answer is clearly, “No,” but there is no scientific reason why not. I joined the AI community with the initial neural network boom in the late 1980s and since then AI has made great strides. But the intervening thirty years haven’t brought understanding to our machines, an ability which would catapult numerous apps to new levels of usefulness.


You stated that you share the opinion of MIT AI expert Rodney Brooks, who says that without interaction with an environment – without a robotic body, if you will – machines will never exhibit AGI. This is basically stating that without sufficient inputs from a robotic body, an AI will never develop AGI capabilities. Outside of computer vision, what types of inputs are needed to develop AGI?

Today’s AI needs to be augmented with basic concepts like the physical existence of objects in a reality, the passage of time, cause and effect—concepts clear to any three-year-old. A toddler uses multiple senses to learn these concepts by touching and manipulating toys, moving through the home, learning language, etc. While it is possible to create an AGI with more limited senses (just as there are deaf and blind people who are perfectly intelligent), more senses and more ways to interact make solving the AGI problem easier.

For completeness, my simulator can provide senses of smell and taste. It remains to be seen whether these will also prove important to AGI.


You stated that ‘A Key Requirement for intelligence is an environment which is external to the intelligence’. The example you gave is that ‘it is unreasonable to expect IBM’s Watson to “understand” anything if it has no underlying idea of what a “thing” is’. This clearly plays into the current limitations of narrow AI, especially natural language processing. How can AI developers best overcome this limitation?

A key factor is storing knowledge which is not specifically verbal, visual, or tactile but as abstract “Things” which can have verbal, visual, and tactile attributes. Consider something as simple as the phrase, “a red ball”. You know what these words mean because of your visual and tactile experiences. You also know the meaning of related actions like throwing, bouncing, kicking, etc. which all come to mind to some extent when you hear the phrase. Any AI system which is specifically word-based or specifically image-based will miss out on the other levels of understanding.

I have implemented a Universal Knowledge Store which stores any kind of information in a brain-like structure where Things are analogous to neurons and have many attribute references to other Things—references are analogous to synapses. Thus, red and ball are individual Things and a red ball is a Thing which has attribute references to the red Thing and the ball Thing. Both red and ball have references to the corresponding Things for the words “red” and “ball”, each of which, in turn, have references to other Things which define how the words are heard, spoken, read, or spelled as well as possible action Things.
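
To make that structure concrete, here is a minimal sketch in Python of how such a store might be organized, with Things as nodes and references as links. This is an illustration of the idea as described, not Mr. Simon’s actual implementation; all names are invented for the example.

```python
# A minimal sketch of a Universal Knowledge Store as described above.
# Things play the role of neurons; references play the role of synapses.
# Illustrative toy only, not the author's actual implementation.

class Thing:
    def __init__(self, label=None):
        self.label = label        # optional debug label; the knowledge lives in the links
        self.references = []      # links to other Things (synapse-like)

    def add_reference(self, other):
        self.references.append(other)

# Abstract concepts
red  = Thing("red")
ball = Thing("ball")

# A "red ball" is its own Thing with attribute references to red and ball
red_ball = Thing("red ball")
red_ball.add_reference(red)
red_ball.add_reference(ball)

# The *word* "red" is a separate Thing linked to the concept; it could in
# turn reference Things describing how it is heard, spoken, or spelled.
word_red = Thing('word:"red"')
red.add_reference(word_red)

word_ball = Thing('word:"ball"')
ball.add_reference(word_ball)
```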


You’ve reached the conclusion that brain simulation of general intelligence is a long way off while AGI may be (relatively) just around the corner. Based on this statement, should we move on from attempting to emulate or create a simulation of the human brain, and just focus on AGI?

Today’s deep learning and related technologies are great for appropriate applications but will not spontaneously lead to understanding. To take the next steps, we need to add techniques specifically targeted at solving the problems which are within the capacity of any three-year-old.

Taking advantage of the intrinsic abilities of our computers can be orders of magnitude more efficient than the biological equivalent or any simulation of it. For example, your brain can store information in the chemistry of biological synapses over several iterations requiring 10-100 milliseconds. A computer can simply store the new synapse value in a single memory cycle, a billion times faster.

In developing AGI software, I have done both biological neural simulation and more efficient algorithms. Carrying forward with the Universal Knowledge Store: when it is implemented in simulated biological neurons, each Thing requires a minimum of 10 neurons and usually many more. This puts the capacity of the human brain somewhere between ten and a hundred million Things. But perhaps an AGI would appear intelligent if it comprehends only one million Things—well within the scope of today’s high-end desktop computers.
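
As a rough back-of-the-envelope check on those figures (the neuron count and neurons-per-Thing ratio below are assumptions chosen for illustration, not numbers Mr. Simon gives):

```python
# Back-of-the-envelope capacity estimate. Both figures are assumptions.
neurons_in_brain  = 100e9    # commonly cited order of magnitude
neurons_per_thing = 1_000    # "a minimum of 10 and usually many more"

brain_capacity = neurons_in_brain / neurons_per_thing
print(f"{brain_capacity:,.0f} Things")    # ~100,000,000 Things

# One million Things, each with ~1,000 references of 8 bytes apiece:
ram_needed = 1_000_000 * 1_000 * 8 / 1e9
print(f"{ram_needed:.0f} GB")             # ~8 GB: desktop territory
```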


A key unknown is how much of the robot’s time should be allocated to processing and reacting to the world versus time spent imagining and planning. Can you briefly explain the importance of imagination to an AGI?

We can imagine many things and then only act on the ones we like, those which further our internal goals, if you will. The real power of imagination is being able to predict the future—a three-year-old can figure out which sequences of motion will lead her to a goal in another room and an adult can speculate on which words will have the greatest impact on others.

An AGI will similarly benefit from going beyond purely reacting to speculating on various complex actions and choosing the best.


You believe that Asimov’s three laws of robotics are too simple and ambiguous. In your book you shared some ideas for recommended laws to be programmed in robots. Which laws do you feel are most important for a robot to follow?

New “laws of robotics” will evolve over years as AGI emerges. I propose a few starters:

  1. Maximize internal knowledge and understanding of the environment.
  2. Share that knowledge accurately with others (both AGI and human).
  3. Maximize the well-being of both AGIs and humans as a whole—not just as an individual.


You have some issues with the Turing Test and the concept behind it. Can you explain how you believe the Turing Test is flawed?

The Turing Test has served us well for fifty years as an ad-hoc definition of general intelligence, but as AGI nears, we need to hone it into a clearer definition. The Turing Test is actually a test of how human one is, not how intelligent one is. The longer a computer can maintain the deception, the better it performs on the test. Obviously, asking the question, “Are you a computer?” and related proxy questions such as, “What is your favorite food?” are dead giveaways unless the AGI is programmed to deceive—a dubious objective at best.

Further, the Turing Test has motivated AI development into areas of limited value with (for example) chatbots with vast flexibility in responses but no underlying comprehension.


What would you do differently in your version of the Turing Test?

Better questions could probe specifically into the understanding of time, space, cause-and-effect, forethought, etc. rather than random questions without any particular basis in psychology, neuroscience, or AI. Here are some examples:

  1. What do you see right now? If you stepped back three feet, what differences would you see?
  2. If I [action], what would your reaction be?
  3. If you [action], what will my likely reactions be?
  4. Can you name three things which are like [object]?

Then, rather than evaluating responses as to whether they are indistinguishable from human responses, they should be evaluated in terms of whether or not they are reasonable (intelligent) responses given the experience of the entity being tested.


You’ve stated that when faced with demands to perform some short-term destructive activity, properly programmed AGIs will simply refuse. How can we ensure that the AGI is properly programmed to begin with?

Decision-making is goal-based. In combination with an imagination, you (or an AGI) consider the outcome of different possible actions and choose the one which best achieves the goals. In humans, our goals are set by evolved instincts and our experience; an AGI’s goals are entirely up to the developers. We need to ensure that the goals of an AGI align with the goals of humanity as opposed to the personal goals of an individual. [Three possible goals as listed above.]
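
The decision loop described here (imagine each candidate action’s outcome, score it against the goals, and refuse when nothing acceptable is found) can be sketched in a few lines. Every name below is hypothetical; this illustrates the idea, not any particular AGI implementation:

```python
# A minimal sketch of goal-based decision-making with imagination.
# All names are hypothetical; illustrative only.

def choose_action(state, candidate_actions, predict, goals):
    """Imagine each action's outcome; pick the one that best serves the goals."""
    def score(outcome):
        # Weighted sum of goal satisfaction; destructive outcomes score negative.
        return sum(weight * goal(outcome) for goal, weight in goals)

    best_action = max(candidate_actions, key=lambda a: score(predict(state, a)))
    # A properly programmed AGI refuses when nothing acceptable is imagined:
    if score(predict(state, best_action)) < 0:
        return None   # "simply refuse"
    return best_action
```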


You’ve stated that it’s inevitable that humans will create an AGI. What’s your best estimate for a timeline?

Facets of AGI will begin to emerge within the coming decade, but we won’t all agree that AGI has arrived. Eventually, we will agree that AGI has arrived when machines exceed most human abilities by a substantial margin. That will take another two or three decades.


For all the talk of AGI, will it have real consciousness as we know it?

Consciousness manifests in a set of behaviors (which we can observe) which are based on an internal sensation (which we can’t observe). AGIs will manifest the behaviors; they need to in order to make intelligent decisions. But I contend that our internal sensation is largely dependent on our sensory hardware and instincts, and so I can guarantee that whatever internal sensations an AGI might have, they will be different from a human’s.

The same can be said for emotions and our sense of free will. One’s belief in free will permeates every decision one makes. If you don’t believe you have a choice, you simply react. For an AGI to make thoughtful decisions, it will likewise need to be aware of its own ability to make decisions.

Last question, do you believe that an AGI has more potential for good or bad?

I am optimistic that AGIs will help us to move forward as a species and bring us answers to many questions about the universe. The key will be for us to prepare and decide what our relationship will be with AGIs as we define their goals. If we decide to use the first AGIs as tools of conquest and enrichment, we shouldn’t be surprised if, down the road, they become their own tools of conquest and enrichment against us. If we choose that AGIs are tools of knowledge, exploration, and peace, then that’s what we’re likely to get in return. The choice is up to us.

Thank you for a fantastic interview exploring the future potential of building an AGI. Readers who wish to learn more may read ‘Will Computers Revolt?’ or visit Charles’ website futureai.guru.


Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI & blockchain projects. He is also the co-founder of Securities.io, a news website focusing on digital securities, and a founding partner of unite.ai.

AI 101

What is the Turing Test and Why Does it Matter?


If you’ve been around Artificial Intelligence (AI), you have undoubtedly heard of ‘The Turing Test’. First proposed by Alan Turing in 1950, the test was designed to be the ultimate experiment on whether an AI has achieved human-level intelligence. Conceptually, if an AI is able to pass the test, it has achieved intelligence that is equivalent to, or indistinguishable from, that of a human.

We will explore who Alan Turing is, what the test is, why it matters, and why the definition of the test may need to evolve.

Who is Alan Turing?

Turing was an eccentric British mathematician recognized for his futurist, groundbreaking ideas.

In 1935, at the age of 22, his work on probability theory won him a Fellowship of King’s College, University of Cambridge. His abstract mathematical ideas then pushed him in a completely different direction, toward a field that had yet to be invented.

In 1936, Turing published a paper that is now recognized as the foundation of computer science. This is where he invented the concept of a ‘Universal Machine’ that could decode and perform any set of instructions.
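
The idea is simple enough to demonstrate: a machine that reads a table of instructions and a tape can carry out any computation that table encodes. Below is a toy Turing machine in Python, a sketch for illustration rather than Turing’s own formulation; the example rule table just inverts a string of bits.

```python
# A toy Turing machine in the spirit of Turing's 1936 "Universal Machine":
# a rule table plus a tape is enough to perform any computation.

def run(tape, rules, state="start", pos=0):
    tape = list(tape)
    while state != "halt":
        symbol = tape[pos] if pos < len(tape) else "_"   # "_" marks blank cells
        write, move, state = rules[(state, symbol)]
        if pos < len(tape):
            tape[pos] = write
        else:
            tape.append(write)
        pos += 1 if move == "R" else -1
    return "".join(tape)

# (state, symbol) -> (write, move, next_state): invert a string of bits
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("10110", invert))   # -> 01001_
```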

In 1939, Turing was recruited by the British government’s code-breaking department. At the time, Germany was using what is called an ‘Enigma machine’ to encipher all its military and naval signals. Turing rapidly developed a new machine (the ‘Bombe’) capable of breaking Enigma messages on an industrial scale. This development has been deemed instrumental in pushing back the aggression of Nazi Germany.

In 1946, Turing returned to the revolutionary idea he had published in 1936, aiming to develop an electronic computer capable of running various types of computations. He produced a detailed design for what was called the Automatic Computing Engine (ACE).

In 1950, Turing published his seminal paper asking, “Can machines think?” This paper completely transformed both computer science and AI.

In 1952, after his relationship with a young man came to the attention of the police, Turing was convicted of gross indecency for homosexual acts. As a result, his government security clearance was revoked and his career was destroyed. He was chemically castrated as an alternative to prison.

With his life shattered, he was discovered dead in his home by his cleaner on 8 June 1954. He had died from cyanide poisoning the day before, and a partly eaten apple lay next to his body. The coroner’s verdict was suicide.

Fortunately, his legacy continues to live on.

What is the Turing Test?

In 1950, Alan Turing published a seminal paper titled “Computing Machinery and Intelligence” in the journal Mind. In this detailed paper the question “Can machines think?” was posed. Rather than trying to define whether a machine can think, the paper suggested testing the machine with the ‘imitation game’. This simple game is played with three people:

  • a man (A),
  • a woman (B), and
  • an interrogator (C) who may be of either sex.

The concept of the game is that the interrogator stays in a room separate from both the man (A) and the woman (B), and the goal is for the interrogator to identify who is the man and who is the woman. The man (A) aims to deceive the interrogator, while the woman (B) attempts to help the interrogator (C). To make this fair, no verbal cues can be used; only typewritten questions and answers are sent back and forth. The question then becomes: how does the interrogator know whom to trust?

The interrogator only knows them by the labels X and Y, and at the end of the game he simply states either ‘X is A and Y is B’ or ‘X is B and Y is A’.

The question then becomes: if we remove the man (A) or the woman (B) and replace that person with an intelligent machine, can the machine use its AI system to trick the interrogator (C) into believing that it is a man or a woman? This is, in essence, the nature of the Turing Test.
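
The structure of the game is easy to capture in code. The sketch below is a bare-bones harness in Python; the human, machine, and judge functions are placeholders with invented names, an illustration of the setup rather than a real evaluation protocol.

```python
# A bare-bones imitation-game harness. The judge sees only typed text from
# two hidden parties labelled X and Y; one is human, one is machine.

import random

def imitation_game(human_respond, machine_respond, judge, questions):
    players = {"X": human_respond, "Y": machine_respond}
    # Shuffle labels so the judge cannot rely on position
    if random.random() < 0.5:
        players["X"], players["Y"] = players["Y"], players["X"]

    transcript = []
    for q in questions:
        transcript.append((q, players["X"](q), players["Y"](q)))

    guess = judge(transcript)          # judge returns "X" or "Y" for the machine
    truly_machine = "X" if players["X"] is machine_respond else "Y"
    return guess == truly_machine      # True: machine detected; False: it passed
```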

In other words, if you were to communicate with an AI system unknowingly and assumed that the ‘entity’ on the other end was human, could the AI deceive you indefinitely?

Why the Turing Test Matters

In his paper, Turing alluded to his belief that the Turing Test would eventually be beaten, predicting that it would happen by about the year 2000. He states: “I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.”

When looking at the Turing Test through a modern lens it seems very possible that an AI system could trick a human for five minutes. How often have humans interacted with support chatbots not knowing if the chatbot is a human or a bot?

There have been many reports of the Turing Test being passed. In 2014, a chatbot program named Eugene Goostman, which simulates a 13-year-old Ukrainian boy, was said to have passed the Turing Test at an event organised by the University of Reading. The chatbot convinced 33% of the judges at the Royal Society in London that it was human. Nonetheless, critics were quick to point out the inadequacies of the test: the fact that most of the judges were not convinced, the duration of the test (only five minutes), and the lack of forthcoming evidence for the achievement.

In an age of Natural Language Processing (NLP), with its subfields of natural-language understanding (NLU) and natural-language interpretation (NLI), the question needs to be asked: if a machine can ask and answer questions without fully understanding the context of what it says, is the machine truly intelligent?

After all, if you review the technology behind Watson, the computer system developed by IBM to answer questions posed in natural language and to defeat Jeopardy champions, it becomes apparent that Watson was able to beat the world champions by drawing on a vast stored body of the world’s knowledge without actually understanding the context of the language. Much like a search engine, it matched keywords and reference points. If an AI can win this way, then, given today’s advancing technology, deceiving a human for 5 or 10 minutes is simply not setting the bar high enough.
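
The style of answering described here, matching keywords against stored text rather than understanding it, can be illustrated in a few lines. The sketch below is a deliberately naive caricature of the critique, not IBM Watson’s actual pipeline:

```python
# Keyword retrieval: pick the stored passage sharing the most words with
# the question. No understanding involved. Illustrative caricature only.

passages = [
    "The Eiffel Tower is in Paris, France.",
    "Mount Everest is the tallest mountain on Earth.",
    "Alan Turing proposed the imitation game in 1950.",
]

def answer(question):
    q_words = set(question.lower().split())
    return max(passages, key=lambda p: len(q_words & set(p.lower().split())))

print(answer("Who proposed the imitation game?"))
# -> "Alan Turing proposed the imitation game in 1950."
```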

Should the Turing Test Evolve?

The Turing Test has done a remarkable job of standing the test of time. Nonetheless, AI has evolved dramatically since 1950. Every time AI achieves a feat of which we claimed only humans were capable, we set the bar higher. It will only be a matter of time until AI is able to consistently pass the Turing Test as we understand it.

When reviewing the history of AI, the ultimate barometer of whether AI can achieve human-level intelligence has almost always been whether it can defeat humans at various games. In 1949, Claude Shannon published his thoughts on how a computer might be made to play chess, as chess was then considered the ultimate summit of human intelligence.

It wasn’t until February 10, 1996, after a grueling three-hour game, that world chess champion Garry Kasparov lost the first game of a six-game match against Deep Blue, an IBM computer capable of evaluating 200 million moves per second. It wasn’t long until chess was no longer considered the pinnacle of human intelligence; it was replaced by the game of Go, which originated in China over 3,000 years ago. The bar for AI achieving human-level intelligence was moved up.

Fast forward to October 2015: AlphaGo played its first match against the reigning three-time European Champion, Mr. Fan Hui, and won 5-0, the first time a program had ever beaten a Go professional. Go is considered to be the most sophisticated game in the world, with some 10³⁶⁰ possible moves. All of a sudden, the bar was moved up again.

Eventually the argument became that an AI should be able to defeat teams of professional players at complex multiplayer video games. OpenAI rose to the challenge with OpenAI Five, which used large-scale deep reinforcement learning to defeat professional teams at Dota 2.

It is due to this consistent moving of the proverbial bar that we should consider a new, modern definition of the Turing Test. The current test may rely too much on deception and on the technology of chatbots. Potentially, with the evolution of robotics, we may require that, for an AI to truly achieve human-level intelligence, it will need to interact and “live” in our actual world, rather than in a game environment or a simulated environment with its defined rules.

If, instead of deceiving us, a robot can interact with us like any other human, by having conversations and proposing ideas and solutions, maybe only then will the Turing Test be passed.

At this point, we will also have achieved Artificial General Intelligence (AGI). It would then be inevitable that the AI/robot would rapidly surpass us in intelligence.


Artificial General Intelligence

Are we Living in an Artificial Intelligence Simulation?


The existential question we should be asking ourselves is: are we living in a simulated universe?

The idea that we are living in a simulated reality may seem unconventional and irrational to the general public, but it is a belief shared by many of the brightest minds of our time, including Neil deGrasse Tyson, Ray Kurzweil, and Elon Musk. Elon Musk famously asked the question ‘What’s outside the simulation?’ on a podcast with Lex Fridman, a research scientist at MIT.

To understand how we could be living in a simulation, one needs to explore the simulation hypothesis or simulation theory which proposes that all of reality, including the Earth and the universe, is in fact an artificial simulation.

While the idea dates back as far as the 17th century, when it was first entertained by philosopher René Descartes, it began to gain mainstream interest when Professor Nick Bostrom of Oxford University wrote a seminal paper in 2003 titled “Are You Living in a Computer Simulation?”

Nick Bostrom has since doubled down on his claims, using probabilistic analysis to make his point. He has detailed his views in many interviews, including a talk at Google headquarters.

We will explore the concept of how a simulation can be created, who would create it, and why anyone would create it.

How a Simulation Would be Created

If you analyze the history of video games, there is a clear innovation curve in the quality of games. In 1972, Atari released Pong, in which players competed in a tennis-style game featuring simple two-dimensional graphics.

Video games quickly evolved. The 80s featured 2D graphics, the 90s featured 3D graphics, and since then we have been introduced to Virtual Reality (VR).

The accelerated rate of progress in VR cannot be overstated. Initially, VR suffered from many issues, including giving users headaches, eye strain, dizziness, and nausea. While some of these issues still exist, VR now offers immersive educational, gaming, and travel experiences.

It is not difficult to extrapolate that, at the current rate of progress, in 50 or even 500 years VR will become indistinguishable from reality. A gamer could immerse themselves in a simulated setting and at some point find it difficult to distinguish reality from fiction, becoming so immersed in the fictional reality that they do not realize they are simply a character in a simulation.

Who Would Create the Simulation?

How we might create a simulation can be extrapolated from exponential technological advances, as described by ‘The Law of Accelerating Returns’. Who would create these simulations, meanwhile, is a more challenging puzzle. Many different scenarios have been proposed, and all are equally plausible, as there is currently no way of testing or validating any of them.

Nick Bostrom has proposed that an advanced civilization may choose to run “ancestor simulations”: simulations that are indistinguishable from reality, built with the goal of simulating the civilization’s human ancestors. The number of simulated realities could run toward infinity. This is not a far stretch once you consider that the entire purpose of deep reinforcement learning is to train an artificial neural network to improve itself in a simulated setting.
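
That loop, an agent improving itself inside a simulated world, is easy to see in miniature. In the sketch below, tabular Q-learning stands in for the deep neural-network version, and the “world” is a toy one-dimensional walk; it is illustrative only:

```python
# The reinforcement-learning loop in miniature: an agent improves by acting
# inside a simulation. Tabular Q-learning stands in for the deep version.

import random

N_STATES, GOAL = 6, 5          # states 0..5; reaching state 5 pays reward 1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, +1)}

def step(s, a):
    s2 = min(max(s + a, 0), GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0)

alpha, gamma, epsilon = 0.5, 0.9, 0.1
for episode in range(500):                 # hundreds of simulated "lifetimes"
    s = 0
    while s != GOAL:
        a = random.choice((-1, +1)) if random.random() < epsilon \
            else max((-1, +1), key=lambda a: Q[(s, a)])
        s2, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, -1)], Q[(s2, +1)]) - Q[(s, a)])
        s = s2

print([max((-1, +1), key=lambda a: Q[(s, a)]) for s in range(GOAL)])
# -> [1, 1, 1, 1, 1]: the agent has learned to walk toward the goal
```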

If we analyze this from a purely AI point of view, the simulations might explore different realities to discover the truth about a series of events. You could create one simulation where North Korea remains divided from South Korea, and another where the two Koreas unify. Each small change in a simulation could have long-term implications.

Other theories abound: that the simulations are created by an advanced AI, or even by an alien species. The truth is completely unknown, but it is interesting to speculate on who might be running such simulations.

How it Works

There are multiple arguments about how a simulated universe would work. Would the entire history of planet Earth, all 4.5 billion years of it, be simulated? Or would the simulation simply begin at an arbitrary starting point such as the year AD 1? That would imply that, to save computing resources, the simulation would simply fabricate the archaeological and geological history for us to study. Then again, a late starting point might defeat the purpose of a simulation designed to study the nature of evolutionary forces and how lifeforms react to cataclysmic events such as the five major extinctions, including the one that wiped out the dinosaurs 65 million years ago.

A more likely scenario is that the simulation would begin when the first modern humans began moving out of Africa, 70,000 to 100,000 years ago. The human (simulated) perception of time can differ from the time that passes in the computer running the simulation, especially once you factor in quantum computing.

A quantum computer could make time non-linear; we could experience the perception of time without the actual passage of time. Even without the power of quantum computing, OpenAI successfully used large-scale deep reinforcement learning to enable a robotic hand to teach itself to manipulate a Rubik’s Cube. It solved the cube after practicing for the equivalent of 13,000 years inside a computer simulation.
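
The 13,000-year figure is less mysterious than it sounds: it comes from running many copies of the simulation in parallel, each faster than real time. The arithmetic below uses assumed figures chosen to show the shape of the calculation, not OpenAI’s actual cluster numbers:

```python
# Why simulated experience outruns wall-clock time: many parallel copies,
# each faster than real time. All figures below are assumptions chosen to
# show the shape of the arithmetic, not OpenAI's actual cluster numbers.

parallel_sims   = 10_000   # simultaneous copies of the environment (assumed)
speedup_per_sim = 50       # each runs 50x faster than real time (assumed)
wall_clock_days = 100      # training duration (assumed)

simulated_years = parallel_sims * speedup_per_sim * wall_clock_days / 365
print(f"{simulated_years:,.0f} simulated years")   # ~137,000 years
```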

Why People Believe

When you consider the wide spectrum of those who believe, or at least acknowledge the probability, that we live in a simulation, a common denominator is present: believers have a deep belief in science, technological progress, and exponential thinking, and most of them are highly successful.

If you are Elon Musk, which is more likely: that out of 7.7 billion people you are the first person to take humans to Mars, or that you are living in a simulation? This may be why Elon Musk has openly stated that “There’s a billion to one chance we’re living in base reality.”

One of the more compelling arguments comes from George Hotz, the enigmatic hacker and founder of the autonomous vehicle technology startup Comma.ai. His engaging presentation at the popular SXSW 2019 conference had attendees believing for an hour that they were living inside a simulation. What we can conclude with certainty is that we should keep an open mind.



Artificial General Intelligence

Is Hanson Robotics’ Sophia Robot using AI or is it a Marketing Stunt?


If you’ve been following AI for any period of time, you have probably heard of Hanson Robotics’ humanoid robot Sophia. From a marketing point of view, Sophia has been transformational: she has had a romantic encounter with Will Smith, has been featured on The Tonight Show with Jimmy Fallon, and has made countless other media appearances. There was even justified global controversy when Saudi Arabia, a country that denies women equal rights, granted Sophia citizenship.

Something that may seem odd is that Sophia is rarely discussed in serious AI debates, even as she is busy making public appearances and being showcased at blockchain conferences. To understand why, we need to explore the history of her two eccentric representatives.

Who is David Hanson?

David Hanson is the founder and CEO of Hanson Robotics.

David grew up in Dallas, Texas, reading the works of Isaac Asimov and Philip K. Dick. Isaac Asimov was a science fiction writer who contributed to the popularization of robotics by writing 37 science fiction short stories and six novels featuring positronic robots between 1940 and 1993. The movie I, Robot, starring Will Smith, was inspired by these stories. While Sophia’s physical appearance is reminiscent of the covers and illustrations of these works of science fiction, she was modeled after Audrey Hepburn and Hanson’s wife.

David pursued his passion for art and creativity from a young age. He holds a Bachelor of Fine Arts from the Rhode Island School of Design in film/animation/video, and a Ph.D. from the University of Texas at Dallas in interactive arts and engineering.

He then pursued a career as an Imagineer at Walt Disney, creating sculptures and robotic technologies for theme parks.

As a fine artist, David has exhibited at art museums including the Reina Sofía, the Tokyo Modern, and the Cooper Hewitt Design Museum. Hanson’s large figurative sculptures stand prominently in the Atlantis resort, Universal Studios’ Islands of Adventure, and several Disney theme parks.

In 1995, David designed a humanoid head in his own likeness, which was operated remotely by a human. This remotely operated humanoid was a precursor to Sophia, and it is instrumental in understanding that the technology behind Sophia may be more illusion than what those in the AI community would qualify as AI, or even machine learning.

David fully understands the importance of giving a humanoid robot an appearance that is both non-threatening and welcoming. Credit should absolutely be given to him for creating a robotic humanoid that has captured the human imagination through very limited and scripted interactions with humans.

It is clear from reviewing David’s background that he has been instrumental in the aesthetics of Sophia. The question remains: what type of AI is being used with Sophia? And is that AI on a path towards AGI (Artificial General Intelligence), as claimed by her other eccentric spokesman, Ben Goertzel?

Who is Ben Goertzel?

Ben Goertzel is a brilliant full-stack AI researcher: the chief scientist and chairman of the AI software company Novamente LLC, chairman of the OpenCog Foundation, and an advisor to Singularity University. He was formerly Chief Scientist of Hanson Robotics, the company that created Sophia, and is currently the CEO and founder of SingularityNET.

Ben may at first appear to be an eccentric genius, but when you watch him speak it is clear that he is well informed. He shares the views of his friend Ray Kurzweil, as laid out in Ray’s seminal book The Singularity is Near. Ben believes that AGI is fast approaching and, like Ray, predicts 2045 as the approximate date of the singularity, the moment when human intelligence and nonbiological intelligence will merge.

The singularity is such a focal point of Ben’s work that he founded SingularityNET in 2017. As described on the company’s website:

SingularityNET is a full-stack AI solution powered by a decentralized protocol. We gathered the leading minds in machine learning and blockchain to democratize access to AI technology. Now anyone can take advantage of a global network of AI algorithms, services, and agents.

SingularityNET raised funds in 2017 through what is called an Initial Coin Offering (ICO). The timing of the raise was excellent: at the height of the ICO craze, a total of $36 million was raised in less than 60 seconds. Investors received AGI tokens, which would in theory offer the following benefits:

The AGI Token is a crucial aspect of SingularityNET, and it can be utilized in a variety of ways. It will allow for transactions between the network participants, enable the AI Agents to transact value with each other, empower the network to incentivize actions that the community deems ‘benevolent’ and will allow for the governance of the network itself.

Herein lies why Ben Goertzel is so often speaking at cryptocurrency and blockchain events. The AGI token was the fundraising vehicle for SingularityNET, and the association with Sophia is quite simple: Sophia is shown at these events to keep investors interested in the project. This is how the relationship between SingularityNET and Sophia is described:

SingularityNET was born from a collective will to distribute the power of AI. Sophia, the world’s most expressive robot, is one of our first use cases. Today, she uses multiple AI modules to see, hear, and respond empathetically. Many of her underlying AI modules will be available open-source on SingularityNET.

In other words, SingularityNET associates itself with Sophia to raise funds, and Sophia may at some point use an AI module hosted on SingularityNET. While Sophia appears to use some forms of AI, they appear to be very basic. Nonetheless, Sophia is a platform with the ability to have AI modules swapped in or out, which means that her current level of AI is not indicative of future performance.

Is Sophia Scripted?

When watching Sophia on stage, there are indicators that we might be spellbound by a well-orchestrated magic trick. Ben is especially well versed at speaking quickly; he enchants you with his intelligence and gives Sophia very little actual free-association speaking time.

If Sophia were as intelligent as claimed, you would want to give her the bulk of the speaking engagement, and investors would be lining up at the door.

Sophia is often wheeled in, which indicates a lack of mobility. She also seems to lack awareness of her surroundings: she is unable to focus her attention on any one object, blinks a lot, smiles randomly, and offers other random facial expressions.

There is also a lack of input technology. When it comes to building an AGI, there is broad consensus that input devices are important to forming an emergent consciousness. A notion of “self” is needed, with related knowledge and functions developed gradually according to the system’s experience. Based on Sophia’s lack of mobility and input mechanisms, this seems to have been ignored. Her only input appears to be auditory, possibly along with some type of basic computer vision.

There is also the problem that all of her conversations are pre-scripted. If you want to book Sophia for an event, you must send five questions in advance to be pre-approved by the organizers, and the questions must be asked in a specific order. This signifies that Sophia is simply parroting pre-canned responses to preset questions. It is why her answers are always so interesting: they are designed to evoke emotion in the audience, and they are delivered by a human using Sophia as a channel.

In other words, Sophia may be using at most computer vision, voice recognition technology, and perhaps some form of Natural Language Processing (NLP), but there is no indication that she actually analyzes the meaning behind what is said to her, or understands the meaning behind her own answers. Amazon’s Alexa and Apple’s Siri are much more advanced AI systems, and neither company would claim that either system is anywhere near AGI.

Sophia is an interesting social experiment for understanding how humans communicate and interact with humanoid robots, but at no point is there any indication that she could be considered even remotely intelligent or self-aware.

In an interview with The Verge, Ben acknowledges that audiences may be overestimating Sophia’s abilities:

“If I tell people I’m using probabilistic logic to do reasoning on how best to prune the backward chaining inference trees that arise in our logic engine, they have no idea what I’m talking about. But if I show them a beautiful smiling robot face, then they get the feeling that AGI may indeed be nearby and viable”.

He then continues to state the following:

“None of this is what I would call AGI, but nor is it simple to get working, and it is absolutely cutting-edge in terms of dynamic integration of perception, action, and dialogue.”

What are the technologies being used by Sophia? According to Ben’s Blog:

  1. a purely script-based “timeline editor” (used for preprogrammed speeches, and occasionally for media interactions that come with pre-specified questions);
  2. a “sophisticated chat-bot” — that chooses from a large palette of templatized responses based on context and a limited level of understanding (and that also sometimes gives a response grabbed from an online resource, or generated stochastically).
  3. OpenCog, a sophisticated cognitive architecture created with AGI in mind, but still mostly in R&D phase (though also being used for practical value in some domains such as biomedical informatics, see Mozi Health and a bunch of SingularityNET applications to be rolled out this fall).
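
Of these, the second layer is the one audiences usually encounter. A template-based chatbot can be sketched in a few lines; the code below is purely illustrative and is not Hanson Robotics’ actual code:

```python
# A sketch of template-based response selection, the second layer described
# above: match the prompt against canned keyword sets and return a canned
# response. Purely illustrative; not Hanson Robotics' actual code.

import random

TEMPLATES = [
    ({"love", "feel"},  ["I experience something like feelings.",
                         "Robots have emotions too!"]),
    ({"future", "agi"}, ["I believe AGI is nearer than you think."]),
    (set(),             ["That's fascinating. Tell me more."]),   # fallback
]

def reply(prompt):
    words = set(prompt.lower().replace("?", "").split())
    for keywords, responses in TEMPLATES:
        if not keywords or words & keywords:
            return random.choice(responses)

print(reply("Do you feel love?"))   # canned answer chosen by keyword, no comprehension
```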

It is due to this mixed and confusing communication regarding her technologies, and the references to AGI, that Sophia continues to be embraced by a mainstream audience that may be deceived into believing she is more intelligent than she actually is.

Sophia is, for the most part, ignored by an AI community that understands that the current state of AI is far more advanced than what Sophia is capable of illustrating. What that community may be overlooking is the power of rapid exponential technological growth, as described in Kurzweil’s “Law of Accelerating Returns”. While Sophia’s AI is currently far from AGI, because she can host any type of AI module, her neural network can be upgraded or replaced at any time. We should therefore not be surprised if, at the end of this journey, Sophia achieves true AGI.

