

Are we Living in an Artificial Intelligence Simulation?


The existential question we should be asking ourselves is: are we living in a simulated universe?

The idea that we are living in a simulated reality may seem unconventional and irrational to the general public, but it is a belief shared by many of the brightest minds of our time, including Neil deGrasse Tyson, Ray Kurzweil, and Elon Musk. Elon Musk famously asked the question ‘What’s outside the simulation?’ in a podcast with Lex Fridman, a research scientist at MIT.

To understand how we could be living in a simulation, one needs to explore the simulation hypothesis or simulation theory which proposes that all of reality, including the Earth and the universe, is in fact an artificial simulation.

While the idea dates back as far as the 17th century, when it was proposed by philosopher René Descartes, it started to gain mainstream interest when Professor Nick Bostrom of Oxford University wrote a seminal paper in 2003 titled “Are You Living in a Computer Simulation?”

Nick Bostrom has since doubled down on his claims and uses probabilistic analysis to make his case. He has elaborated on his views in many interviews, including this talk at Google headquarters.

We will explore the concept of how a simulation can be created, who would create it, and why anyone would create it.

How a Simulation Would be Created

If you analyze the history of video games, there is a clear innovation curve in the quality of games. In 1972 Atari released Pong, a tennis-style game with simple two-dimensional graphics in which two players could compete.

Video games quickly evolved. The 80s featured 2D graphics, the 90s featured 3D graphics, and since then we have been introduced to Virtual Reality (VR).

The accelerated rate of progress in VR cannot be overstated. Initially VR suffered from many issues, including giving users headaches, eye strain, dizziness, and nausea. While some of these issues still exist, VR now offers immersive educational, gaming, and travel experiences.

It is not difficult to extrapolate that, at the current rate of progress, in 50 or even 500 years VR will become indistinguishable from reality. A gamer could immerse themselves in a simulated setting and at some point find it difficult to distinguish reality from fiction. The gamer could become so immersed in the fictional reality that they do not realize they are simply a character in a simulation.

Who Would Create the Simulation?

How we might create a simulation can be extrapolated from exponential technological advances, as described by ‘The Law of Accelerating Returns‘. Who would create these simulations, meanwhile, is a more challenging puzzle. Many different scenarios have been proposed, and all are equally speculative, as there is currently no way of testing or validating them.

Nick Bostrom has proposed that an advanced civilization may choose to run “ancestor simulations”. These are simulations that are indistinguishable from reality, created with the goal of simulating human ancestors. The number of simulated realities could be effectively infinite. This is not a far stretch once you consider that the entire purpose of Deep Reinforcement Learning is to train an Artificial Neural Network to improve itself in a simulated setting.

If we analyze this from a purely AI point of view, we could be simulating different realities to discover the truth about a series of events. You could create one simulation where North Korea remains divided from South Korea, and another where the two Koreas are unified. Each small change in a simulation could have long-term implications.

Other theories abound: that the simulations are created by an advanced AI, or even by an alien species. The truth is completely unknown, but it is interesting to speculate on who would be running such simulations.

How it Works

There are multiple arguments about how a simulated universe would work. Would the entire history of planet Earth, all 4.5 billion years of it, be simulated? Or would the simulation simply begin at an arbitrary starting point, such as the year AD 1? This would imply that, to save computing resources, the simulation would simply fabricate the archaeological and geological history we study. Then again, a random starting point may defeat the purpose of a simulation designed to learn the nature of evolutionary forces and how lifeforms react to cataclysmic events such as the five major extinctions, including the one that wiped out the dinosaurs 65 million years ago.

A more likely scenario is that the simulation would begin when the first modern humans started moving out of Africa 70,000 to 100,000 years ago. The human (simulated) perception of time differs from the time experienced in a computer, especially when you factor in quantum computing.

A quantum computer would enable time to be non-linear: we could experience the perception of time without the actual passage of time. Even without the power of quantum computing, OpenAI successfully used large-scale deep reinforcement learning to enable a robotic hand to teach itself to manipulate a Rubik’s Cube. It was able to solve the Rubik’s Cube after practicing for the equivalent of 13,000 years inside a computer simulation.
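To make the scale concrete, here is a minimal, hypothetical sketch of why simulated practice is so cheap: a toy environment and tabular Q-learning (nothing like OpenAI's actual system) can rack up hundreds of thousands of simulated steps in a few seconds of wall-clock time.

```python
import random
import time

def step(state: int, action: int) -> tuple[int, float]:
    """Toy environment: move left/right on a line; reward for reaching 10."""
    next_state = max(0, min(10, state + (1 if action == 1 else -1)))
    return next_state, 1.0 if next_state == 10 else 0.0

q = [[0.0, 0.0] for _ in range(11)]  # Q-values for states 0..10, two actions
alpha, gamma, epsilon = 0.1, 0.95, 0.1

start = time.time()
sim_steps = 0
for episode in range(5_000):
    state = 0
    for _ in range(50):
        # Epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = max((0, 1), key=lambda a: q[state][a])
        nxt, reward = step(state, action)
        # Standard Q-learning update
        q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
        state, sim_steps = nxt, sim_steps + 1

print(f"{sim_steps:,} simulated steps in {time.time() - start:.1f}s of wall-clock time")
```

The agent's "experience" is counted in simulated steps, not real time, which is exactly what let OpenAI compress millennia of practice into a tractable training run.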

Why People Believe

When you consider the wide spectrum of those who believe, or at least acknowledge a probability, that we live in a simulation, a common denominator is present. Believers share a deep faith in science, technological progress, and exponential thinking, and most of them are highly successful.

If you are Elon Musk, what is more likely: that out of 7.7 billion people you are the first person to take humans to Mars, or that you are living in a simulation? This may be why Elon Musk has openly stated that “There’s a billion to one chance we’re living in base reality.”

One of the more compelling arguments comes from George Hotz, the enigmatic hacker and founder of autonomous vehicle technology startup Comma.ai. His engaging presentation at the popular SXSW 2019 conference had attendees believing for an hour that they were living inside a simulation. What we can conclude with certainty is that we should keep an open mind.


Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI & blockchain projects. He is the co-founder of Securities.io, a news website focusing on digital securities, and a founding partner of unite.AI. He is also a member of the Forbes Technology Council.


Is AI an Existential Threat?


When discussing Artificial Intelligence (AI), a common debate is whether AI is an existential threat. The answer requires understanding the technology behind Machine Learning (ML), and recognizing that humans have a tendency to anthropomorphize. We will explore two different types of AI: Artificial Narrow Intelligence (ANI), which is available now and is cause for concern, and Artificial General Intelligence (AGI), the threat most commonly associated with apocalyptic renditions of AI.

Artificial Narrow Intelligence Threats

To understand what ANI is, you simply need to understand that every single AI application currently available is a form of ANI. These are AI systems with a narrow field of specialty. For example, autonomous vehicles use AI designed with the sole purpose of moving a vehicle from point A to point B. Another type of ANI might be a chess program optimized to play chess; even if the chess program continuously improves itself using reinforcement learning, it will never be able to operate an autonomous vehicle.

Focused on whatever operation they are responsible for, ANI systems are unable to use generalized learning to take over the world. That is the good news; the bad news is that, with their reliance on a human operator, ANI systems are susceptible to biased data, human error, or even worse, a rogue human operator.

AI Surveillance

There may be no greater danger to humanity than humans using AI to invade privacy, and in some cases using AI surveillance to completely prevent people from moving freely. China, Russia, and other nations passed regulations during COVID-19 to enable them to monitor and control the movement of their respective populations. These are laws which, once in place, are difficult to remove, especially in societies with autocratic leaders.

In China, cameras are stationed outside of people’s homes, and in some cases inside the person’s home. Each time a member of the household leaves, an AI monitors the time of arrival and departure, and if necessary alerts the authorities. As if that was not sufficient, with the assistance of facial recognition technology, China is able to track the movement of each person every time they are identified by a camera. This offers absolute power to the entity controlling the AI, and absolutely zero recourse to its citizens.

This scenario is dangerous because corrupt governments can carefully monitor the movements of journalists, political opponents, or anyone who dares to question the authority of the government. It is easy to understand how journalists and citizens would hesitate to criticize governments when their every movement is being monitored.

Fortunately, many cities are fighting to keep facial recognition out. Notably, Portland, Oregon recently passed a law that blocks facial recognition from being used unnecessarily in the city. While these changes in regulation may have gone unnoticed by the general public, in the future they could be the difference between cities that offer some type of autonomy and freedom, and cities that feel oppressive.

Autonomous Weapons and Drones

Over 4,500 AI researchers have been calling for a ban on autonomous weapons and have created the Ban Lethal Autonomous Weapons website. The group counts many notable non-profits as signatories, such as Human Rights Watch, Amnesty International, and The Future of Life Institute, which itself has a stellar scientific advisory board including Elon Musk, Nick Bostrom, and Stuart Russell.

Before continuing I will share this quote from The Future of Life Institute, which best explains why there is clear cause for concern: “In contrast to semi-autonomous weapons that require human oversight to ensure that each target is validated as ethically and legally legitimate, such fully autonomous weapons select and engage targets without human intervention, representing complete automation of lethal harm.”

Currently, smart bombs are deployed with a target selected by a human, and the bomb then uses AI to plot a course and land on its target. The problem is what happens when we decide to completely remove the human from the equation?

When an AI chooses which humans to target, as well as what type of collateral damage is deemed acceptable, we may have crossed a point of no return. This is why so many AI researchers are opposed to researching anything that is remotely related to autonomous weapons.

There are multiple problems with simply attempting to block autonomous weapons research. The first problem is that even if advanced nations such as Canada, the USA, and most of Europe agree to a ban, it doesn’t mean rogue nations such as China, North Korea, Iran, and Russia will play along. The second and bigger problem is that AI research and applications designed for use in one field may be used in a completely unrelated field.

For example, computer vision continuously improves and is important for developing autonomous vehicles, precision medicine, and other important use cases. It is also fundamentally important for regular drones, and for drones which could be modified to become autonomous. One potential use case of advanced drone technology is developing drones that can monitor and fight forest fires, completely removing firefighters from harm’s way. To do this, you would need to build drones that are able to fly into harm’s way, navigate in low or zero visibility, and drop water with impeccable precision. It is not a far stretch to then use this identical technology in an autonomous drone designed to selectively target humans.

It is a dangerous predicament, and at this point in time no one fully understands the implications of advancing, or of attempting to block, the development of autonomous weapons. It is nonetheless something that we need to keep our eyes on; enhancing whistleblower protection may enable those in the field to report abuses.

Rogue operator aside, what happens if AI bias creeps into AI technology that is designed to be an autonomous weapon?

AI Bias

One of the most underreported threats of AI is AI bias. This is simple to understand, as most of it is unintentional. AI bias slips in when an AI reviews data fed to it by humans and, using pattern recognition on that data, incorrectly reaches conclusions which may have negative repercussions on society. For example, an AI that is fed literature from the past century on how to identify medical personnel may reach the unwanted sexist conclusion that women are always nurses, and men are always doctors.
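As a toy illustration of how this happens, consider a deliberately naive "model" that does nothing but memorize frequencies from a historically skewed dataset; the data and numbers below are invented for illustration:

```python
from collections import Counter

# Hypothetical, historically skewed training data: (occupation, gender) pairs.
training_data = (
    [("nurse", "female")] * 90 + [("nurse", "male")] * 10
    + [("doctor", "male")] * 85 + [("doctor", "female")] * 15
)

counts = Counter(training_data)

def predict_gender(occupation: str) -> str:
    """Return the majority gender seen for this occupation in the data."""
    female = counts[(occupation, "female")]
    male = counts[(occupation, "male")]
    return "female" if female > male else "male"

# The model has learned the historical skew, not a truth about the world.
print(predict_gender("nurse"))   # -> "female"
print(predict_gender("doctor"))  # -> "male"
```

Real models are far more sophisticated, but the failure mode is the same: patterns in the training data become the model's conclusions.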

A more dangerous scenario is when AI used to sentence convicted criminals is biased towards giving longer prison sentences to minorities. The AI’s criminal risk assessment algorithms are simply studying patterns in the data that has been fed into the system. This data indicates that historically certain minorities are more likely to re-offend, even when this is due to poor datasets which may be influenced by police racial profiling. The biased AI then reinforces negative human policies. This is why AI should serve as a guideline, never as judge and jury.

Returning to autonomous weapons, if we have an AI which is biased against certain ethnic groups, it could choose to target certain individuals based on biased data, and it could go so far as ensuring that any type of collateral damage impacts certain demographics less than others. For example, when targeting a terrorist, before attacking it could wait until the terrorist is surrounded by those who follow the Muslim faith instead of Christians.

Fortunately, it has been shown that AI designed by diverse teams is less prone to bias. This is reason enough for enterprises to attempt, whenever possible, to hire a diverse, well-rounded team.

Artificial General Intelligence Threats

It should be stated that while AI is advancing at an exponential pace, we have still not achieved AGI. When we will reach AGI is up for debate, and everyone has a different timeline. I personally subscribe to the views of Ray Kurzweil, inventor, futurist, and author of ‘The Singularity is Near’, who believes that we will have achieved AGI by 2029.

AGI will be the most transformational technology in the world. Within weeks of AI achieving human-level intelligence, it will then reach superintelligence, defined as intelligence that far surpasses that of a human.

With this level of intelligence an AGI could quickly absorb all human knowledge and use pattern recognition to identify biomarkers that cause health issues, and then treat those conditions by using data science. It could create nanobots that enter the bloodstream to target cancer cells or other attack vectors. The list of accomplishments an AGI is capable of is infinite. We’ve previously explored some of the benefits of AGI.

The problem is that humans may no longer be able to control the AI. Elon Musk describes it this way: “With artificial intelligence we are summoning the demon.” The question is whether we will be able to control this demon.

Achieving AGI may simply be impossible until an AI leaves a simulated setting to truly interact in our open-ended world. Self-awareness cannot be designed; instead, it is believed that an emergent consciousness is likely to evolve when an AI has a robotic body featuring multiple input streams. These inputs may include tactile stimulation, voice recognition with enhanced natural language understanding, and augmented computer vision.

The advanced AI may be programmed with altruistic motives and want to save the planet. Unfortunately, the AI may use data science, or even a decision tree, to arrive at unwanted faulty logic, such as assessing that it is necessary to sterilize humans, or to eliminate some of the human population, in order to control human overpopulation.

Careful thought and deliberation are needed when building an AI whose intelligence will far surpass that of a human. Many nightmare scenarios have been explored.

Professor Nick Bostrom, in his Paperclip Maximizer argument, has argued that a misconfigured AGI, if instructed to produce paperclips, would simply consume all of Earth’s resources producing them. While this seems a little far-fetched, a more pragmatic viewpoint is that an AGI could be controlled by a rogue state or a corporation with poor ethics. This entity could train the AGI to maximize profits, and in this case, with poor programming and zero remorse, it could choose to bankrupt competitors, destroy supply chains, hack the stock market, liquidate bank accounts, or attack political opponents.

This is when we need to remember that humans tend to anthropomorphize. We cannot ascribe human-type emotions, wants, or desires to an AI. While there are diabolical humans who kill for pleasure, there is no reason to believe that an AI would be susceptible to this type of behavior. It is inconceivable for humans to even consider how an AI would view the world.

Instead what we need to do is teach AI to always be deferential to a human. The AI should always have a human confirm any changes in settings, and there should always be a fail-safe mechanism. Then again, it has been argued that AI will simply replicate itself in the cloud, and by the time we realize it is self-aware it may be too late.

This is why it is so important to open source as much AI as possible and to have rational discussions regarding these issues.

Summary

There are many challenges to AI; fortunately, we still have many years to collectively figure out the future path that we want AGI to take. In the short term we should focus on creating a diverse AI workforce that includes as many women as men, and as many ethnic groups with diverse points of view as possible.

We should also create whistleblower protections for researchers working on AI, and we should pass laws and regulations which prevent widespread abuse of state or corporate surveillance. Humans have a once-in-a-lifetime opportunity to improve the human condition with the assistance of AI; we just need to ensure that we carefully create a societal framework that best enables the positives while mitigating the negatives, which include existential threats.



Do you recommend Recommendation Engines?


In business, the needle-in-a-haystack problem is a constant challenge, and recommendation engines are here to help tackle it.

In e-commerce and retail, you offer hundreds or thousands of products. Which is the right product for your customers?

In sales and marketing, you have a large number of prospects in your pipeline. Yet, you only have so many hours in the day. So, you face the challenge of deciding where precisely to focus your effort.

There is a specialized technology, powered by AI and Big Data, which makes these challenges much easier to manage: recommendation engines.

What are recommender systems?

In its simplest terms, a recommendation engine sorts through many items and predicts the selection most relevant to the user. For consumers, Amazon’s product recommendation engine is a familiar example. In the entertainment world, Netflix has worked hard to develop its engine, and that engine has delivered bottom-line benefits:

“[Netflix’s] sophisticated recommendation system and personalized user experience has allowed them to save $1 billion per year from service cancellations.” – The ROI of recommendation engines for marketing

From the end user’s perspective, it is often unclear how recommendation engines work. We’re going to pull the curtain back and explain how they work, starting with the key ingredient: data.

Recommendation Engines: What data do they use?

The data you need for a recommendation engine depends on your goal. Assume your goal is to increase sales in an e-commerce company. In that case, the bare minimum required data would fall into two categories: a product database and end-user behavior. To illustrate how this works, look at this simple example.

  • Company: USB Accessories, Inc. The company specializes in selling USB accessories and products like cables, thumb drives, and hubs to consumers and businesses.
  • Product Data. To keep the initial recommendation engine simple, the company limits it to 100 products.
  • User Data. In the case of an online store, user data will include website analytics information, email marketing, and other sources. For instance, you may find that 50% of customers who buy an external hard drive also buy USB cables.
  • Recommendation Output. In this case, your recommendation engine may generate a recommendation (or a discount code) for hard drive buyers to encourage them to buy USB cables, as in the sketch below.
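A minimal sketch of that last step might look like the following; the order data is hypothetical, and a production engine would use far richer signals:

```python
from collections import Counter
from itertools import combinations

# Hypothetical past orders for USB Accessories, Inc.
orders = [
    {"external hard drive", "usb cable"},
    {"external hard drive", "usb cable", "usb hub"},
    {"thumb drive"},
    {"external hard drive", "thumb drive"},
]

# Count how often each pair of products appears in the same order.
pair_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

def recommend(product: str) -> str | None:
    """Return the product most often bought alongside the given one."""
    companions = {b: n for (a, b), n in pair_counts.items() if a == product}
    return max(companions, key=companions.get) if companions else None

print(recommend("external hard drive"))  # -> "usb cable"
```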

In practice, the best recommendation engines use much more data. As a general rule, recommendation engines produce better business results when they have a large volume of data to use.

How do recommendation engines use your data?

Many recommendation engines use a handful of techniques to process your data.

Content-based filtering

This type of recommendation algorithm looks at the attributes of items a user has shown interest in and attempts to recommend similar items. In this case, the engine focuses on the product itself, highlighting related items. This type of recommendation engine is relatively simple to build and is a good starting point for companies with limited data.
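As a rough sketch, content-based filtering can be as simple as comparing product descriptions with TF-IDF vectors and cosine similarity; the catalog below is invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical catalog: product name -> description.
catalog = {
    "usb-c cable 2m":   "usb-c charging cable fast charge 2 meter",
    "usb-c cable 1m":   "usb-c charging cable fast charge 1 meter",
    "4-port usb hub":   "usb hub 4 ports data transfer",
    "64gb thumb drive": "usb flash thumb drive 64gb storage",
}

names = list(catalog)
vectors = TfidfVectorizer().fit_transform(catalog.values())
similarity = cosine_similarity(vectors)  # pairwise similarity matrix

def similar_to(product: str) -> str:
    """Return the most similar other product, judged by description."""
    i = names.index(product)
    ranked = similarity[i].argsort()[::-1]  # most similar first
    return next(names[j] for j in ranked if j != i)

print(similar_to("usb-c cable 2m"))  # likely -> "usb-c cable 1m"
```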

Collaborative filtering

Have you asked somebody else for a recommendation before making a purchase? Or considered online reviews in your buying process? If so, you have experienced collaborative filtering. More advanced recommendation engines analyze user reviews, ratings, and other user-generated content to produce relevant suggestions. This type of recommendation engine strategy is powerful because it leverages social proof.
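A minimal user-based variant can be sketched in a few lines; the rating matrix below is hypothetical, and real systems use far larger, sparser data and more robust similarity measures:

```python
import numpy as np

# Hypothetical 1-5 star ratings; 0 means "not rated". Rows are users, columns items.
ratings = np.array([
    [5, 4, 0, 1],   # user 0
    [4, 5, 1, 0],   # user 1
    [1, 0, 5, 4],   # user 2
    [0, 1, 4, 5],   # user 3
], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def predict(user: int, item: int) -> float:
    """Similarity-weighted average of other users' ratings for this item."""
    weights, weighted_sum = 0.0, 0.0
    for other in range(len(ratings)):
        if other == user or ratings[other, item] == 0:
            continue
        sim = cosine(ratings[user], ratings[other])
        weights += abs(sim)
        weighted_sum += sim * ratings[other, item]
    return weighted_sum / weights if weights else 0.0

print(round(predict(0, 2), 2))  # user 0's predicted rating for item 2
```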

Hybrid recommenders

Hybrid recommendation engines combine two or more recommendation methods to produce better results. Returning to the e-commerce example outlined above, let’s say you have acquired user reviews and ratings (e.g., 1 to 5 stars) over the past year. Now, you can use both content-based filtering and collaborative filtering to present recommendations. Combining multiple recommendation engines or algorithms successfully usually takes experimentation. For that reason, it is best considered a relatively advanced strategy.
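At its simplest, a hybrid recommender can be sketched as a weighted blend of the two scores above; the 0-to-1 normalization and the 60/40 weighting are arbitrary choices that would normally be tuned through the experimentation noted above:

```python
def hybrid_score(content_score: float, collab_score: float,
                 w_content: float = 0.4, w_collab: float = 0.6) -> float:
    """Weighted blend of a content-based and a collaborative score, both in [0, 1]."""
    return w_content * content_score + w_collab * collab_score

# e.g. an item that is textually similar (0.8) and well rated by peers (0.9):
print(hybrid_score(0.8, 0.9))  # -> 0.86
```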

A recommendation engine is only successful if you feed it high-quality data. It also cannot perform effectively if your company database contains errors or out-of-date information. That’s why you need to continuously invest resources in data quality.

Case Studies

Hiring Automated: Candidate Scoring

There are more than 50 applicants on average per job posting, according to Jobvite research. For human resources departments and managers, that applicant volume creates a tremendous amount of work. To simplify the process, Blue Orange implemented a recommendation engine for a Fortune 500 hedge fund. This HR automation project helped the company rank candidates in a standardized way. Using ten years’ worth of applicant data and resumes, the firm now has a sophisticated scoring model to find good-fit candidates.

A hedge fund in New York City needed to parse inconsistent resumes, which required OCR, in order to improve its hiring process. Even the best OCR parsing leaves you with messy and unstructured data. Then, as a candidate moves through the application process, humans get involved, adding free-form text reviews of the applicant along with both linguistic and personal biases. In addition, each data source is siloed, providing limited analytical opportunity.

Approach: After assessing multiple companies’ hiring processes, we found three consistent opportunities to systematically improve hiring outcomes using NLP machine learning. The problem areas are: correctly structuring candidate resume data, assessing job fit, and reducing human hiring bias. With a cleaned and structured dataset, we were able to perform both sentiment analysis on the text and subjectivity detection to reduce candidate bias in human assessment, as sketched below.
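As an illustration of that step, a sentiment-and-subjectivity pass might look like the sketch below; TextBlob is one possible library, and both the pipeline details and the sample reviews are assumptions, not the firm's actual system:

```python
from textblob import TextBlob

# Hypothetical free-form reviewer comments about a candidate.
reviews = [
    "Strong Python background, shipped three production systems.",
    "I just didn't like him, something felt off.",
]

for review in reviews:
    blob = TextBlob(review)
    # polarity is in [-1, 1]; subjectivity is in [0, 1]. High subjectivity
    # flags opinion-heavy reviews that may carry personal bias.
    print(f"polarity={blob.sentiment.polarity:+.2f}  "
          f"subjectivity={blob.sentiment.subjectivity:.2f}  {review}")
```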

Results: Using keyword detection classifiers, optical character recognition, and cloud-based NLP engines, we were able to scrub string text and turn it into relational data. With structured data, we provided a fast, interactive, and searchable business analytics dashboard in AWS QuickSight.

E-Commerce: Zageno Medical Supplies

Another example of recommendation engines being implemented in the real world comes from Zageno. Zageno is an e-commerce company that does for lab scientists what Amazon does for the rest of us. The caveat is that the needs of lab scientists are exacting, so the supplies procured for their research must be as well. The quotes below are from our interview with Zageno and highlight how the company uses recommendation engines to deliver the most accurate supplies to lab scientists.

Q&A: Blue Orange Digital interviews Zageno

Question:
How has your company used a recommendation engine and what sort of results did you see?

Answer:

There are two examples of the recommendation engines that ZAGENO employs for its scientific customers. To explain these we felt it best to bullet point them.

  • ZAGENO’s Scientific Score:
    • ZAGENO’s Scientific Score is a comprehensive product rating system, specifically developed for evaluating research products. It incorporates several aspects of product data, from multiple sources, to equip scientists with a sophisticated and unbiased product rating for making accurate purchasing decisions.
    • We apply sophisticated machine-learning algorithms to accurately match, group, and categorize millions of products. The Scientific Score accounts for these categorizations, as each product’s score is calculated relative to those in the same category. The result is a rating system that scientists can trust — one that is specific to both product application and product type.
    • Standard product ratings are useful to assess products quickly, but are often biased and unreliable, due to their reliance on unknown reviews or a single metric (e.g. publications). They also provide little detail on experimental context or application. The Scientific Score utilizes a scientific methodology to objectively and comprehensively evaluate research products. It combines all necessary and relevant product information into a single 0–10 rating to support our customers in deciding which product to buy and use for their application — saving hours of product research.
    • To ensure no single factor dominates, we add cut-off points and give more weight to recent contributions. The sheer number of factors we take into account virtually eliminates any opportunity for manipulation. As a result, our score is an objective measure of the quality and quantity of available product information, which supports our customers’ purchasing decisions.
  • Alternative Products:
    • Alternative products are defined by the same values for key attributes; key attributes are defined for each category to account for specific product characteristics.
    • We are working on increasing the underlying data and attributes, and improving the algorithm, to improve the suggestions.
    • Alternative product suggestions are intended to help both scientists and procurement teams consider and evaluate potential products they might not have considered or known about otherwise.
    • Alternative products are solely defined by product characteristics and are independent of suppliers, brands, or other commercial data.

Do you recommend recommendation systems? 

“Yes, but make sure you are using the right data to base your recommendation on both the quality and quantity reflecting true user expectations. Create transparency because nobody, particularly scientists, will trust or rely on a black box. Share with your users which information is used, how it is weighted, and keep on learning so as to continually improve. Finally, complete the cycle by taking the user feedback that you’ve collected and bring it back into the system.” – Zageno

The power of recommendation engines has never been greater. As shown by giants like Amazon and Netflix, recommenders can be directly responsible for increases in revenue and customer retention. Companies such as Zageno show that you do not need to be a massive company to leverage the power of recommenders. The benefits of recommendation engines span many industries, from e-commerce to human resources.

The Fast Way To Bring Recommendation Engines To Your Company

Developing a recommendation engine takes data expertise. Your internal IT team may not have the capacity to build this out. If you want to get the customer retention and efficiency benefits of recommendation engines, you don’t have to wait for IT to become less busy. Drop us a line and let us know. The Blue Orange Digital data science team is happy to make recommenders work for your benefit too!




What is the Turing Test and Why Does it Matter?


If you’ve been around Artificial Intelligence (AI), you have undoubtedly heard of the ‘Turing Test‘. First proposed by Alan Turing in 1950, the test was designed to be the ultimate experiment on whether or not an AI has achieved human-level intelligence. Conceptually, if an AI is able to pass the test, it has achieved intelligence that is equivalent to, or indistinguishable from, that of a human.

We will explore who Alan Turing is, what the test is, why it matters, and why the definition of the test may need to evolve.

Who is Alan Turing?

Turing was an eccentric British mathematician recognized for his groundbreaking, futurist ideas.

In 1935, at the age of 22, his work on probability theory won him a Fellowship of King’s College, University of Cambridge. His abstract mathematical ideas would push him in a completely different direction, into a field that was yet to be invented.

In 1936, Turing published a paper that is now recognized as the foundation of computer science. This is where he invented the concept of a ‘Universal Machine’ that could decode and perform any set of instructions.

In 1939, Turing was recruited by the British government’s code-breaking department. At the time Germany was using what is called an ‘enigma machine‘ to encipher all its military and naval signals. Turing rapidly developed a new machine (the ‘Bombe’) which was capable of breaking Enigma messages on an industrial scale. This development has been deemed instrumental in helping push back the aggression of Nazi Germany.

In 1946, Turing returned to the revolutionary idea he had published in 1936, to develop an electronic computer capable of running various types of computations. He produced a detailed design for what was called the Automatic Computing Engine (ACE).

In 1950, Turing published his seminal paper asking “Can Machines Think?”. This paper completely transformed both computer science and AI.

In 1952, after being reported to the police by a young man, Turing was convicted of gross indecency due to his homosexual activities. As a result, his government security clearance was revoked and his career was destroyed. As punishment, he was chemically castrated.

With his life shattered, Turing was discovered in his home by his cleaner on 8 June 1954. He had died from cyanide poisoning the day before, and a partly eaten apple lay next to his body. The coroner’s verdict was suicide.

Fortunately, his legacy continues to live on.

What is the Turing Test?

In 1950, Alan Turing published a seminal paper titled “Computing Machinery and Intelligence” in the journal Mind. In this detailed paper the question “Can Machines Think?” was proposed. The paper suggested abandoning the quest to define whether a machine can think and instead testing the machine with the ‘imitation game’. This simple game is played with three people:

  • a man (A)
  • a woman (B),
  • and an interrogator (C) who may be of either sex.

The concept of the game is that the interrogator stays in a room separate from both the man (A) and the woman (B), and tries to identify who is the man and who is the woman. The man (A) aims to deceive the interrogator, while the woman (B) can attempt to help the interrogator (C). To make this fair, no verbal cues can be used; only typewritten questions and answers are sent back and forth. The question then becomes: how does the interrogator know whom to trust?

The interrogator only knows them by the labels X and Y, and at the end of the game he simply states either ‘X is A and Y is B’ or ‘X is B and Y is A’.

The question then becomes, if we remove the man (A) or the woman (B), and replace that person with an intelligent machine, can the machine use its AI system to trick the interrogator (C) into believing that it’s a man or a woman? This is in essence the nature of the Turing Test.

In other words, if you were to communicate with an AI system unknowingly and assumed that the ‘entity’ on the other end was a human, could the AI deceive you indefinitely?
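The structure of the test can be sketched in a few lines of Python; the stand-in respondents below are trivial placeholder functions invented for illustration, not a real chatbot or a real participant:

```python
import random

def human(question: str) -> str:
    return "I'd have to think about that for a moment."

def machine(question: str) -> str:
    return "I'd have to think about that for a moment."  # tries to mimic the human

# The interrogator sees only typewritten answers, labelled X and Y.
respondents = {"X": human, "Y": machine}
if random.random() < 0.5:  # hide which label is the machine
    respondents = {"X": machine, "Y": human}

for question in ["What did you dream about last night?", "Describe the smell of rain."]:
    print(f"Q: {question}")
    for label, respondent in respondents.items():
        print(f"  {label}: {respondent(question)}")

guess = input("Which respondent is the machine, X or Y? ")
if respondents.get(guess) is machine:
    print("Correct identification.")
else:
    print("Fooled - the machine passed this round.")
```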

Why the Turing Test Matters

In his paper, Alan Turing alluded to his belief that the Turing Test would eventually be beaten, predicting that this would happen by around the year 2000: “I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.”

When looking at the Turing Test through a modern lens it seems very possible that an AI system could trick a human for five minutes. How often have humans interacted with support chatbots not knowing if the chatbot is a human or a bot?

There have been many reports of the Turing Test being passed. In 2014, a chatbot program named Eugene Goostman, which simulates a 13-year-old Ukrainian boy, was said to have passed the Turing Test at an event organised by the University of Reading. The chatbot apparently convinced 33% of the judges at the Royal Society in London that it was human. Nonetheless, critics were quick to point out the inadequacies of the test: the fact that so many judges were not convinced, the short duration of the test (only five minutes), and the lack of forthcoming evidence for the achievement.

In 2018, Google Duplex, with the assistance of Google Assistant, made a phone call to a hair salon to schedule a haircut appointment. In this case, the AI system did not introduce itself as an AI and pretended to be human while speaking to the salon’s receptionist. After a short exchange, a haircut was successfully scheduled and both parties hung up.

Nonetheless, in an age of Natural Language Processing (NLP), with its subfields of natural-language understanding (NLU) and natural-language interpretation (NLI), the question needs to be asked: if a machine asks and answers questions without fully understanding the context behind what it says, is the machine truly intelligent?

After all, if you review the technology behind Watson, the computer system developed by IBM to answer questions posed in natural language and defeat Jeopardy champions, it becomes apparent that Watson was able to beat the world champions by downloading a large chunk of the world’s knowledge, without actually understanding the context behind this language. It had access to 200 million pages of information from a variety of sources, including Wikipedia. While Watson was restricted from accessing the internet during a game, this is a minor restriction for an AI that can simply absorb all of that knowledge before the game begins.

Similar to a search engine, Watson matched keywords and reference points. If an AI can achieve this level of comprehension, then we should consider that, given today’s advancing technology, deceiving a human for five or ten minutes is simply not setting the bar high enough.

Should the Turing Test Evolve?

The Turing Test has done a remarkable job of standing the test of time. Nonetheless, AI has evolved dramatically since 1950. Every time AI achieves a feat of which we claimed only humans were capable, we set the bar higher. It will only be a matter of time until AI is able to consistently pass the Turing Test as we understand it.

When reviewing the history of AI, the ultimate barometer of whether AI can achieve human-level intelligence has almost always been whether it can defeat humans at various games. In 1949, Claude Shannon published his thoughts on how a computer might be made to play chess, as chess was then considered the ultimate summit of human intelligence.

It wasn’t until February 10, 1996, after a grueling three-hour match, that world chess champion Garry Kasparov lost the first game of a six-game match against Deep Blue, an IBM computer capable of evaluating 200 million moves per second. It wasn’t long until chess was no longer considered the pinnacle of human intelligence. Chess was replaced with the game of Go, a game which originated in China over 3,000 years ago, and the bar for AI achieving human-level intelligence was moved up.

Fast forward to October 2015, when AlphaGo played its first match against the reigning three-time European Champion, Mr Fan Hui, and won the first-ever match between an AI and a Go professional with a score of 5–0. Go is considered to be the most sophisticated game in the world, with its 10³⁶⁰ possible moves. All of a sudden the bar was moved up again.

Eventually, the argument became that an AI had to be able to defeat teams of human players at complex multiplayer video games. OpenAI quickly rose to the challenge, using deep reinforcement learning to field a team of agents that could beat professional Dota 2 players.

It is due to this consistent moving of the proverbial bar that we should consider a new, modern definition of the Turing Test. The current test may rely too much on deception and on the technology found in a chatbot. Potentially, with the evolution of robotics, we may require that for an AI to truly achieve human-level intelligence it will need to interact and “live” in our actual world, rather than in a game environment or a simulated environment with defined rules.

If, instead of deceiving us, a robot can interact with us like any other human, by having conversations, proposing ideas and solutions, maybe only then will the Turing Test be passed. The ultimate version of the Turing Test may be when an AI approaches a human and attempts to convince us that it is self-aware.

At this point, we will also have achieved Artificial General Intelligence (AGI). It would then be inevitable that the AI/robot would rapidly surpass us in intelligence.
