
How We Can Benefit from Advancing Artificial General Intelligence (AGI)

Creating an Artificial General Intelligence (AGI) is the ultimate goal for many AI specialists. An AGI agent could be leveraged to tackle a myriad of the world’s problems. For instance, you could introduce a problem to an AGI agent, and the AGI could use deep reinforcement learning, combined with an emergent consciousness, to make real-life decisions.

The difference between an AGI and a regular algorithm is the AGI’s ability to ask itself the important questions. An AGI can formulate the end solution that it wishes to arrive at, simulate hypothetical ways of getting there, and then make an informed decision on which simulated path best matches the goals that were set.

The debate over how an AGI might emerge has been around since the term “artificial intelligence” was first introduced at the Dartmouth conference in 1956. Since then, many companies have attempted to tackle the AGI challenge; OpenAI is probably the most recognized of them. OpenAI was launched as a non-profit on December 11, 2015, with the mission statement “to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.”

The OpenAI mission statement clearly outlines the potential gains that an AGI can offer society. Suddenly, problems which were too sophisticated for humans and regular AI systems can be tackled.

The potential benefits of releasing an AGI are astronomical. You could state a goal of curing all forms of cancer, and the AGI could connect itself to the internet to scan all of the current research in every language. The AGI could then formulate solutions and simulate all potential outcomes. It would combine the benefits of the consciousness humans currently possess with the near-infinite knowledge of the cloud, using deep learning for pattern recognition across this big data and reinforcement learning to simulate different environments and outcomes. All of this, combined with a consciousness that never requires a rest period and can remain 100% focused on the task at hand.

The potential downsides of AGI, of course, cannot be overstated. An AGI with the goal of continuously upgrading itself could swallow everything in its path in order to maximize the computing resources and atoms it needs to forever upgrade its system. This theory was explored in detail by Professor Nick Bostrom in the Paperclip Maximizer argument: a misconfigured AGI is instructed to produce paperclips and does so until nothing is left, with literally every resource on earth consumed to maximize the production of paperclips.

A more pragmatic viewpoint is that an AGI could be controlled by a rogue state or a corporation with poor ethics. Such an entity could program the AGI to maximize profits, and with poor programming and zero remorse it could choose to bankrupt competitors, destroy supply chains, hack the stock market, liquidate bank accounts, and so on.

Therefore, a code of ethics needs to be programmed into an AGI from the outset. A code of ethics has been debated by many minds, and the concept was first introduced to the general population in the form of the Three Laws of Robotics by author Isaac Asimov.

There are some problems with the Three Laws of Robotics, as the laws can be interpreted in different ways. We previously discussed programming ethics into an AGI in our interview with Charles J. Simon, author of Will Computers Revolt?

Brain Simulator II was released to the public on April 7, 2020. This version of the brain simulator enables experimentation with diverse AI algorithms to create an end-to-end AGI system, with modules for vision, hearing, robotic control, learning, internal modeling, and even planning, imagination, and forethought.

“New, unique algorithms that directly address cognition are the key to helping AI evolve into AGI,” Simon explains.

“Brain Simulator II combines vision and touch into a single mental model and is making progress toward the comprehension of causality and the passage of time,” Simon notes. “As the modules are enhanced, progressively more intelligence will emerge.”

Brain Simulator II bridges Artificial Neural Network (ANN) and Symbolic AI techniques to create new possibilities. It creates an array of millions of neurons interconnected by any number of synapses.

This enables researchers and developers alike to explore possibilities for AGI development.

Anyone interested in Brain Simulator II can follow along or participate in the development process by downloading the software, suggesting new features, and (for advanced developers) even adding custom modules. You can also follow its creator Charles Simon on Twitter.

In the meantime, society has recently been disrupted by the COVID-19 virus. Had an AGI system been in place, we could have used it to quickly identify how to stop the spread of COVID-19 and, more importantly, how to treat COVID-19 patients. While it may be too late for an AGI to help with this outbreak, in future outbreaks an AGI could be the best tool in our arsenal.


Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI and blockchain projects. He is the co-founder of Securities.io, a news website focusing on digital securities, and a founding partner of unite.AI. He is also a member of the Forbes Technology Council.


Is AI an Existential Threat?


When discussing Artificial Intelligence (AI), a common debate is whether AI is an existential threat. The answer requires understanding the technology behind Machine Learning (ML) and recognizing that humans have a tendency to anthropomorphize. We will explore two different types of AI: Artificial Narrow Intelligence (ANI), which is available now and is cause for concern, and Artificial General Intelligence (AGI), the threat most commonly associated with apocalyptic renditions of AI.

Artificial Narrow Intelligence Threats

To understand what ANI is, you simply need to understand that every AI application currently available is a form of ANI. These are AI systems with a narrow field of specialty. For example, autonomous vehicles use AI designed with the sole purpose of moving a vehicle from point A to point B. Another type of ANI might be a chess program optimized to play chess; even if the chess program continuously improves itself by using reinforcement learning, it will never be able to operate an autonomous vehicle.

Focused only on the operations they are responsible for, ANI systems are unable to use generalized learning in order to take over the world. That is the good news; the bad news is that, reliant on a human operator, an ANI system is susceptible to biased data, human error, or even worse, a rogue human operator.

AI Surveillance

There may be no greater danger to humanity than humans using AI to invade privacy, and in some cases using AI surveillance to completely prevent people from moving freely. China, Russia, and other nations passed regulations during COVID-19 enabling them to monitor and control the movement of their respective populations. These are laws which, once in place, are difficult to remove, especially in societies that feature autocratic leaders.

In China, cameras are stationed outside of people’s homes, and in some cases inside the home itself. Each time a member of the household leaves, an AI monitors the time of arrival and departure and, if necessary, alerts the authorities. As if that were not sufficient, with the assistance of facial recognition technology, China is able to track the movement of each person every time they are identified by a camera. This offers absolute power to the entity controlling the AI, and absolutely zero recourse to its citizens.

This scenario is dangerous because corrupt governments can carefully monitor the movements of journalists, political opponents, or anyone who dares to question the authority of the government. It is easy to understand how journalists and citizens would be cautious about criticizing governments when every movement is being monitored.

Fortunately, many cities are fighting to prevent facial recognition from infiltrating their streets. Notably, Portland, Oregon recently passed a law that blocks facial recognition from being used unnecessarily in the city. While these changes in regulation may have gone unnoticed by the general public, in the future they could be the difference between cities that offer some type of autonomy and freedom and cities that feel oppressive.

Autonomous Weapons and Drones

Over 4,500 AI researchers have been calling for a ban on autonomous weapons and have created the Ban Lethal Autonomous Weapons website. The group counts many notable non-profits as signatories, such as Human Rights Watch, Amnesty International, and The Future of Life Institute, which itself has a stellar scientific advisory board including Elon Musk, Nick Bostrom, and Stuart Russell.

Before continuing, I will share this quote from The Future of Life Institute, which best explains why there is clear cause for concern: “In contrast to semi-autonomous weapons that require human oversight to ensure that each target is validated as ethically and legally legitimate, such fully autonomous weapons select and engage targets without human intervention, representing complete automation of lethal harm.”

Currently, smart bombs are deployed with a target selected by a human, and the bomb then uses AI to plot a course and land on its target. The problem is what happens when we decide to completely remove the human from the equation.

When an AI chooses which humans to target, as well as the type of collateral damage that is deemed acceptable, we may have crossed a point of no return. This is why so many AI researchers are opposed to researching anything that is remotely related to autonomous weapons.

There are multiple problems with simply attempting to block autonomous weapons research. The first problem is that even if advanced nations such as Canada, the USA, and most of Europe agree to the ban, it doesn’t mean rogue nations such as China, North Korea, Iran, and Russia will play along. The second and bigger problem is that AI research and applications designed for use in one field may be used in a completely unrelated field.

For example, computer vision continuously improves and is important for developing autonomous vehicles, precision medicine, and other important use cases. It is also fundamentally important for regular drones, or for drones which could be modified to become autonomous. One potential use case of advanced drone technology is developing drones that can monitor and fight forest fires, which would completely remove firefighters from harm’s way. To do this, you would need to build drones that are able to fly into harm’s way, navigate in low or zero visibility, and drop water with impeccable precision. It is not a far stretch to then use this identical technology in an autonomous drone designed to selectively target humans.

It is a dangerous predicament, and at this point in time no one fully understands the implications of advancing, or of attempting to block, the development of autonomous weapons. It is nonetheless something that we need to keep our eyes on; enhancing whistleblower protection may enable those in the field to report abuses.

Rogue operator aside, what happens if AI bias creeps into AI technology that is designed to be an autonomous weapon?

AI Bias

One of the most underreported threats of AI is AI bias. This is simple to understand, as most of it is unintentional. AI bias slips in when an AI reviews data that is fed to it by humans: using pattern recognition on that data, the AI reaches incorrect conclusions which may have negative repercussions on society. For example, an AI that is fed literature from the past century on how to identify medical personnel may reach the unwanted sexist conclusion that women are always nurses and men are always doctors.
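To see how mechanically this can happen, here is a toy sketch in Python; the skewed counts are invented for illustration. A model that only learns co-occurrence patterns will faithfully reproduce whatever bias its training data contains.

```python
from collections import Counter

# Invented, deliberately skewed training data: (role, gender) pairs.
training = (
    [("nurse", "female")] * 90 + [("nurse", "male")] * 10 +
    [("doctor", "male")] * 85 + [("doctor", "female")] * 15
)

counts = Counter(training)

def predict_gender(role: str) -> str:
    """A naive model that simply reproduces the majority pattern in its data."""
    male = counts[(role, "male")]
    female = counts[(role, "female")]
    return "male" if male > female else "female"

# The model has learned nothing about medicine, only the bias in its data.
print(predict_gender("doctor"), predict_gender("nurse"))  # -> male female
```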

A more dangerous scenario arises when AI used to sentence convicted criminals is biased towards giving longer prison sentences to minorities. The AI’s criminal risk assessment algorithms simply study patterns in the data fed into the system. That data may indicate that historically certain minorities are more likely to re-offend, even when this is due to poor datasets influenced by police racial profiling. The biased AI then reinforces negative human policies. This is why AI should be a guideline, never judge and jury.

Returning to autonomous weapons: if we have an AI which is biased against certain ethnic groups, it could choose to target individuals based on biased data, and it could go so far as ensuring that any type of collateral damage impacts certain demographics less than others. For example, when targeting a terrorist, it could wait to attack until the terrorist is surrounded by those who follow the Muslim faith instead of Christians.

Fortunately, it has been shown that AI designed by diverse teams is less prone to bias. This is reason enough for enterprises to attempt, whenever possible, to hire a diverse, well-rounded team.

Artificial General Intelligence Threats

It should be stated that while AI is advancing at an exponential pace, we have still not achieved AGI. When we will reach AGI is up for debate, and everyone has a different timeline. I personally subscribe to the views of Ray Kurzweil, inventor, futurist, and author of ‘The Singularity Is Near’, who believes that we will have achieved AGI by 2029.

AGI will be the most transformational technology in the world. Within weeks of AI achieving human-level intelligence, it will then reach superintelligence which is defined as intelligence that far surpasses that of a human.

With this level of intelligence an AGI could quickly absorb all human knowledge and use pattern recognition to identify biomarkers that cause health issues, and then treat those conditions by using data science. It could create nanobots that enter the bloodstream to target cancer cells or other attack vectors. The list of accomplishments an AGI is capable of is infinite. We’ve previously explored some of the benefits of AGI.

The problem is that humans may no longer be able to control the AI. Elon Musk describes it this way: “With artificial intelligence we are summoning the demon.” The question is whether we will be able to control this demon.

Achieving AGI may simply be impossible until an AI leaves a simulated setting to truly interact in our open-ended world. Self-awareness cannot be designed; instead, it is believed that an emergent consciousness is likely to evolve when an AI has a robotic body featuring multiple input streams. These inputs may include tactile stimulation, voice recognition with enhanced natural language understanding, and augmented computer vision.

An advanced AI may be programmed with altruistic motives and want to save the planet. Unfortunately, the AI may use data science, or even a decision tree, to arrive at unwanted, faulty logic, such as assessing that it is necessary to sterilize humans, or to eliminate some of the human population, in order to control overpopulation.

Careful thought and deliberation are needed when building an AI with intelligence that will far surpass that of a human. Many nightmare scenarios have been explored.

Professor Nick Bostrom, in the Paperclip Maximizer argument, has argued that a misconfigured AGI instructed to produce paperclips would simply consume all of earth’s resources to produce those paperclips. While this seems a little far-fetched, a more pragmatic viewpoint is that an AGI could be controlled by a rogue state or a corporation with poor ethics. This entity could train the AGI to maximize profits, and with poor programming and zero remorse it could choose to bankrupt competitors, destroy supply chains, hack the stock market, liquidate bank accounts, or attack political opponents.

This is when we need to remember that humans tend to anthropomorphize. We cannot ascribe human-type emotions, wants, or desires to an AI. While there are diabolical humans who kill for pleasure, there is no reason to believe that an AI would be susceptible to this type of behavior. It is inconceivable for humans to even consider how an AI would view the world.

Instead what we need to do is teach AI to always be deferential to a human. The AI should always have a human confirm any changes in settings, and there should always be a fail-safe mechanism. Then again, it has been argued that AI will simply replicate itself in the cloud, and by the time we realize it is self-aware it may be too late.

This is why it is so important to open source as much AI as possible and to have rational discussions regarding these issues.

Summary

There are many challenges to AI. Fortunately, we still have many years to collectively figure out the future path that we want AGI to take. In the short term, we should focus on creating a diverse AI workforce that includes as many women as men, and as many ethnic groups with diverse points of view as possible.

We should also create whistleblower protections for researchers working on AI, and we should pass laws and regulations which prevent widespread abuse of state or corporate surveillance. Humans have a once-in-a-lifetime opportunity to improve the human condition with the assistance of AI; we just need to ensure that we carefully create a societal framework that best enables the positives while mitigating the negatives, which include existential threats.



Do you recommend Recommendation Engines?


In business, the needle-in-a-haystack problem is a constant challenge. Recommendation engines are here to help tackle that challenge.

In e-commerce and retail, you offer hundreds or thousands of products. Which is the right product for your customers?

In sales and marketing, you have a large number of prospects in your pipeline. Yet, you only have so many hours in the day. So, you face the challenge of deciding where precisely to focus your effort.

There is a specialized technology, powered by AI and Big Data, which makes these challenges much easier to manage: recommendation engines.

What are recommender systems?

In its simplest terms, a recommendation engine sorts through many items and predicts the selection most relevant to the user. For consumers, Amazon’s product recommendation engine is a familiar example. In the entertainment world, Netflix has worked hard to develop their engine. Netflix’s recommendation engine has delivered bottom-line benefits:

“[Thanks to its] sophisticated recommendation system and personalized user experience, [Netflix] has been able to save $1 billion per year from service cancellations.” – The ROI of recommendation engines for marketing

From the end user’s perspective, it is often unclear how recommendation engines work. We’re going to pull the curtain back and explain how they work, starting with the key ingredient: data.

Recommendation Engines: What data do they use?

The data you need for a recommendation engine depends on your goal. Assume your goal is to increase sales in an e-commerce company. In that case, the bare minimum required data would fall into two categories: a product database and end-user behavior. To illustrate how this works, look at this simple example.

  • Company: USB Accessories, Inc. The company specializes in selling USB accessories and products like cables, thumb drives, and hubs to consumers and businesses.
  • Product Data. To keep the initial recommendation engine simple, the company limits it to 100 products.
  • User Data. In the case of an online store, user data will include website analytics information, email marketing, and other sources. For instance, you may find that 50% of customers who buy an external hard drive also buy USB cables.
  • Recommendation Output. In this case, your recommendation engine may generate a recommendation (or a discount code) for hard drive buyers to encourage them to buy USB cables.

In practice, the best recommendation engines use much more data. As a general rule, recommendation engines produce better business results when they have a large volume of data to use.
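To make the co-purchase rule from the example above concrete, here is a minimal sketch in Python. The order history, product names, and confidence threshold are all invented for illustration; a production engine would work from a real transaction database.

```python
from collections import defaultdict
from itertools import combinations

# Each order is the set of products bought together (illustrative data).
orders = [
    {"external_hdd", "usb_cable"},
    {"external_hdd", "usb_cable", "usb_hub"},
    {"thumb_drive"},
    {"external_hdd", "thumb_drive"},
]

# Count how often each product, and each product pair, appears in orders.
pair_counts = defaultdict(int)
product_counts = defaultdict(int)
for order in orders:
    for product in order:
        product_counts[product] += 1
    for a, b in combinations(sorted(order), 2):
        pair_counts[(a, b)] += 1

def recommend(product, min_confidence=0.5):
    """Recommend items frequently bought alongside `product`."""
    recs = []
    for (a, b), count in pair_counts.items():
        if product in (a, b):
            other = b if a == product else a
            confidence = count / product_counts[product]
            if confidence >= min_confidence:
                recs.append((other, round(confidence, 2)))
    return sorted(recs, key=lambda r: -r[1])

print(recommend("external_hdd"))  # -> [('usb_cable', 0.67)]
```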

How do recommendation engines use your data?

Many recommendation engines use a handful of techniques to process your data.

Content-based filtering

This type of recommendation algorithm uses item attributes and user preferences to recommend similar items. In this case, the engine focuses on the product and highlights related items. This type of recommendation engine is relatively simple to build, and it is a good starting point for companies with limited data.
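As a rough sketch of the idea, assume each product is described by a simple attribute vector (category, connector type, price band, and so on); the engine then recommends the items most similar to what the user is viewing. The products and encodings below are invented for illustration.

```python
import numpy as np

# Illustrative attribute vectors; real systems use much richer feature sets.
products = {
    "usb_cable_a": np.array([1, 0, 1, 0]),
    "usb_cable_b": np.array([1, 0, 1, 1]),
    "usb_hub":     np.array([0, 1, 1, 0]),
}

def cosine(u, v):
    """Cosine similarity between two attribute vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def similar_items(item, top_n=2):
    """Rank the other products by similarity to `item`."""
    target = products[item]
    scores = [(name, round(cosine(target, vec), 2))
              for name, vec in products.items() if name != item]
    return sorted(scores, key=lambda s: -s[1])[:top_n]

print(similar_items("usb_cable_a"))  # the other cable ranks above the hub
```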

Collaborative filtering

Have you asked somebody else for a recommendation before making a purchase? Or considered online reviews in your buying process? If so, you have experienced collaborative filtering. More advanced recommendation engines analyze user reviews, ratings, and other user-generated content to produce relevant suggestions. This type of recommendation engine strategy is powerful because it leverages social proof.
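Here is a minimal sketch of one classic variant, user-based collaborative filtering over an invented ratings matrix. A user’s unknown rating for an item is predicted from the ratings of similar users; production systems typically use matrix factorization or learned embeddings instead.

```python
import numpy as np

# Illustrative user-item ratings (rows: users, columns: products; 0 = unrated).
ratings = np.array([
    [5, 4, 0, 1],   # user 0 has not rated item 2 yet
    [4, 5, 1, 2],   # user 1 has similar taste to user 0
    [1, 2, 5, 4],   # user 2 has opposite taste
])

def predict(user, item):
    """Predict a rating as a similarity-weighted average over other users."""
    target = ratings[user]
    num = den = 0.0
    for other, row in enumerate(ratings):
        if other == user or row[item] == 0:
            continue
        # Cosine similarity between the two users' rating vectors.
        sim = row @ target / (np.linalg.norm(row) * np.linalg.norm(target))
        num += sim * row[item]
        den += abs(sim)
    return num / den if den else 0.0

print(round(predict(user=0, item=2), 2))  # -> 2.15, pulled toward user 1's low rating
```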

Hybrid recommenders

Hybrid recommendation engines combine two or more recommendation methods to produce better results. Returning to the e-commerce example outlined above, let’s say you have acquired user reviews and ratings (e.g., 1 to 5 stars) over the past year. Now, you can use both content-based filtering and collaborative filtering to present recommendations. Combining multiple recommendation engines or algorithms successfully usually takes experimentation. For that reason, it is best considered a relatively advanced strategy.
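The simplest hybrid strategy is a weighted blend of the two approaches. The sketch below assumes each recommender has already produced scores normalized to [0, 1]; the numbers and the alpha weighting are illustrative and would normally be tuned by experimentation.

```python
def hybrid_scores(content, collab, alpha=0.7):
    """Blend two score dicts; alpha sets how much the content side counts."""
    items = set(content) | set(collab)
    return {item: alpha * content.get(item, 0.0) + (1 - alpha) * collab.get(item, 0.0)
            for item in items}

# Illustrative normalized scores from a content-based and a collaborative model.
content_scores = {"usb_cable_b": 0.9, "usb_hub": 0.5}
collab_scores = {"usb_cable_b": 0.6, "thumb_drive": 0.8}

ranked = sorted(hybrid_scores(content_scores, collab_scores).items(),
                key=lambda kv: -kv[1])
print(ranked)  # usb_cable_b ranks first because both models score it well
```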

A recommendation engine is only successful if you feed it high-quality data. It also cannot perform effectively if there are errors or out-of-date information in your company database. That’s why you need to invest resources in data quality continuously.

Case Studies: 

Hiring Automated: Candidate Scoring

There are more than 50 applicants on average per job posting, according to Jobvite research. For human resources departments and managers, that applicant volume creates a tremendous amount of work. To simplify the process, Blue Orange implemented a recommendation engine for a Fortune 500 hedge fund. This HR automation project helped the company rank candidates in a standardized way. Using ten years’ worth of applicant data and resumes, the firm now has a sophisticated scoring model to find good-fit candidates.

A hedge fund in New York City needed to parse inconsistent resumes, which required OCR, to improve its hiring process. Even the best OCR parsing leaves you with messy and unstructured data. Then, as a candidate moves through the application process, humans get involved, adding free-form text reviews of the applicant, along with both linguistic and personal biases, to the data set. In addition, each data source is siloed, providing limited analytical opportunity.

Approach: After assessing multiple companies’ hiring processes, we found three consistent opportunities to systematically improve hiring outcomes using NLP machine learning. The problem areas are: correctly structuring candidate resume data, assessing job fit, and reducing human hiring bias. With a cleaned and structured data set, we were able to perform both sentiment analysis on the text and subjectivity detection to reduce candidate bias in human assessment.

Results: Using keyword detection classifiers, optical character recognition, and cloud-based NLP engines, we were able to scrub string text and turn it into relational data. With structured data, we provided a fast, interactive, and searchable business analytics dashboard in AWS QuickSight.
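As a rough idea of what turning scrubbed string text into relational data can look like, here is a toy keyword-detection sketch. The keyword list, regular expression, and field names are purely illustrative and are not Blue Orange’s actual pipeline, which relied on cloud-based NLP services.

```python
import re

# Illustrative skill keywords; a real pipeline would use a curated taxonomy
# and an NLP entity extractor rather than naive substring matching.
SKILL_KEYWORDS = {"python", "sql", "aws", "nlp", "machine learning"}

def extract_record(resume_text: str) -> dict:
    """Scrub free-form resume text into a flat, queryable record."""
    text = resume_text.lower()
    skills = sorted(k for k in SKILL_KEYWORDS if k in text)
    years = re.findall(r"(\d+)\+?\s+years", text)
    return {
        "skills": skills,
        "max_years_experience": max(map(int, years)) if years else None,
    }

print(extract_record("Data engineer, 7 years experience with Python, SQL and AWS."))
# -> {'skills': ['aws', 'python', 'sql'], 'max_years_experience': 7}
```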

E-Commerce: Zageno Medical Supplies

Another example of recommendation engines being implemented in the real world comes from Zageno. Zageno is an e-commerce company that does for lab scientists what Amazon does for the rest of us. The caveat is that the needs of lab scientists are exacting, so the supplies procured for their research must be as well. The quotes below are from our interview with Zageno and highlight how they use recommendation engines to deliver the most accurate supplies to lab scientists.

Q&A: Blue Orange Digital interviews Zageno

Question:
How has your company used a recommendation engine and what sort of results did you see?

Answer:

There are two examples of the recommendation engines that ZAGENO employs for its scientific customers. To explain these, we felt it best to bullet-point them.

  • ZAGENO’s Scientific Score:
    • ZAGENO’s Scientific Score is a comprehensive product rating system, specifically developed for evaluating research products. It incorporates several aspects of product data, from multiple sources, to equip scientists with a sophisticated and unbiased product rating for making accurate purchasing decisions.
    • We apply sophisticated machine-learning algorithms to accurately match, group, and categorize millions of products. The Scientific Score accounts for these categorizations, as each product’s score is calculated relative to those in the same category. The result is a rating system that scientists can trust — one that is specific to both product application and product type.
    • Standard product ratings are useful to assess products quickly, but are often biased and unreliable, due to their reliance on unknown reviews or a single metric (e.g. publications). They also provide little detail on experimental context or application. The Scientific Score utilizes a scientific methodology to objectively and comprehensively evaluate research products. It combines all necessary and relevant product information into a single 0–10 rating to support our customers in deciding which product to buy and use for their application — saving hours of product research.
    • To ensure no single factor dominates, we add cut-off points and give more weight to recent contributions. The sheer number of factors we take into account virtually eliminates any opportunity for manipulation. As a result, our score is an objective measure of the quality and quantity of available product information, which supports our customers’ purchasing decisions. (A rough sketch of this kind of weighting appears after this list.)
  • Alternative Products:
    • Alternative products are defined by the same values for key attributes; key attributes are defined for each category to account for specific product characteristics.
    • We are working on increasing the underlying data and attributes and improving the algorithm to improve the suggestions.
    • Alternative product suggestions are intended to help both scientists and procurement teams consider and evaluate potential products they might not have considered or known about otherwise.
    • Alternative products are solely defined by product characteristics and are independent of supplier, brand, or other commercial data.
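To make the capped, recency-weighted scoring described above more concrete, here is a hypothetical sketch. The factor names, weights, cut-off, and half-life are invented for illustration and do not reflect ZAGENO’s actual model.

```python
from dataclasses import dataclass

@dataclass
class Factor:
    value: float      # factor value, normalized to [0, 1]
    weight: float     # relative importance of this factor
    age_years: float  # age of the underlying contribution

def composite_score(factors, cap=0.4, half_life=2.0):
    """A 0-10 score with recency decay and a per-factor cut-off so that
    no single factor can dominate. Illustrative only."""
    total = weight_sum = 0.0
    for f in factors:
        decay = 0.5 ** (f.age_years / half_life)       # recent data counts more
        total += min(f.value * f.weight * decay, cap)  # apply the cut-off point
        weight_sum += f.weight
    return round(10 * total / weight_sum, 1) if weight_sum else 0.0

score = composite_score([
    Factor(value=0.9, weight=1.0, age_years=0.5),  # e.g. recent user ratings
    Factor(value=0.7, weight=0.8, age_years=3.0),  # e.g. older citation data
])
print(score)  # -> 3.3
```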

Do you recommend recommendation systems? 

“Yes, but make sure you are using the right data to base your recommendations on, with both the quality and quantity reflecting true user expectations. Create transparency, because nobody, particularly scientists, will trust or rely on a black box. Share with your users which information is used and how it is weighted, and keep on learning so as to continually improve. Finally, complete the cycle by taking the user feedback that you’ve collected and bringing it back into the system.” – Zageno

The power of recommendation engines has never been greater. As shown by giants like Amazon and Netflix, recommenders can be directly responsible for increases in revenue and customer retention. Companies such as Zageno show that you do not need to be a massive company to leverage the power of recommenders. The benefits of recommendation engines span many industries, from e-commerce to human resources.

The Fast Way To Bring Recommendation Engines To Your Company

Developing a recommendation engine takes data expertise. Your internal IT team may not have the capacity to build this out. If you want to get the customer retention and efficiency benefits of recommendation engines, you don’t have to wait for IT to become less busy. Drop us a line and let us know. The Blue Orange Digital data science team is happy to make recommenders work for your benefit too!




What is the Turing Test and Why Does it Matter?


If you’ve been around Artificial Intelligence (AI), you have undoubtedly heard of ‘The Turing Test’. First proposed by Alan Turing in 1950, the test was designed to be the ultimate experiment on whether or not an AI has achieved human-level intelligence. Conceptually, if the AI is able to pass the test, it has achieved intelligence that is equivalent to, or indistinguishable from, that of a human.

We will explore who Alan Turing was, what the test is, why it matters, and why the definition of the test may need to evolve.

Who is Alan Turing?

Turing was an eccentric British mathematician who is recognized for his groundbreaking, futurist ideas.

In 1935, at the age of 22, his work on probability theory won him a Fellowship at King’s College, University of Cambridge. His abstract mathematical ideas would push him in a completely different direction, in a field that was yet to be invented.

In 1936, Turing published a paper that is now recognized as the foundation of computer science. This is where he invented the concept of a ‘Universal Machine’ that could decode and perform any set of instructions.

In 1939, Turing was recruited by the British government’s code-breaking department. At the time, Germany was using what is called an ‘Enigma machine’ to encipher all its military and naval signals. Turing rapidly developed a new machine (the ‘Bombe’) which was capable of breaking Enigma messages on an industrial scale. This development has been deemed instrumental in pushing back the aggressions of Nazi Germany.

In 1946, Turing returned to the revolutionary idea he had published in 1936, to develop an electronic computer capable of running various types of computations. He produced a detailed design for what was called the Automatic Computing Engine (ACE).

In 1950, Turing published his seminal work asking whether machines can think. This paper completely transformed both computer science and AI.

In 1952, after being reported to the police by a young man, Turing was convicted of gross indecency due to his homosexual activities. His government security clearance was revoked, his career was destroyed, and, as punishment, he was chemically castrated.

With his life shattered, he was discovered in his home by his cleaner on 8 June 1954. He had died from cyanide poisoning the day before, a partly eaten apple lying next to his body. The coroner’s verdict was suicide.

Fortunately, his legacy continues to live on.

What is the Turing Test?

In 1950, Alan Turing published a seminal paper titled “Computing Machinery and Intelligence” in the journal Mind. In this detailed paper the question “Can machines think?” was proposed. The paper suggested abandoning the quest to define whether a machine can think, and instead testing the machine with the ‘imitation game’. This simple game is played with three people:

  • a man (A)
  • a woman (B),
  • and an interrogator (C) who may be of either sex.

The concept of the game is that the interrogator stays in a room separate from both the man (A) and the woman (B), and the goal is for the interrogator to identify who is the man and who is the woman. In this instance, the goal of the man (A) is to deceive the interrogator, while the woman (B) attempts to help the interrogator (C). To make this fair, no verbal cues can be used; instead, only typewritten questions and answers are sent back and forth. The question then becomes: how does the interrogator know whom to trust?

The interrogator only knows them by the labels X and Y, and at the end of the game he simply states either ‘X is A and Y is B’ or ‘X is B and Y is A’.

The question then becomes, if we remove the man (A) or the woman (B), and replace that person with an intelligent machine, can the machine use its AI system to trick the interrogator (C) into believing that it’s a man or a woman? This is in essence the nature of the Turing Test.

In other words if you were to communicate with an AI system unknowingly, and you assumed that the ‘entity’ on the other end was a human, could the AI deceive you indefinitely?

Why the Turing Test Matters

In his paper, Alan Turing alluded to his belief that the Turing Test could eventually be beaten, predicting that this would happen around the year 2000. He states: “I believe that in about fifty years’ time it will be possible to programme computers, with a storage capacity of about 10⁹, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.”

When looking at the Turing Test through a modern lens it seems very possible that an AI system could trick a human for five minutes. How often have humans interacted with support chatbots not knowing if the chatbot is a human or a bot?

There have been many reports of the Turing Test being passed. In 2014, a chatbot program named Eugene Goostman, which simulates a 13-year-old Ukrainian boy, was said to have passed the Turing Test at an event organised by the University of Reading. The chatbot apparently convinced 33% of the judges at the Royal Society in London that it was human. Nonetheless, critics were quick to point out the inadequacies of the test: the fact that so many judges were not convinced, the short duration of the test (only five minutes), and the lack of forthcoming evidence for the achievement.

In 2018, a Google Duplex reservation system, with the assistance of Google Assistant, made a phone call to a hair salon to schedule an appointment for a haircut. In this case, the AI system did not introduce itself as AI and pretended to be human while speaking to the salon’s receptionist. After a short exchange, a haircut was successfully scheduled and both parties hung up.

Nonetheless, in an age of Natural Language Processing (NLP), with its subfields of natural-language understanding (NLU) and natural-language interpretation (NLI), the question needs to be asked: if a machine is asking and answering questions without fully understanding the context behind what it says, is the machine truly intelligent?

After all, if you review the technology behind Watson, the IBM computer system capable of answering questions posed in natural language that was developed to defeat Jeopardy champions, it becomes apparent that Watson was able to beat the world champions by downloading a large chunk of the world’s knowledge via the internet, without actually understanding the context behind this language. It drew on 200 million pages of information from a variety of sources, including Wikipedia. A restriction was in place that Watson could not access the internet while playing a game, but this is a minor restriction for an AI that can simply absorb all of that knowledge before the game begins.

Similar to a search engine, Watson relied on keywords and reference points. If an AI can achieve this level of comprehension, then we should accept that, given today’s advancing technology, deceiving a human for 5 or 10 minutes is simply not setting the bar high enough.

Should the Turing Test Evolve?

The Turing Test has done a remarkable job of standing the test of time. Nonetheless, AI has evolved dramatically since 1950. Every time AI achieves a feat that we claimed only humans were capable of, we set the bar higher. It will only be a matter of time until AI is able to consistently pass the Turing Test as we understand it.

When reviewing the history of AI, the ultimate barometer of whether AI can achieve human-level intelligence has almost always been whether it can defeat humans at various games. In 1949, Claude Shannon published his thoughts on how a computer might be made to play chess, as chess was then considered the ultimate summit of human intelligence.

It wasn’t until February 10, 1996, after a grueling three-hour match, that world chess champion Garry Kasparov lost the first game of a six-game match against Deep Blue, an IBM computer capable of evaluating 200 million moves per second. It wasn’t long until chess was no longer considered the pinnacle of human intelligence. Chess was then replaced with the game of Go, a game which originated in China over 3,000 years ago, and the bar for AI achieving human-level intelligence was moved up.

Fast forward to October 2015: AlphaGo played its first match against the reigning three-time European Champion, Mr Fan Hui, and won 5-0, the first time a program had ever beaten a Go professional. Go is considered to be the most sophisticated game in the world, with its 10³⁶⁰ possible moves. All of a sudden, the bar was moved up again.

Eventually the argument became that an AI had to be able to defeat teams of professional players at complex multiplayer video games. OpenAI rose to the challenge, using deep reinforcement learning to defeat teams of professional Dota 2 players.

It is due to this consistent moving of the proverbial bar that we should reconsider a new, modern definition of the Turing Test. The current test may rely too much on deception and on the technology in a chatbot. Potentially, with the evolution of robotics, we may require that, for an AI to truly achieve human-level intelligence, it interact and “live” in our actual world, rather than in a game environment or a simulated environment with defined rules.

If, instead of deceiving us, a robot could interact with us like any other human, by having conversations and proposing ideas and solutions, maybe only then will the Turing Test be passed. The ultimate version of the Turing Test may be when an AI approaches a human and attempts to convince us that it is self-aware.

At this point, we will also have achieved Artificial General Intelligence (AGI). It would then be inevitable that the AI/robot would rapidly surpass us in intelligence.
