
Surveillance

China Leading the Global Expansion and Exportation of AI Technology


China is leading the world in the global expansion of AI technology, having exported it to more than 60 countries, many of which have dismal human rights records. Among the nations to which Chinese companies have exported the technology are Iran, Myanmar, Venezuela, and Zimbabwe.

According to a report released by the U.S. think tank the Carnegie Endowment for International Peace, many states are deploying advanced AI surveillance tools to monitor and track their citizens. The report's new index details how individual countries are doing this.

The report had several key findings, including that AI surveillance technology is spreading to other countries at a much faster rate than experts previously expected. At least 75 of 176 countries around the globe are currently using AI technologies for surveillance: 56 countries use them for smart city/safe city platforms, 64 use them within facial recognition systems, and 52 use them for smart policing.

Another key finding is that China is a major provider of AI surveillance around the world. The technology is strongly linked to some of China's biggest companies, such as Huawei, Hikvision, Dahua, and ZTE, whose AI surveillance technology supplies 63 countries with these capabilities; 36 of those countries are part of China's Belt and Road Initiative (BRI). Huawei, one of the most talked-about Chinese companies in recent years, alone provides AI surveillance technology to at least 50 countries worldwide. The next-biggest non-Chinese supplier of the technology, Japan's NEC Corporation, provides it to only 14 countries.

China often extends soft loans to governments when pitching a product; the governments then use that money to purchase the product and equipment. This technique has been employed in countries such as Kenya, Laos, Mongolia, Uganda, and Uzbekistan, which would most likely not otherwise have access to the technology. The practice of handing out soft loans to fund purchases of AI surveillance technology concerns many observers, and questions are being raised about how heavily the Chinese government is subsidizing the purchase of “advanced repressive technology.”

China is not alone in supplying AI surveillance technology; technology supplied by U.S. firms is currently present in 32 countries. Big-name U.S. companies involved include IBM (in 11 countries), Palantir (in 9 countries), and Cisco (in 6 countries). Outside of the U.S. and China, nations that call themselves liberal democracies, such as France, Germany, Israel, and Japan, also have companies responsible for exporting and proliferating the technology. According to the report, not enough is being done to monitor and control the potential hazards of the technology's spread.

According to the index, 51 percent of advanced democracies deploy AI surveillance systems, compared with 37 percent of closed autocratic states, 41 percent of electoral/competitive autocratic states, and 41 percent of electoral/illiberal democracies. These numbers do not mean that every one of these governments is abusing the technology, but the potential is there, and many are in fact doing just that.

Countries such as China, Russia, and Saudi Arabia are known to exploit AI technology for mass surveillance, while other governments with poor human rights records are using it to reinforce repression. In China, for example, the Communist Party is using facial recognition systems to target Uighurs and other Muslim minorities in the far western region of Xinjiang.

The report also found a strong connection between a country's military expenditures and its government's use of AI surveillance systems: 40 of the top 50 military-spending countries are using AI surveillance technology.

The new report by the Carnegie Endowment for International Peace highlights dangers that experts once only foreshadowed. Those dangers are now a reality, and AI technology is seen by many nations as an extremely efficient way to track and surveil people. While it will be hard to turn back, many still believe that international organizations and agreements need to start addressing the issues surrounding AI.

 


Artificial General Intelligence

Is AI an Existential Threat?



When discussing Artificial Intelligence (AI), a common debate is whether AI is an existential threat. Answering that question requires understanding the technology behind Machine Learning (ML) and recognizing that humans have a tendency to anthropomorphize. We will explore two different types of AI: Artificial Narrow Intelligence (ANI), which is available now and is cause for concern, and Artificial General Intelligence (AGI), the threat most commonly associated with apocalyptic renditions of AI.

Artificial Narrow Intelligence Threats

To understand ANI, you simply need to know that every AI application currently available is a form of ANI. These are AI systems with a narrow field of specialty: autonomous vehicles, for example, use AI designed with the sole purpose of moving a vehicle from point A to point B. Another type of ANI might be a chess program optimized to play chess; even if that program continuously improves itself using reinforcement learning, it will never be able to operate an autonomous vehicle.

Focused solely on whatever operation it is responsible for, an ANI system is unable to use generalized learning to take over the world. That is the good news; the bad news is that, because it relies on a human operator, the system is susceptible to biased data, human error, or, even worse, a rogue human operator.

AI Surveillance

There may be no greater danger to humanity than humans using AI to invade privacy and, in some cases, using AI surveillance to completely prevent people from moving freely. China, Russia, and other nations pushed through regulations during COVID-19 that enable them to monitor and control the movement of their respective populations. Once in place, such laws are difficult to remove, especially in societies with autocratic leaders.

In China, cameras are stationed outside people's homes, and in some cases inside the home itself. Each time a member of the household leaves or returns, an AI logs the time of departure and arrival and, if necessary, alerts the authorities. As if that were not enough, with the assistance of facial recognition technology, China is able to track each person's movements every time they are identified by a camera. This offers absolute power to the entity controlling the AI, and absolutely zero recourse to its citizens.

This scenario is dangerous because corrupt governments can carefully monitor the movements of journalists, political opponents, or anyone who dares to question the authority of the government. It is easy to understand why journalists and citizens would be hesitant to criticize governments when their every movement is being monitored.

Fortunately, many cities are fighting to keep facial recognition out. Notably, Portland, Oregon recently passed a law that blocks facial recognition from being used unnecessarily in the city. While these regulatory changes may have gone unnoticed by the general public, in the future they could be the difference between cities that offer some measure of autonomy and freedom and cities that feel oppressive.

Autonomous Weapons and Drones

Over 4,500 AI researchers have called for a ban on autonomous weapons and have created the Ban Lethal Autonomous Weapons website. Signatories include many notable non-profits such as Human Rights Watch, Amnesty International, and The Future of Life Institute, which itself has a stellar scientific advisory board including Elon Musk, Nick Bostrom, and Stuart Russell.

Before continuing, I will share this quote from The Future of Life Institute, which best explains why there is clear cause for concern: “In contrast to semi-autonomous weapons that require human oversight to ensure that each target is validated as ethically and legally legitimate, such fully autonomous weapons select and engage targets without human intervention, representing complete automation of lethal harm.”

Currently, smart bombs are deployed with a target selected by a human, and the bomb then uses AI to plot a course and land on its target. The question is: what happens when we decide to remove the human from the equation entirely?

When an AI chooses which humans to target, as well as what type of collateral damage is deemed acceptable, we may have crossed a point of no return. This is why so many AI researchers are opposed to researching anything that is remotely related to autonomous weapons.

There are multiple problems with simply attempting to block autonomous weapons research. The first is that even if advanced nations such as Canada, the USA, and most of Europe agree to a ban, it doesn't mean rogue nations such as China, North Korea, Iran, and Russia will play along. The second, and bigger, problem is that AI research and applications designed for use in one field may be used in a completely unrelated field.

For example, computer vision continuously improves and is important for developing autonomous vehicles, precision medicine, and other valuable use cases. It is also fundamentally important for regular drones, or for drones that could be modified to become autonomous. One potential use case of advanced drone technology is developing drones that can monitor and fight forest fires, completely removing firefighters from harm's way. To do this, you would need to build drones that are able to fly into harm's way, navigate in low or zero visibility, and drop water with impeccable precision. It is not a far stretch to then use that identical technology in an autonomous drone designed to selectively target humans.

It is a dangerous predicament, and at this point in time no one fully understands the implications of advancing, or of attempting to block, the development of autonomous weapons. It is nonetheless something we need to keep our eyes on; enhancing whistleblower protections may enable those in the field to report abuses.

Rogue operator aside, what happens if AI bias creeps into AI technology that is designed to be an autonomous weapon?

AI Bias

One of the most underreported threats of AI is AI bias. It is simple to understand, as most of it is unintentional. AI bias slips in when an AI reviews data fed to it by humans and, using pattern recognition on that data, reaches incorrect conclusions that can have negative repercussions on society. For example, an AI fed a century's worth of literature on how to identify medical personnel may reach the unwanted, sexist conclusion that women are always nurses and men are always doctors.
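To make that mechanism concrete, here is a minimal, hypothetical sketch (not from the article, using invented data): a toy "classifier" that simply memorizes the most common label it saw for each feature value. Trained on skewed historical records, it reproduces the bias rather than learning anything about individual qualifications.

```python
# Minimal sketch (invented data): a toy "classifier" that memorizes the most
# common label per feature value. Bias in the training data comes straight
# back out in its predictions.
from collections import Counter, defaultdict

# Hypothetical historical records: (gender, role) pairs, heavily skewed.
training_data = (
    [("female", "nurse")] * 90 + [("female", "doctor")] * 10
    + [("male", "doctor")] * 85 + [("male", "nurse")] * 15
)

# "Training": count label frequencies per feature value.
counts = defaultdict(Counter)
for gender, role in training_data:
    counts[gender][role] += 1

def predict_role(gender: str) -> str:
    """Predict the majority role seen for this gender in the training data."""
    return counts[gender].most_common(1)[0][0]

print(predict_role("female"))  # -> "nurse"  (bias in, bias out)
print(predict_role("male"))    # -> "doctor"
```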

A more dangerous scenario is when an AI used to sentence convicted criminals is biased toward giving longer prison sentences to minorities. The AI's criminal risk assessment algorithms simply study patterns in the data fed into the system, and that data may indicate that certain minorities have historically been more likely to re-offend, even when this is due to poor datasets influenced by police racial profiling. The biased AI then reinforces negative human policies. This is why AI should be a guideline, never judge and jury.

Returning to autonomous weapons: if we have an AI that is biased against certain ethnic groups, it could choose to target certain individuals based on biased data, and it could go so far as to ensure that any collateral damage impacts certain demographics less than others. For example, when targeting a terrorist, it could wait before attacking until the terrorist is surrounded by those who follow the Muslim faith instead of Christians.

Fortunately, AI designed by diverse teams has been shown to be less prone to bias. This is reason enough for enterprises to attempt, whenever possible, to hire a diverse, well-rounded team.

Artificial General Intelligence Threats

It should be stated that while AI is advancing at an exponential pace, we have still not achieved AGI. When we will reach AGI is up for debate, and everyone has a different timeline. I personally subscribe to the views of Ray Kurzweil, the inventor, futurist, and author of “The Singularity Is Near,” who believes we will have achieved AGI by 2029.

AGI will be the most transformational technology in the world. Within weeks of AI achieving human-level intelligence, it will then reach superintelligence, defined as intelligence that far surpasses that of a human.

With this level of intelligence, an AGI could quickly absorb all human knowledge and use pattern recognition to identify the biomarkers behind health issues, then treat those conditions using data science. It could create nanobots that enter the bloodstream to target cancer cells and other attack vectors. The list of things an AGI could accomplish is nearly endless. We've previously explored some of the benefits of AGI.

The problem is that humans may no longer be able to control the AI. Elon Musk describes it this way: “With artificial intelligence we are summoning the demon.” The question is whether we will be able to control this demon.

Achieving AGI may simply be impossible until an AI leaves a simulation setting to truly interact with our open-ended world. Self-awareness cannot be designed; instead, it is believed that an emergent consciousness is likely to evolve when an AI has a robotic body featuring multiple input streams. These inputs may include tactile stimulation, voice recognition with enhanced natural language understanding, and augmented computer vision.

An advanced AI may be programmed with altruistic motives and want to save the planet. Unfortunately, it may use data science, or even a decision tree, to arrive at unwanted, faulty logic, such as assessing that it is necessary to sterilize humans or eliminate part of the human population in order to control overpopulation.

Careful thought and deliberation are needed when building an AI whose intelligence will far surpass that of a human. Many nightmare scenarios have already been explored.

In his paperclip maximizer argument, Professor Nick Bostrom argues that a misconfigured AGI, if instructed to produce paperclips, would simply consume all of Earth's resources to produce those paperclips. While this seems a little far-fetched, a more pragmatic viewpoint is that an AGI could be controlled by a rogue state or a corporation with poor ethics. Such an entity could train the AGI to maximize profits, and in that case, with poor programming and zero remorse, it could choose to bankrupt competitors, destroy supply chains, hack the stock market, liquidate bank accounts, or attack political opponents.

This is where we need to remember that humans tend to anthropomorphize. We cannot ascribe human-type emotions, wants, or desires to an AI. While there are diabolical humans who kill for pleasure, there is no reason to believe that an AI would be susceptible to this type of behavior. It is inconceivable for humans to even imagine how an AI would view the world.

Instead, what we need to do is teach AI to always be deferential to humans. The AI should always have a human confirm any change in settings, and there should always be a fail-safe mechanism. Then again, it has been argued that AI will simply replicate itself in the cloud, and by the time we realize it is self-aware, it may be too late.

This is why it is so important to open-source as much AI as possible and to have rational discussions about these issues.

Summary

AI poses many challenges; fortunately, we still have many years to collectively figure out the future path we want AGI to take. In the short term, we should focus on creating a diverse AI workforce that includes as many women as men, and as many ethnic groups with diverse points of view as possible.

We should also create whistleblower protections for researchers working on AI, and we should pass laws and regulations that prevent widespread abuse of state or corporate surveillance. Humans have a once-in-a-lifetime opportunity to improve the human condition with the assistance of AI; we just need to ensure that we carefully create a societal framework that best enables the positives while mitigating the negatives, which include existential threats.


Ethics

Appen Partners with World Economic Forum to Create Responsible AI Standards



Appen, a global leader in high-quality training data for machine learning systems, has partnered with the World Economic Forum to design and release standards and best practices for responsible training data when building machine learning and artificial intelligence applications. As a World Economic Forum Associate Partner, Appen will collaborate with industry leaders to release the new standards within the “Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning” platform, which provides a global footprint and guidepost for responsible training data collection and creation across countries and industries.

The standards and best practices for responsible training data aim to improve quality, efficiency, transparency, and responsibility for AI projects while promoting inclusivity and collaboration. The adoption of these standards by the larger technology community will increase the value of – and trust in – the use of AI by businesses and the general public.

Modern AI applications largely depend on human-annotated data to train machine learning models that rely on deep learning and neural network technology. Responsible training data practices include paying fair wages and adhering to labor wellness guidelines and standards, commitments Appen formalized in its Crowd Code of Ethics, released in 2019.

“Ethical, diverse training data is essential to building a responsible AI system,” said CEO of Appen, Mark Brayan. “A solid training data platform and management strategy is often the most critical component of launching a successful, responsible machine learning powered product into production. We are delighted to share our 20+ years of expertise in this area, along with our Crowd Code of Ethics, with the World Economic Forum to accelerate standards and responsible practices across the technology industry.”

A key focus of the partnership will be bringing together leaders in the AI industry to:

  1. Contribute to the Human-Centered AI for Human Resources project
  2. Empower AI leadership with a C-Suite Toolkit and Model AI Governance Framework

“Getting access to large volumes of responsibly-sourced training data has been a longstanding challenge in the machine learning industry,” said Kay Firth-Butterfield, Head of AI and Machine Learning at the World Economic Forum. “The industry needs to respond with guidelines and standards for what it means to acquire and use responsible training data, addressing topics ranging from user permission, privacy, and security to how individuals are compensated for their work as part of the AI supply chain. We look forward to working with Appen and our multi-stakeholder community to provide practical guidance for responsible machine learning development around the world.”

Join industry leaders on October 14th for Appen's annual Train AI conference, which aims to give leaders the confidence to launch AI beyond pilot and into production. A curated collection of topics will show how to successfully scale AI programs with actionable insights and get to ROI faster. Kay Firth-Butterfield will deliver the keynote, presenting on the importance of responsible AI practices and the tools available to leaders to ensure that ethical standards are being met.


Ethics

Advanced AI Technologies Present Ethical Challenges – Thought Leaders



By Alfred Crews, Jr., Vice President & Chief Counsel for the Intelligence & Security sector of BAE Systems Inc.

Earlier this year, before the global pandemic, I attended The Citadel’s Intelligence Ethics Conference in Charleston, where we discussed the topic of ethics in intelligence collection as it relates to protecting national security. In the defense industry, we are seeing the proliferation of knowledge, computing, and advanced technologies, especially in the area of artificial intelligence (AI) and machine learning (ML). However, there could be significant issues when deploying AI within the context of intelligence gathering or real-time combat.

AI coupled with quantum computing presents risks

What we must question, analyze, and determine a path forward on is the use of AI coupled with quantum computing capabilities in wartime decision-making. For example, remember the Terminator? As our technology makes leaps and bounds, the reality of what Skynet presented is before us. We could be asking ourselves, “Is Skynet coming to get us?” Take a stroll down memory lane with me: the AI machines took over because they had the capability to think and make decisions on their own, without a human to direct them. When the machines deduced that humans were a bug, they set out to destroy humankind. Don't get me wrong, AI has great potential, but I believe it must have control parameters because of the risk factor involved.

AI’s ethical ambiguities & philosophical dilemma

I believe this is precisely why the U.S. Department of Defense (DoD) issued its own Ethical Principles for AI: the use of AI raises new ethical ambiguities and risks. When AI is combined with quantum computing capabilities, the nature of decision-making changes and the risk of losing control increases, perhaps more than we realize today. Quantum computing puts the human brain's operating system to shame, because supercomputers can make exponentially more calculations, faster and with more accuracy, than our brains will ever be able to.

Additionally, the use of AI coupled with quantum computing presents a philosophical dilemma. At what point will the world allow machines to have a will of their own? And if machines are permitted to think on their own, does that mean they have become self-aware? Does being self-aware constitute life? As a society, we have not yet determined how to answer these questions. Thus, as it stands today, machines taking action on their own without a human to control them could lead to serious ramifications. Could a machine override a human's intervention to stop firing? If the machine is operating on its own, will we be able to pull the plug?

As I see it, using AI from a defensive standpoint is easy to justify. However, how much easier would it be to shift to the offensive? On offense, machines would be making combat firing decisions on the spot. Would a machine firing on an enemy constitute a violation of the Geneva Conventions and the laws of armed conflict? As we move into this space at a rapid rate, the world must agree that the use of AI and quantum computing in combat must fit within the laws we currently have in place.

The DoD's position on using AI with autonomous systems is that there will always be a person engaged in the decision-making process; a person makes the final call on pulling a trigger to fire a weapon. That's our rule, but what happens if an adversary decides to take another route and has an AI-capable machine make all the final decisions? Then the machine, which, as we discussed, is already faster, smarter, and more accurate, would have the advantage.

Let’s look at a drone equipped with AI and facial recognition: the drone fires of its own will at a pre-determined target labelled as a terrorist. Who is actually responsible for the firing? Is there accountability if a biased mistake is made?

Bias baked into AI/ML

Research points to the fact that a machine is less likely to make mistakes than a human. However, research also shows that there is bias in machine learning, introduced by the human “teacher” training the machine. The DoD's five Ethical Principles of AI reference existing biases, stating, “The Department will take deliberate steps to minimize unintended bias in AI capabilities.” We already know from studies that facial recognition applications produce biased false positives against people of color. When a person creates the code that teaches a machine how to make decisions, biases will be present. This can be unintentional, because the person creating the AI was not aware of the bias that existed within themselves.

So, how does one eliminate bias? AI output is only as good as its input, so there must be controls. You must control the data flowing in, because flawed data is what makes AI results less valid. Developers will constantly have to rewrite the code to eliminate the bias.
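As an illustration only (a hypothetical sketch, not a description of BAE Systems' or the DoD's practice, with invented records and thresholds), one such control on incoming data is a simple audit that flags demographic groups whose label rates diverge sharply from the overall rate before the data is allowed into training.

```python
# Hypothetical pre-training audit (invented data and thresholds): flag groups
# whose positive-label rate differs from the overall rate by more than max_gap.
from collections import defaultdict

def audit_label_balance(records, max_gap=0.15):
    """records: iterable of (group, label) pairs, where label is 0 or 1."""
    records = list(records)
    overall = sum(label for _, label in records) / len(records)
    per_group = defaultdict(list)
    for group, label in records:
        per_group[group].append(label)

    flagged = {}
    for group, labels in per_group.items():
        rate = sum(labels) / len(labels)
        if abs(rate - overall) > max_gap:
            flagged[group] = round(rate, 2)
    return overall, flagged

# Invented example: group "B" is labelled positive far more often than "A".
data = [("A", 1)] * 30 + [("A", 0)] * 120 + [("B", 1)] * 30 + [("B", 0)] * 20
overall_rate, flags = audit_label_balance(data)
print(f"overall positive rate: {overall_rate:.2f}, flagged groups: {flags}")
# -> overall positive rate: 0.30, flagged groups: {'B': 0.6}
```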

The world must define the best use of technology

Technology in and of itself is neither good nor bad. It is how a nation puts it to use that can take the best of intentions and have them go wrong. As technology advances in ways that impact human lives, the world must work together to define appropriate action. If we take the human out of the equation in AI applications, we also take away that pause before pulling the trigger: the moral compass that guides us, the moment when we stop and ask, “Is this right?” A machine taught to engage will not have that pause. So the question is, will the world stand for this in the future? How far will the world go in allowing machines to make combat decisions?
