Last year saw increasing attention to the regulation of the AI industry, and this year seems to be continuing the trend. Just recently, Sundar Pichai, the CEO of Google and Alphabet Inc., voiced support for the regulation of AI at an event hosted by Bruegel, an economic think tank based in Brussels.
Pichai’s comments were likely made in anticipation of new EU plans to regulate AI, due to be revealed in a few weeks. The EU regulations may contain policies legally enforcing certain standards for AI used in transportation, healthcare, and other high-risk sectors, and may also require increased transparency regarding AI systems and platforms.
According to Bloomberg, Google has previously tried to challenge antitrust fines and copyright enforcement in the EU. Despite that history of pushing back against certain regulatory frameworks in Europe, Pichai stated that regulation is welcome as long as it takes “a proportionate approach, balancing potential harms with social opportunities.”
Pichai recently wrote an opinion piece in the Financial Times, in which he acknowledged that along with many opportunities to improve society, AI also has the potential to be misused. He argued that regulations should help avoid the “negative consequences of AI”, citing abusive uses of facial recognition and deepfakes as examples. He added that international alignment is necessary for regulatory principles to work, and as such, there needs to be agreement on core values. Beyond that, Pichai said it is the responsibility of AI companies like Google to consider how AI can be used ethically, which is why Google adopted its own standards for ethical AI use in 2018.
Pichai stated that government regulatory bodies and policies will play an important role in ensuring AI is used ethically, but that these bodies need not start from scratch. He suggested that regulators can look to previously established rules for inspiration, such as Europe’s General Data Protection Regulation. He also wrote that ethical AI regulation can be both broad and flexible, providing general guidance that can be tailored for specific implementations in specific AI sectors. Newer technologies like self-driving vehicles will require new rules and policies that weigh benefits and costs against one another, while for more well-trodden ground like medical devices, existing frameworks can be a good starting point.
Finally, Pichai stated that Google wants to partner with regulators to develop policies and find solutions that balance trade-offs. As he wrote in the Financial Times:
“We want to be a helpful and engaged partner to regulators as they grapple with the inevitable tensions and trade-offs. We offer our expertise, experience and tools as we navigate these issues together.”
While some have applauded Google for taking a stance on the need for regulation to ensure ethical AI usage, debate continues over how involved AI companies should be in creating the regulatory frameworks that would govern them.
As for the upcoming EU regulations themselves, the EU appears to be pursuing a risk-based rules system, which would put tighter restrictions on high-risk applications of AI. These restrictions could be much tighter than Google hopes for, including a potential multi-year ban on facial recognition technology (with exceptions for research and security). In contrast to the EU’s more restrictive approach, the US has pushed for relatively light regulation. It remains to be seen how the differing strategies will affect AI development, and society at large, in the two regions.
Growing Calls for AI Regulation After Weeks of News Reports
Over the past few weeks, there have been growing calls for stronger regulation of artificial intelligence (AI). The concern comes after various news stories broke showing the potential abuses of the technology. Now, even more questions are being raised with the release of the European Commission’s much-anticipated white paper on artificial intelligence, the first pan-national attempt to regulate AI. Around the same time, the White House Office of Science and Technology Policy (OSTP) released a report on its American Artificial Intelligence Initiative.
Prior to the release of the EU’s white paper, the Intercept broke a news story based on leaked internal European Union documents. According to the documents, the EU was considering the creation of a network of facial recognition databases throughout Europe. The national police forces of 10 EU member states produced a report calling for the creation and interconnection of such databases in every member state. Many worry that these databases will inevitably be connected with similar ones in the United States, allowing massive amounts of biometric data to be consolidated. Many were also expecting the EU’s white paper to propose a ban on facial recognition, but no such ban appeared.
According to Edin Omanovic, advocacy director for Privacy International, “This is concerning on a national level and on a European level, especially as some EU countries veer towards more authoritarian governments.”
One of the big news stories of the last few weeks, kicked off by a New York Times investigation in January, has to do with the start-up Clearview AI. The company’s facial recognition app identifies people using a database of images scraped from social media. The app compares a photo to a database of over 3 billion pictures drawn from sites such as Facebook, Venmo, and YouTube. Once the app finds matches, it responds with links to the sites where the photos originally appeared, which could reveal personal details about an individual. According to the report, the app has been used by over 600 law enforcement agencies. For comparison, the FBI’s database contains only 641 million images of US citizens.
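Clearview has not published how its matching works, but face search systems of this kind generally follow an embedding-and-lookup pattern: encode every scraped photo as a numeric vector, then find the stored vectors closest to a probe photo. The sketch below illustrates that pattern using the open-source face_recognition library; the index, URLs, and distance threshold are all hypothetical, and this is not Clearview’s actual pipeline.

```python
# Minimal sketch of the embedding-and-lookup pattern behind face search
# engines. Illustration only: the index, URLs, and threshold below are
# hypothetical, and Clearview's actual pipeline has not been published.
import face_recognition

indexed_encodings = []  # one 128-dimensional face encoding per indexed face
source_urls = []        # parallel list: where each indexed face was scraped from

def index_photo(path, url):
    """Encode every face found in a scraped photo and remember its source URL."""
    image = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(image):
        indexed_encodings.append(encoding)
        source_urls.append(url)

def search(probe_path, threshold=0.6):
    """Return source URLs of indexed faces that closely match the probe photo."""
    probe = face_recognition.load_image_file(probe_path)
    encodings = face_recognition.face_encodings(probe)
    if not encodings:
        return []  # no face detected in the probe photo
    # Euclidean distance in embedding space; smaller means more similar.
    distances = face_recognition.face_distance(indexed_encodings, encodings[0])
    return [url for url, dist in zip(source_urls, distances) if dist <= threshold]
```

At Clearview’s reported scale of 3 billion images, a brute-force scan like this would be far too slow; a production system would presumably use an approximate nearest-neighbor index instead, but the basic encode, compare, and link-back flow is the same.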
The story erupted again with a new report by BuzzFeed News last week. According to the report, the company has considered expanding beyond law enforcement into retail, real estate, banking, and international markets. The report also stated that the facial recognition app has already been sold to thousands of organizations across the globe and has been used by the Justice Department, ICE, Macy’s, Walmart, and the NBA. Perhaps most problematic on the list of clients are a sovereign wealth fund in the United Arab Emirates and thousands of government entities.
The new developments have raised concerns among privacy advocates about potential mass surveillance. Another concern is the technology’s possible inaccuracy, which could lead law enforcement to target innocent individuals. On top of the privacy concerns, companies such as Facebook, Google, and Twitter are threatening legal action, and at least two United States senators have stated that they intend to probe the company.
An example of the abuse of this technology can be seen within U.S. Immigration and Customs Enforcement (ICE). The Washington Post reported last week that ICE officials have been permitted to use facial recognition technology to search through millions of Maryland driver’s license photos, and they can do so without seeking state or court approval.
According to Harrison Rudolph, a senior associate at Georgetown University Law School’s Center on Privacy and Technology, “ICE is using biometric information in the shadows, without government notice or public approval, to hunt down the most vulnerable people.”
These are just some of the most public examples of how this technology is being used, but much more is happening behind the scenes. Because of this, there have been increasing calls for scrutiny and regulation. Increased transparency, whether voluntary or forced by investigative reporting, is bringing many practices to light. Without a public understanding of artificial intelligence and what it could mean for the economy, government, law enforcement, surveillance, and every other aspect of society, there is little hope that governments and companies will regulate themselves. Society is seeing both the enormous benefits of AI and the massive concerns, with all of it happening too fast for companies, governments, and individuals to keep up.
U.S. Government Will Limit Exports of Artificial Intelligence
The U.S. government will take steps next week to limit the export of artificial intelligence (AI) software. The decision by the Trump administration comes at a time when powerful rival nations, such as China, are becoming increasingly dominant in the field. The move is meant to keep certain sensitive technologies from falling into the hands of those nations.
The new rule goes into effect on January 6, 2020, and is aimed at companies that export geospatial imagery software from the United States. Those companies will be required to apply for a license to export the software, with one exception: no license is required for exports to Canada.
The new measure was the first of its kind to be finalized by the Commerce Department under a mandate from a 2018 law passed by Congress. That law updated arms controls to include emerging technology.
The new rules will likely affect a growing part of the tech industry, as these algorithms are currently being used to analyze satellite images of crops, trade patterns, and other changes within the economy and environment.
Chinese companies have exported artificial intelligence surveillance technology to over 60 countries, some of which have dismal human rights records, including Iran, Myanmar, Venezuela, and Zimbabwe.
Within China itself, the Communist Party is using facial recognition systems to target Uighurs and other Muslim minorities in the country’s far western Xinjiang region. According to a report released by a U.S. think tank, Beijing has been involved in “authoritarian tech.”
The think tank behind the report is the Carnegie Endowment for International Peace, which published it amid rising concerns that authoritarian regimes are using the technology as a way to gain power.
“Technology linked to Chinese companies — particularly Huawei, Hikvision, Dahua and ZTE — supply AI surveillance technology in 63 countries, 36 of which have signed onto China’s Belt and Road Initiative,” the report said.
One of China’s leading technology companies, Huawei Technologies Co., alone provides AI surveillance technology to at least 50 countries.
“Chinese product pitches are often accompanied by soft loans to encourage governments to purchase their equipment,” according to the report. “This raises troubling questions about the extent to which the Chinese government is subsidizing the purchase of advanced repressive technology.”
China has faced increased scrutiny after an investigative report by the International Consortium of Investigative Journalists was released detailing the nation’s surveillance and policing systems, which are being used to oppress Uighurs and send them to internment camps.
The new rules implemented by the U.S. government will at first apply only within the country. However, U.S. authorities have said the rules could be submitted to international bodies at a later time.
There has been recent bipartisan frustration over how long it is taking to roll out new export controls for the technology.
“While the government believes that it is in the national security interests of the United States to immediately implement these controls, it also wants to provide the interested public with an opportunity to comment on the control of new items,” according to Senate Minority Leader Chuck Schumer.
AI Now Institute Warns About Misuse Of Emotion Detection Software And Other Ethical Issues
The AI Now Institute has released a report that urges lawmakers and other regulatory bodies to set hard limits on the use of emotion-detecting technology, banning it in cases where it may be used to make important decisions like employee hiring or student acceptance. In addition, the report contained a number of other suggestions regarding a range of topics in the AI field.
The AI Now Institute is a research institute based at NYU whose mission is to study AI’s impact on society. AI Now releases a yearly report detailing its findings on the state of AI research and the ethical implications of how AI is currently being used. As the BBC reported, this year’s report addressed topics like algorithmic discrimination, lack of diversity in AI research, and labor issues.
Affect recognition, the technical term for emotion-detection algorithms, is a rapidly growing area of AI research. Those who employ the technology to make decisions often claim that the systems can draw reliable information about people’s emotional states by analyzing microexpressions, along with other cues like tone of voice and body language. The AI Now Institute notes that the technology is being employed across a wide range of applications, like determining whom to hire, setting insurance prices, and monitoring whether students are paying attention in class.
Prof. Kate Crawford, co-founder of AI Now, explained that it’s often believed that human emotions can be accurately predicted with relatively simple models. Crawford said that some firms are basing their software on the work of Paul Ekman, a psychologist who hypothesized that there are only six basic types of emotions that register on the face. However, Crawford notes that since Ekman’s theory was introduced, studies have found that there is far greater variability in facial expressions and that expressions can change easily across situations and cultures.
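To make the critique concrete, here is a toy sketch of the structural assumption such software embodies: a fixed mapping from facial features to exactly one of Ekman’s six categories. Every value and name in it is a hypothetical stand-in; no vendor’s actual model is shown.

```python
# Toy illustration of the assumption AI Now criticizes: that a face maps
# cleanly onto one of Ekman's six basic emotions. All numbers and names
# here are hypothetical stand-ins; no vendor's real model is shown.
import numpy as np

EKMAN_LABELS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

def classify_emotion(face_features, weights):
    """Score six fixed categories and return the single highest-scoring one."""
    logits = weights @ face_features               # shape: (6,)
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over six labels
    # The contested step: collapsing a face into exactly one of six emotions,
    # leaving no room for context, culture, or expressions outside the taxonomy.
    return EKMAN_LABELS[int(np.argmax(probs))]

# Hypothetical usage with random stand-in values:
rng = np.random.default_rng(0)
features = rng.normal(size=128)      # stand-in for extracted facial features
weights = rng.normal(size=(6, 128))  # stand-in for learned classifier weights
print(classify_emotion(features, weights))
```

Even a far more sophisticated model shares the final step shown above, and it is precisely that step which the research Crawford cites calls into question.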
“At the same time as these technologies are being rolled out, large numbers of studies are showing that there is… no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks,” said Crawford to the BBC.
For this reason, the AI Now Institute argues that much of affect recognition is built on unreliable theories and questionable science. It therefore argues that emotion detection systems shouldn’t be deployed until more research has been done, and that “governments should specifically prohibit the use of affect recognition in high-stakes decision-making processes”. AI Now argued that we should especially stop using the technology in “sensitive social and political contexts”, contexts that include employment, education, and policing.
At least one AI-development firm specializing in affect recognition, Emteq, agreed that there should be regulation that prevents misuse of the tech. The founder of Emteq, Charles Nduka, explained to the BBC that while AI systems can accurately recognize different facial expressions, there is not a simple map from expression to emotion. Nduka did express worry about regulation being taken too far and stifling research, noting that if “things are going to be banned, it’s very important that people don’t throw out the baby with the bathwater”.
As The Next Web reports, AI Now also recommended a number of other policies and norms to guide the AI industry moving forward.
AI Now highlighted the need for the AI industry to make workplaces more diverse, and stated that workers should be guaranteed a right to voice concerns about invasive and exploitative AI. Tech workers should also have the right to know whether their efforts are being used to build harmful or unethical systems.
AI Now also suggested that lawmakers require informed consent for the use of any data derived from health-related AI. Beyond this, it advised that data privacy be taken more seriously and that states design privacy laws for biometric data covering both private and public entities.
Finally, the institute advised that the AI industry begin thinking and acting more globally, trying to address the larger political, societal, and ecological consequences of AI. It recommended a substantial effort to account for AI’s impact in terms of geographic displacement and climate, and that governments make the climate impact of the AI industry publicly available.