Regulation

Tech Advisory Group Pushes For Limits On Pentagon’s AI Use


The Pentagon has made its intention to invest heavily in artificial intelligence clear, stating that AI will make the US military more powerful and more resilient to national security threats. As Engadget reports, this past Thursday the Defense Innovation Board put forward a set of proposed ethical guidelines for the use of AI in the military. The proposals include strategies for avoiding unintended bias and a requirement that AI systems be governable, with emergency stop procedures to keep them from causing unnecessary harm.

Wired reports that the Defense Innovation Board was created under the Obama administration to help the Pentagon acquire tech-industry expertise and talent. The board, currently chaired by former Google CEO Eric Schmidt, was recently tasked with establishing guidelines for the ethical implementation of AI in military projects, and on Thursday it released its guidelines and recommendations for review. The report notes that the time for serious discussion about the use of AI in a military context is now, before a serious incident makes such discussion unavoidable.

According to Artificial Intelligence News, a former military official recently stated that the Pentagon was falling behind when it comes to the use of AI. The Pentagon aims to close this gap and has declared the development and expansion of military AI a national priority. AI ethicists are concerned that in the Pentagon’s haste to become a leader in AI, AI systems may be deployed in unethical ways. While various independent AI ethics boards have made their own suggestions, the Defense Innovation Board has proposed five principles that the military should follow at all times when developing and implementing AI systems.

The first principle proposed by the board is that humans should always be responsible for the use, deployment, and outcomes of any artificial intelligence platform used in a military context. This echoes a 2012 policy mandating that humans ultimately be part of the decision-making process whenever lethal force could be used. Other principles on the list offer general guidance, such as ensuring that AI systems are always built by engineers who understand and thoroughly document their programs, and that military AI systems are always tested for reliability. These guidelines may seem like common sense, but the board wants to underscore their importance.

The remaining principles concern controlling bias in AI algorithms and ensuring that AI systems can detect when they may cause unintended harm and automatically disengage. The guidelines specify that if unnecessary harm would occur, the AI should be able to disengage itself and hand control back to a human operator. The draft principles also recommend that the output of AI systems be traceable, so that analysts can see what led to a given decision.

The board’s recommendations underscore two ideas: AI will be integral to the future of military operations, yet much of AI still relies on human management and decision-making.

While the Pentagon doesn’t have to adopt the board’s recommendations, it appears to be taking them seriously. As reported by Wired, Lieutenant General Jack Shanahan, director of the Joint Artificial Intelligence Center, stated that the board’s recommendations would assist the Pentagon in “upholding the highest ethical standards as outlined in the DoD AI strategy, while embracing the US military’s strong history of applying rigorous testing and fielding standards for technology innovations.”

The tech industry as a whole remains wary of using AI in the creation of military hardware and software. Employees at both Microsoft and Google have protested collaborations with military entities, and Google recently elected not to renew the contract under which it contributed to Project Maven. A number of CEOs, scientists, and engineers have also signed a pledge not to “participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons”. If the Pentagon adopts the board’s guidelines, the tech industry may become more willing to collaborate on military projects, though only time will tell.


Regulation

Google’s CEO Calls For Increased Regulation To Avoid “Negative Consequences of AI”


Last year saw increasing attention drawn to the regulation of the AI industry, and this year seems to be continuing the trend. Just recently, Sundar Pichai, CEO of Google and Alphabet Inc., voiced support for the regulation of AI at an event hosted by Bruegel, an economics think tank in Brussels.

Pichai’s comments were likely made in anticipation of new EU plans to regulate AI, due to be revealed in a few weeks. The EU regulations could contain policies legally enforcing certain standards for AI used in transportation, healthcare, and other high-risk sectors, and may also require increased transparency around AI systems and platforms.

According to Bloomberg, Google has previously tried to challenge antitrust fines and copyright enforcement in the EU. Despite these earlier efforts to push back against certain regulatory frameworks in Europe, Pichai stated that regulation is welcome as long as it takes “a proportionate approach, balancing potential harms with social opportunities.”

Pichai recently wrote an opinion piece in the Financial Times, in which he acknowledged that along with many opportunities to improve society, AI also has the potential to be misused. Pichai stated that regulation should help avoid the “negative consequences of AI”, citing abusive uses of facial recognition and deepfakes as examples. He argued that international alignment is necessary for regulatory principles to work, which requires agreement on core values. Beyond that, Pichai said it is the responsibility of AI companies like Google to consider how AI can be used ethically, which is why Google adopted its own standards for ethical AI use in 2018.

Pichai stated that government regulatory bodies and policies will play an important role in ensuring AI is used ethically, but that these bodies need not start from scratch. He suggested that regulators can look to previously established rules for inspiration, such as Europe’s General Data Protection Regulation. Pichai also wrote that ethical AI regulation can be both broad and flexible, providing general guidance that can be tailored to specific implementations in specific AI sectors. Newer technologies like self-driving vehicles will require new rules and policies that weigh benefits and costs against one another, while for more well-trodden ground like medical devices, existing frameworks can be a good starting point.

Finally, Pichai stated that Google wants to partner with regulators to develop policies and find solutions that balance trade-offs. As he wrote in the Financial Times:

“We want to be a helpful and engaged partner to regulators as they grapple with the inevitable tensions and trade-offs. We offer our expertise, experience and tools as we navigate these issues together.”

While some have applauded Google for taking a stance on the need for regulation to ensure ethical AI usage, debate continues over how involved AI companies should be in creating the regulatory frameworks that would govern them.

As for the upcoming EU regulations themselves, it’s possible that the EU is pursuing a risk-based rules system, which would place tighter restrictions on high-risk applications of AI. These restrictions could be much stricter than Google hopes for, including a potential multi-year ban on facial recognition technology (with exceptions for research and security). In contrast to the EU’s more restrictive approach, the US has pushed for relatively light regulation. It remains to be seen how the two strategies will affect AI development, and society at large, in their respective regions.


Regulation

U.S. Government Will Limit Exports of Artificial Intelligence


The U.S. government will take steps next week to limit the export of artificial intelligence (AI) software. The decision by the Trump administration comes at a time when powerful rival nations, such as China, are becoming increasingly dominant in the field. The move is meant to keep certain sensitive technologies from falling into the hands of those nations. 

The new rule goes into effect on January 6, 2020, and is aimed at companies that export geospatial imagery software from the United States. Those companies will be required to apply for a license to export the software, with exports to Canada the only exception.

The new measure was the first of its kind to be finalized by the Commerce Department under a mandate from a 2018 law passed by Congress, which updated arms controls to include emerging technologies.

The new rules will likely affect a growing part of the tech industry, as geospatial imagery algorithms are currently used to analyze satellite images of crops, trade patterns, and other changes in the economy and environment.

Chinese companies have exported artificial intelligence surveillance technology to over 60 countries, some of which, including Iran, Myanmar, Venezuela, and Zimbabwe, have dismal human rights records.

Within China itself, the Communist Party is using facial recognition systems to target Uighurs and other Muslim minorities in the country’s far western Xinjiang region. According to a report released by a U.S. think tank, Beijing has been involved in “authoritarian tech.”

The think tank behind the report is the Carnegie Endowment for International Peace, which published it amid rising concerns that authoritarian regimes are using the technology to consolidate power.

“Technology linked to Chinese companies — particularly Huawei, Hikvision, Dahua and ZTE — supply AI surveillance technology in 63 countries, 36 of which have signed onto China’s Belt and Road Initiative,” the report said.

One of China’s leading technology companies, Huawei Technologies Co., alone provides AI surveillance technology to at least 50 countries. 

“Chinese product pitches are often accompanied by soft loans to encourage governments to purchase their equipment,” according to the report. “This raises troubling questions about the extent to which the Chinese government is subsidizing the purchase of advanced repressive technology.”

China has faced increased scrutiny after an investigative report by the International Consortium of Investigative Journalists was released detailing the nation’s surveillance and policing systems, which are being used to oppress Uighurs and send them to internment camps. 

The new rules implemented by the U.S. government will at first apply only within the country. However, U.S. authorities have said the controls could be submitted to international bodies at a later time.

There has been recent bipartisan frustration over how long it is taking to roll out new export controls for the technology.

“While the government believes that it is in the national security interests of the United States to immediately implement these controls, it also wants to provide the interested public with an opportunity to comment on the control of new items,” according to Senate Minority Leader Chuck Schumer.

 


Ethics

AI Now Institute Warns About Misuse Of Emotion Detection Software And Other Ethical Issues


The AI Now Institute has released a report that urges lawmakers and other regulatory bodies to set hard limits on the use of emotion-detecting technology, banning it in cases where it may be used to make important decisions like employee hiring or student acceptance. In addition, the report contained a number of other suggestions regarding a range of topics in the AI field.

The AI Now Institute is a research institute based at NYU whose mission is to study AI’s impact on society. AI Now releases a yearly report presenting its findings on the state of AI research and the ethical implications of how AI is currently being used. As the BBC reported, this year’s report addresses topics like algorithmic discrimination, lack of diversity in AI research, and labor issues.

Affect recognition, the technical term for emotion-detection algorithms, is a rapidly growing area of AI research. Those who employ the technology to make decisions often claim that the systems can infer reliable information about people’s emotional states by analyzing microexpressions, along with other cues like tone of voice and body language. The AI Now Institute notes that the technology is being employed across a wide range of applications, such as determining whom to hire, setting insurance prices, and monitoring whether students are paying attention in class.

Prof. Kate Crawford, co-founder of AI Now, explained that it’s often believed that human emotions can be accurately predicted with relatively simple models. Crawford said that some firms are basing their software on the work of Paul Ekman, a psychologist who hypothesized that there are only six basic emotions that register on the face. However, Crawford notes that since Ekman’s theory was introduced, studies have found far greater variability in facial expressions, and that expressions can change easily across situations and cultures.

“At the same time as these technologies are being rolled out, large numbers of studies are showing that there is… no substantial evidence that people have this consistent relationship between the emotion that you are feeling and the way that your face looks,” said Crawford to the BBC.

For this reason, the AI Now Institute argues that much of affect recognition is based on unreliable theories and questionable science. Hence, emotion-detection systems shouldn’t be deployed until more research has been done, and “governments should specifically prohibit the use of affect recognition in high-stakes decision-making processes”. AI Now argued that use of the technology should especially stop in “sensitive social and political contexts”, contexts that include employment, education, and policing.

At least one AI-development firm specializing in affect recognition, Emteq, agreed that there should be regulation that prevents misuse of the tech. The founder of Emteq, Charles Nduka, explained to the BBC that while AI systems can accurately recognize different facial expressions, there is not a simple map from expression to emotion. Nduka did express worry about regulation being taken too far and stifling research, noting that if “things are going to be banned, it’s very important that people don’t throw out the baby with the bathwater”.

As The Next Web reports, AI Now also recommended a number of other policies and norms that should guide the AI industry moving forward.

AI Now highlighted the need for the AI industry to make workplaces more diverse, and stated that workers should be guaranteed the right to voice concerns about invasive and exploitative AI. Tech workers should also have the right to know whether their efforts are being used to build harmful or unethical systems.

AI Now also suggested that lawmakers require informed consent for the use of any data derived from health-related AI. Beyond this, it advised that data privacy be taken more seriously and that states work to design privacy laws covering biometric data held by both private and public entities.

Finally, the institute advised that the AI industry begin thinking and acting more globally, addressing the larger political, societal, and ecological consequences of AI. It recommended a substantial effort to account for AI’s impact on geographic displacement and the climate, and that governments make the climate impact of the AI industry publicly available.
