Regulation

U.S. Government Will Limit Exports of Artificial Intelligence

The U.S. government will take steps next week to limit the export of artificial intelligence (AI) software. The decision by the Trump administration comes at a time when powerful rival nations, such as China, are becoming increasingly dominant in the field. The move is meant to keep certain sensitive technologies from falling into the hands of those nations. 

The new rule takes effect on January 6, 2020, and is aimed at companies that export geospatial imagery software from the United States. Those companies will be required to apply for a license to export the software, with one exception: no license is required for exports to Canada.

The measure is the first of its kind to be finalized by the Commerce Department under a mandate from a 2018 law passed by Congress, which updated arms controls to cover emerging technologies.

The new rules will likely affect a growing segment of the tech industry, where geospatial imaging algorithms are used to analyze satellite images of crops, trade patterns, and other changes in the economy and environment.
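
For a concrete sense of what such software does, here is a minimal sketch of the kind of analysis involved: a tiny convolutional network that classifies satellite image tiles by land-cover type. The class labels, tile size, and architecture are illustrative assumptions, not the regulated software itself.

    # Illustrative sketch only: a toy land-cover classifier for satellite tiles.
    # Class names, tile size, and architecture are assumptions, not any real product.
    import torch
    import torch.nn as nn

    CLASSES = ["cropland", "water", "urban", "forest"]  # hypothetical labels

    class TileClassifier(nn.Module):
        def __init__(self, num_classes=len(CLASSES)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                 # 64x64 tiles -> 32x32
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),         # global average pool
            )
            self.head = nn.Linear(32, num_classes)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    model = TileClassifier()
    tiles = torch.randn(8, 3, 64, 64)    # stand-in for 8 RGB satellite tiles
    probs = model(tiles).softmax(dim=1)  # per-tile class probabilities
    print(probs.shape)                   # torch.Size([8, 4])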

Chinese companies have exported artificial intelligence surveillance technology to more than 60 countries, including some with dismal human rights records such as Iran, Myanmar, Venezuela, and Zimbabwe.

Within China itself, the Communist Party is using facial recognition systems to target Uighurs and other Muslim minorities in the country’s far western Xinjiang region. According to a report released by a U.S. think tank, Beijing has been involved in “authoritarian tech.”

The report was released by the Carnegie Endowment for International Peace amid rising concern that authoritarian regimes are using the technology to gain power.

“Technology linked to Chinese companies — particularly Huawei, Hikvision, Dahua and ZTE — supply AI surveillance technology in 63 countries, 36 of which have signed onto China’s Belt and Road Initiative,” the report said.

One of China’s leading technology companies, Huawei Technologies Co., alone provides AI surveillance technology to at least 50 countries. 

“Chinese product pitches are often accompanied by soft loans to encourage governments to purchase their equipment,” according to the report. “This raises troubling questions about the extent to which the Chinese government is subsidizing the purchase of advanced repressive technology.”

China has faced increased scrutiny after an investigative report by the International Consortium of Investigative Journalists was released detailing the nation’s surveillance and policing systems, which are being used to oppress Uighurs and send them to internment camps. 

The new rules will initially apply only within the United States. However, U.S. authorities have said they could be submitted to international bodies at a later time.

There has been recent bipartisan frustration over how long it is taking to roll out new export controls for the technology.

“While the government believes that it is in the national security interests of the United States to immediately implement these controls, it also wants to provide the interested public with an opportunity to comment on the control of new items,” according to Senate Minority Leader Chuck Schumer.

 


Cybersecurity

AI Experts Rank Deepfakes and 19 Other AI-Based Crimes By Danger Level


A new report published by University College London aims to identify the many ways AI could assist criminals over the next 15 years. For the report, 31 AI experts ranked 20 methods of using AI to carry out crimes according to variables such as how easy the crime would be to commit, the potential societal harm it could do, the amount of money a criminal could make, and how difficult the crime would be to stop. According to the results, deepfakes pose the greatest threat to law-abiding citizens and to society generally, as their potential for exploitation by criminals and terrorists is high.

The AI experts ranked deepfakes at the top of the list of potential AI threats because they are so difficult to identify and counteract: deepfakes are constantly getting better at fooling even the eyes of deepfake experts, and AI-based detection methods are themselves often unreliable. In terms of their capacity for harm, deepfakes can easily be used by bad actors to discredit trusted, expert figures or to swindle people by posing as loved ones or other trusted individuals. If deepfakes become abundant, people could begin to distrust any audio or video media, losing faith in the validity of real events and facts.
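
To make the detection problem concrete, here is a minimal sketch of the common frame-level approach: score every frame of a video with a binary real-versus-fake classifier and average the scores. The 512-dimensional “embeddings” below are random stand-ins; in a real detector they would come from a network applied to face crops. All names and sizes are illustrative assumptions.

    # Sketch of frame-level deepfake detection: score each frame, then aggregate.
    # Features are random stand-ins for real face embeddings.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical training set: embeddings for real (0) and fake (1) frames.
    X_train = rng.normal(size=(2000, 512))
    y_train = rng.integers(0, 2, size=2000)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    def score_video(frame_embeddings):
        """Mean per-frame fake probability; a decision threshold would be
        tuned on held-out videos."""
        return clf.predict_proba(frame_embeddings)[:, 1].mean()

    video = rng.normal(size=(300, 512))  # stand-in for 300 frames of one clip
    print(f"fake score: {score_video(video):.2f}")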

Dr. Matthew Caldwell, from UCL Computer Science, was the first author on the paper. Caldwell underlines the growing danger of deepfakes as more and more of our activity moves online. As Caldwell was quoted by UCL News:

“Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”

The team of experts ranked five other emerging AI technologies as highly concerning potential catalysts for new kinds of crime: driverless vehicles being used as weapons, hack attacks on AI-controlled systems and devices, online data collection for the purposes of blackmail, AI-based phishing featuring customized messages, and fake news/misinformation in general.

According to Shane Johnson, the Director of the Dawes Centre for Future Crimes at UCL, the goal of the study was to identify possible threats associated with newly emerging technologies and hypothesize ways to get ahead of these threats. Johnson says that as the speed of technological change increases, it’s imperative that “we anticipate future crime threats so that policymakers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur”.

The fourteen other possible crimes on the list were placed into one of two categories: moderate concern and low concern.

AI crimes of moderate concern include the misuse of military robots, data poisoning, automated attack drones, learning-based cyberattacks, denial of service attacks for online activities, manipulating financial/stock markets, snake oil (sale of fraudulent services cloaked in AI/ML terminology), and tricking face recognition.

Low concern AI-based crimes include the forgery of art or music, AI-assisted stalking, fake reviews authored by AI, evading AI detection methods, and “burglar bots” (bots which break into people’s homes to steal things).

Of course, AI models can also be used to help combat some of these crimes. Recently, AI models have been deployed to help detect money laundering schemes by flagging suspicious financial transactions. Human operators analyze the results, approving or denying each alert, and that feedback is used to further train the model. The future will likely see AIs pitted against one another, with criminals trying to build the best AI-assisted tools while security firms, law enforcement, and other ethical AI designers build the best systems of their own.
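
The human-in-the-loop pattern described above can be sketched in a few lines: a model scores transactions, high scorers are routed to analysts, and the analysts’ verdicts are folded back into the next training run. The features, threshold, and data below are illustrative assumptions, not any deployed system.

    # Sketch of an analyst-feedback loop for transaction monitoring.
    # Features, threshold, and data are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)

    # Hypothetical features: amount, hour of day, destination risk score.
    X = rng.normal(size=(5000, 3))
    y = (X[:, 0] + X[:, 2] > 2).astype(int)  # toy "suspicious" ground truth

    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    def review_cycle(model, X_hist, y_hist, new_batch, analyst_labels):
        """Flag high-scoring transactions, then retrain on analyst verdicts."""
        scores = model.predict_proba(new_batch)[:, 1]
        alerts = new_batch[scores > 0.5]                  # routed to human review
        X_new = np.vstack([X_hist, new_batch])            # analyst labels extend
        y_new = np.concatenate([y_hist, analyst_labels])  # the training data
        return alerts, model.fit(X_new, y_new)

    batch = rng.normal(size=(200, 3))
    verdicts = (batch[:, 0] + batch[:, 2] > 2).astype(int)  # stand-in analyst labels
    alerts, model = review_cycle(model, X, y, batch, verdicts)
    print(f"{len(alerts)} alerts sent for human review")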


Regulation

U.S. Representatives Release Bipartisan Plan for AI and National Security

U.S. Representatives Robin Kelly (D-IL) and Will Hurd (R-TX) have released a plan on how the nation should proceed with artificial intelligence (AI) technology in relation to national security.

The report, released on July 30, details how the U.S. should collaborate with its allies on AI development and advocates restricting the export of specific technologies to China, such as computer chips used in machine learning.

The report was compiled by the congressmen together with the Bipartisan Policy Center, Georgetown University’s Center for Security and Emerging Technology (CSET), other government officials, industry representatives, civil society advocates, and academics.

The main principles of the report are:

  1. Focusing on human-machine teaming, trustworthiness, and implementing the DOD’s Ethical Principles for AI in defense and intelligence applications.
  2. Cooperation between the U.S. and its allies, along with an openness to working with competitive nations such as Russia and China.
  3. The creation of AI-specific metrics to evaluate AI sectors in other nations.
  4. More investment in research, development, testing, and standardization of AI systems.
  5. Controls on exports and investment to prevent sensitive AI technologies from being acquired by foreign adversaries, specifically China.

Here is a look at some of the highlights of the report:

Autonomous Vehicles and Weapons Systems

According to the report, the U.S. military is in the process of incorporating AI into various semi-autonomous and autonomous vehicles, including ground vehicles, naval vessels, fighter aircraft, and drones. Within these vehicles, AI technology is used to map environments, fuse sensor data, plan navigation routes, and communicate with other vehicles.

Autonomous vehicles can take the place of humans in certain high-risk missions, such as explosive ordnance disposal and route clearance. The main problem that arises for national defense is that current algorithms are optimized for commercial use, not for military use.
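
As a toy illustration of the sensor fusion mentioned above, the sketch below combines two noisy position estimates (say, GPS and a camera-based fix) by weighting each inversely to its variance, which is the one-step core of a Kalman-style filter. The sensor noise figures are made-up values.

    # Toy inverse-variance fusion of two noisy position estimates.
    # Sensor noise levels are made-up illustrative values.
    def fuse(est_a, var_a, est_b, var_b):
        """Combine two noisy estimates; the lower-variance sensor gets more weight."""
        w = var_b / (var_a + var_b)
        fused = w * est_a + (1 - w) * est_b
        fused_var = (var_a * var_b) / (var_a + var_b)
        return fused, fused_var

    gps_pos, gps_var = 105.2, 9.0   # noisy GPS position estimate (meters)
    cam_pos, cam_var = 101.7, 1.0   # more precise camera-based estimate (meters)

    pos, var = fuse(gps_pos, gps_var, cam_pos, cam_var)
    print(f"fused position: {pos:.2f} m (variance {var:.2f})")
    # fused position: 102.05 m (variance 0.90)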

The report also addresses lethal autonomous weapons systems, noting that many defense experts argue AI weapons systems can help guard against incoming aircraft, missiles, rockets, artillery, and mortar shells. The DOD’s AI strategy likewise holds that these systems can reduce the risk of civilian casualties and collateral damage, particularly when warfighters are given enhanced decision support and greater situational awareness. Not everyone agrees, however: many experts and ethicists have called for a ban on such systems. To address this, the report recommends that the DOD work closely with industry and experts to develop ethical principles for the use of this AI, and that it engage nongovernmental organizations, humanitarian groups, and civil society organizations on the costs and benefits of the technology, with the goal of building greater public trust.

AI Diplomacy

Another key aspect of the report is its advocacy for the U.S. to work with other nations to head off problems that could arise from AI technology. One of its recommendations is for the U.S. to establish AI-specific communication procedures with China and Russia, so that humans can talk things through if algorithms drive an escalation. Hurd asks: “Imagine a high-stakes issue: What does a Cuban missile crisis look like with the use of AI?”

Export and Investment Controls

The report also recommends export and investment controls to prevent China from acquiring and assimilating U.S. technologies. It pushes for the Department of State and the Department of Commerce to work with allies and partners, specifically Taiwan and South Korea, to synchronize their rules with existing U.S. export controls on advanced AI chips.

New Interest in AI Strategy

The report is the second of four the congressmen are compiling on AI strategy. Working closely with the Bipartisan Policy Center, the pair of representatives released another report earlier this month, focused on reforming education from kindergarten through graduate school to prepare the workforce for an economy being changed by AI. Of the two papers still to come, one will cover AI research and development and the other AI ethics.

The congressmen are drafting a resolution based on their ideas about AI, after which work will begin on introducing legislation in Congress.

 


Regulation

Pentagon’s Joint AI Center (JAIC) Testing First Lethal AI Projects

The new acting director of the Joint Artificial Intelligence Center (JAIC), Nand Mulchandani, gave his first-ever Pentagon press conference on July 8, where he laid out what is ahead for the JAIC and how current projects are unfolding.

The press conference comes two years after Google pulled out of Project Maven, also known as the Algorithmic Warfare Cross-Functional Team. According to the Pentagon, the project, launched in April 2017, aimed to develop “computer-vision algorithms needed to help military and civilian analysts encumbered by the sheer volume of full-motion video data that DOD collects every day in support of counterinsurgency and counterterrorism operations.”

One of the Pentagon’s main objectives was to have algorithms implemented into “warfighting systems” by the end of 2017.

The project was met with strong opposition, including from 3,000 Google employees who signed a petition protesting the company’s involvement.

According to Mulchandani, that dynamic has changed and the JAIC is now receiving support from tech firms, including Google.

“We have had overwhelming support and interest from tech industry in working with the JAIC and the DoD,” Mulchandani said. “[we] have commercial contracts and work going on with all of the major tech and AI companies – including Google – and many others.” 

Mulchandani is in a much better position than his predecessor, Lt. Gen. Jack Shanahan, when it comes to the JAIC’s relationship with Silicon Valley. Shanahan, who founded the JAIC in 2018, had a tense relationship with the tech industry, whereas Mulchandani spent much of his career inside it, co-founding and leading multiple startup companies.

The JAIC 2.0

The JAIC was created in 2018 with a focus on low-risk technology areas, such as disaster relief and predictive maintenance. With those projects now advancing, work is underway to transition them into production.

Termed JAIC 2.0, the new plan comprises six mission initiatives, all underway: joint warfighting operations, warfighter health, business process transformation, threat reduction and protection, joint logistics, and the newest, joint information warfare, which covers cyber operations.

Special focus is now being placed on the joint warfighting operations mission, which adopts the priorities of the National Defense Strategy regarding technological advances in the United States military.

The JAIC has not laid out many specifics about the new project, but Mulchandani referred to it as “tactical edge AI” and said that it will be controlled fully by humans. 

Mulchandani also answered a reporter’s question about statements General Shanahan made as director regarding a lethal AI application by 2021, one that “could be the first lethal AI in the industry.”

Here is how he responded: 

“I don’t want to start straying into issues around autonomy and lethality versus lethal — or lethality itself. So yes, it is true that many of the products we work will go into weapon systems.”

“None of them right now are going to be autonomous weapon systems. We’re still governed by 3000.09, that principle still stays intact. None of the work or anything that General Shanahan may have mentioned crosses that line period.”

“Now we do have projects going under Joint Warfighting, which are going to be actually going into testing. They are very tactical edge AI is the way I describe it. And that work is going to be tested, it’s actually very promising work, we’re very excited about it. It’s — it’s one of the, as I talked about the pivot from predictive maintenance and others to Joint Warfighting, that is the — probably the flagship product that we’re sort of thinking about and talking about that will go out there.”

“But, it will involve, you know, operators, human in the loop, full human control, all of those things are still absolutely valid.”

Other Projects

In his statement, Mulchandani also talked about the “huge potential for using AI in offensive capabilities” like cybersecurity.

“You can read the news in terms of what our adversaries are doing out there, and you can imagine that there’s a lot of room for growth in that area,” he said.

Mulchandani also described what the JAIC is doing to meet challenges brought on by the COVID-19 pandemic, through a recent $800 million contract with Booz Allen Hamilton and through Project Salus, a series of algorithms the JAIC developed for NORTHCOM and National Guard units to predict supply chain resource challenges.
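
Project Salus internals are not public, but the general shape of such a prediction can be sketched: fit a trend to recent daily usage of a supply item and project when cumulative demand outruns stock. All data below is synthetic, and the model choice is an assumption.

    # Sketch of supply-shortfall prediction from a daily usage trend.
    # All data is synthetic; Project Salus internals are not public.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(2)

    # Synthetic history: 30 days of rising daily usage for one supply item.
    days = np.arange(30).reshape(-1, 1)
    usage = 40 + 1.5 * days.ravel() + rng.normal(0, 3, size=30)

    model = LinearRegression().fit(days, usage)

    stock_on_hand = 1200
    horizon = np.arange(30, 44).reshape(-1, 1)    # the next 14 days
    projected = model.predict(horizon).cumsum()   # cumulative projected demand

    short = projected > stock_on_hand
    if short.any():
        print(f"projected shortfall in {int(short.argmax()) + 1} days")
    else:
        print("stock sufficient for the 14-day horizon")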

 
