
Regulation

U.S. Government Blacklists Top AI Startups in China


The United States government has blacklisted several of China's top artificial intelligence startups. The action expands the trade blacklist that has been in place against China since the start of the ongoing trade war, and it comes in response to the country's treatment of Muslim minorities. The decision will almost certainly heighten tensions between the U.S. and China.

Under the new policy, the blacklisted firms will need U.S. government approval before buying components from U.S. companies, the same tactic used against Huawei Technologies Co Ltd.

According to the U.S. Department of Commerce, the listed "entities have been implicated in human rights violations and abuses in the implementation of China's campaign of repression, mass arbitrary detention, and high-technology surveillance against Uighurs, Kazakhs, and other members of Muslim minority groups."

Secretary of Commerce Wilbur Ross has said that the U.S. government will not tolerate the actions that are taking place in the Xinjiang region of China.

Blacklist Comes Days Before Trade Talks Resume  

The new developments come just as trade talks are set to resume between Washington and Beijing in the coming days. 

The companies being targeted include some of China's most important AI startups. The list includes Hikvision, a video surveillance equipment maker with a market value of $42 billion; SenseTime, valued at $7.5 billion; the Alibaba-backed Megvii, valued at $4 billion; speech recognition specialist iFlytek Co; digital forensics company Xiamen Meiya Pico Information Co; and facial recognition company Yitu Technology.

In total, the U.S. Commerce Department has added 28 entities to the blacklist: eight are companies, and the other 20 are organizations, including local public security bureaus, targeted for their direct role in the ongoing human rights abuses in Xinjiang.

The Massachusetts Institute of Technology has announced that it will review its relationship with SenseTime Group Ltd. According to the university, the partnership is meant to "confront some of the world's greatest challenges." SenseTime was co-founded by MIT graduate Xiao'ou Tang.

Damage to AI Startups in China

Many of the companies should be able to switch to backup supply chains, but heavy damage remains a strong possibility. Research will likely slow, since many of the firms rely on chips made in the United States, and partnerships with U.S. companies could deteriorate or come to a complete stop.

Beijing has been largely quiet on the issue and will still attend the trade meetings in Washington. The companies involved, however, have not been as quiet as the government.

According to Hikvision, “Punishing Hikvision, despite these engagements, will deter global companies from communicating with the U.S. government, hurt Hikvision’s U.S. businesses partners and negatively impact the U.S. economy.” 

In a statement, SenseTime expressed its views on the issue while claiming it follows all relevant laws in the jurisdictions where it operates. The company also reiterated its commitment to ethics within the AI industry.

After the announcement of the blacklist, shares of iFlytek fell 2.7% and Xiamen Meiya Pico fell 1.8%.

With artificial intelligence becoming such a large part of the global technology market, the industry will likely remain a target. The AI sector can be expected to serve as leverage against nations and companies, including through actions such as blacklisting.

 


Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

Cybersecurity

AI Experts Rank Deepfakes and 19 Other AI-Based Crimes By Danger Level


A new report published by University College London aims to identify the many ways AI could assist criminals over the next 15 years. For the report, 31 AI experts took 20 methods of using AI to carry out crimes and ranked those methods on several factors: how easy the crime would be to commit, the potential societal harm it could do, the amount of money a criminal could make, and how difficult the crime would be to stop. According to the results, deepfakes pose the greatest threat to law-abiding citizens and society generally, as their potential for exploitation by criminals and terrorists is high.
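As a rough sketch of how such a multi-factor ranking can be aggregated, the snippet below combines hypothetical expert scores on the four dimensions described above into a single ordering. The crime names, scores, and equal weighting are illustrative assumptions, not the study's data.

```python
# Illustrative sketch: aggregating hypothetical expert ratings into a danger ranking.
# The crimes, scores, and equal weighting are assumptions for demonstration only;
# they are not the figures used in the UCL study.

# Each crime is scored 1-5 on: societal harm, criminal profit, achievability,
# and how hard the crime is to defeat (higher = more concerning).
expert_scores = {
    "deepfakes":          {"harm": 5, "profit": 4, "achievability": 4, "defeatability": 5},
    "driverless weapons": {"harm": 5, "profit": 2, "achievability": 3, "defeatability": 4},
    "tailored phishing":  {"harm": 3, "profit": 4, "achievability": 5, "defeatability": 3},
    "burglar bots":       {"harm": 2, "profit": 2, "achievability": 2, "defeatability": 2},
}

def danger_score(scores: dict) -> float:
    """Average the four dimensions into a single score (equal weights assumed)."""
    return sum(scores.values()) / len(scores)

ranking = sorted(expert_scores.items(), key=lambda item: danger_score(item[1]), reverse=True)

for rank, (crime, scores) in enumerate(ranking, start=1):
    print(f"{rank}. {crime}: {danger_score(scores):.2f}")
```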

The AI experts ranked deepfakes at the top of the list of potential AI threats because deepfakes are difficult to identify and counteract. Deepfakes are constantly getting better at fooling even the eyes of experts, and AI-based methods of detecting them are often unreliable. In terms of their capacity for harm, deepfakes can easily be used by bad actors to discredit trusted, expert figures or to swindle people by posing as loved ones or other trusted individuals. If deepfakes become abundant, people could begin to lose trust in any audio or video media, which could make them lose faith in the validity of real events and facts.

Dr. Matthew Caldwell, from UCL Computer Science, was the first author on the paper. Caldwell underlines the growing danger of deepfakes as more and more of our activity moves online. As Caldwell was quoted by UCL News:

“Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”

The team of experts ranked five other emerging AI technologies as highly concerning potential catalysts for new kinds of crime: driverless vehicles being used as weapons, hack attacks on AI-controlled systems and devices, online data collection for the purposes of blackmail, AI-based phishing featuring customized messages, and fake news/misinformation in general.

According to Shane Johnson, the Director of the Dawes Centre for Future Crime at UCL, the goal of the study was to identify possible threats associated with newly emerging technologies and hypothesize ways to get ahead of these threats. Johnson says that as the speed of technological change increases, it's imperative that "we anticipate future crime threats so that policymakers and other stakeholders with the competency to act can do so before new 'crime harvests' occur".

The fourteen other possible crimes on the list were placed into one of two categories: moderate concern or low concern.

AI crimes of moderate concern include the misuse of military robots, data poisoning, automated attack drones, learning-based cyberattacks, denial of service attacks for online activities, manipulating financial/stock markets, snake oil (sale of fraudulent services cloaked in AI/ML terminology), and tricking face recognition.

Low concern AI-based crimes include the forgery of art or music, AI-assisted stalking, fake reviews authored by AI, evading AI detection methods, and “burglar bots” (bots which break into people’s homes to steal things).

Of course, AI models themselves can be used to help combat some of these crimes. Recently, AI models have been deployed to detect money laundering schemes by flagging suspicious financial transactions. The alerts are analyzed by human operators, who approve or deny them, and that feedback is used to further train the model. The future will likely involve AIs being pitted against one another, with criminals building ever-better AI-assisted tools while security firms, law enforcement, and other ethical AI designers build their own systems in response.
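A minimal sketch of that human-in-the-loop pattern is shown below, assuming a generic scikit-learn anomaly detector. The transaction features, contamination setting, and simulated analyst review are illustrative placeholders rather than any real bank's or vendor's pipeline.

```python
# Minimal human-in-the-loop sketch for flagging suspicious transactions.
# Assumes scikit-learn is available; features, data, and the review step are
# illustrative placeholders, not any specific institution's system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy transaction features: [amount, transfers_last_24h, is_cross_border]
transactions = rng.normal(loc=[200, 2, 0.1], scale=[150, 1, 0.3], size=(1000, 3))

# Unsupervised model scores how unusual each transaction looks.
detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
suspicious = np.where(detector.predict(transactions) == -1)[0]

# A human analyst reviews each alert and confirms or dismisses it.
# Here the "analyst" is simulated; in practice this is a manual review queue.
confirmed_labels = {idx: bool(rng.integers(0, 2)) for idx in suspicious}

# The confirmed/dismissed feedback becomes labeled data that can be used to
# train a supervised model or re-tune the detector on the next iteration.
labeled_examples = [(transactions[idx], label) for idx, label in confirmed_labels.items()]
print(f"{len(suspicious)} alerts raised, {sum(confirmed_labels.values())} confirmed by review")
```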


Regulation

U.S. Representatives Release Bipartisan Plan for AI and National Security


U.S. Representatives Robin Kelly (D-IL) and Will Hurd (R-TX) have released a plan on how the nation should proceed with artificial intelligence (AI) technology in relation to national security.

The report, released on July 30, details how the U.S. should collaborate with its allies on AI development and advocates restricting the export of specific technology to China, such as computer chips used in machine learning.

The report was compiled by the congressmen along with the Bipartisan Policy Center and Georgetown University's Center for Security and Emerging Technology (CSET), as well as other government officials, industry representatives, civil society advocates, and academics.

The main principles of the report are:

  1. Focusing on human-machine teaming, trustworthiness, and implementing the DOD’s Ethical Principles for AI in regard to defense and intelligence applications of AI.
  2. Cooperation between the U.S. and its allies, but also an openness to working with competitive nations such as Russia and China.
  3. The creation of AI-specific metrics in order to evaluate AI sectors in other nations.
  4. More investment in research, development, testing, and standardization in AI systems.
  5. Controls on export and investment in order to prevent sensitive AI technologies from being acquired by foreign adversaries, specifically China. 

Here is a look at some of the highlights of the report:

Autonomous Vehicles and Weapons Systems

According to the report, the U.S. military is in the process of incorporating AI into various semi-autonomous and autonomous vehicles, including ground vehicles, naval vessels, fighter aircraft, and drones. Within these vehicles, AI technology is being used to map out environments, fuse sensor data, plan navigation routes, and communicate with other vehicles.
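As a toy illustration of the sensor-fusion piece, the sketch below combines two noisy position readings with a standard inverse-variance (Kalman-style) update. It is a generic textbook technique, and the sensor values and variances are made-up numbers, not drawn from any military system.

```python
# Toy 1-D sensor fusion: combine two noisy position estimates by weighting
# each with the inverse of its variance (a standard Kalman-style update).
# Sensor values and variances are made-up numbers for illustration only.

def fuse(estimate_a: float, var_a: float, estimate_b: float, var_b: float):
    """Return the fused estimate and its variance for two Gaussian measurements."""
    fused_var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    fused_est = fused_var * (estimate_a / var_a + estimate_b / var_b)
    return fused_est, fused_var

# Example: a GPS-like fix (high variance) fused with a lidar-like fix (low variance).
position, uncertainty = fuse(estimate_a=105.0, var_a=25.0, estimate_b=101.0, var_b=4.0)
print(f"fused position = {position:.2f} m, variance = {uncertainty:.2f}")
```

The fused estimate lands closer to the lower-variance sensor, which is the point of the weighting.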

Autonomous vehicles are able to take the place of humans in certain high-risk objectives, like explosive ordnance disposal and route clearance. The main problem that arises when it comes to autonomous vehicles and national defense is that the current algorithms are optimized for commercial use, not for military use. 

The report also addresses lethal autonomous systems, noting that many defense experts argue AI weapons systems can help guard against incoming aircraft, missiles, rockets, artillery, and mortar shells. The DOD's AI strategy likewise holds that these systems can reduce the risk of civilian casualties and collateral damage, specifically when warfighters are given enhanced decision support and greater situational awareness. Not everyone supports these systems, however, with many experts and ethicists calling for a ban on them. To address this, the report recommends that the DOD work closely with industry and experts to develop ethical principles for the use of this AI, and that it reach out to nongovernmental organizations, humanitarian groups, and civil society organizations to communicate the costs and benefits of the technology. The goal of this communication is to build greater public trust.

AI Diplomacy

Another key aspect of the report is its advocacy for the U.S. to work with other nations to prevent issues that could arise from AI technology. One of its recommendations is for the U.S. to establish communication procedures with China and Russia, specifically in regard to AI, which would allow humans to talk things through if algorithms cause an escalation. Hurd asks: "Imagine a high-stakes issue: What does a Cuban missile crisis look like with the use of AI?"

Export and Investment Controls

The report also recommends that export and investment controls be put in place to prevent China from acquiring and assimilating U.S. technologies. It pushes for the Department of State and the Department of Commerce to work with allies and partners, specifically Taiwan and South Korea, to align their policies with existing U.S. export controls on advanced AI chips.

New Interest in AI Strategy

The report is the second of four that the congressmen plan to release on AI strategy. Working with the Bipartisan Policy Center, the pair released another report earlier this month that focused on reforming education, from kindergarten through graduate school, in order to prepare the workforce for an economy changed by AI. Of the two papers still to come, one will cover AI research and development and the other AI ethics.

The congressmen are drafting a resolution based on their ideas about AI, with work to introduce legislation in Congress to follow.

 


Regulation

Pentagon’s Joint AI Center (JAIC) Testing First Lethal AI Projects


The new acting director of the Joint Artificial Intelligence Center (JAIC), Nand Mulchandani, gave his first-ever Pentagon press conference on July 8, where he laid out what is ahead for the JAIC and how current projects are unfolding.

The press conference comes two years after Google pulled out of Project Maven, also known as the Algorithmic Warfare Cross-Functional Team. According to the Pentagon, the project that was launched in April 2017 aimed to develop “computer-vision algorithms needed to help military and civilian analysts encumbered by the sheer volume of full-motion video data that DOD collects every day in support of counterinsurgency and counterterrorism operations.”
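As a rough illustration of the kind of triage such computer-vision tooling aims at, the sketch below flags frames in a synthetic video whose content changes sharply from the previous frame, so that analysts review fewer frames. The data and threshold are purely illustrative and unrelated to Project Maven's actual algorithms.

```python
# Toy video triage: flag frames whose content changes sharply from the previous
# frame so that analysts only review a small subset. Synthetic data and the
# change threshold are illustrative assumptions, not Project Maven's methods.
import numpy as np

rng = np.random.default_rng(0)
base_scene = rng.integers(0, 256, size=(64, 64)).astype(float)

# 100 frames of a mostly static scene with sensor noise, plus one injected event.
frames = [base_scene + rng.normal(0, 2, size=(64, 64)) for _ in range(100)]
frames[40] = base_scene + 60.0  # simulated event: part of the scene changes

def frame_changed(prev, curr, threshold=20.0):
    """Flag a frame whose mean absolute pixel difference from the previous frame is large."""
    return float(np.abs(curr - prev).mean()) > threshold

flagged = [i for i in range(1, len(frames)) if frame_changed(frames[i - 1], frames[i])]
print(f"frames flagged for analyst review: {flagged}")
```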

One of the Pentagon’s main objectives was to have algorithms implemented into “warfighting systems” by the end of 2017.

The project was met with strong opposition, including a petition signed by 3,000 Google employees protesting the company's involvement.

According to Mulchandani, that dynamic has changed and the JAIC is now receiving support from tech firms, including Google.

“We have had overwhelming support and interest from tech industry in working with the JAIC and the DoD,” Mulchandani said. “[we] have commercial contracts and work going on with all of the major tech and AI companies – including Google – and many others.” 

Mulchandani sits in a much better position than his predecessor, Lt. Gen. Jack Shanahan, when it comes to the relationship between the JAIC and Silicon Valley. Shanahan founded the JAIC in 2018 and had a tense relationship with the tech industry, whereas Mulchandani spent much of his life as a part of it. He has co-founded and led multiple startup companies. 

The JAIC 2.0

The JAIC was created in 2018 with a focus on areas of low technological risk, such as disaster relief and predictive maintenance. With those projects now advancing, work is being done to transition them into production.

Termed JAIC 2.0, the new plan comprises six mission initiatives, all of which are underway: joint warfighting operations, warfighter health, business process transformation, threat reduction and protection, joint logistics, and the newest one, joint information warfare, which includes cyber operations.

There is special focus now being turned to the joint warfighting operations mission, which adopts the priorities of the National Defense Strategy in regard to technological advances in the United States military.

The JAIC has not laid out many specifics about the new project, but Mulchandani referred to it as “tactical edge AI” and said that it will be controlled fully by humans. 

Mulchandani also answered a reporter's question about statements General Shanahan made as director regarding a lethal AI application planned for 2021, which "could be the first lethal AI in the industry."

Here is how he responded: 

“I don’t want to start straying into issues around autonomy and lethality versus lethal — or lethality itself. So yes, it is true that many of the products we work will go into weapon systems.”

“None of them right now are going to be autonomous weapon systems. We’re still governed by 3000.09, that principle still stays intact. None of the work or anything that General Shanahan may have mentioned crosses that line period.”

“Now we do have projects going under Joint Warfighting, which are going to be actually going into testing. They are very tactical edge AI is the way I describe it. And that work is going to be tested, it’s actually very promising work, we’re very excited about it. It’s — it’s one of the, as I talked about the pivot from predictive maintenance and others to Joint Warfighting, that is the — probably the flagship product that we’re sort of thinking about and talking about that will go out there.”

“But, it will involve, you know, operators, human in the loop, full human control, all of those things are still absolutely valid.”

Other Projects

In his statement, Mulchandani also talked about the “huge potential for using AI in offensive capabilities” like cybersecurity.

“You can read the news in terms of what our adversaries are doing out there, and you can imagine that there’s a lot of room for growth in that area,” he said.

Mulchandani also discussed a recent $800 million contract with Booz Allen Hamilton and Project Salus, the JAIC's effort to address challenges brought on by the COVID-19 pandemic. For Project Salus, the JAIC developed a series of algorithms for NORTHCOM and National Guard units to predict supply chain resource challenges.
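As a loose illustration of that sort of predictive task, the sketch below fits a simple trend model to hypothetical daily demand for a resource and projects it a week ahead. The data and model choice are assumptions and say nothing about how Project Salus actually works.

```python
# Toy resource-demand forecast: fit a linear trend to hypothetical daily usage
# and project it a week ahead. Numbers and model choice are illustrative only.
import numpy as np

days = np.arange(30)  # past 30 days of observations
demand = 100 + 3.5 * days + np.random.default_rng(1).normal(0, 10, size=30)

# Least-squares linear fit: demand ≈ slope * day + intercept
slope, intercept = np.polyfit(days, demand, deg=1)

future_days = np.arange(30, 37)  # next 7 days
forecast = slope * future_days + intercept

for day, value in zip(future_days, forecast):
    print(f"day {day}: projected demand ≈ {value:.0f} units")
```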

 
