Regulation

Google’s CEO Calls For Increased Regulation To Avoid “Negative Consequences of AI”


Last year saw an increasing amount of attention drawn to the regulation of the AI industry, and this year seems to be continuing the trend. Just recently, Sundar Pichai, the CEO of Google and Alphabet Inc., voiced support for the regulation of AI at an event hosted by Bruegel, an economic think tank.

Pichai’s comments were likely made in anticipation of new EU plans to regulate AI, which will be revealed in a few weeks. The EU regulations may contain policies that legally enforce certain standards for AI used in transportation, healthcare, and other high-risk sectors, and may also require increased transparency regarding AI systems and platforms.

According to Bloomberg, Google has previously tried to challenge antitrust fines and copyright enforcement in the EU. Despite previous attempts to push back against certain regulatory frameworks in Europe, Pichai stated that regulation is welcome as long as it takes “a proportionate approach, balancing potential harms with social opportunities.”

Pichai recently wrote an opinion piece in the Financial Times, where he acknowledged that along with many opportunities to improve society, AI also has the potential to be misused. Pichai stated that regulations should help avoid the “negative consequences of AI”, citing abusive use of facial recognition and deepfakes as harmful applications of the technology. Pichai argued that international alignment is necessary for regulatory principles to work, and as such, there needs to be agreement on core values. Beyond that, Pichai said that it is the responsibility of AI companies like Google to consider how AI can be used ethically, which is why Google adopted its own standards for ethical AI use in 2018.

Pichai stated that government regulatory bodies and policies will play an important role in ensuring AI is used ethically, but that these bodies need not start from scratch. Pichai suggests that regulatory entities can look to previously established regulations for inspiration, such as Europe’s General Data Protection Regulation. Pichai also wrote that ethical AI regulation can be both broad and flexible, with regulation providing general guidance that can be tailored for specific implementations in specific AI sectors. Newer technologies like self-driving vehicles will require new rules and policies that weigh benefits and costs against one another, while for more well-trodden ground like medical devices, existing frameworks can be a good starting point.

Finally, Pichai stated that Google wants to partner with regulators to develop policies and find solutions that balance trade-offs. As he wrote in the Financial Times:

“We want to be a helpful and engaged partner to regulators as they grapple with the inevitable tensions and trade-offs. We offer our expertise, experience and tools as we navigate these issues together.”

While some have applauded Google for taking a stance on the need for regulation to ensure ethical AI usage, the debate continues over how involved AI companies should be in creating the regulatory frameworks that will govern them.

As for the upcoming EU regulations themselves, the EU appears to be pursuing a risk-based rules system that would put tighter restrictions on high-risk applications of AI. These restrictions could be much stricter than Google hopes for, including a potential multi-year ban on facial recognition technology (with exceptions for research and security). In contrast to the EU’s more restrictive approach, the US has pushed for relatively light regulation. It remains to be seen how the different regulation strategies will impact AI development, and society at large, in the two regions.


Blogger and programmer with specialties in Machine Learning and Deep Learning topics. Daniel hopes to help others use the power of AI for social good.

Cybersecurity

AI Experts Rank Deepfakes and 19 Other AI-Based Crimes By Danger Level


A new report published by University College London aimed to identify the many different ways that AI could potentially assist criminals over the next 15 years. The report asked 31 AI experts to rank 20 different methods of using AI to carry out crimes, based on factors such as how easy the crime would be to commit, the potential societal harm it could do, the amount of money a criminal could make, and how difficult the crime would be to stop. According to the results of the report, deepfakes pose the greatest threat to law-abiding citizens and society generally, as their potential for exploitation by criminals and terrorists is high.

The AI experts ranked deepfakes at the top of the list of potential AI threats because they are difficult to identify and counteract. Deepfakes are constantly getting better at fooling even the eyes of experts, and AI-based methods of detecting them are often unreliable. In terms of their capacity for harm, deepfakes can easily be used by bad actors to discredit trusted, expert figures or to swindle people by posing as loved ones or other trusted individuals. If deepfakes become abundant, people could begin to distrust any audio or video media, which could make them lose faith in the validity of real events and facts.

Dr. Matthew Caldwell, from UCL Computer Science, was the first author on the paper. Caldwell underlines the growing danger of deepfakes as more and more of our activity moves online. As Caldwell was quoted by UCL News:

“Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”

The team of experts ranked five other emerging AI technologies as highly concerning potential catalysts for new kinds of crime: driverless vehicles being used as weapons, hack attacks on AI-controlled systems and devices, online data collection for the purposes of blackmail, AI-based phishing featuring customized messages, and fake news/misinformation in general.

According to Shane Johnson, the Director of the Dawes Centre for Future Crimes at UCL, the goal of the study was to identify possible threats associated with newly emerging technologies and hypothesize ways to get ahead of these threats. Johnson says that as the speed of technological change increases, it’s imperative that “we anticipate future crime threats so that policymakers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur”.

The fourteen other possible crimes on the list were placed into one of two categories: moderate concern and low concern.

AI crimes of moderate concern include the misuse of military robots, data poisoning, automated attack drones, learning-based cyberattacks, denial of service attacks for online activities, manipulating financial/stock markets, snake oil (sale of fraudulent services cloaked in AI/ML terminology), and tricking face recognition.

Low concern AI-based crimes include the forgery of art or music, AI-assisted stalking, fake reviews authored by AI, evading AI detection methods, and “burglar bots” (bots which break into people’s homes to steal things).

Of course, AI models themselves can be used to help combat some of these crimes. Recently, AI models have been deployed to detect money laundering schemes by flagging suspicious financial transactions. The flagged transactions are reviewed by human operators who approve or deny each alert, and that feedback is used to further train the model. It seems likely that the future will involve AIs being pitted against one another, with criminals designing their own AI-assisted tools while security researchers, law enforcement, and other ethical AI designers build systems to counter them.
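As a rough illustration of that review-and-retrain loop, here is a minimal sketch (not taken from the UCL report or any specific deployed system; the transaction features, thresholds, and model choices are all assumptions): an unsupervised detector surfaces unusual transactions, and the analysts' decisions on those alerts become labeled data for a supervised scorer.

```python
# Minimal sketch of the human-in-the-loop anomaly-detection pattern described
# above. All data, feature names, and model choices are illustrative
# assumptions, not details from the article or the UCL report.
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical transaction features: amount, hour of day, transfers in last 24h.
transactions = rng.normal(loc=[200.0, 12.0, 3.0],
                          scale=[150.0, 5.0, 2.0],
                          size=(1000, 3))

# Step 1: an unsupervised detector flags unusual transactions for review.
detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit_predict(transactions)   # -1 means flagged as anomalous
alerts = transactions[flags == -1]

# Step 2: human analysts approve or deny each alert (simulated here by
# marking every other alert as confirmed suspicious).
analyst_labels = np.zeros(len(alerts), dtype=int)
analyst_labels[::2] = 1

# Step 3: the analyst feedback becomes labeled training data, so future
# scoring reflects what reviewers actually confirmed.
classifier = RandomForestClassifier(random_state=0)
classifier.fit(alerts, analyst_labels)

new_transaction = np.array([[5000.0, 3.0, 20.0]])
print("probability suspicious:", classifier.predict_proba(new_transaction)[0, 1])
```

In practice the retraining step would run continuously as analysts work through new alerts, which is what allows the model to improve over time.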


Regulation

U.S. Representatives Release Bipartisan Plan for AI and National Security


U.S. Representatives Robin Kelly (D-IL) and Will Hurd (R-TX) have released a plan on how the nation should proceed with artificial intelligence (AI) technology in relation to national security.

The report, released on July 30, details how the U.S. should collaborate with its allies on AI development and advocates for restricting the export of specific technologies to China, such as computer chips used in machine learning.

The report was compiled by the two representatives along with the Bipartisan Policy Center and Georgetown University’s Center for Security and Emerging Technology (CSET), in consultation with other government officials, industry representatives, civil society advocates, and academics.

The main principles of the report are:

  1. Focusing on human-machine teaming, trustworthiness, and implementing the DOD’s Ethical Principles for AI in regard to defense and intelligence applications of AI.
  2. Cooperation between the U.S. and its allies, but also an openness to working with competitive nations such as Russia and China.
  3. The creation of AI-specific metrics in order to evaluate AI sectors in other nations.
  4. More investment in research, development, testing, and standardization in AI systems.
  5. Controls on export and investment in order to prevent sensitive AI technologies from being acquired by foreign adversaries, specifically China. 

Here is a look at some of the highlights of the report:

Autonomous Vehicles and Weapons Systems

According to the report, the U.S. military is undergoing the process of incorporating AI into various semi-autonomous and autonomous vehicles, including ground vehicles, naval vessels, fighter aircraft, and drones. Within these vehicles, AI technology is being used to map out environments, fuse sensor data, plan out navigation routes, and communicate with other vehicles.

Autonomous vehicles are able to take the place of humans in certain high-risk missions, like explosive ordnance disposal and route clearance. The main problem that arises when it comes to autonomous vehicles and national defense is that the current algorithms are optimized for commercial use, not for military use.

The report also addressed lethal autonomous systems, noting that many defense experts argue AI weapons systems can help guard against incoming aircraft, missiles, rockets, artillery, and mortar shells. The DOD’s AI strategy also takes the position that these systems can reduce the risk of civilian casualties and collateral damage, specifically when warfighters are given enhanced decision support and greater situational awareness. Not everyone agrees with these systems, however, with many experts and ethicists calling for a ban on them. To address this, the report recommends that the DOD work closely with industry and experts to develop ethical principles for the use of this AI, and engage nongovernmental organizations, humanitarian groups, and civil society organizations on the costs and benefits of the technology. The goal of this communication is to build a greater level of public trust.

AI Diplomacy

Another key aspect of the report is its advocacy for the U.S. to work with other nations to prevent issues that could arise from AI technology. One of its recommendations is for the U.S. to establish communication procedures with China and Russia, specifically in regard to AI, so that humans can talk things out if algorithms cause an escalation. Hurd asks: “Imagine a high-stakes issue: What does a Cuban missile crisis look like with the use of AI?”

Export and Investment Controls

The report also recommends that export and investment controls be put in place in order to prevent China from acquiring and assimilating U.S. technologies. It pushes for the Department of State and the Department of Commerce to work with allies and partners, specifically Taiwan and South Korea, to align their policies with existing U.S. export controls on advanced AI chips.

New Interest in AI Strategy

The report compiled by the congressmen is the second of four on AI strategy. Along with the Bipartisan Policy Center, the pair of representatives released another report earlier this month, which focused on reforming education, from kindergarten through graduate school, in order to prepare the workforce for an economy changed by AI. Of the future papers, one is set to cover AI research and development and the other AI ethics.

The congressmen are drafting a resolution based on their ideas about AI, with the aim of eventually introducing legislation in Congress.

 


Regulation

Pentagon’s Joint AI Center (JAIC) Testing First Lethal AI Projects


The new acting director of the Joint Artificial Intelligence Center (JAIC), Nand Mulchandani, gave his first-ever Pentagon press conference on July 8, where he laid out what is ahead for the JAIC and how current projects are unfolding.

The press conference comes two years after Google pulled out of Project Maven, also known as the Algorithmic Warfare Cross-Functional Team. According to the Pentagon, the project that was launched in April 2017 aimed to develop “computer-vision algorithms needed to help military and civilian analysts encumbered by the sheer volume of full-motion video data that DOD collects every day in support of counterinsurgency and counterterrorism operations.”

One of the Pentagon’s main objectives was to have algorithms implemented into “warfighting systems” by the end of 2017.

The proposal was met with strong opposition, including a petition signed by 3,000 Google employees protesting the company’s involvement in the project.

According to Mulchandani, that dynamic has changed and the JAIC is now receiving support from tech firms, including Google.

“We have had overwhelming support and interest from tech industry in working with the JAIC and the DoD,” Mulchandani said. “[we] have commercial contracts and work going on with all of the major tech and AI companies – including Google – and many others.” 

Mulchandani sits in a much better position than his predecessor, Lt. Gen. Jack Shanahan, when it comes to the relationship between the JAIC and Silicon Valley. Shanahan founded the JAIC in 2018 and had a tense relationship with the tech industry, whereas Mulchandani spent much of his life as a part of it. He has co-founded and led multiple startup companies. 

The JAIC 2.0

The JAIC was created in 2018 with a focus on areas with low technological risk, like disaster relief and predictive maintenance. Now, with these projects advancing, work is being done to transition them into production.

Termed JAIC 2.0, the new plan includes six mission initiatives that are all underway: joint warfighting operations, warfighter health, business process transformation, threat reduction and protection, joint logistics, and the newest one, joint information warfare, which includes cyber operations.

There is special focus now being turned to the joint warfighting operations mission, which adopts the priorities of the National Defense Strategy in regard to technological advances in the United States military.

The JAIC has not laid out many specifics about the new project, but Mulchandani referred to it as “tactical edge AI” and said that it will be controlled fully by humans. 

Mulchandani answered a question from a reporter about statements General Shanahan made as director regarding a lethal AI application by 2021, which “could be the first lethal AI in the industry.”

Here is how he responded: 

“I don’t want to start straying into issues around autonomy and lethality versus lethal — or lethality itself. So yes, it is true that many of the products we work will go into weapon systems.”

“None of them right now are going to be autonomous weapon systems. We’re still governed by 3000.09, that principle still stays intact. None of the work or anything that General Shanahan may have mentioned crosses that line period.”

“Now we do have projects going under Joint Warfighting, which are going to be actually going into testing. They are very tactical edge AI is the way I describe it. And that work is going to be tested, it’s actually very promising work, we’re very excited about it. It’s — it’s one of the, as I talked about the pivot from predictive maintenance and others to Joint Warfighting, that is the — probably the flagship product that we’re sort of thinking about and talking about that will go out there.”

“But, it will involve, you know, operators, human in the loop, full human control, all of those things are still absolutely valid.”

Other Projects

In his statement, Mulchandani also talked about the “huge potential for using AI in offensive capabilities” like cybersecurity.

“You can read the news in terms of what our adversaries are doing out there, and you can imagine that there’s a lot of room for growth in that area,” he said.

Mulchandani also revealed what the JAIC is doing to meet challenges brought on by the COVID-19 pandemic, pointing to a recent $800 million contract with Booz Allen Hamilton and to Project Salus, under which the JAIC developed a series of algorithms for NORTHCOM and National Guard units to predict supply chain resource challenges.

 
