Pentagon’s Joint AI Center (JAIC) Testing First Lethal AI Projects

The new acting director of the Joint Artificial Intelligence Center (JAIC), Nand Mulchandani, gave his first-ever Pentagon press conference on July 8, where he laid out what is ahead for the JAIC and how current projects are unfolding.

The press conference comes two years after Google pulled out of Project Maven, also known as the Algorithmic Warfare Cross-Functional Team. According to the Pentagon, the project, launched in April 2017, aimed to develop “computer-vision algorithms needed to help military and civilian analysts encumbered by the sheer volume of full-motion video data that DOD collects every day in support of counterinsurgency and counterterrorism operations.”

One of the Pentagon’s main objectives was to have algorithms implemented into “warfighting systems” by the end of 2017.

The project was met with strong opposition, including a petition signed by 3,000 Google employees protesting the company’s involvement.

According to Mulchandani, that dynamic has changed and the JAIC is now receiving support from tech firms, including Google.

“We have had overwhelming support and interest from tech industry in working with the JAIC and the DoD,” Mulchandani said. “[We] have commercial contracts and work going on with all of the major tech and AI companies – including Google – and many others.”

Mulchandani sits in a much better position than his predecessor, Lt. Gen. Jack Shanahan, when it comes to the relationship between the JAIC and Silicon Valley. Shanahan founded the JAIC in 2018 and had a tense relationship with the tech industry, whereas Mulchandani spent much of his life as a part of it. He has co-founded and led multiple startup companies. 

The JAIC 2.0

The JAIC was created in 2018 with a focus on low-risk applications of the technology, such as disaster relief and predictive maintenance. Now, with those projects maturing, work is underway to transition them into production.

Termed JAIC 2.0, the new plan comprises six mission initiatives, all underway: joint warfighting operations, warfighter health, business process transformation, threat reduction and protection, joint logistics, and the newest, joint information warfare, which covers cyber operations.

Special focus is now being placed on the joint warfighting operations mission, which adopts the National Defense Strategy’s priorities regarding technological advances in the United States military.

The JAIC has not laid out many specifics about the new project, but Mulchandani referred to it as “tactical edge AI” and said that it will remain fully under human control.

Mulchandani also answered a reporter’s question about statements General Shanahan made as director regarding a lethal AI application by 2021, one which “could be the first lethal AI in the industry.”

Here is how he responded: 

“I don’t want to start straying into issues around autonomy and lethality versus lethal — or lethality itself. So yes, it is true that many of the products we work will go into weapon systems.”

“None of them right now are going to be autonomous weapon systems. We’re still governed by 3000.09, that principle still stays intact. None of the work or anything that General Shanahan may have mentioned crosses that line period.”

“Now we do have projects going under Joint Warfighting, which are going to be actually going into testing. They are very tactical edge AI is the way I describe it. And that work is going to be tested, it’s actually very promising work, we’re very excited about it. It’s — it’s one of the, as I talked about the pivot from predictive maintenance and others to Joint Warfighting, that is the — probably the flagship product that we’re sort of thinking about and talking about that will go out there.”

“But, it will involve, you know, operators, human in the loop, full human control, all of those things are still absolutely valid.”

Other Projects

In his statement, Mulchandani also talked about the “huge potential for using AI in offensive capabilities” like cybersecurity.

“You can read the news in terms of what our adversaries are doing out there, and you can imagine that there’s a lot of room for growth in that area,” he said.

Mulchandani also described what the JAIC is doing to address challenges brought on by the COVID-19 pandemic, pointing to a recent $800 million contract with Booz Allen Hamilton and to Project Salus, under which the JAIC developed a series of algorithms for NORTHCOM and National Guard units to predict supply chain resource challenges.

 

U.S. Representatives Release Bipartisan Plan for AI and National Security

U.S. Representatives Robin Kelly (D-IL) and Will Hurd (R-TX) have released a plan on how the nation should proceed with artificial intelligence (AI) technology in relation to national security.

The report, released on July 30, details how the U.S. should collaborate with its allies on AI development and advocates restricting the export of certain technologies to China, such as computer chips used in machine learning.

The report was compiled by the congressmen along with the Bipartisan Policy Center, Georgetown University’s Center for Security and Emerging Technology (CSET), and other government officials, industry representatives, civil society advocates, and academics.

The main principles of the report are:

  1. Focusing on human-machine teaming, trustworthiness, and implementing the DOD’s Ethical Principles for AI in regard to defense and intelligence applications of AI.
  2. Cooperation between the U.S. and its allies, along with an openness to working with competitor nations such as Russia and China.
  3. The creation of AI-specific metrics in order to evaluate AI sectors in other nations.
  4. More investment in research, development, testing, and standardization in AI systems.
  5. Controls on export and investment in order to prevent sensitive AI technologies from being acquired by foreign adversaries, specifically China. 

Here is a look at some of the highlights of the report:

Autonomous Vehicles and Weapons Systems

According to the report, the U.S. military is in the process of incorporating AI into various semi-autonomous and autonomous vehicles, including ground vehicles, naval vessels, fighter aircraft, and drones. Within these vehicles, AI technology is being used to map out environments, fuse sensor data, plan navigation routes, and communicate with other vehicles.

Autonomous vehicles are able to take the place of humans in certain high-risk objectives, like explosive ordnance disposal and route clearance. The main problem that arises when it comes to autonomous vehicles and national defense is that the current algorithms are optimized for commercial use, not for military use. 

The report also addresses lethal autonomous weapons systems, noting that many defense experts argue such systems can help guard against incoming aircraft, missiles, rockets, artillery, and mortar shells. The DOD’s AI strategy takes the position that these systems can reduce the risk of civilian casualties and collateral damage, specifically when warfighters are given enhanced decision support and greater situational awareness. Not everyone agrees, however, with many experts and ethicists calling for a ban on such systems.

To address this divide, the report recommends that the DOD work closely with industry and experts to develop ethical principles for the use of this AI, and that it communicate the costs and benefits of the technology to nongovernmental organizations, humanitarian groups, and civil society organizations. The goal of this outreach is to build a greater level of public trust.

AI Diplomacy

Another key aspect of the report is its advocacy for the U.S. to work with other nations to prevent problems that could arise from AI technology. One recommendation is for the U.S. to establish AI-specific communication procedures with China and Russia, which would allow humans to talk things out if algorithms cause an escalation. Hurd asks: “Imagine a high stakes issue: What does a Cuban missile crisis look like with the use of AI?”

Export and Investment Controls

The report also recommends that export and investment controls be put in place to prevent China from acquiring and assimilating U.S. technologies. It pushes for the Department of State and the Department of Commerce to work with allies and partners, specifically Taiwan and South Korea, to align their policies with existing U.S. export controls on advanced AI chips.

New Interest in AI Strategy

The report compiled by the congressmen is the second of four on AI strategy. Working with the Bipartisan Policy Center, the pair released the first report earlier this month; it focused on reforming education, from kindergarten through graduate school, to prepare the workforce for an economy being reshaped by AI. Of the two papers still to come, one will cover AI research and development and the other AI ethics.

The congressmen are drafting a resolution based on their ideas about AI, after which work will be done to introduce legislation in Congress.

 

AI Research Cloud Bill in US Congress Receives Support of More Than 20 Organizations 

Over 20 organizations have signed on to support the creation of a national AI research cloud. The idea has already received the support of tech companies such as AWS, Google, IBM, and Nvidia, as well as research institutions like Stanford University and Ohio State University. 

The AI research cloud would be part of the National AI Research Resource Task Force Act.

National AI Research Resource Task Force Act

The act was introduced in early June by U.S. Senators Rob Portman (R-OH) and Martin Heinrich (D-NM), along with House members Anna G. Eshoo (D-CA), Anthony Gonzalez (R-OH), and Mikie Sherrill (D-NJ).

“We cannot take America’s AI leadership for granted. With China focused on toppling the United States’ leadership in AI, we need to redouble our efforts with a sustained commitment to the best and brightest by developing a national research cloud to ensure our technical researchers get the tools they need to succeed,” said Sen. Portman, co-founder and co-chair of the Senate Artificial Intelligence Caucus. “This legislation takes the first steps towards a national research cloud. By democratizing access to computing power, we ensure that any American with computer science talent can pursue their good ideas.”

The new act would bring together technical experts from academia, government, and industry to develop a plan for how the U.S. can create, deploy, govern, and sustain a national research cloud.

According to the members who introduced the bill, “The widespread support for the National AI Research Resource Task Force Act from our country’s preeminent research universities and leading technology firms demonstrates how critical the legislation is for our country to retain our global lead in AI research. We thank the universities and companies supporting our bill, and we call on Congress to act on this legislation as soon as possible.”

If created, the cloud would give researchers across the United States access to computing power and data sets that are already available to big tech companies like Google, but not yet to academia.

Previous Efforts for an AI Research Cloud

The push for a national AI research cloud began last year when leaders from Stanford University and more than 20 other academic institutions drafted a letter to President Trump and Congress in support of one. 

Prior to that effort, there were other bills that advocated for a comprehensive U.S. AI strategy that included AI centers and a national AI coordination office. 

The Artificial Intelligence Initiative Act, proposed by Senators Portman, Heinrich, and Brian Schatz, pushed for the injection of $2.2 billion into federal research and development for a national AI strategy. 

Another effort was put forth when the Computing Community Consortium (CCC) released its 20-year AI research road map. It included ideas like increased data sharing and a national center of excellence. 

Dr. Fei-Fei Li and John Etchemendy, co-directors of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), have been some of the leading voices on the issue. 

Back in March, the pair said that the creation of a national AI research cloud could be “one of the most strategic research investments the federal government has ever made.”

“Data is a first-class citizen of today’s AI research. We should admit that, but it’s not the only thing that defines AI,” Li said. “Rare disease understanding, genetic study of rare disease, drug discovery, treatment management — they are by definition not necessarily data heavy, and AI can play a huge role. Human-centered design, I think about elder care and that kind of nuanced technological help. That’s not necessarily data heavy as well, so I think we need to be very thoughtful about how to use data.”

A full list of supporters of the National AI Research Resource Task Force Act is below.

  • National Security Commission on Artificial Intelligence Chairman Eric Schmidt and Vice Chairman Bob Work
  • Stanford University
  • The Ohio State University 
  • Princeton University
  • UCLA
  • Carnegie Mellon University
  • Duke University
  • Pennsylvania State University
  • University of Pennsylvania
  • Johns Hopkins University
  • Allen Institute for AI
  • OpenAI
  • Mozilla
  • IEEE-USA
  • Google
  • Amazon Web Services
  • Microsoft
  • IBM
  • NVIDIA
  • Orbital Insight
  • Calypso AI 

 

Bradford Newman, Chair of North America Trade Secrets Practice – Interview Series

Bradford specializes in matters related to trade secrets and Artificial Intelligence. He is the Chair of the AI Subcommittee of the ABA. Recognized by the Daily Journal in 2019 as one of the Top 20 AI attorneys in California, Bradford has been instrumental in proposing federal AI workplace and IP legislation that in 2018 was turned into a United States House of Representatives Discussion Draft bill. He has also developed AI oversight and corporate governance best practices designed to ensure algorithmic fairness.

What was it that initially ignited your interest in artificial intelligence? 

I have represented the world’s leading innovators and producers of AI products and technology for many years. My interest has always been to go behind the curtain and understand the legal and technical facets of machine learning, and watch AI evolve. I am fascinated by what is possible for applications across various domains.

 

You are a fierce advocate for rational regulation of artificial intelligence, specifically regulation that protects public health. Can you discuss what some of your major concerns are?

I believe we are in the early stages of one of the most profound revolutions humankind has experienced. AI has the potential to impact every aspect of our lives, from the minute we wake up in the morning to the moment we go to sleep — and also, while we are sleeping. Many of AI’s applications will positively impact the quality of our lives, and likely our longevity as a species.

Right now, from a computer science and machine learning standpoint, humans are still very involved in the process, from coding the algorithms, to understanding the training data sets, to processing the results, recognizing the shortcomings and productizing the technology.

But we are in a race against time on two major fronts. First, what is commonly referred to as the “black box” problem: human involvement in and understanding of AI will decrease over time as AI’s sophistication (think ANNs) evolves. And second, the use of AI by governments and private interests will increase.

My concern is that AI will be used, both purposely and unintentionally, in ways that are at odds with Western Democratic ideals of individual liberty and freedom.

 

How do we address these concerns? 

Society is at the point where we must resolve not what is possible with respect to AI, but what should be prohibited and/or partially constrained.

First, we must specifically identify the decisions that can never be made in whole or in part by the algorithmic output generated by AI.  This means that even in situations where every expert agrees that the data in and out is totally unbiased, transparent and accurate, there must be a statutory prohibition on utilizing it for any type of predictive or substantive decision-making.

Admittedly, this is counter-intuitive in a world where we crave mathematical certainty, but establishing an AI “no fly zone” is essential to preserving the liberties we all hold dear and that serve as the bedrock for our society.

Second, for other identified decisions based on AI analytics that are not outright prohibited, we need legislation that clearly defines those where a human must be involved in the decision-making process.

 

You’ve been instrumental in proposing federal AI workplace and IP legislation that in 2018 was turned into a United States House of Representatives Discussion Draft bill. Can you discuss some of these proposals?

The AI Data Protection Act is intended to promote innovation and is designed to (1) increase transparency in the nature and use of, and to build public trust in, artificial intelligence; (2) address the impact of artificial intelligence on the labor market; and (3) protect public health and safety.

It has several key components. For example, it prohibits covered companies’ sole reliance on artificial intelligence to make certain decisions, including decisions regarding the employment of individuals or the denial or limitation of medical treatment, and prohibits medical insurance issuers from making decisions regarding coverage of a medical treatment based solely on AI analytics. It also establishes the Artificial Intelligence Board — a new federal agency charged with specific responsibilities for regulating AI as it pertains to public health and safety. And it requires covered entities to appoint a Chief Artificial Intelligence Officer.

 

You’ve also developed AI oversight and corporate governance best practices designed to ensure algorithmic fairness. What are some of the current issues that you see with fairness or bias in AI systems? 

This subject has been the focus of intense scrutiny from academics and is now drawing the interest of U.S. government agencies, like the Equal Employment Opportunity Commission (EEOC), and the plaintiff’s bar. Most of the time, the cause is either a flaw in the training data sets or a lack of understanding of, and transparency into, the testing environment. This is compounded by the lack of central ownership and oversight of AI by senior management.

This lack of technical understanding and situational awareness is a significant liability concern.  I have spoken to several prominent plaintiff’s attorneys who are on the hunt for AI bias cases.

 

Deep learning often suffers from the black box problem, whereby we input data into an Artificial Neural Network (ANN), and we then receive an output, with no means of knowing how that output was generated. Do you believe that this is a major problem?  

I do. And as algorithms and neural networks continue to evolve, and humans are increasingly not “in the loop,” there is a real risk of passing the tipping point where we will no longer be able to understand critical elements of function and output.

 

With COVID-19, countries all over the world have introduced AI-powered state surveillance systems. How much of an issue do you have with the potential for abuse of this type of surveillance?

It is naïve, and frankly, irresponsible from an individual rights and liberty perspective, to ignore or downplay the risk of abuse.  While contact tracing seems prudent in the midst of a global pandemic, and AI-based facial recognition provides an effective measure to do what humans alone would not be capable of accomplishing, society must institute legal prohibitions on misuse along with effective oversight and enforcement mechanisms.  Otherwise, we are surrendering to the state a core element of our individual fundamental rights. Once given away in wholesale fashion, this basic element of our freedom and privacy will not be returned.

 

You previously stated that “We must establish an AI ‘no-fly zone’ if we want to preserve the liberties that Americans all hold dear and that serve as the bedrock of our society.” Could you share some of these concerns?

When discussing AI, we must always focus on AI’s essential purpose: to produce accurate predictive analytics from very large data sets which are then used to classify humans and make decisions.  Next, we must examine who the decision-makers are, what are they deciding, and on what are they basing their decisions.

If we understand that the decision-makers are those with the largest impact on our health, livelihood and freedoms —  employers, landlords, doctors, insurers, law enforcement and every other private, commercial and governmental enterprise that can generate, collect or purchase AI analytics — it becomes easy to see that in a Western liberal democracy, as opposed to a totalitarian regime, there should be decisions which should not, and must not, be left solely to AI.

While many obvious decisions come to mind, like prohibiting the incarceration of someone before a crime is committed, AI’s widespread adoption into every aspect of our lives presents much more vexing ethical conundrums. For example, if an algorithm accurately predicts that workers who post photos on social media of beach vacations where they are drinking alcohol quit or get fired from their jobs an average of 3.5 years earlier than those who post photos of themselves working out, should the former category be denied a promotion or raise based solely on the algorithmic output? If an algorithm correctly determines that teenagers who play videogames an average of more than two hours per day are less likely to graduate from a four-year university, should such students be denied admission? If Asian women in their 60s admitted to the ICU for COVID-19 related symptoms are determined to have a higher survival rate than African-American men in their 70s, should those women receive preferential medical treatment?

These over-simplified examples are just a few of the numerous decisions where reliance on AI alone contradicts our views of what individual human rights require.

 

Is there anything else that you would like to share regarding AI or Baker McKenzie? 

I am extremely energized to help lead Baker McKenzie’s truly international AI practice amidst the evolving landscape. We recognize that our clients are hungry for guidance on all things AI, from negotiating contracts, to establishing internal oversight, to avoiding claims of bias, to understanding the emerging domestic and international regulatory framework.

Companies want to do the right thing, but there are very few law firms that have the necessary expertise and understanding of AI and machine learning to be able to help them.  For those of us who are both AI junkies and attorneys, this is an exciting time to add real value for our clients.

Thank you for the fantastic answers regarding some of these major societal concerns involving AI. Readers who wish to learn more about Bradford Newman should click here.
