
Tech Advisory Group Pushes For Limits On Pentagon’s AI Use

The Pentagon has made clear its intention to invest heavily in artificial intelligence, stating that AI will make the US military more powerful and more resilient to possible national security threats. As Engadget reports, this past Thursday the Defense Innovation Board put forward a number of proposed ethical guidelines for the use of AI in the military. The proposals include strategies for avoiding unintended bias and a call for governable AI with emergency stopping procedures that prevent a system from causing unnecessary harm.

Wired reports that the Defense Innovation Board was created by the Obama Administration to help the Pentagon acquire tech industry experience and talent. The board is currently chaired by the former CEO of Google, Eric Schmidt, and was recently tasked with establishing guidelines for the ethical implementation of AI in military projects. On Thursday the board put out its guidelines and recommendations for review. The report notes that the time for serious discussion about the use of AI in a military context is now, before a serious incident makes such a discussion unavoidable.

According to Artificial Intelligence News, a former military official recently stated that the Pentagon was falling behind when it comes to the use of AI. The Pentagon is aiming to close this gap and has declared the development and expansion of military AI a national priority. AI ethicists are concerned that in the Pentagon’s haste to become a leader in AI, AI systems may be used in unethical ways. While various independent AI ethics boards have made their own suggestions, the Defense Innovation Board has proposed five principles that the military should follow at all times when developing and implementing AI systems.

The first principle proposed by the board is that humans should always be responsible for the use, deployment, and outcomes of any artificial intelligence platform used in a military context. This is reminiscent of a 2012 policy mandating that humans ultimately be part of the decision-making process whenever lethal force could be used. Other principles on the list provide general guidance, such as ensuring that AI systems are always built by engineers who understand and thoroughly document their programs. Another principle advises that military AI systems should always be tested for reliability. These guidelines may seem like common sense, but the board wants to underscore their importance.

The remaining principles concern the control of bias in AI algorithms and the ability of AI systems to detect when unintended harm may be caused and to automatically disengage. The guidelines specify that if unnecessary harm would otherwise occur, the AI should be able to disengage itself and let a human operator take over. The draft principles also recommend that the output of AI systems be traceable, so that analysts can see what led to a given decision.
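The board's draft stops at principles and does not prescribe an implementation, but the "governable" and "traceable" requirements are easiest to picture as a control loop with an explicit disengagement check and an audit log. The Python sketch below is a minimal, hypothetical illustration of that idea; the estimate_harm_risk stub, the risk threshold, and the log format are assumptions made for the example, not anything specified in the guidelines.

```python
import json
import time

# Hypothetical risk threshold; a real system would set this per mission and policy.
HARM_RISK_THRESHOLD = 0.2


def estimate_harm_risk(action, observation):
    """Placeholder risk model: return a value in [0, 1] estimating the chance
    the proposed action causes unintended harm. A real system would use a
    validated model, not a stub."""
    return 0.0


def decide_and_act(model, observation, operator, log_path="decision_log.jsonl"):
    """One step of a 'governable' control loop: propose an action, check the
    estimated harm risk, disengage to a human operator if the risk is too
    high, and write a traceable log entry either way."""
    action = model.propose_action(observation)
    risk = estimate_harm_risk(action, observation)
    disengage = risk > HARM_RISK_THRESHOLD

    entry = {
        "timestamp": time.time(),
        "observation": str(observation)[:200],
        "proposed_action": str(action),
        "estimated_harm_risk": risk,
        "disengaged": disengage,
    }
    with open(log_path, "a") as f:
        # Append-only log so analysts can later trace what led to each decision.
        f.write(json.dumps(entry) + "\n")

    if disengage:
        # Hand control back to a human operator rather than act autonomously.
        return operator.take_over(observation)
    return action
```

The point the principles seem to drive at is that both the hand-off path to a human operator and the logged record exist on every decision, so analysts can later reconstruct what the system saw and why it acted or stood down.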

The set of recommendations put forward by the board underscores two ideas: AI will be integral to the future of military operations, but much of AI still relies on human management and decision making.

While the Pentagon doesn’t have to adopt the recommendations of the board, it sounds as if the Pentagon is taking them seriously. As reported by Wired, the director of the Joint Artificial Intelligence Center, Lieutenant General Jack Shanahan, stated that the board’s recommendations would assist the Pentagon in “upholding the highest ethical standards as outlined in the DoD AI strategy, while embracing the US military’s strong history of applying rigorous testing and fielding standards for technology innovations.”

The tech industry as a whole remains wary of using AI in the creation of military hardware and software. Microsoft and Google employees have both protested collaborations with military entities, and Google recently elected not to renew the contract under which it contributed to Project Maven. A number of CEOs, scientists, and engineers have also signed a pledge not to “participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.” If the Pentagon does adopt the guidelines suggested by the board, it could make the tech industry more willing to collaborate on projects, though only time will tell.


Senators Begin To Get Involved In AI


According to the top Democrat in the U.S. Senate, Senator Chuck Schumer (D-NY), the U.S. government should make a massive investment in artificial intelligence. He is advocating for the government to create a brand new agency to invest $100 billion in basic AI research over five years. According to the senator, this would help the United States compete against Russia and China, which are moving ahead quickly in the field. The agency would also fund certain areas where U.S. companies are not heavily involved.

Senator Schumer gave a speech last week to senior national security and research policy-makers gathered in Washington D.C. It was the first time he publicly outlined the new plan, and as minority leader he is in an influential position to make progress on it. The proposal comes at a time of increasing interest in AI and related fields such as robotics, and it follows a recent presidential executive order on the subject.

The new national science tech fund would invest $100 billion into “fundamental research related to AI and some other cutting-edge areas.”

Those cutting-edge areas include quantum computing, 5G networks, robotics, cybersecurity, and biotechnology. The money would be used to fund research at U.S. universities, companies, and other federal agencies, as well as “testbed facilities” used to turn discoveries into commercial products.

Behind Closed Doors

This plan has been discussed behind closed doors for several months by tech industry executives and academic leaders, but it still has a long way to go. According to Schumer, “this is just a discussion draft.”

Schumer suggested the fund would be a “subsidiary” of the National Science Foundation (NSF). It would also have a connection to the Defense Advanced Research Projects Agency (DARPA) within the Department of Defense (DOD) and have a board of directors. 

National Security Commission on Artificial Intelligence

The speech took place at a symposium sponsored by the National Security Commission on Artificial Intelligence, which is a bipartisan body that was created by Congress. This issue can bring together politicians from both parties, especially during a time when the government is so divided over the impeachment proceedings taking place against President Donald Trump. 

“This should not be a partisan issue. This is about the future of America,” Schumer asserted, saying the country’s security and economic prosperity depend on making such a major investment. And he asked the politically well-connected audience to help him sell the proposal.

“This idea has support from some people very close to the president and very close to [Senate Majority Leader] Mitch McConnell [R],” Schumer said. “But thus far they have been unable to get their [principals’] full-throated support. Anyone here who has any relationship with those people or people near them should be pushing this.”

The U.S. Government 

The U.S. government has not been completely absent from artificial intelligence, but many believe more needs to be done to keep pace with a technology that will revolutionize almost everything.

Last month, the Department of Energy released plans to request $3 billion to $4 billion from Congress over the next 10 years for AI research, an area that already sees some federal investment. NSF officials have said that the agency has spent that amount each year over the past decade to improve AI algorithms and software.

Trump issued an executive order in February that told NSF, DOD, and other federal agencies to invest more in high-performance computing. Under the order, federal agencies are required to develop an “action plan to protect the U.S. advantage in AI technology.”

 


U.S. Department of Energy Wants To Accelerate Scientific Discoveries With AI


Science, the magazine published by the American Association for the Advancement of Science (AAAS), reports that the U.S. Department of Energy (DOE) is planning to ask the U.S. Congress for “between $3 billion and $4 billion over 10 years.” Science says that this is “roughly the amount the agency is spending to build next-generation ‘exascale’ supercomputers.” The goal of this major DOE AI initiative is to speed up scientific discoveries.

Earl Joseph, CEO of Hyperion Research, a high-performance computing analysis firm in St. Paul that tracks AI research funding, believes this could be a good starting point, but also notes that “DOE’s planned spending is modest compared with the feverish investment in AI by China and industry.”

On the other hand, Science points to DOE’s one big asset: an abundance of data that it could put to use. For example, the agency funds atom smashers, surveys of the universe, and the sequencing of thousands of genomes.

DOE has already held four town halls in support of its initiative, where Chris Fall, director of DOE’s Office of Science, said that his office generates “almost unimaginable amounts of data, petabytes per day,” and that algorithms trained with these data could help discover new materials or spot signs of new physics.

Science also noted that according to IDC, worldwide corporate AI funding is expected to hit $35.8 billion this year, up 44% from 2018. U.S. President Donald Trump signed an executive order launching the American AI Initiative, and the administration requested nearly $1 billion for AI and machine learning research in the fiscal year 2020 across all civilian agencies, according to the U.S. Office of Science and Technology Policy. The U.S. Department of Defense is seeking a similar level of funding for unclassified military AI programs.

For its part, China announced a national AI plan in 2017 that aims for global leadership and a commercial AI market projected to be worth 1 trillion yuan ($140 billion) by 2030. The European Union, meanwhile, has committed to spending €20 billion through 2020.

According to Hyperion Research, China accounted for 60% of all investments in AI from 2013 to 2018. U.S. investments were about 30% of the global total. China dominates the number of AI publications, whereas the European Union has the most AI researchers, Joseph says. But U.S. researchers in AI get the most citations per paper, he says, suggesting their research has the most impact.

While DOE has not yet come up with a detailed program, its officials say that their AI initiative “will help keep U.S. researchers at the forefront.” Rick Stevens, associate laboratory director for computing, environment, and life sciences at Argonne National Laboratory in Lemont, Illinois, expects the funding to include support for “national labs to optimize existing supercomputers for AI, and external funding for academic research into AI computer architectures.”

Jeff Nichols, associate laboratory director for computing and computational science at Oak Ridge National Laboratory in Tennessee, concluded that “AI won’t replace scientists, but scientists who use AI will replace scientists who don’t.” 

 


Scalable Autonomous Vehicle Safety Tools Developed By Researchers


As the pace of autonomous vehicle manufacturing and deployment increases, the safety of autonomous vehicles becomes even more important. For that reason, researchers are investing in the creation of metrics and tools to track the safety of autonomous vehicles. As reported by ScienceDaily, a research team from the University of Illinois at Urbana-Champaign has used machine learning algorithms to create a scalable autonomous vehicle safety analysis platform, drawing on both hardware and software improvements to do so.

Improving the safety of autonomous vehicles has remained one of the more difficult tasks in AI because of the many variables involved. Not only are a vehicle’s sensors and algorithms extremely complex, but there are many external conditions constantly in flux, such as road conditions, topography, weather, lighting, and traffic.

The landscape and algorithms of autonomous vehicles are both constantly changing, and companies need a way to keep up with the changes and respond to new issues. The Illinois researchers are working on a platform that lets companies address recently identified safety concerns in a quick, cost-effective way. However, the sheer complexity of the systems that drive autonomous vehicles makes this a massive undertaking. The research team is designing a system able to keep track of and update autonomous vehicle systems that contain dozens of processors and accelerators running millions of lines of code.

In general, autonomous vehicles drive quite safely. However, when a failure or unexpected event occurs, an autonomous vehicle is currently more likely to get into an accident than a human driver, as the vehicle often has trouble negotiating sudden emergencies. While it is admittedly difficult to quantify how safe autonomous vehicles are and what is to blame for accidents, it is obvious that a failure in a vehicle traveling down a road at 70 mph could prove extremely dangerous, hence the need to improve how autonomous vehicles handle emergencies.

Saurabh Jha, a doctoral candidate and one of the researchers involved with the program, spoke to ScienceDaily about the need to improve failure handling in autonomous vehicles. Jha explained:

“If a driver of a typical car senses a problem such as vehicle drift or pull, the driver can adjust his/her behavior and guide the car to a safe stopping point. However, the behavior of the autonomous vehicle may be unpredictable in such a scenario unless the autonomous vehicle is explicitly trained for such problems. In the real world, there are an infinite number of such cases.”

The researchers are aiming to solve this problem by gathering and analyzing data from safety reports submitted by autonomous vehicle companies. Companies like Waymo and Uber are required to submit reports to the DMV in California at least annually. These reports contain statistics such as how far cars have driven, how many accidents occurred, and what conditions the vehicles were operating under.
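As a rough illustration of how report data like this can be turned into a per-mile safety metric, the sketch below aggregates a hypothetical table of company filings with pandas; the file name and column layout are assumptions for the example, not the DMV's actual reporting format.

```python
import pandas as pd

# Hypothetical aggregated report data; columns assumed for illustration:
# company, year, miles_driven, accidents
reports = pd.read_csv("av_reports_2014_2017.csv")

# Total miles and accidents per company, then a per-million-mile accident rate.
per_company = (
    reports.groupby("company")[["miles_driven", "accidents"]]
    .sum()
    .assign(accidents_per_million_miles=lambda df: df["accidents"] / df["miles_driven"] * 1e6)
    .sort_values("accidents_per_million_miles")
)
print(per_company)
```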

The University of Illinois research team analyzed reports covering the years 2014 to 2017. During this period, autonomous vehicles drove around 1,116,000 miles across 144 different vehicles. According to the findings of the research team, when compared with the same distance driven by human drivers, accidents were 4,000 times more likely to occur. Many of these accidents suggest that the AI of the vehicle failed to disengage in time to avoid the incident, instead relying on the human driver to take over.

It is difficult to diagnose potential errors in the hardware or software of an autonomous vehicle because many errors manifest only under the right conditions, and it isn’t feasible to conduct tests under every possible condition that could occur on the road. So instead of collecting data on hundreds of thousands of real miles logged by autonomous vehicles, the research team is using simulated environments, drastically reducing the money and time spent generating data for training AVs.
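The article does not describe the team's simulation pipeline in detail, but fault-injection testing of this kind typically follows a simple pattern: run many simulated drives, inject a perturbation such as a sensor dropout or a sudden cut-in, and record whether the software under test handled it safely. The Python sketch below is a hypothetical outline of that loop; the fault list, the run_scenario stub, and the result format are assumptions for the example, not the researchers' actual tools.

```python
import random
from dataclasses import dataclass

# Hypothetical fault types; a real campaign would target specific sensors,
# software modules, and traffic situations.
FAULTS = ["camera_dropout", "lidar_noise", "sudden_cut_in", "gps_drift"]


@dataclass
class ScenarioResult:
    fault: str
    seed: int
    collision: bool
    safely_handled: bool


def run_scenario(av_stack, fault, seed):
    """Placeholder: run one simulated drive with the given fault injected.
    A real setup would execute the AV software under test inside a driving
    simulator; here a dummy 'no collision' result is returned so the loop runs."""
    return ScenarioResult(fault=fault, seed=seed, collision=False, safely_handled=True)


def fault_injection_campaign(av_stack, n_runs=1000):
    """Run many randomized simulated scenarios and collect the unsafe outcomes."""
    unsafe = []
    for seed in range(n_runs):
        fault = random.choice(FAULTS)
        result = run_scenario(av_stack, fault, seed)
        if result.collision or not result.safely_handled:
            # Keep failed scenarios for later analysis or retraining.
            unsafe.append(result)
    return unsafe
```

Collected failures can then be triaged or fed back into training, which is what makes simulated campaigns so much cheaper than logging the equivalent real-world miles.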

The research team uses the generated data to explore situations where AV failures and safety issues can occur. It appears that using such simulations can genuinely help companies find safety risks they would not be able to find otherwise. For instance, when the team tested the Apollo AV, created by Baidu, they isolated over 500 instances where the AV failed to handle an emergency situation and an accident occurred as a result. The research team hopes that other companies will make use of their testing platform to improve the safety of their autonomous vehicles.
