Science, the flagship magazine of the American Association for the Advancement of Science (AAAS), reports that the U.S. Department of Energy (DOE) is planning to ask Congress for “between $3 billion and $4 billion over 10 years.” Science notes that this is “roughly the amount the agency is spending to build next-generation ‘exascale’ supercomputers.” The goal of this major DOE initiative is to speed up scientific discoveries.
Earl Joseph, CEO of Hyperion Research, a high-performance computing analysis firm in St. Paul that tracks AI research funding, believes this could be a good starting point, but also notes that “DOE’s planned spending is modest compared with the feverish investment in AI by China and industry.”
On the other hand, Science points to DOE’s one big asset: an abundance of data that it could put to use. For example, the agency funds atom smashers, surveys of the universe, and the sequencing of thousands of genomes.
DOE has already held four town halls in support of its initiative, where Chris Fall, director of DOE’s Office of Science, said that his office generates “almost unimaginable amounts of data, petabytes per day,” and that algorithms trained with these data could help discover new materials or spot signs of new physics.
Science also noted that according to IDC, worldwide corporate AI funding is expected to hit $35.8 billion this year, up 44% from 2018. U.S. President Donald Trump signed an executive order launching the American AI Initiative, and the administration requested nearly $1 billion for AI and machine learning research in fiscal year 2020 across all civilian agencies, according to the U.S. Office of Science and Technology Policy. The U.S. Department of Defense is seeking a similar level of funding for unclassified military AI programs.
For its part, in 2017, China announced a national AI plan that aims for global leadership, and a projected commercial AI market worth 1 trillion yuan ($140 billion) by 2030. And the European Union has committed to spending €20 billion through 2020.
According to Hyperion Research, China accounted for 60% of all investments in AI from 2013 to 2018. U.S. investments were about 30% of the global total. China dominates the number of AI publications, whereas the European Union has the most AI researchers, Joseph says. But U.S. researchers in AI get the most citations per paper, he says, suggesting their research has the most impact.
While DOE has not yet drawn up a detailed program, its officials say that their AI initiative “will help keep U.S. researchers at the forefront.” Rick Stevens, associate laboratory director for computing, environment, and life sciences at Argonne National Laboratory in Lemont, Illinois, expects the funding to cover “national labs to optimize existing supercomputers for AI, and external funding for academic research into AI computer architectures.”
Jeff Nichols, associate laboratory director for computing and computational science at Oak Ridge National Laboratory in Tennessee, concluded that “AI won’t replace scientists, but scientists who use AI will replace scientists who don’t.”
AI Experts Rank Deepfakes and 19 Other AI-Based Crimes By Danger Level
A new report published by University College London aims to identify the many ways AI could assist criminals over the next 15 years. For the report, 31 AI experts took 20 methods of using AI to carry out crimes and ranked those methods on various factors: how easy the crime would be to commit, the potential societal harm it could do, the amount of money a criminal could make, and how difficult the crime would be to stop. According to the results, deepfakes posed the greatest threat to law-abiding citizens and society generally, as their potential for exploitation by criminals and terrorists is high.
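The ranking described above can be thought of as aggregating expert scores across several factors. The following is a minimal illustrative sketch, not the UCL study’s actual method (the study used expert deliberation, not this formula), and all crime names and scores here are made-up placeholders:

```python
# Hypothetical expert ratings (1-5) for three of the 20 crime types,
# across the four factors the report considered. Numbers are invented
# purely for demonstration.
ratings = {
    "deepfakes":          {"harm": 5, "profit": 4, "achievability": 4, "defeatability": 5},
    "driverless_weapons": {"harm": 5, "profit": 2, "achievability": 3, "defeatability": 4},
    "ai_phishing":        {"harm": 3, "profit": 4, "achievability": 5, "defeatability": 3},
}

def threat_score(factors):
    # Simple unweighted mean of the factor scores; a real analysis
    # might weight factors differently or compare crimes pairwise.
    return sum(factors.values()) / len(factors)

# Rank crimes from most to least threatening by mean score.
ranked = sorted(ratings, key=lambda crime: threat_score(ratings[crime]), reverse=True)
print(ranked)  # deepfakes score highest in this toy data
```

Under this toy scoring, deepfakes come out on top because they rate highly on every factor at once, which mirrors the report’s reasoning that no single countermeasure blunts them.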
The AI experts ranked deepfakes at the top of the list of potential AI threats because deepfakes are difficult to identify and counteract. Deepfakes are constantly getting better at fooling even the eyes of deepfake experts, and even other AI-based methods of detecting deepfakes are often unreliable. In terms of their capacity for harm, deepfakes can easily be used by bad actors to discredit trusted, expert figures or to swindle people by posing as loved ones or other trusted individuals. If deepfakes become abundant, people could begin to lose trust in any audio or video media, which could cause them to lose faith in the validity of real events and facts.
Dr. Matthew Caldwell, from UCL Computer Science, was the first author on the paper. Caldwell underlines the growing danger of deepfakes as more and more of our activity moves online. As Caldwell was quoted by UCL News:
“Unlike many traditional crimes, crimes in the digital realm can be easily shared, repeated, and even sold, allowing criminal techniques to be marketed and for crime to be provided as a service. This means criminals may be able to outsource the more challenging aspects of their AI-based crime.”
The team of experts ranked five other emerging AI technologies as highly concerning potential catalysts for new kinds of crime: driverless vehicles being used as weapons, hack attacks on AI-controlled systems and devices, online data collection for the purposes of blackmail, AI-based phishing featuring customized messages, and fake news/misinformation in general.
According to Shane Johnson, the Director of the Dawes Centre for Future Crimes at UCL, the goal of the study was to identify possible threats associated with newly emerging technologies and hypothesize ways to get ahead of these threats. Johnson says that as the speed of technological change increases, it’s imperative that “we anticipate future crime threats so that policymakers and other stakeholders with the competency to act can do so before new ‘crime harvests’ occur”.
The fourteen other possible crimes on the list were placed into one of two categories: moderate concern and low concern.
AI crimes of moderate concern include the misuse of military robots, data poisoning, automated attack drones, learning-based cyberattacks, denial of service attacks for online activities, manipulating financial/stock markets, snake oil (sale of fraudulent services cloaked in AI/ML terminology), and tricking face recognition.
Low concern AI-based crimes include the forgery of art or music, AI-assisted stalking, fake reviews authored by AI, evading AI detection methods, and “burglar bots” (bots which break into people’s homes to steal things).
Of course, AI models themselves can be used to help combat some of these crimes. Recently, AI models have been deployed to detect suspicious financial transactions as part of efforts against money laundering schemes. Human operators analyze the results and approve or deny each alert, and that feedback is used to further train the model. It seems likely that the future will involve AIs being pitted against one another, with criminals designing their best AI-assisted tools while security, law enforcement, and other ethical AI designers build their own best AI systems in response.
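The human-in-the-loop workflow described above can be sketched in a few lines. This is an illustrative toy, not any deployed anti-money-laundering system: the “model” is a trivial amount threshold, and all function names, transaction records, and the threshold value are assumptions for the example:

```python
# Assumed rule standing in for a trained model: flag large transactions.
THRESHOLD = 10_000

def flag_suspicious(transactions):
    """Model step: return the transactions that trigger an alert."""
    return [t for t in transactions if t["amount"] > THRESHOLD]

def collect_feedback(alerts, analyst_decisions):
    """Human step: an analyst confirms (True) or rejects (False) each alert,
    producing labeled examples that can be fed back into retraining."""
    return [{"tx": a, "label": analyst_decisions[a["id"]]} for a in alerts]

transactions = [
    {"id": 1, "amount": 250},
    {"id": 2, "amount": 48_000},
    {"id": 3, "amount": 12_500},
]

alerts = flag_suspicious(transactions)
feedback = collect_feedback(alerts, {2: True, 3: False})
# `feedback` would be appended to the training set before the next retrain,
# closing the loop between the model and its human reviewers.
print([f["label"] for f in feedback])
```

The design point is the loop itself: the model narrows the stream of transactions to a handful of alerts, and the analyst’s verdicts become the labels that improve the next version of the model.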
U.S. Representatives Release Bipartisan Plan for AI and National Security
U.S. Representatives Robin Kelly (D-IL) and Will Hurd (R-TX) have released a plan on how the nation should proceed with artificial intelligence (AI) technology in relation to national security.
The report, released on July 30, details how the U.S. should collaborate with its allies on AI development, and advocates restricting the export of specific technology to China, such as computer chips used in machine learning.
The report was compiled by the congressmen along with the Bipartisan Policy Center and Georgetown University’s Center for Security and Emerging Technology (CSET), in consultation with other government officials, industry representatives, civil society advocates, and academics.
The main principles of the report are:
- Focusing on human-machine teaming, trustworthiness, and implementing the DOD’s Ethical Principles for AI in regard to defense and intelligence applications of AI.
- Cooperation between the U.S. and its allies, but also an openness to working with competitive nations such as Russia and China.
- The creation of AI-specific metrics in order to evaluate AI sectors in other nations.
- More investment in research, development, testing, and standardization in AI systems.
- Controls on export and investment in order to prevent sensitive AI technologies from being acquired by foreign adversaries, specifically China.
Here is a look at some of the highlights of the report:
Autonomous Vehicles and Weapons Systems
According to the report, the U.S. military is in the process of incorporating AI into various semi-autonomous and autonomous vehicles, including ground vehicles, naval vessels, fighter aircraft, and drones. Within these vehicles, AI technology is being used to map out environments, fuse sensor data, plan navigation routes, and communicate with other vehicles.
Autonomous vehicles can take the place of humans in certain high-risk missions, like explosive ordnance disposal and route clearance. The main problem with autonomous vehicles in national defense is that current algorithms are optimized for commercial use, not for military use.
The report also addressed lethal autonomous systems, saying that many defense experts argue AI weapons systems can help guard against incoming aircraft, missiles, rockets, artillery, and mortar shells. The DOD’s AI strategy also takes the position that these systems can reduce the risk of civilian casualties and collateral damage, specifically when warfighters are given enhanced decision support and greater situational awareness. Not everyone agrees, however, with many experts and ethicists calling for a ban on such systems. To address this, the report recommends that the DOD work closely with industry and experts to develop ethical principles for the use of this AI, and communicate the costs and benefits of the technology to nongovernmental organizations, humanitarian groups, and civil society organizations. The goal of this communication is to build a greater level of public trust.
Another key aspect of the report is its advocacy for the U.S. to work with other nations to prevent issues that could arise from AI technology. One of its recommendations is for the U.S. to establish communication procedures with China and Russia, specifically in regard to AI, which would allow humans to talk things through in the event of an escalation driven by algorithms. Hurd asks: “Imagine a high-stakes issue: What does a Cuban missile crisis look like with the use of AI?”
Export and Investment Controls
The report also recommends that export and investment controls be put in place in order to prevent China from acquiring and assimilating U.S. technologies. It pushes for the Department of State and the Department of Commerce to work with allies and partners, specifically Taiwan and South Korea, in order to synchronize with existing U.S. export controls on advanced AI chips.
New Interest in AI Strategy
The report compiled by the congressmen is the second of four on AI strategy. Working with the Bipartisan Policy Center, the pair of representatives released another report earlier this month, which focused on reforming education, from kindergarten through grad school, in order to prepare the workforce for an economy being changed by AI. Of the two future papers, one will cover AI research and development, and the other AI ethics.
The congressmen are drafting a resolution based on their ideas about AI, with plans to then introduce legislation in Congress.
Pentagon’s Joint AI Center (JAIC) Testing First Lethal AI Projects
The new acting director of the Joint Artificial Intelligence Center (JAIC), Nand Mulchandani, gave his first-ever Pentagon press conference on July 8, where he laid out what is ahead for the JAIC and how current projects are unfolding.
The press conference comes two years after Google pulled out of Project Maven, also known as the Algorithmic Warfare Cross-Functional Team. According to the Pentagon, the project, launched in April 2017, aimed to develop “computer-vision algorithms needed to help military and civilian analysts encumbered by the sheer volume of full-motion video data that DOD collects every day in support of counterinsurgency and counterterrorism operations.”
One of the Pentagon’s main objectives was to have algorithms implemented into “warfighting systems” by the end of 2017.
The proposal was met with strong opposition, including from 3,000 Google employees who signed a petition protesting the company’s involvement in the project.
According to Mulchandani, that dynamic has changed and the JAIC is now receiving support from tech firms, including Google.
“We have had overwhelming support and interest from tech industry in working with the JAIC and the DoD,” Mulchandani said. “[we] have commercial contracts and work going on with all of the major tech and AI companies – including Google – and many others.”
Mulchandani sits in a much better position than his predecessor, Lt. Gen. Jack Shanahan, when it comes to the relationship between the JAIC and Silicon Valley. Shanahan founded the JAIC in 2018 and had a tense relationship with the tech industry, whereas Mulchandani spent much of his life as a part of it. He has co-founded and led multiple startup companies.
The JAIC 2.0
The JAIC was created in 2018 with a focus on low technology risk areas, like disaster relief and predictive maintenance. Now with these projects advancing, there is work being done to transition them into production.
Termed JAIC 2.0, the new plan includes six mission initiatives that are all underway, including joint warfighting operations, warfighter health, business process transformation, threat reduction and protection, joint logistics, and the newest one, joint information warfare. The latest addition includes cyber operations.
There is special focus now being turned to the joint warfighting operations mission, which adopts the priorities of the National Defense Strategy in regard to technological advances in the United States military.
The JAIC has not laid out many specifics about the new project, but Mulchandani referred to it as “tactical edge AI” and said that it will be controlled fully by humans.
Mulchandani also fielded a reporter’s question about statements General Shanahan made as director regarding a lethal AI application by 2021, which “could be the first lethal AI in the industry.”
Here is how he responded:
“I don’t want to start straying into issues around autonomy and lethality versus lethal — or lethality itself. So yes, it is true that many of the products we work will go into weapon systems.”
“None of them right now are going to be autonomous weapon systems. We’re still governed by 3000.09, that principle still stays intact. None of the work or anything that General Shanahan may have mentioned crosses that line period.”
“Now we do have projects going under Joint Warfighting, which are going to be actually going into testing. They are very tactical edge AI is the way I describe it. And that work is going to be tested, it’s actually very promising work, we’re very excited about it. It’s — it’s one of the, as I talked about the pivot from predictive maintenance and others to Joint Warfighting, that is the — probably the flagship product that we’re sort of thinking about and talking about that will go out there.”
“But, it will involve, you know, operators, human in the loop, full human control, all of those things are still absolutely valid.”
In his statement, Mulchandani also talked about the “huge potential for using AI in offensive capabilities” like cybersecurity.
“You can read the news in terms of what our adversaries are doing out there, and you can imagine that there’s a lot of room for growth in that area,” he said.
Mulchandani also revealed what the JAIC is doing about challenges brought on by the COVID-19 pandemic, through a recent $800 million contract with Booz Allen Hamilton and through Project Salus, under which the JAIC developed a series of algorithms for NORTHCOM and National Guard units to predict supply chain resource challenges.