The Different Challenges and Approaches to AI by Country

With artificial intelligence (AI) technologies poised to transform society, it is worth examining the different approaches being taken by countries around the globe. Whether for reasons of prosperity or surveillance, there is no doubt that nations are investing heavily in AI. 

China

China is taking a highly strategic approach to artificial intelligence, with the Chinese government declaring its hope for the nation to become the world’s leading AI innovator by 2030. The government has released a national AI strategy and plans to invest tens of billions of dollars in AI research and development. Cities are investing their own money as well, with Beijing’s US$2.1 billion AI technology park and Tianjin’s planned US$16 billion AI fund. 

The private sector plays the other major role in China, with Chinese AI startups competing with their US counterparts for venture funding. The country ranks second only to the United States in the number of AI companies. 

China’s advancement in AI has also raised serious concerns about human rights abuses and surveillance. Chinese AI companies are exporting surveillance technology to countries such as Kenya, Laos, Mongolia, Uganda, and Uzbekistan. The biggest concerns stem from the use of facial recognition technology to track individuals. 

United States

The United States has consistently been a leader in public and private AI research, with massive venture capital investment flowing into the industry. In 2012, AI initiatives received US$282 million from venture capitalists; by 2018, that figure had reached US$8 billion. 

The United States faces significant challenges in areas like cybersecurity and the skills gap. Organizations are increasingly launching major AI initiatives, which bring heightened security risk. Executives are concerned about proprietary and sensitive data being stolen, as well as outside actors tampering with training data and algorithms. The skills gap, meanwhile, is widening as the technology is more broadly implemented; this will have major implications for the economy and could lead to massive unemployment if not addressed quickly. Companies are beginning to implement retraining and upskilling programs for their employees. 

Germany

Germany is accelerating the development of AI technologies, with plans to invest €3 billion in AI research by 2025. Its national strategy, “AI Made in Germany,” aims to expand the economy and improve the competitiveness of existing industries. According to a study commissioned by the German government, AI will add around €32 billion to Germany’s manufacturing output within five years. 

Germany has placed a strong focus on the ethical issues surrounding AI, with serious concerns about disinformation and the manipulation of the technology. Beyond manipulation, there is concern about the technology’s economic impact. Because of this, the country is making a strong effort to train workers in AI, seeing it as a way of enhancing performance and enabling partnerships between humans and machines.

United Kingdom 

The United Kingdom has an impressive AI startup scene, along with £1 billion in government support for industry and academia. There is a focus on large-scale initiatives, as well as on implementing a comprehensive strategy for AI adoption. While the government is concerned about legal liability and autonomous decision-making, many see the biggest challenges as proving the business value of AI projects and integrating AI into existing roles and functions. 

The UK could also see major workforce disruption from the new technology. The government has piloted retraining programs such as the National Retraining Scheme, a set of initiatives meant to prepare workers for the evolving workplace, and these programs are likely to be expanded in the near future. 

France

Mathematician Cédric Villani was appointed by President Emmanuel Macron in 2017 to develop a national AI strategy. The resulting plan, “AI for Humanity,” was released in 2018 with €1.5 billion in funding. 

The plan focuses on the nation’s resources and talent, an open data ecosystem, research institutions, ethical issues, and implications for the economy. The government works closely with the European Union while also developing AI domestically. 

France has a large number of small AI projects but has yet to take part in large-scale ones, possibly because of competing priorities such as compliance with the General Data Protection Regulation (GDPR). 

Some of the nation’s biggest challenges are integrating AI into organizations and acquiring talent. Because of the severe skills gap, the government is working on developing a talent pipeline that draws on graduates of the French education system. 

Canada

Canada is taking a slow approach to AI technology, which could hurt innovation and implementation. There is a lack of urgency, with only 51 percent of executives believing AI will transform their company. 

As the rest of the world moves forward with AI technology, Canadian companies could fall behind. The government, however, is taking steps to avoid that outcome. It has implemented policies to make immigration easier for those with AI-related skill sets; since the country is not producing enough talent within its own borders, it is looking to bring it in. There is not a strong push for AI training within the country, but that could change through partnerships with academic institutions. The University of Toronto is investing around CAN$100 million to support the work of AI scientists and researchers. 

Preparing for the Future

With AI technology set to transform many aspects of society within a decade, the impact in each country will depend on its current approach. It can be argued that none of the nations listed here is responding drastically enough to what will be the Fourth Industrial Revolution, but a lot can be learned by studying them. Most of these initiatives will likely have to be scaled up very quickly to prepare for the future of AI. 

Alex McFarland is a historian and journalist covering the newest developments in artificial intelligence.

Growing Calls for AI Regulation After Weeks of News Reports

Over the past few weeks, there have been growing calls for stronger regulation of artificial intelligence (AI). The concern comes after various news stories broke showing the potential abuses of the technology. Now, even more questions are being raised with the release of the European Commission’s much-anticipated white paper on artificial intelligence, the first pan-national attempt to regulate AI. Around the same time, the White House Office of Science and Technology Policy (OSTP) released a report on its American Artificial Intelligence Initiative. 

Prior to the release of the white paper, The Intercept broke a story about leaked internal European Union documents. According to the documents, the EU was considering the creation of a network of facial recognition databases throughout Europe. The national police forces of 10 EU member states produced a report calling for the creation and interconnection of such databases in every member state. Many worry that these databases would inevitably be linked with similar ones in the United States, allowing massive amounts of biometric data to be consolidated. Many expected the EU’s white paper to propose a ban on facial recognition, but no such ban appeared. 

According to Edin Omanovic, advocacy director for Privacy International, “This is concerning on a national level and on a European level, especially as some EU countries veer towards more authoritarian governments.”

One of the big news stories of the last few weeks, kicked off by a New York Times investigation in January, concerns the start-up Clearview AI. The company’s facial recognition app identifies people using a database of images scraped from social media. The app compares a photo to a database of over 3 billion pictures taken from sites such as Facebook, Venmo, and YouTube. When the app finds matches, it returns links to the sites where the photos originally appeared, technology that could reveal personal details about an individual. According to the report, the app has been used by over 600 law enforcement agencies. For comparison, while Clearview’s database holds over 3 billion images, the FBI’s database contains only 641 million images of US citizens. 
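At a high level, reverse face search systems of this kind follow a common pattern: every scraped photo is converted into a numeric embedding vector, and a query photo is matched against the stored vectors by distance, with each match pointing back to the URL it was scraped from. The following Python sketch is only a minimal illustration of that general pattern; the random vectors, example URLs, and function names are hypothetical placeholders, not Clearview’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a scraped database: each embedding vector is paired
# with the URL the photo came from. Real systems derive embeddings from
# a deep face-recognition model; random vectors are placeholders here.
index_embeddings = rng.random((1000, 128))
index_urls = [f"https://example.com/photo/{i}" for i in range(1000)]

def search(query_embedding, top_k=5):
    """Return the source URLs of the stored faces closest to the query."""
    # Euclidean distance from the query to every indexed embedding.
    distances = np.linalg.norm(index_embeddings - query_embedding, axis=1)
    nearest = np.argsort(distances)[:top_k]
    return [index_urls[i] for i in nearest]

# A real query embedding would come from running the uploaded photo
# through the same embedding model used to build the index.
print(search(rng.random(128)))
```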

The story erupted again with a new report by BuzzFeed News last week. According to the report, the company has considered expanding beyond law enforcement into retail, real estate, banking, and international markets. The report also stated that the facial recognition app has already been sold to thousands of organizations across the globe and has been used by the Justice Department, ICE, Macy’s, Walmart, and the NBA. Perhaps most problematic on the list of clients are a sovereign wealth fund in the United Arab Emirates and thousands of government entities.

The new developments have raised concerns among privacy advocates about potential mass surveillance. Another concern is that inaccuracies in the technology could lead law enforcement to target innocent individuals. On top of the privacy concerns, companies such as Facebook, Google, and Twitter are threatening legal action. Beyond private legal action, at least two United States senators have stated that they intend to probe the company.

An example of the abuse of this technology can be seen within U.S. Immigration and Customs Enforcement (ICE). The Washington Post reported last week that ICE officials have been permitted to use facial recognition technology to search through millions of Maryland driver’s license photos, and they can do so without seeking state or court approval. 

According to Harrison Rudolph, a senior associate at Georgetown University Law School’s Center on Privacy and Technology, “ICE is using biometric information in the shadows, without government notice or public approval, to hunt down the most vulnerable people.” 

These are just some of the most public examples of how the technology is being used; much more is happening behind the scenes. Because of this, there have been increasing calls for scrutiny and regulation. Greater transparency, whether voluntary or forced by investigative reporting, is bringing many practices to light. Without a public that understands artificial intelligence and what it could mean for the economy, government, law enforcement, surveillance, and every other aspect of society, there is little hope that governments and companies will regulate themselves. Society is seeing both the enormous benefits of AI and the massive concerns, with all of it happening too fast for companies, governments, and individuals to keep up. 

Google’s CEO Calls For Increased Regulation To Avoid “Negative Consequences of AI”

Last year saw increasing attention drawn to the regulation of the AI industry, and this year seems to be continuing the trend. Just recently, Sundar Pichai, the CEO of Google and Alphabet Inc., voiced support for the regulation of AI at an event hosted by Bruegel, an economic think tank in Brussels.

Pichai’s comments were likely made in anticipation of new EU plans to regulate AI, which will be revealed in a few weeks. It’s possible that the EU regulations could contain policies legally enforcing certain standards for AI used in transportation, healthcare, and other high-risk sectors. The new EU regulations may also require increased transparency regarding AI systems and platforms.

According to Bloomberg, Google has previously tried to challenge antitrust fines and copyright enforcement in the EU. Despite previous attempts to push back against certain regulatory frameworks in Europe, Pichai stated that regulation is welcome as long as it takes “a proportionate approach, balancing potential harms with social opportunities.”

Pichai recently wrote an opinion piece in the Financial Times, in which he acknowledged that along with many opportunities to improve society, AI also has the potential to be misused. Pichai stated that regulations should help avoid the “negative consequences of AI”, citing abusive use of facial recognition and deepfakes as examples. He argued that international alignment is necessary for regulatory principles to work, and as such, there needs to be agreement on core values. Beyond that, Pichai said it is the responsibility of AI companies like Google to consider how AI can be used ethically, which is why Google adopted its own standards for ethical AI use in 2018.

Pichai stated that government regulatory bodies and policies will play an important role in ensuring AI is used ethically, but that these bodies need not start from scratch. He suggested that regulators can look to previously established rules for inspiration, such as Europe’s General Data Protection Regulation. Pichai also wrote that ethical AI regulation can be both broad and flexible, providing general guidance that can be tailored for specific implementations in specific AI sectors. Newer technologies like self-driving vehicles will require new rules and policies that weigh benefits and costs against one another, while for more well-trodden ground like medical devices, existing frameworks can be a good starting point.

Finally, Pichai stated that Google wants to partner with regulators to develop policies and find solutions that balance trade-offs. As he wrote in the Financial Times:

“We want to be a helpful and engaged partner to regulators as they grapple with the inevitable tensions and trade-offs. We offer our expertise, experience and tools as we navigate these issues together.”

While some have applauded Google for taking a stance on the need for regulation to ensure ethical AI usage, debate continues over the extent to which it is appropriate for AI companies to be involved in the creation of regulatory frameworks.

As for the upcoming EU regulations themselves, it’s possible that the EU is pursuing a risk-based rules system, which would put tighter restrictions on high-risk applications of AI. This includes restrictions that could be much tighter than Google hopes for, including a potential multi-year ban on facial recognition technology (with exceptions for research and security). In contrast to the EU’s more restrictive approaches, the US has pushed for relatively light regulations. It remains to be seen how the different regulation strategies will impact AI development, and society at large, in the two different regions of the globe.

U.S. Government Will Limit Exports of Artificial Intelligence

The U.S. government will take steps next week to limit the export of artificial intelligence (AI) software. The decision by the Trump administration comes at a time when powerful rival nations, such as China, are becoming increasingly dominant in the field. The move is meant to keep certain sensitive technologies from falling into the hands of those nations. 

The new rule goes into effect on January 6, 2020, and is aimed at companies that export geospatial imagery software from the United States. Those companies will be required to apply for a license to export the software, with the sole exception that no license is required for exports to Canada. 

The new measure was the first of its kind to be finalized by the Commerce Department under a mandate from a 2018 law passed by Congress. That law updated arms controls to include emerging technology. 

The new rules will likely affect a growing part of the tech industry, as these algorithms are currently used to analyze satellite images of crops, trade patterns, and other changes in the economy and environment. 

Chinese companies are responsible for having exported artificial intelligence surveillance technology to over 60 countries. Some of those countries have dismal human rights records and include Iran, Myanmar, Venezuela, and Zimbabwe. 

Within China itself, the Communist Party is using facial recognition systems to target Uighurs and other Muslim minorities in China’s far western Xinjiang region. According to a report released by a U.S. think tank, Beijing has been involved in “authoritarian tech.”

The think tank behind the report is the Carnegie Endowment for International Peace, which released it amid rising concerns that authoritarian regimes are using the technology to gain power. 

“Technology linked to Chinese companies — particularly Huawei, Hikvision, Dahua and ZTE — supply AI surveillance technology in 63 countries, 36 of which have signed onto China’s Belt and Road Initiative,” the report said.

Huawei Technologies Co., one of China’s leading technology companies, alone provides AI surveillance technology to at least 50 countries. 

“Chinese product pitches are often accompanied by soft loans to encourage governments to purchase their equipment,” according to the report. “This raises troubling questions about the extent to which the Chinese government is subsidizing the purchase of advanced repressive technology.”

China has faced increased scrutiny after an investigative report by the International Consortium of Investigative Journalists was released detailing the nation’s surveillance and policing systems, which are being used to oppress Uighurs and send them to internment camps. 

The new rules implemented by the U.S. government will at first apply only within the country. However, U.S. authorities have said they could be submitted to international bodies at a later time. 

There has been recent bipartisan frustration over how long it is taking to roll out new export controls for the technology. 

“While the government believes that it is in the national security interests of the United States to immediately implement these controls, it also wants to provide the interested public with an opportunity to comment on the control of new items,” according to Senate Minority Leader Chuck Schumer.
