Google’s CEO Calls For Increased Regulation To Avoid “Negative Consequences of AI”

Last year saw increasing attention drawn to the regulation of the AI industry, and this year seems to be continuing the trend. Just recently, Sundar Pichai, CEO of Google and Alphabet Inc., voiced support for the regulation of AI at an event hosted by the Bruegel economic think tank.

Pichai’s comments were likely made in anticipation of new EU plans to regulate AI, which will be revealed in a few weeks. The EU regulations could contain policies that legally enforce certain standards for AI used in transportation, healthcare, and other high-risk sectors. The new rules may also require increased transparency regarding AI systems and platforms.

According to Bloomberg, Google has previously tried to challenge antitrust fines and copyright enforcement in the EU. Despite those attempts to push back against certain European regulatory frameworks, Pichai stated that regulation is welcome as long as it takes “a proportionate approach, balancing potential harms with social opportunities.”

Pichai recently wrote an opinion piece in the Financial Times, where he acknowledged that along with many opportunities to improve society, AI also has the potential to be misused. Pichai stated that regulations should help avoid the “negative consequences of AI”, citing abusive uses of facial recognition and deepfakes as examples. He argued that international alignment is necessary for regulatory principles to work, and as such, there needs to be agreement on core values. Beyond that, Pichai said it is the responsibility of AI companies like Google to consider how AI can be used ethically, which is why Google adopted its own standards for ethical AI use in 2018.

Pichai stated that government regulatory bodies and policies will play an important role in ensuring AI is used ethically, but that these bodies need not start from scratch. He suggests that regulators can look to previously established rules for inspiration, such as Europe’s General Data Protection Regulation. Pichai also wrote that ethical AI regulation can be both broad and flexible, providing general guidance that can be tailored to specific implementations in specific AI sectors. Newer technologies like self-driving vehicles will require new rules and policies that weigh benefits and costs against one another, while in better-established areas like medical devices, existing frameworks can serve as a good starting point.

Finally, Pichai stated that Google wants to partner with regulators to develop policies and find solutions that balance the trade-offs involved. As he wrote in the Financial Times:

“We want to be a helpful and engaged partner to regulators as they grapple with the inevitable tensions and trade-offs. We offer our expertise, experience and tools as we navigate these issues together.”

While some have applauded Google for taking a stance on the need for regulation to ensure ethical AI usage, the debate continues over how involved AI companies should be in creating the regulatory frameworks that govern them.

As for the upcoming EU regulations themselves, it’s possible that the EU is pursuing a risk-based rules system, which would put tighter restrictions on high-risk applications of AI. These restrictions could be much tighter than Google hopes for, including a potential multi-year ban on facial recognition technology (with exceptions for research and security). In contrast to the EU's more restrictive approach, the US has pushed for relatively light regulation. It remains to be seen how the differing regulatory strategies will impact AI development, and society at large, in the two regions.

Blogger and programmer with specialties in Machine Learning and Deep Learning topics. Daniel hopes to help others use the power of AI for social good.