
Tech Leaders Highlighting the Risks of AI & the Urgency of Robust AI Regulation


AI growth and advancement have been exponential over the past few years. Statista reports that by 2024, the global AI market will generate a staggering revenue of around $3 trillion, up from $126 billion in 2015. However, tech leaders are now warning us about the various risks of AI.

In particular, the recent wave of generative AI models like ChatGPT has introduced new capabilities in data-sensitive sectors such as healthcare, education, and finance. These AI-backed developments are vulnerable because of many AI shortcomings that malicious agents can exploit.

Let’s discuss what AI experts are saying about the recent developments and highlight the potential risks of AI. We’ll also briefly touch on how these risks can be managed.

Tech Leaders & Their Concerns Related to the Risks of AI

Geoffrey Hinton

Geoffrey Hinton, a famous AI tech leader (and a godfather of the field) who recently quit Google, has voiced his concerns about the rapid development of AI and its potential dangers. Hinton believes that AI chatbots can become “quite scary” if they surpass human intelligence.

Hinton says:

“Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has, and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning. And given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”

Moreover, he believes that “bad actors” can use AI for “bad things,” such as allowing robots to create their own sub-goals. Despite his concerns, Hinton believes that AI can bring short-term benefits, but that we should also invest heavily in AI safety and control.

Elon Musk

Elon Musk's involvement in AI ranges from his early investment in DeepMind in 2010 to co-founding OpenAI and incorporating AI into Tesla's autonomous vehicles.

Although he is enthusiastic about AI, he frequently raises concerns about its risks. Musk says that powerful AI systems can be more dangerous to civilization than nuclear weapons. In an interview with Fox News in April 2023, he said:

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production. In the sense that it has the potential — however small one may regard that probability — but it is non-trivial and has the potential of civilization destruction.”

Moreover, Musk supports government regulations on AI to ensure safety from potential risks, although “it’s not so fun.”

Pause Giant AI Experiments: An Open Letter Backed by Thousands of AI Experts

The Future of Life Institute published an open letter on 22nd March 2023. The letter calls for a temporary six-month halt on the development of AI systems more advanced than GPT-4. The authors express their concern that the pace at which AI systems are being developed poses severe socioeconomic challenges.

Moreover, the letter states that AI developers should work with policymakers to develop robust AI governance systems. As of June 2023, the letter has been signed by more than 31,000 AI developers, experts, and tech leaders. Notable signatories include Elon Musk, Steve Wozniak (co-founder of Apple), Emad Mostaque (CEO, Stability AI), Yoshua Bengio (Turing Award winner), and many more.

Counter Arguments on Halting AI Development

Two prominent AI leaders, Andrew Ng and Yann LeCun, have opposed the proposed six-month pause on developing advanced AI systems, calling it a bad idea.

Ng acknowledges that AI has some risks, such as bias and the concentration of power, but argues that the value created by AI in fields such as education, healthcare, and responsive coaching is tremendous.

Yann LeCun says that research and development shouldn’t be stopped, although the AI products that reach end users can be regulated.

What Are the Potential Dangers & Immediate Risks of AI?


1. Job Displacement

AI experts believe that intelligent AI systems can replace cognitive and creative tasks. The investment bank Goldman Sachs estimates that generative AI could expose around 300 million full-time jobs to automation.

Hence, AI development should be regulated so that it does not cause a severe economic downturn, and educational programs for upskilling and reskilling employees are needed to meet this challenge.

2. Biased AI Systems

Biases prevalent among human beings about gender, race, or color can inadvertently permeate the data used for training AI systems, subsequently making AI systems biased.

For instance, in the context of job recruitment, a biased AI system can discard resumes of individuals from specific ethnic backgrounds, creating discrimination in the job market. In law enforcement, biased predictive policing could disproportionately target specific neighborhoods or demographic groups.

Hence, it is essential to have a comprehensive data strategy that addresses AI risks, particularly bias. AI systems must be evaluated and audited frequently to keep them fair.
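
As a concrete illustration, here is a minimal sketch of one such recurring audit: comparing a model’s positive-prediction rates across groups. The decisions and group labels below are hypothetical, and a real audit would cover many more metrics and protected attributes.

```python
# A minimal sketch of a recurring fairness audit, assuming hypothetical
# model decisions (1 = shortlisted) recorded per applicant group.
import pandas as pd

def demographic_parity_ratio(df: pd.DataFrame, pred_col: str, group_col: str) -> float:
    """Ratio of the lowest to the highest positive-prediction rate across groups.

    A value near 1.0 suggests similar selection rates; common practice
    (e.g., the "four-fifths rule") flags ratios below 0.8 for review.
    """
    rates = df.groupby(group_col)[pred_col].mean()
    return rates.min() / rates.max()

# Hypothetical audit data from a resume-screening model.
audit = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B"],
    "shortlisted": [1, 1, 0, 1, 0, 0, 0],
})

ratio = demographic_parity_ratio(audit, "shortlisted", "group")
if ratio < 0.8:
    print(f"Potential disparate impact: parity ratio = {ratio:.2f}")
```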

3. Safety-Critical AI Applications

Autonomous vehicles, medical diagnosis & treatment, aviation systems, nuclear power plant control, etc., are all examples of safety-critical AI applications. These AI systems should be developed cautiously because even minor errors could have severe consequences for human life or the environment.

For instance, the crashes of two Boeing 737 MAX aircraft, first in October 2018 and then in March 2019, are attributed in part to the malfunctioning of flight-control software called the Maneuvering Characteristics Augmentation System (MCAS). Tragically, the two crashes killed 346 people.
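
One defensive pattern in safety-critical systems is to bound and cross-check automated commands independently of the model that produces them. The sketch below is purely illustrative: the limits, sensor names, and model output are hypothetical assumptions, not how any certified avionics system is implemented.

```python
# A minimal sketch of a runtime guard around an automated control output,
# illustrating defense-in-depth: redundant-sensor agreement plus hard limits
# enforced independently of the model. All values here are hypothetical.

MAX_PITCH_ADJUST_DEG = 2.5  # hard limit applied regardless of model confidence

def sensors_agree(sensor_a: float, sensor_b: float, tolerance: float = 1.0) -> bool:
    """Require agreement between redundant sensors before acting on either."""
    return abs(sensor_a - sensor_b) <= tolerance

def safe_command(model_output_deg: float, aoa_left: float, aoa_right: float) -> float:
    if not sensors_agree(aoa_left, aoa_right):
        return 0.0  # disagreeing sensors: fail safe and defer to the human operator
    # Clamp the command to independently certified bounds.
    return max(-MAX_PITCH_ADJUST_DEG, min(MAX_PITCH_ADJUST_DEG, model_output_deg))

print(safe_command(5.0, 12.3, 12.6))   # clamped to 2.5
print(safe_command(1.0, 12.3, 20.0))   # sensors disagree -> 0.0
```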

How Can We Overcome the Risks of AI Systems? – Responsible AI Development & Regulatory Compliance


Responsible AI (RAI) means developing and deploying fair, accountable, transparent, and secure AI systems that ensure privacy and follow legal regulations and societal norms. Implementing RAI can be complex given AI systems’ broad and rapid development.

However, big tech companies have developed RAI frameworks, such as:

  1. Microsoft’s Responsible AI
  2. Google’s AI Principles
  3. IBM’s Trusted AI

AI labs across the globe can take inspiration from these principles or develop their own responsible AI frameworks to make trustworthy AI systems.
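
Transparency commitments like these are often operationalized as machine-readable model documentation, sometimes called “model cards.” Below is a minimal, hypothetical sketch: every field value is an illustrative placeholder, and real frameworks define much richer schemas.

```python
# A minimal sketch of machine-readable model documentation (a "model card"),
# one common transparency practice. All field values are placeholders.
import json

model_card = {
    "model_name": "resume-screener-v2",  # hypothetical model
    "intended_use": "Rank resumes for human review; not for automated rejection.",
    "training_data": "Internal applications, 2019-2022, collected with consent.",
    "known_limitations": ["Under-represents applicants from some regions."],
    "fairness_evaluation": {"demographic_parity_ratio": 0.87, "last_audit": "2023-05"},
    "owner": "ml-governance@example.com",
}

# Persist alongside the model artifact so every release ships with its card.
with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```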

AI Regulatory Compliance

Since data is an integral component of AI systems, AI-based organizations and labs must comply with the following regulations to ensure data security, privacy, and safety.

  1. GDPR (General Data Protection Regulation) – a data protection framework by the EU.
  2. CCPA (California Consumer Privacy Act) – a California state statute for privacy rights and consumer protection.
  3. HIPAA (Health Insurance Portability and Accountability Act) – a U.S. legislation that safeguards patients’ medical data.   
  4. EU AI Act and the Ethics Guidelines for Trustworthy AI – European Commission initiatives for regulating AI.

There are also various regional and local laws enacted by different countries to protect their citizens. Organizations that fail to ensure regulatory compliance around data can face severe penalties. For instance, the GDPR sets fines of up to €20 million or 4% of annual global turnover, whichever is higher, for serious infringements such as unlawful data processing, unproven data consent, violation of data subjects’ rights, or unprotected data transfers to an international entity.
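
As one small, concrete example of what such compliance work can look like in practice, the sketch below pseudonymizes direct identifiers before records enter a training pipeline. It is a data-minimization illustration using assumed field names and a hypothetical secret key, not a substitute for legal review.

```python
# A minimal sketch of pseudonymizing personal identifiers before records
# enter an AI training pipeline; a data-minimization step in the spirit of
# GDPR/CCPA/HIPAA, not full compliance. Field names are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # assumption: a managed secret

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "diagnosis": "J45.0"}
DIRECT_IDENTIFIERS = {"name", "email"}

safe_record = {
    k: pseudonymize(v) if k in DIRECT_IDENTIFIERS else v
    for k, v in record.items()
}
print(safe_record)  # identifiers tokenized; the clinical field is kept for modeling
```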

AI Development & Regulations – Present & Future

With every passing month, AI advancements reach unprecedented heights. But the accompanying AI regulations and governance frameworks are lagging; they need to be more robust and specific.

Tech leaders and AI developers have been ringing alarm bells about the risks of AI if it is not adequately regulated. Research and development in AI can bring further value to many sectors, but it is clear that careful regulation is now imperative.

For more AI-related content, visit unite.ai.