
How to Operationalize AI Ethics?

AI is about optimizing processes, not eliminating humans from them. Even as the idea spreads that AI can replace humans, accountability remains crucial. While technology and automated systems have helped us achieve better economic output over the past century, can they truly replace services, creativity, and deep knowledge? I still believe they cannot, but they can optimize the time spent developing these areas.

Accountability relies heavily on intellectual property rights, on foreseeing the impact of technology on collective and individual rights, and on ensuring the safety and protection of the data used in training and shared while developing new models. As technology continues to advance, the topic of AI ethics has become increasingly relevant, raising important questions about how we regulate and integrate AI into society while minimizing potential risks.

I work closely with one aspect of AI—voice cloning. Voice is an important part of an individual's likeness and biometric data used to train voice models. The protection of likeness (legal and policy questions), securing voice data (privacy policies and cybersecurity), and establishing the limits of voice cloning applications (ethical questions measuring impact) are essential to consider while building the product.

We must evaluate how AI aligns with society's norms and values. AI must be adapted to fit within society's existing ethical framework, ensuring it does not impose additional risks or threaten established societal norms. The impact of technology covers areas where AI empowers one group of individuals while displacing others. This existential dilemma arises at every stage of our development and of societal growth or decline. Can AI introduce more disinformation into information ecosystems? Yes. How do we manage that risk at the product level, and how do we educate users and policymakers about it? The answers lie not in the dangers of the technology itself, but in how we package it into products and services. If product teams lack the manpower to look ahead and assess the impact of the technology, we will be stuck in a cycle of fixing the mess.

The integration of AI into products raises questions about product safety and preventing AI-related harm. The development and implementation of AI should prioritize safety and ethical considerations, which requires resource allocation to relevant teams.

To facilitate the emerging discussion on operationalizing AI ethics, I suggest this basic cycle for making AI ethical at the product level:

1. Investigate the legal aspects of AI and how it is regulated, where regulations exist. These include the EU's AI Act, the Digital Services Act, the UK's Online Safety Bill, and the GDPR on data privacy. These frameworks are works in progress and need input from industry frontrunners in emerging tech as well as established leaders. See point (4), which completes the suggested cycle.

2. Consider how we adapt AI-based products to society's norms without imposing additional risks. Does the product affect information security or the job sector, or does it infringe on copyright and IP rights? Create a crisis scenario-based matrix (a minimal sketch follows this list); I draw this approach from my international security background.

3. Determine how to integrate the above into AI-based products. As AI becomes more sophisticated, we must ensure it aligns with society's values and norms. We need to be proactive in addressing ethical considerations and integrating them into AI development and implementation. If AI-based products, such as generative AI, threaten to spread more disinformation, we must introduce mitigation features and moderation, limit access to the core technology, and communicate with users. It is vital to have AI ethics and safety teams behind AI-based products, which requires resources and a company vision.

4. Think about how we can contribute to and shape legal frameworks. Best practices and policy frameworks are not empty buzzwords; they are practical tools that help new technology function as an assistive tool rather than a looming threat. Bringing policymakers, researchers, big tech, and emerging tech to one table is essential for balancing societal and business interests around AI. Legal frameworks must adapt to emerging AI technology, ensuring that they protect individuals and society while also fostering innovation and progress.
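
To make the crisis scenario-based matrix from point (2) more concrete, here is a minimal illustrative sketch in Python. The scenario names, scores, and mitigations are hypothetical examples invented for illustration, not a prescribed methodology or an actual product process.

```python
# A minimal sketch of a crisis scenario-based risk matrix for an AI product.
# All scenarios, scores, and mitigations below are hypothetical examples.
from dataclasses import dataclass


@dataclass
class Scenario:
    name: str         # short description of the crisis scenario
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (negligible) .. 5 (severe)
    mitigation: str   # planned product-level response

    @property
    def risk_score(self) -> int:
        # Classic likelihood-times-impact scoring used in risk matrices.
        return self.likelihood * self.impact


scenarios = [
    Scenario("Voice clone used for impersonation fraud", 3, 5,
             "Consent verification, audio watermarking, takedown process"),
    Scenario("Generated audio spreads political disinformation", 2, 5,
             "Usage policy, provenance metadata, human moderation review"),
    Scenario("Training voice data leaked or reused without consent", 2, 4,
             "Access controls, retention limits, regular security audits"),
]

# Rank scenarios so the ethics and safety team can prioritize mitigations.
for s in sorted(scenarios, key=lambda s: s.risk_score, reverse=True):
    print(f"{s.risk_score:2d}  {s.name} -> {s.mitigation}")
```

The value here is less in the code than in the discipline it imposes: the product team has to name the scenarios, score likelihood and impact, and attach a concrete mitigation to each one before a crisis happens.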

Summary

This is a basic cycle for integrating AI-based emerging technologies into our societies. As we continue to grapple with the complexities of AI ethics, it is essential to remain committed to finding solutions that prioritize safety, ethics, and societal well-being. These are not empty words but the hard work of putting all the puzzle pieces together daily.

These words are based on my own experience and conclusions.

Anna is Head of Ethics and Partnerships at Respeecher, an Emmy-winning voice cloning technology company based in Ukraine. She is a former Policy Advisor at Reface, an AI-powered synthetic media app, and a tech co-founder of the counter-disinformation tool Cappture, funded by the Startup Wise Guys accelerator program. Anna has 11 years of experience in security and defence policies, technologies, and resilience building. She is a former Research Fellow at the International Centre for Defence and Security in Tallinn and at the Prague Security Studies Institute, and has advised major Ukrainian companies on resilience building as part of the Hybrid Warfare Task Force at the Kyiv School of Economics.