Andrew Pery, Ethics Evangelist, ABBYY – Interview Series


Andrew Pery is the Ethics Evangelist at ABBYY, a digital intelligence company. ABBYY empowers organizations to access the valuable, yet often hard-to-attain, insights into their operations that enable true business transformation.

ABBYY recently released a Global Initiative Promoting the Development of Trustworthy Artificial Intelligence. We asked Andrew about ethics in AI, abuses of AI, and what the AI industry can do about these concerns moving forward.

What is it that initially instigated your interest in AI ethics?

What initially sparked my interest in AI ethics was a deep interest in the intersection of law and AI technology. I started my career in the criminal justice system, representing indigent defendants who didn’t have the financial means to afford legal services. I then transitioned into technology, focusing on the application of advanced search and AI in the practice of law. As a Certified Data Privacy Professional, I am especially passionate about privacy law and the proper use of AI technology in ways that align with and enhance privacy rights.

Facial recognition technology is often abused by authorities, including the US government. How big a societal concern should this be?

The implications of facial recognition technology should be a significant societal concern, as this technology can impact so many fundamental rights, including economic, social, and security rights.

There is clear and compelling evidence that facial recognition algorithms are unreliable. An NIST study of facial recognition algorithms found that African Americans were up to 100 times more likely to be misidentified. A similar study by Georgetown Law School found that facial recognition algorithms based on mug shots produced a high percentage of false positives for African American individuals, and a 2018 MIT study of Amazon facial recognition software found that while white men were accurately identified 99% of the time, error rates for dark-skinned individuals ran as high as 35%.

It is for these reasons that regulators are starting to take steps to prohibit the use of facial recognition technologies, particularly in the context of surveillance and profiling. For example, the California Senate passed a statewide ban on the use of facial recognition technology and other biometric surveillance methods.

There is also momentum within the tech community to retrench from the marketing of facial recognition software. IBM announced that it will stop selling facial recognition software, citing concerns that it violates basic human rights and freedoms. Microsoft announced a similar decision to discontinue selling its facial recognition technology, likewise citing fundamental human rights.

COMPAS software, which has been used by US courts for criminal risk assessment, has mistakenly labeled visible-minority offenders as more likely to re-offend than white offenders. Knowing this, should using AI in courts even be an option?

Studies of COMPAS found that it is no better at predicting recidivism than untrained people. While predictive analytics delivers significant value in a commercial context, it has been proven to produce inequities when applied to criminal justice. This is evident in a report published by ProPublica, which found that COMPAS, the most popular software used by US courts for criminal risk assessment, mistakenly labeled visible-minority offenders as likely to reoffend at twice the rate of white offenders.

These inequities prompted then Attorney General Eric Holder to question the fairness of applying AI technology to predict recidivism rates within the offender population and to express concern about the potential inequities it raises.

How can society avoid unintentionally programming in negative biases into AI systems?

First, there must be an acknowledgment that AI may have the propensity to produce unintentional bias, sometimes referred to as unconscious bias. Mitigating its adverse impact requires a holistic approach, starting with the development of ethical AI guidelines such as those proposed by the OECD and the EU’s Ethics Guidelines for Trustworthy AI. The latter is based on seven foundational principles: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity and non-discrimination; societal well-being; and accountability.

Second, adherence to legal frameworks that protect against automated profiling is key. For example, the EU has had a long history of respecting and protecting privacy rights, with the most recent and powerful expression being the General Data Protection Regulation (GDPR) which specifically protects against automated decision-making.

Third, there needs to be industry accountability for how AI technology is developed, along with a commitment to provide transparency without compromising companies’ proprietary IP rights. There are positive developments here. For example, the Partnership on AI is an organization with over 80 members in 13 countries dedicated to research and discussion on key AI issues.

Lastly, technology organizations should uphold a commitment to undertake a bias impact assessment before the commercial release of AI algorithms, particularly when their applications impact privacy and security rights. Questions technology leaders should consider before releasing new AI-enabled applications include: Is the data that this technology was trained on sufficiently diverse? How will bias be detected? How is it tested and by whom? What are the developer incentives? Who will gain commercially from it?

It’s important to remember that AI solutions reflect the underlying data these applications are trained on, so ensuring that this data is not biased and properly reflects the diversity of the constituents the solution will serve is critical. In the words of AI researcher Professor Joanna Bryson, “if the underlying data reflects stereotypes, or if you train AI from human culture, you will find bias. And if we’re not careful, we risk integrating that bias into the computer programs that are fast taking over the running of everything from hospitals to schools to prisons – programs that are supposed to eliminate those biases in the first place.”
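One concrete way to act on the bias-assessment questions above is to measure error rates per demographic group before release. The sketch below is purely illustrative (the group names and evaluation data are hypothetical, not from any real system): it compares false positive rates across groups on held-out labeled data, the kind of disparity the facial recognition studies cited earlier reported.

```python
# Minimal sketch of a pre-release bias check: compare false positive
# rates (FPR) across demographic groups. All data is hypothetical.

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN) over binary labels (1 = positive match)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

def fpr_disparity(groups):
    """groups: {name: (y_true, y_pred)} -> (per-group FPRs, max gap)."""
    rates = {g: false_positive_rate(t, p) for g, (t, p) in groups.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical held-out evaluation data for two groups
groups = {
    "group_a": ([0, 0, 0, 0, 1], [0, 0, 0, 1, 1]),  # 1 FP among 4 negatives
    "group_b": ([0, 0, 0, 0, 1], [1, 1, 0, 0, 1]),  # 2 FPs among 4 negatives
}
rates, gap = fpr_disparity(groups)
print(rates, round(gap, 2))  # a large gap signals disparate error rates
```

In practice the threshold for an acceptable gap, and which metric to use (false positive rate, false negative rate, or both), is itself a policy decision that the governance process described in this interview would need to make.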

ABBYY recently launched a Global Initiative promoting the development of Trustworthy Artificial Intelligence. How important is this initiative to the future of ABBYY?

The decision to develop our global initiative for trustworthy AI was driven by our belief that, as an organization whose technologies impact enterprises around the world and serve tens of thousands of individual users, we have an obligation to be good corporate citizens with a social conscience. Not only does this help engender greater trust from our global customers and partners, but it also helps ensure that our technologies continue to have a profound impact in a manner that benefits the common good.

It also makes good business sense to promote and support the trustworthy use of AI. It strengthens our brand equity and engenders trust with our global customers and partner ecosystem. Trust translates into repeat business and sustainable growth.

Part of this initiative is ABBYY committing to fostering a culture that promotes the ethical use of AI and its social utility. What needs to be done for other companies to foster a similar culture?

Technology companies need to build out a framework for the ethical use of AI. The extent of such commitment depends on the intended applications of AI algorithms. The best way to gauge its impact is to apply the “do no harm” principle and maintain a top-down commitment to the trustworthy use of AI.

Equally important is a commitment to transparency. While IP rights ought to be balanced with the principle of trustworthy AI, the importance of transparency and accountability for the responsible use of AI should be clearly articulated and communicated throughout the organization.

One of the initiatives that ABBYY has committed to is delivering AI technologies that are socially and economically beneficial. This can be a difficult balance to achieve. How can companies best implement this?

Leaders can achieve this balance by taking into account recommendations offered by Deloitte: designing ethics into AI starts with determining what matters to stakeholders such as customers, employees, regulators, and the general public. Companies should set up a dedicated AI governance and advisory committee of cross-functional leaders and external advisers. This committee would engage with multi-stakeholder working groups and establish and oversee the governance of AI-enabled solutions, including their design, development, deployment, and use.

How we balance our commercial interests and social good was most recently exemplified during the Coronavirus pandemic. We made many of our technologies available for free or at discounted rates to help individuals and organizations navigate the crisis, maintain business continuity, and support their customers amid unpredictable circumstances.

We also recently made our technology frameworks available through NeoML, our cross-platform open-source machine learning library that enables software developers to leverage proven and reliable models to advance AI innovation.

Could you share your views on using AI in COVID-19 contact-tracing apps and the need to balance the right to privacy?

This is a particularly vexing issue. Some argue that the implementation of mobile tracking applications represents an unwelcome intrusion on privacy rights. On the other hand, rights are not absolute; when it comes to public safety, there is always a trade-off between rights and responsibilities. Containment of a global pandemic requires coordinated action and the application of resources and technologies that can be responsibly used to protect the population at large.

This balance between individual privacy rights and the common good is recognized within a legal framework. For example, the GDPR identifies six grounds for lawful processing of personally identifiable information, including a provision that when processing is necessary for the public interest, such processing of PII is warranted without obtaining data subject consent.

Furthermore, Recital 46 of the GDPR directly stipulates that processing of PII is warranted without obtaining data subject consent in humanitarian emergencies, including epidemics. However, it is important to note that Recital 46 construes the public interest provision very narrowly, noting that such processing should take place only when it cannot be based on another legal basis.

Apart from the balancing of privacy rights with public interest, it is also incumbent on the technology sector and on policymakers to implement ethically sound, transparent, and fair guidelines relating to the use of AI-driven profiling and sharing highly sensitive health information. Major tech companies including Google and Facebook have already begun initiatives to track the movements of users to identify how the virus is spreading and evaluate the effectiveness of social distancing measures.

I believe that as long as the information is anonymized, its use has social utility.

Is there anything else that you would like to share regarding ABBYY’s new Global Initiative Promoting the Development of Trustworthy Artificial Intelligence?

Regulations tend to lag technological innovation; therefore, companies need to take a proactive role in fostering transparency and accountability around AI technologies and how they impact privacy and security rights. Our initiative reflects our commitment in this endeavor. Since our founding, we have created innovative technologies that preserve valuable information and turn it into actionable knowledge.

Furthermore, the ethical use of artificial intelligence technologies shouldn’t be considered only a legal matter but also a moral obligation and a business imperative. If your solutions have a far reach and impact business-critical functions, it should be an industry standard to be transparent in the application of AI technologies.

Thank you so much for answering these important questions. To view ABBYY’s guiding principles and approach to adhering to and advocating for trustworthy AI principles, please click here. Anyone who wishes to learn more about ABBYY in general may visit the ABBYY website.


Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI and blockchain projects. He is the co-founder of Securities.io, a news website focusing on digital securities, and is a founding partner of unite.AI. He is also a member of the Forbes Technology Council.