Andrew Pery, Ethics Evangelist, ABBYY – Interview Series

Andrew Pery is the Ethics Evangelist at ABBYY, a digital intelligence company that empowers organizations to access the valuable, yet often hard-to-attain, insight into their operations that enables true business transformation.

ABBYY recently released a Global Initiative Promoting the Development of Trustworthy Artificial Intelligence. We decided to ask Andrew questions regarding ethics in AI, abuses of AI, and what the AI industry can do about these concerns moving forward.

What is it that initially instigated your interest in AI ethics?

What initially sparked my interest in AI ethics was a deep interest in the intersection of law and AI technology. I started my career in the criminal justice system, representing indigent defendants who didn’t have the financial means to afford legal services. I then transitioned into technology, focusing on the application of advanced search and AI in the practice of law. As a Certified Data Privacy professional, I am especially passionate about privacy law and the proper use of AI technology that aligns with and enhances privacy rights.

Facial recognition technology is often abused by authorities, including the US government. How big of a societal concern should this be?

The implications of facial recognition technology should be a significant societal concern as this technology can impact so many fundamental rights including economic, social and security rights.

There is clear and compelling evidence that facial recognition algorithms are unreliable. A NIST study of facial recognition algorithms found that African Americans were up to 100 times more likely to be misidentified. A similar study by Georgetown Law School found that facial recognition algorithms based on mug shots produced a high percentage of false positives for African American individuals, and a 2018 MIT study of Amazon’s facial recognition software found that while white men were accurately identified 99% of the time, dark-skinned individuals generated up to 35% false positives.

It is for these reasons that regulators are starting to take steps to prohibit the use of facial recognition technologies, particularly in the context of surveillance and profiling. For example, the California Senate passed a statewide ban on the use of facial recognition technology and other biometric surveillance methods.

There is also momentum within the tech community to retrench from the marketing of facial recognition software. IBM announced that it will stop selling facial recognition software, citing that it violates basic human rights and freedoms. Microsoft announced a similar decision to discontinue selling its facial recognition technology, also citing fundamental human rights.

COMPAS software, which has been used by US courts for criminal risk assessment, has mistakenly labeled visible-minority offenders as more likely to re-offend than white offenders. Knowing this, should using AI in courts even be an option?

Studies of COMPAS found it is no better at predicting crimes than random people. While predictive analytics delivers significant value in a commercial context, it has been proven to result in inequities when applied to criminal justice. This is evident in a report published by ProPublica, which found that COMPAS, the most popular software used by US courts for criminal risk assessment, mistakenly labeled visible-minority offenders as likely to reoffend at twice the rate of white offenders.

These inequities prompted then-Attorney General Eric Holder to question the fairness of applying AI technology to predict recidivism rates within the offender population and to express concerns about the potential inequities it raises.

How can society avoid unintentionally programming in negative biases into AI systems?

First, there must be an acknowledgment that AI may have the propensity to produce unintentional bias, referred to as unconscious bias. Mitigating its adverse impact requires a holistic approach, starting with the development of ethical AI guidelines such as those proposed in the OECD ethical frameworks. These are based on seven foundational principles: human agency and oversight, technical robustness and safety, privacy and governance, transparency, diversity and nondiscrimination, societal well-being, and accountability.

Second, adherence to legal frameworks that protect against automated profiling is key. For example, the EU has had a long history of respecting and protecting privacy rights, with the most recent and powerful expression being the General Data Protection Regulation (GDPR) which specifically protects against automated decision-making.

Third, there needs to be industry accountability for how AI technology is developed and a commitment to provide transparency without impacting proprietary IP rights. There are positive developments here. For example, the Partnership on AI is an organization with over 80 members in 13 countries dedicated to research and discussion on key AI issues.

Lastly, technology organizations should uphold a commitment to undertake a bias impact assessment before the commercial release of AI algorithms, particularly when their applications impact privacy and security rights. Questions technology leaders should consider before releasing new AI-enabled applications include: Is the data that this technology was trained on sufficiently diverse? How will bias be detected? How is it tested and by whom? What are the developer incentives? Who will gain commercially from it?

It’s important to remember that AI solutions reflect the underlying data these applications are trained on, so ensuring that this data is not biased and properly reflects the diversity of the constituents which the solution will serve is critical.  In the words of AI researcher Professor Joanna Bryson, “if the underlying data reflects stereotypes, or if you train AI from human culture, you will find bias. And if we’re not careful, we risk integrating that bias into the computer programs that are fast taking over the running of everything from hospitals to schools to prisons – programs that are supposed to eliminate those biases in the first place.”
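To make the kind of bias impact assessment described above concrete, here is a minimal illustrative sketch in Python. It assumes a pandas DataFrame with hypothetical columns "group" (a protected attribute), "label" (ground truth), and "pred" (the model's binary decision); the column names and ratios are examples for illustration, not part of ABBYY's or any regulator's methodology.

```python
import pandas as pd

def bias_report(df: pd.DataFrame) -> pd.DataFrame:
    """Per-group representation, selection rate, and false-positive rate."""
    rows = []
    for group, sub in df.groupby("group"):
        negatives = sub[sub["label"] == 0]  # true negatives in this group
        rows.append({
            "group": group,
            "n": len(sub),                                    # how well the group is represented
            "selection_rate": sub["pred"].mean(),             # share of positive decisions
            "false_positive_rate": negatives["pred"].mean(),  # positive decisions among true negatives
        })
    report = pd.DataFrame(rows)
    # Disparate-impact style ratio: each group's selection rate vs. the highest one.
    report["selection_ratio_vs_max"] = report["selection_rate"] / report["selection_rate"].max()
    return report

# Hypothetical usage:
# df = pd.DataFrame({"group": ["a", "a", "b"], "label": [0, 1, 0], "pred": [0, 1, 1]})
# print(bias_report(df))
```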

ABBYY recently launched a Global Initiative promoting the development of Trustworthy Artificial Intelligence. How important is this initiative to the future of ABBYY?

The decision to develop our global initiative for trustworthy AI is an important decision driven by our belief that as an organization whose technologies impact enterprises around the world and serve tens of thousands of individual users, we have an obligation to be good corporate citizens with a social conscience. Not only does this help engender greater trust from our global customers and partners, but it also helps ensure that our technologies continue to have a profound impact in a manner that benefits the common good.

It also makes good business sense to promote and support the trustworthy use of AI. It strengthens our brand equity and engenders trust with our constituency of global customers and our partner ecosystem. Trust translates into repeat business and sustainable growth.

Part of this initiative is ABBYY committing to fostering a culture that promotes the ethical use of AI and its social utility. What needs to be done for other companies to foster a similar culture?

Technology companies need to build out a framework for the ethical use of AI. The extent of such a commitment depends on the intended applications of AI algorithms. The best way to gauge its impact is to apply the “do no harm” principle and have a top-down commitment to the trustworthy use of AI.

Equally important is a commitment to transparency. While IP rights ought to be balanced with the principle of trustworthy AI, the importance of transparency and accountability for the responsible use of AI should be clearly articulated and communicated throughout the organization.

One of the initiatives that ABBYY has committed to is delivering AI technologies that are socially and economically beneficial. This can be a difficult balance to achieve. How can companies best implement this?

Leaders can achieve this balance by taking into account recommendations offered by Deloitte: designing ethics into AI starts with determining what matters to stakeholders such as customers, employees, regulators, and the general public. Companies should set up a dedicated AI governance and advisory committee composed of cross-functional leaders and external advisers that engages with multi-stakeholder working groups and establishes and oversees governance of AI-enabled solutions, including their design, development, deployment, and use.

How we balance our commercial interests and social good was most recently exemplified during the Coronavirus pandemic. We made many of our technologies available for free or at discounted rates to help individuals and organizations navigate the crisis, maintain business continuity, and support their customers amid unpredictable circumstances.

We also recently made our technology frameworks available through NeoML, our cross-platform open-source machine learning library that enables software developers to leverage proven and reliable models to advance AI innovation.

Could you share your views on using AI in COVID-19 contact-tracing apps and the need to balance this with the right to privacy?

This is a particularly vexing issue. There are those who argue that the implementation of mobile tracking applications represents an unwelcome intrusion on privacy rights. On the other hand, when it comes to public safety, there are always limits on rights; it’s a trade-off between rights and responsibilities, and rights are not absolute. Containment of a global pandemic requires coordinated action and the application of resources and technologies that can be responsibly used to protect the population at large.

This balance between individual privacy rights and the common good is recognized within a legal framework. For example, the GDPR identifies six grounds for lawful processing of personally identifiable information, including a provision that when processing is indispensable for the public interest, such processing of PII is warranted without obtaining data subject consent.

Furthermore, Recital 46 of the GDPR directly stipulates that processing of PII is warranted without obtaining data subject consent in humanitarian emergencies, including epidemics. However, it is important to note that Recital 46 construes the application of the public interest provision very narrowly, noting that it should only be relied upon when processing cannot be based on another legal basis.

Apart from the balancing of privacy rights with public interest, it is also incumbent on the technology sector and on policymakers to implement ethically sound, transparent, and fair guidelines relating to the use of AI-driven profiling and sharing highly sensitive health information. Major tech companies including Google and Facebook have already begun initiatives to track the movements of users to identify how the virus is spreading and evaluate the effectiveness of social distancing measures.

I believe that as long as the information is anonymized, its use has social utility.
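As an illustration of what “anonymized” can mean in practice, the sketch below aggregates individual records into coarse counts and suppresses small groups, a common k-anonymity-style safeguard. The column names ("region", "day") and the minimum group size are hypothetical and not drawn from any specific contact-tracing app.

```python
import pandas as pd

K = 10  # minimum number of people a released count may describe (assumption)

def anonymized_counts(records: pd.DataFrame) -> pd.DataFrame:
    """Release coarse, thresholded counts instead of individual movement traces."""
    counts = (
        records.groupby(["region", "day"])  # hypothetical columns
        .size()
        .reset_index(name="people")
    )
    # Suppress cells that would describe fewer than K individuals.
    return counts[counts["people"] >= K]
```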

Is there anything else that you would like to share regarding ABBYY’s new Global Initiative Promoting the Development of Trustworthy Artificial Intelligence?

Regulations tend to lag technological innovation; therefore, companies need to take a proactive role in fostering transparency and accountability around AI technologies and how they impact privacy and security rights. Our initiative reflects our commitment to this endeavor. Since our founding, we have created innovative technologies that preserve valuable information and turn it into actionable knowledge.

Furthermore, the ethical use of artificial intelligence technologies shouldn’t be considered only a legal matter but also a moral obligation and a business imperative. If your solutions have far reach and impact business-critical functions, it should be an industry standard to be transparent about the application of AI technologies.

Thank you so much for answering these important questions.  To view ABBYY’s guiding principles and approach to adhering to and advocating for trustworthy AI principles please click here. Anyone who wishes to learn more about ABBYY in general may visit the ABBYY website.

Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI & blockchain projects. He is also the co-founder of Securities.io, a news website focusing on digital securities, and a founding partner of unite.ai.

AI Ethics Coalition Condemn Criminality Prediction Algorithms

On Tuesday, a number of AI researchers, ethicists, data scientists, and social scientists released a blog post arguing that academic researchers should stop pursuing research that endeavors to predict the likelihood that an individual will commit a criminal act based upon variables like crime statistics and facial scans.

The blog post was authored by the Coalition for Critical Technology, which argued that the utilization of such algorithms perpetuates a cycle of prejudice against minorities. Many studies of the efficacy of face recognition and predictive policing algorithms find that the algorithms tend to judge minorities more harshly, which the authors of the blog post argue is due to the inequities in the criminal justice system. The justice system produces biased data, and therefore the algorithms trained on this data propagate those biases, the Coalition for Critical Technology argues. The coalition also argues that the very notion of “criminality” is often based on race, and therefore research on these technologies assumes a neutrality in the algorithms that in truth does not exist.

As The Verge reports, a large publisher of academic works, Springer, was planning to publish a study entitled “A Deep Neural Network Model to Predict Criminality using Image Processing”. The authors of the study claimed to have engineered a facial recognition algorithm capable of predicting the chance that an individual would commit a crime with no bias and approximately 80% accuracy. Yet the Coalition for Critical Technology penned an open letter to Springer, urging the publisher to refrain from publishing the study or future studies involving similar research.

“The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world,” argues the coalition.

Springer stated that it would not be publishing the paper, as reported by MIT Technology Review. Springer stated that the paper was submitted for an upcoming conference, but after the peer review process the paper was rejected for publication.

The Coalition for Critical Technology argues that the criminality prediction paper is just a single instance of a larger, harmful trend in which AI engineers and researchers try to predict behavior based on data composed of sensitive, socially constructed variables. The coalition also argues that much of this research is based on scientifically dubious ideas and theories that are not supported by the available evidence in biology and psychology. As an example, researchers from Princeton and Google published an article warning that algorithms claiming to predict criminality based on facial features are grounded in discredited and dangerous pseudosciences like physiognomy. The researchers warned against letting machine learning be used to reinvigorate long-debunked theories used to support racist systems.

The recent momentum of the Black Lives Matter movement has prompted many companies utilizing facial recognition algorithms to re-evaluate their use of these systems. Research has found that these algorithms are frequently biased, based on non-representative, biased training data.

In addition to arguing that AI researchers should forgo research on criminality prediction algorithms, the signatories of the letter recommend that researchers re-evaluate how the success of AI models is judged. The coalition members recommend that the societal impact of algorithms be a metric for success, in addition to metrics like precision, recall, and accuracy. As the authors write:

“If machine learning is to bring about the ‘social good’ touted in grant proposals and press releases, researchers in this space must actively reflect on the power structures (and the attendant oppressions) that make their work possible.”
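One way to act on that recommendation is to report per-group metrics next to the usual aggregate ones. The sketch below, using scikit-learn purely for illustration, computes overall accuracy, precision, and recall together with a simple recall gap across groups; the array and group names are hypothetical and the labels are assumed to be binary.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

def evaluate_with_groups(y_true, y_pred, groups):
    """Standard metrics plus per-group recall; inputs are NumPy arrays with 0/1 labels."""
    overall = {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
    }
    per_group = {}
    for g in np.unique(groups):
        mask = groups == g
        per_group[g] = {
            "recall": recall_score(y_true[mask], y_pred[mask]),
            "positive_rate": float(np.mean(y_pred[mask])),
        }
    # A single disparity figure: the largest gap in recall between any two groups.
    recalls = [m["recall"] for m in per_group.values()]
    overall["max_recall_gap"] = max(recalls) - min(recalls)
    return overall, per_group
```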

Researchers Believe AI Can Be Used To Help Protect People’s Privacy


Two professors of information science have recently published a piece in The Conversation, arguing that AI could help preserve people’s privacy, rectifying some of the issues that it has created.

Zhiyuan Chen and Aryya Gangopadhyay argue that artificial intelligence algorithms could be used to defend people’s privacy, counteracting some of the many privacy concerns other uses of AI have created. Chen and Gangopadhyay acknowledge that many of the AI-driven products we use for convenience wouldn’t work without access to large amounts of data, which at first glance seems at odds with attempts to preserve privacy. Furthermore, as AI spreads out into more and more industries and applications, more data will be collected and stored in databases, making breaches of those databases tempting. However, Chen and Gangopadhyay believe that when used correctly, AI can help mitigate these issues.

Chen and Gangopadhyay explain in their post that the privacy risks associated with AI come from at least two different sources. The first source is the large datasets collected to train neural network models, while the second privacy threat is the models themselves. Data can potentially “leak” from these models, with the behavior of the models giving away details about the data used to train them.

Deep neural networks are comprised of multiple layers of neurons, with each layer connected to the layers around it. The individual neurons, as well as the links between them, encode different bits of the training data. The model may prove to be too good at remembering patterns in the training data, even if it isn’t overfitting. Traces of the training data exist within the network, and malicious actors may be able to ascertain aspects of it, as researchers at Cornell University found in one of their studies. The Cornell researchers found that facial recognition algorithms could be exploited by attackers to reveal which images, and therefore which people, were used to train the face recognition model. They discovered that even if an attacker doesn’t have access to the original model used to train the application, the attacker may still be able to probe the network and determine whether a specific person was included in the training data simply by using models that were trained on highly similar data.
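The simplest form of such a membership probe exploits the fact that models are often more confident on examples they were trained on. The hedged sketch below illustrates the idea with a confidence threshold; it assumes a scikit-learn-style model exposing predict_proba and is not the attack used in the Cornell study.

```python
import numpy as np

def likely_in_training_set(model, x, threshold=0.95):
    """Guess membership from the model's confidence on a single feature vector x."""
    probs = model.predict_proba(x.reshape(1, -1))[0]  # assumes a predict_proba API
    return bool(np.max(probs) >= threshold)           # high confidence -> likely a "member"
```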

Some AI models are currently being used to protect against data breaches and try to ensure people’s privacy. AI models are frequently used to detect hacking attempts by recognizing the patterns of behavior that hackers use to penetrate security methods. However, hackers often change up their behavior to try and fool pattern-detecting AI.

New methods of AI training and development aim to make AI models and applications less vulnerable to hacking methods and security evasion tactics. Adversarial learning endeavors to train AI models on simulations of malicious or harmful inputs and, in doing so, make the model more robust to exploitation, hence the “adversarial” in the name. According to Chen and Gangopadhyay, their research has discovered methods of combating malware designed to steal people’s private information. The two researchers explained that one of the methods they found to be most effective at resisting malware was the introduction of uncertainty into the model. The goal is to make it more difficult for bad actors to anticipate how the model will react to any given input.
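Chen and Gangopadhyay do not detail their method in the piece, so the following is only a toy approximation of the two ideas: training on noise-perturbed copies of the inputs (a crude stand-in for adversarial examples) and adding a small random jitter to the scores the model returns, so its responses are harder for an attacker to anticipate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_with_perturbations(X, y, noise_std=0.1, copies=3, seed=0):
    """Fit a classifier on the data plus several noise-perturbed copies of it."""
    rng = np.random.default_rng(seed)
    X_aug = np.vstack([X] + [X + rng.normal(0.0, noise_std, X.shape) for _ in range(copies)])
    y_aug = np.concatenate([y] * (copies + 1))
    return LogisticRegression(max_iter=1000).fit(X_aug, y_aug)

def jittered_score(model, x, jitter=0.01, rng=None):
    """Return a slightly randomized 'malicious' probability for one sample."""
    if rng is None:
        rng = np.random.default_rng()
    p = model.predict_proba(x.reshape(1, -1))[0, 1]
    return float(np.clip(p + rng.normal(0.0, jitter), 0.0, 1.0))
```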

Other methods of utilizing AI to protect privacy include minimizing data exposure when the model is created and trained, as well as probing to discover the network’s vulnerabilities. When it comes to preserving data privacy, federated learning can help protect sensitive data, as it allows a model to be trained without the training data ever leaving the local devices that contain it, insulating the data and much of the model’s parameters from spying.
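A minimal federated-averaging sketch illustrates why the data never has to leave the device: each client computes a local update and only the resulting weights are sent back and averaged. The toy linear model and function names below are illustrative; production systems layer secure aggregation and differential privacy on top.

```python
import numpy as np

def local_update(weights, X_local, y_local, lr=0.01, epochs=5):
    """One device: a few gradient steps of least-squares regression on local data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X_local.T @ (X_local @ w - y_local) / len(y_local)
        w -= lr * grad
    return w

def federated_round(weights, devices):
    """Server: average the weight vectors returned by each device; raw data never moves."""
    updates = [local_update(weights, X, y) for X, y in devices]
    return np.mean(updates, axis=0)

# Hypothetical usage:
# devices = [(X_1, y_1), (X_2, y_2)]               # each pair stays on its own device
# weights = federated_round(np.zeros(5), devices)  # one round of federated averaging
```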

Ultimately, Chen and Gangopadhyay argue that while the proliferation of AI has created new threats to people’s privacy, AI can also help protect privacy when designed with care and consideration.

ABBYY Launches Global Initiative Promoting the Development of Trustworthy AI


ABBYY, a Digital Intelligence company, recently launched a global initiative to promote the development of trustworthy artificial intelligence (AI) technology. As AI becomes ubiquitous across high-value and large-scale consumer and enterprise uses, and as more open-source tools become available for digitizing data, the ethical accessing and use of training data is imperative.

How big of a concern are AI ethics? In a recent study, eight themes kept recurring. Privacy and accountability were two of the most commonly appearing ethical themes, as was AI safety/security. Transparency/explainability was also a commonly cited goal, with making AI algorithms more explainable classified as extremely important.

Tackling the current problems associated with AI is so important that, by 2025, Gartner estimates 30 percent of large enterprise and government contracts for the purchase of digital products and services that incorporate AI will require the use of explainable and ethical AI. Furthermore, three-fourths of consumers say they won’t buy from unethical companies, while 86% say they’re more loyal to ethical companies.

These are some of the reasons ABBYY made public its core guiding principles on developing, maintaining, and promoting trustworthy AI technologies, and why it advocates for other technology leaders to do the same.

We recently interviewed Andrew Pery, Ethics Evangelist at ABBYY. He had this to say:

“First, there must be an acknowledgment that AI may have the propensity to produce unintentional bias, referred to as unconscious bias. Mitigating its adverse impact requires a holistic approach, starting with the development of ethical AI guidelines such as those proposed in the OECD ethical frameworks. These are based on seven foundational principles: human agency and oversight, technical robustness and safety, privacy and governance, transparency, diversity and nondiscrimination, societal well-being, and accountability.

Second, adherence to legal frameworks that protect against automated profiling is key. For example, the EU has had a long history of respecting and protecting privacy rights, with the most recent and powerful expression being the General Data Protection Regulation (GDPR) which specifically protects against automated decision-making.

Third, there needs to be industry accountability for how AI technology is developed and a commitment to provide transparency without impacting proprietary IP rights. There are positive developments here. For example, the Partnership on AI is an organization with over 80 members in 13 countries dedicated to research and discussion on key AI issues.”

Andrew went into far more detail regarding various ethical issues in our in-depth interview.

ABBYY, whose Digital Intelligence solutions leverage AI technologies including machine learning (ML), natural language processing (NLP), neural networks, and optical character recognition (OCR) to transform data, affirmed its commitment to the following principles and advocates for other leading technology organizations to also commit to trustworthy AI standards:

  • Incorporating a privacy-by-design principle as an integral part of its software development processes
  • Protecting confidential customer and partner data
  • Developing AI technologies that meet or exceed industry standards for performance, accuracy and security
  • Empowering customers and partners to successfully implement digital transformation in their organizations by delivering solutions that provide a greater understanding of content and processes
  • Providing visibility into the performance characteristics and metrics of its technologies, as well as providing opportunities for product feedback
  • Delivering AI technologies that are socially and economically beneficial
  • Fostering a culture that promotes the ethical use of AI and its social utility

To view ABBYY’s guiding principles and approach to adhering to and advocating for trustworthy AI principles, click here.

To learn more about ABBYY’s suite of Digital Intelligence solutions, please click here.
