ABBYY, a Digital Intelligence company, recently launched a global initiative to promote the development of trustworthy artificial intelligence (AI) technology. As AI becomes ubiquitous across high-value, large-scale consumer and enterprise uses, and as more open source tools for digitizing data become available, the ethical handling of the data used to access and train AI systems is imperative.
How big a concern is AI ethics? A recent study found eight themes that kept recurring. Privacy and accountability were two of the most common, as was AI safety/security. Transparency/explainability was another frequently cited goal, with making AI algorithms more explainable classified as extremely important.
Tackling the current problems associated with AI is so important that, by 2025, Gartner estimates 30 percent of large enterprise and government contracts for the purchase of digital products and services that incorporate AI will require the use of explainable and ethical AI. Furthermore, three-fourths of consumers say they won't buy from unethical companies, while 86 percent say they're more loyal to ethical companies.
These are some of the reasons that ABBYY made public its core guiding principles on developing, maintaining and promoting trustworthy AI technologies and why they advocate for other technology leaders to do the same.
We recently interviewed Andrew Pery, Ethics Evangelist at ABBYY. Here is what he had to say:
“First, there must be an acknowledgment that AI may have the propensity to produce unintentional bias, often referred to as unconscious bias. Mitigating its adverse impact requires a holistic approach, starting with the development of ethical AI guidelines such as those proposed in the OECD ethical frameworks. These rest on foundational principles including human agency and oversight, technical robustness and safety, privacy and governance, transparency, diversity and nondiscrimination, societal well-being, and accountability.
Second, adherence to legal frameworks that protect against automated profiling is key. For example, the EU has had a long history of respecting and protecting privacy rights, with the most recent and powerful expression being the General Data Protection Regulation (GDPR) which specifically protects against automated decision-making.
Third, there needs to be industry accountability for how AI technology is developed, along with a commitment to transparency that does not compromise proprietary IP rights. There are positive developments here. For example, the Partnership on AI is an organization with over 80 members in 13 countries dedicated to research and discussion on key AI issues.”
Andrew went into far more detail on various ethical issues in our in-depth interview.
ABBYY, whose Digital Intelligence solutions leverage AI technologies including machine learning (ML), natural language processing (NLP), neural networks, and optical character recognition (OCR) to transform data, affirmed its commitment to the following principles and advocates for other leading technology organizations to also commit to trustworthy AI standards:
- Incorporating a privacy-by-design principle as an integral part of its software development processes
- Protecting confidential customer and partner data
- Developing AI technologies that meet or exceed industry standards for performance, accuracy and security
- Empowering customers and partners to successfully implement digital transformation in their organizations by delivering solutions that provide a greater understanding of content and processes
- Providing visibility into the performance characteristics and metrics of its technologies, as well as providing opportunities for product feedback
- Delivering AI technologies that are socially and economically beneficial
- Fostering a culture that promotes the ethical use of AI and its social utility
To view ABBYY’s guiding principles and approach to adhering to and advocating for trustworthy AI principles, click here.
To learn more about ABBYY’s suite of Digital Intelligence solutions, please click here.