
Shay Sabhikhi, CEO of CognitiveScale – Interview Series


Akshay (Shay) Sabhikhi is the CEO of CognitiveScale, an enterprise AI software company with solutions that help customers win with intelligent, transparent, and trusted AI/ML-powered digital systems.

Shay is responsible for overall company growth and strategic direction. He has more than 18 years of entrepreneurial leadership, product development, and management experience with growth-stage, venture-backed companies and high-growth software divisions within Fortune 50 companies.

How important is explainable AI?

Explainable AI is critical: not being able to explain a decision is a guaranteed way to erode trust. As many as 80% of AI and ML projects stall due to a lack of trust, transparency, and explainability in models and data. Having the ability to understand and, more importantly, trust the results and output of ML algorithms ensures organizations can accurately describe models, understand their impact, and protect against potential and unforeseen biases. Explainable AI ensures the accuracy, fairness, and transparency of AI-powered decision making and is essential for building trust and confidence when putting models into production. Simply put, explainable AI helps digitally savvy organizations adopt a responsible approach to AI.

Could you go into some detail on when and how CognitiveScale originally pioneered the concept of ‘Trusted AI’?

We created CognitiveScale with one goal in mind: to give businesses confidence and trust in the digital systems they need to thrive. We understood that as businesses increasingly adopted digital systems of engagement, fostering trust in decisioning processes across all stakeholders (customers, employees, and auditors) would be a necessity. So we defined the anchors of trustworthy AI: explainability, personalization, fairness, robustness, and compliance, the principles upon which automated decision-making systems become trusted and responsible. We learned by applying these principles in heavily regulated industries such as healthcare and financial services to drive exponential results for our clients in the form of increased patient engagement, reduced cost of care, and improved customer servicing that builds loyalty. We were also a founding member of the Responsible AI Institute (RAI), a non-profit, independent organization focused on defining industry-specific policies and standards for enterprise adoption of responsible AI. CognitiveScale was also recognized by the World Economic Forum in 2019 as a Technology Pioneer shaping the adoption of responsible AI in regulated industries such as healthcare and financial services.

What are some ways that CognitiveScale works to both offer explainable AI and enhance AI transparency?

By giving businesses the tools to rapidly build, deploy and manage AI systems using trusted AI principles, we ensure process and operational transparency. Trusted AI systems ensure that the data and models being used are representative of the real world and models are free of inherent biases that can skew decision making and reasoning, leading to decisioning errors and unintended consequences. Through the Cortex platform, we’re delivering an end-to-end platform for designing, deploying and managing intelligent systems that are both explainable and responsible, fostering confidence and transparency.

Could you elaborate on how CognitiveScale’s Cortex Platform is able to leverage virtually any data and any Black Box model, to offer layered AI control, automated build, and ready-made applications?

Cortex Fabric is our low-code developer platform for automating the development of trusted AI applications, acting as an AI middleware layer. It deploys on any cloud (AWS, Azure, Google, IBM Cloud, or on-premises) and works by simplifying the integration of data, models, rules, and even analytics built on legacy enterprise systems. With over 127 connectors, Fabric connects to data from first- and third-party sources, both batch and streaming, and stitches them into a semantic data fabric focused on entities such as customers, products, and employees. It draws unique and personalized inferences via machine learning models to identify intelligent insights that are then delivered into systems of engagement. As the AI middleware layer, it is tightly integrated into our AI governance layer, delivered through Cortex Certifai, enabling continuous, end-to-end governance of AI applications to detect and remediate AI risks such as fairness, explainability, robustness, and performance. Cortex also offers industry-specific blueprints, called Cortex Application Blueprints, that speed the time to build and deploy trusted AI applications. Last month we launched Fabric Version 6, expanding functionality by making it easy for citizen developers to design and track goal-driven AI apps, aligned to business KPIs, for even faster time to value.

With CognitiveScale’s Cortex Certifai, enterprises can build trust into their digital systems by detecting and scoring black-box model risk. Could you elaborate on this solution and how it works to create the first-ever composite trust score, the AI Trust Index?

Cortex Certifai is our model intelligence platform for designing and deploying transparent, fair, and performant AI systems while reducing the business risk associated with hidden bias in AI models, ensuring they are explainable and resilient to changes in data and data quality. Cortex Certifai’s visual interface lets key stakeholders (data scientists, audit and compliance departments, line-of-business owners) understand, evaluate, and remediate potential AI business risk. We provide a composite score called the AI Trust Index across four dimensions of AI risk (bias, explainability, robustness, performance) that can be configured by an organization based on industry standards and policies. We partner with the Responsible AI Institute (RAI) as an independent body to help organizations interpret and score their business risk against regulatory compliance standards. Cortex Certifai works with any black box model, including machine learning models, statistical models, business rules, and other predictive models.
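To make the idea of a composite score across configurable risk dimensions concrete, here is a minimal, hypothetical sketch. The dimension names come from the interview (bias, explainability, robustness, performance), but the weighted-average formula, the weights, and the score values are illustrative assumptions, not CognitiveScale's actual AI Trust Index methodology.

```python
# Hypothetical composite trust score: a weighted average over four
# risk dimensions, each scored on a 0-100 scale. The weights stand in
# for the organization-configurable policies mentioned in the interview.

def trust_index(scores: dict, weights: dict) -> float:
    """Return the weighted average of per-dimension scores."""
    total_weight = sum(weights.values())
    return sum(scores[dim] * weights[dim] for dim in scores) / total_weight

# Illustrative per-dimension scores for a single model.
scores = {"bias": 82.0, "explainability": 91.0,
          "robustness": 75.0, "performance": 88.0}

# Illustrative weights: this organization prioritizes bias and explainability.
weights = {"bias": 0.3, "explainability": 0.3,
           "robustness": 0.2, "performance": 0.2}

print(round(trust_index(scores, weights), 1))  # prints 84.5
```

A different organization could shift the weights (say, toward robustness for a fraud model) and recompute the same index against its own policy, which is the configurability the interview describes.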

How can AI applications help with improving patient outcomes with predictive and personalized healthcare?

AI in healthcare is having a profound impact on patient outcomes, service, and operational costs. Cortex’s pre-built AI applications in healthcare can accelerate member and patient engagement in areas such as care optimization (cost of care, Star Rating improvement) and service experience (chatbots, intelligent call routing, self- or agent-assisted service), delivering predictive, proactive, and personalized interactions for improved outcomes.

The power of CognitiveScale’s personalization comes from our Profile-of-One technology, a rich and unique knowledge base built for every entity (customers, products, agents). The Profile-of-One drives hyper-personalized interventions for outreach, in the form of smart recommendations designed to change the trajectory or course of action for the better. In other words, by combining declared attributes, observed behaviors, and inferences, our Profile-of-One delivers personalized and contextual outreach, whether it’s helping members find a more suitable healthcare plan, book an appointment at a time they’re likely to keep, or discover a provider that’s closer to home. These intelligent interventions are then delivered into existing systems of engagement, including mobile apps, web portals, and care management platforms.

What are some ways AI Applications improve client engagement by lowering operating costs through predictive, proactive and personalized care?

Within the CognitiveScale Cortex platform, our Profile-of-One technology helps drive proactive and personalized interventions. These proactive interventions improve engagement by providing customers with timely information delivered how they want it, e.g., care insights sent to your smartphone alerting you to required medications, or timely product recommendations in banking or insurance. These personalized interventions not only improve engagement and loyalty, but also reduce costs, e.g., hospitalizations caused by negligent care. In marketing, AI-powered lead generation provides insight into specific target markets or cohorts and lead attributes, which can be used to determine more accurate keyword spend and marketing activities. An AI-powered intelligent sales assistant can boost salesperson productivity and conversion through prescriptive recommendations, proactive alerts, and market intelligence.

Is there anything else that you would like to share about CognitiveScale?

From the early days of the company, we’ve developed an ethos and culture of solving hard problems that deliver tremendous value for our customers. We have proven this in heavily regulated industries (healthcare and financial services), and our AI is being used to serve more than 90 million healthcare members in the United States across leading healthcare organizations. Our technology is backed by over 170 patents filed, with roughly 100 granted, giving us one of the leading AI patent portfolios among all companies, private and public. We were recognized by the World Economic Forum as a Technology Pioneer in 2019 for our work to promote responsible AI, and we continue to partner with them to further its adoption globally.

Thank you for the great interview, readers who wish to learn more should visit CognitiveScale.

Antoine Tardif is a futurist who is passionate about the future of AI and robotics. He is the CEO of BlockVentures.com and has invested in over 50 AI and blockchain projects. He is the co-founder of Securities.io, a news website focusing on digital assets, digital securities, and investing. He is a founding partner of unite.AI and a member of the Forbes Technology Council.