Brian Sathianathan, Chief Technology Officer & Co-Founder of Iterate.ai – Interview Series




Brian Sathianathan is the Chief Technology Officer and a co-founder at Iterate.ai, creator of the Interplay low-code platform for rapidly building AI-based applications across industries. Previously, Sathianathan worked at Apple on various emerging technology projects, including the Mac operating system and the first iPhone.

What initially attracted you to working with AI technologies?

I always had an interest in algorithm-driven learning, and I started working with AI systems during my college days. In addition, I spent quite a lot of time early in my career building cryptography and other security technologies at Apple, and video compression technologies at a prior company I co-founded. Both video and crypto are very algorithm-intensive, and that made my AI/ML learning curve much faster. Around 2016, I started to experiment with open-source AI frameworks and GPUs, and realized how far they had come in the previous five years – both from an algorithm perspective and in their ability to do a broader range of classifications. That's when I saw the need to make this technology easier and simpler for everyone to use.

You have some strong views on cognitive bias and data bias in AI, could you share these concerns?

AI bias occurs when engineers let their own viewpoints and preconceptions shape their AI training data sets. Doing so quickly undermines what they’re trying to accomplish with AI. Most often, this influence is subconscious, so they might not even be aware bias has seeped into their data sets. Without effective checks and balances, data can be constrained to only those points of focus or demographics that engineers are prone to consider. Even when engineers have a high quality and volume of data to work with, biases in data sets can render the results delivered by AI applications incorrect and, in many cases, largely useless.

A Gartner report estimated that through 2030, 85% of AI projects will provide false results due to bias. That’s a big gap to overcome. Businesses that invest in, trust, and make strategic decisions based on AI – only to be misled by false conclusions rooted in bias – risk high-cost failures and damage to their reputations. With AI rapidly shifting from an emerging technology to an omnipresent cornerstone across both customer-facing applications and internal processes, removing bias is essential to realizing AI’s true potential going forward.

What are some ways to prevent these types of biases from showing up?

AI bias must be systematically and proactively detected and removed. Biases might be hardcoded into algorithms. Inaccuracies might be introduced via cognitive biases that simply omit necessary data. Aggregation bias is yet another risk here, where a series of small decisions add up to skewed AI results.

Detecting and eliminating AI bias in all its forms requires organizations to utilize frameworks, toolkits, processes, and policies built to effectively mitigate these issues. For example, AI frameworks such as the Aletheia Framework from Rolls-Royce and Deloitte's AI framework – supplemented by automatically enforced benchmarks – can promote bias-free practices across AI application development and deployment. Toolkits like AI Fairness 360 and IBM Watson OpenScale can recognize and remove bias and bias patterns in machine learning models and pipelines. Finally, processes that test data against defined bias metrics, combined with policies that provide governance to deter bias through enforced practices, enable organizations to be systematic in checking their blind spots and curtailing AI bias.
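One of the bias metrics that toolkits like AI Fairness 360 formalize is "disparate impact" – the ratio of favorable-outcome rates between unprivileged and privileged groups. As a rough illustration only (plain Python on made-up data, not the toolkit's actual API), the metric can be computed directly:

```python
# Illustrative sketch of the "disparate impact" bias metric that toolkits
# such as AI Fairness 360 formalize -- NOT the toolkit's actual API.
# A value near 1.0 means both groups receive favorable outcomes at similar
# rates; values far below 1.0 suggest the model favors the privileged group.

def disparate_impact(outcomes, groups, privileged="privileged"):
    """Ratio of favorable-outcome rates: unprivileged rate / privileged rate."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    priv_rate = sum(priv) / len(priv)
    unpriv_rate = sum(unpriv) / len(unpriv)
    return unpriv_rate / priv_rate

# Hypothetical loan-approval predictions (1 = approved)
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["privileged"] * 5 + ["unprivileged"] * 5
print(round(disparate_impact(outcomes, groups), 2))  # 0.8 priv rate vs 0.2 unpriv rate -> 0.25
```

In practice, a policy might require this ratio to stay within an agreed band (a common rule of thumb is 0.8 to 1.25) before a model is promoted to production.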

You’re the CTO and a co-founder at Iterate.ai – how did it get started?

That story begins in 2013, when co-founder Jon Nordmark (our CEO) and I both served as board members of an Eastern European accelerator based in Ukraine, designed to help entrepreneurs there build and operate Silicon Valley-style startups. Those experiences with amazingly innovative new companies led us to the idea of pairing promising (but perhaps less known) startups with large enterprises in need of innovation support. We subsequently launched what was then called Iterate Studio, offering a specialized search engine for enterprises to find startup partners based on the innovative capabilities those larger organizations were seeking. In 2015, the company became Iterate.ai to highlight our AI-driven startup curation. Today, our Signals database indexes more than 15.7 million startup technologies based on myriad factors (and using proprietary AI to make it happen at that scale).

We expanded in 2017 and launched the first version of our Interplay low-code application development platform. Interplay provides an AI-fueled software layer that modernizes enterprises’ legacy stacks by enabling drag-and-drop utilization of innovative technologies while accelerating software development tenfold. The low-code platform has 475 pre-built components, so users can mix and match the technologies they need to quickly spin up applications. AI empowerment is at the core of the platform, alongside other low-code components for IoT, data integration, and even blockchain.

Interplay is a low-code platform for developing AI-fueled applications; what are some of the AI applications that can be built?

Our low-code platform has enabled AI applications for a really interesting variety of use cases – the breadth of deployment is something we’re proud of. Ulta Beauty, the billion-dollar global beauty retailer, used our platform to build a smart AI retail guest chatbot in just two weeks. In contrast, primitive chatbots are keyword-centric, and most vendor chatbot applications can’t integrate seamlessly with legacy systems to access customer information or allow smooth transitions to human-assisted support. Ulta’s smart AI chatbot eliminated those issues with natural language processing functionality and the ability to recognize customer “intents” to provide highly accurate responses. Our platform made it simple for Ulta to build the chatbot’s AI engine in just hours, and to configure, refine, and improve the chatbot’s training and responses extremely rapidly.

In another example, Jockey utilized our platform to enable AI-powered FAQs ready to automatically (and successfully) respond to rather complex and subjective customer service scenarios. Our platform also enabled a global convenience store and gas network’s pandemic response of touchless gas pumps, relying on AI-based image recognition of customer license plates. Our AI capabilities are also being applied to empower camera-centric security strategies at retail locations. Through image recognition, trained AI applications can identify threats and the presence of weapons outside of storefronts, trigger store lockdowns to protect customers, and contact authorities.

How small are the actual coding requirements? How much development skill do users need to have?

In my opinion, the 80/20 rule applies: 80% of applied AI use cases are already built and have established models and training data around them. A traditional organization can easily use a low-code platform (ours, Interplay, is one such platform) to implement these cases. Here are some examples:

  • AI-driven FAQs
  • AI-powered product finders
  • Product recommendations and bundling
  • OCR
  • Visual product identification
  • Tabular data analysis: things like AOV, basket analysis, churn predictions, etc.
  • Object extraction/detection
  • Object permanence

The above cases could be implemented by an engineer with server-side programming knowledge and a basic understanding of machine learning APIs. It’s very similar to the video streaming, cryptography, and key management techniques that are widely used via APIs today – most engineers who use those APIs don’t know how they work underneath.
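To make the first list item concrete, here is a deliberately tiny sketch of the server-side shape of an AI-driven FAQ. It uses simple token overlap in place of a real NLP model or API, and the questions and function names are hypothetical – it is not how Interplay or any vendor implements this, just the kind of logic an engineer wires up around an ML API call:

```python
# Toy sketch of an AI-driven FAQ endpoint's matching logic.
# A production system would call a trained NLP model or hosted API here;
# this stand-in scores candidate questions by shared-word overlap.

FAQS = {
    "What is your return policy?": "Items can be returned within 30 days.",
    "How do I track my order?": "Use the tracking link in your confirmation email.",
    "Do you ship internationally?": "Yes, we ship to most countries.",
}

def tokens(text):
    """Lowercase the text and split it into a set of words."""
    return set(text.lower().replace("?", "").split())

def answer(question):
    """Return the answer for the stored question that overlaps most with the input."""
    best = max(FAQS, key=lambda q: len(tokens(q) & tokens(question)))
    return FAQS[best]

print(answer("how can I track an order"))
```

The engineer's job is the plumbing – routing the question in, returning the answer, handing off to a human when confidence is low – while the matching step itself is a swappable API call, which is exactly the point about not needing to know how the model works underneath.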

Why is low-code AI important for scaling AI technology?

Businesses pursuing AI capabilities in their application development can quickly face major challenges if they don’t use low-code. There are only 300,000 AI engineers in the world today, and only 60,000 of those are data scientists. Because of this, the talent needed to develop and scale AI solutions is expensive, and costs keep rising. In contrast, low-code development democratizes access to AI. With low-code, any of the world’s 25 million software developers – and even those without formal AI training – can easily implement AI engines, refine their capabilities, and produce and scale effective solutions.

Going back to Iterate.ai’s AI-powered Signals platform, what are some of the more interesting trends emerging?

We are seeing rapid growth across five forces of innovation: AI, IoT, blockchain, data, and emerging startup solutions. These are all very large markets. We see thousands of data points every day across news, patents, and new startup products. Interplay is built to harness these forces as well, with pre-built components that take advantage of each of them.

Is there anything else that you would like to share about Iterate.ai?

I think there are still misconceptions around low-code and its role in building AI applications. It’s not uncommon to see IT professionals questioning whether a low-code strategy can meet their requirements for enterprise-grade scalability, extensibility, and security. Low-code options that are intended for prototyping – but misapplied as tools for production applications – have contributed to this wariness. That said, the right low-code platforms are absolutely up to the task of building and supporting production-ready AI applications. Enterprises should perform due diligence in selecting low-code tooling, making sure those tools have a transparent and thorough security layer and a proven record of delivering applications at enterprise scale.

Thank you for the great interview; readers who wish to learn more should visit Iterate.ai.

A founding partner of unite.AI & a member of the Forbes Technology Council, Antoine is a futurist who is passionate about the future of AI & robotics.

He is also the Founder of Securities.io, a website that focuses on investing in disruptive technology.