Jonathan Dambrot is the CEO & Co-Founder of Cranium AI, an enterprise that helps cybersecurity and data science teams understand everywhere that AI is impacting their systems, data or services.
Jonathan is a former Partner at KPMG, a cybersecurity industry leader, and a visionary. Prior to KPMG, he led Prevalent to become a Gartner and Forrester industry leader in third-party risk management before its sale to Insight Venture Partners in late 2016. In 2019, Jonathan transitioned out of the Prevalent CEO role as the company continued its growth under new leadership. He has been quoted in a number of publications and routinely speaks to groups of clients regarding trends in IT, information security, and compliance.
Could you share the genesis story behind Cranium AI?
I had the idea for Cranium around June of 2021 when I was a partner at KPMG leading Third-Party Security services globally. We were building and delivering AI-powered solutions for some of our largest clients, and I found that we were doing nothing to secure them against adversarial threats. So, I asked that same question of the cybersecurity leaders at our biggest clients, and the answers I got back were equally alarming. Many of the security teams had never even spoken to the data scientists – they spoke completely different languages when it came to technology and ultimately had zero visibility into the AI running across the enterprise. All of this, combined with the steadily growing development of regulations, was the trigger to build a platform that could provide security to AI. We began working with the KPMG Studio incubator and brought in some of our largest clients as design partners to guide the development to meet the needs of these large enterprises. In January of this year, Syn Ventures came in to complete the Seed funding, and we spun out independently of KPMG in March and emerged from stealth in April 2023.
What is the Cranium AI Card and what key insights does it reveal?
The Cranium AI Card allows organizations to efficiently gather and share information about the trustworthiness and compliance of their AI models with both clients and regulators and gain visibility into the security of their vendors’ AI systems. Ultimately, we look to provide security and compliance teams with the ability to visualize and monitor the security of the AI in their supply chain, align their own AI systems with current and coming compliance requirements and frameworks, and easily share that their AI systems are secure and trustworthy.
What are some of the trust issues that people have with AI that are being solved with this solution?
People generally want to know what’s behind the AI that they are using, especially as more and more of their daily workflows are impacted in some way, shape, or form by AI. We look to provide our clients with the ability to answer questions that they will soon receive from their own customers, such as “How is this being governed?”, “What is being done to secure the data and models?”, and “Has this information been validated?”. The AI Card gives organizations a quick way to address these questions and to demonstrate both the transparency and trustworthiness of their AI systems.
In October 2022, the White House Office of Science and Technology Policy (OSTP) published a Blueprint for an AI Bill of Rights, which shared a nonbinding roadmap for the responsible use of AI. Can you discuss your personal views on the pros and cons of this bill?
While it’s incredibly important that the White House took this first step in defining the guiding principles for responsible AI, we don’t believe it went far enough: it provides little guidance for organizations, focusing instead on individuals worried about appealing an AI-based decision. Future regulatory guidance should address not just providers of AI systems but also users, so they can understand and leverage this technology in a safe and secure manner. Ultimately, the major benefit is that AI systems will be safer, more inclusive, and more transparent. However, without a risk-based framework for organizations to prepare for future regulation, there is potential for slowing down the pace of innovation, especially in circumstances where meeting transparency and explainability requirements is technically infeasible.
How does Cranium AI assist companies with abiding by this Bill of Rights?
Cranium Enterprise helps companies with developing and delivering safe and secure systems, which is the first key principle within the Bill of Rights. Additionally, the AI Card helps organizations with meeting the principle of notice and explanation by allowing them to share details about how their AI systems are actually working and what data they are using.
What is the NIST AI Risk Management Framework, and how will Cranium AI help enterprises in achieving their AI compliance obligations for this framework?
The NIST AI RMF is a framework for organizations to better manage the risks to individuals, organizations, and society associated with AI. It follows a structure very similar to NIST’s other frameworks by outlining the outcomes of a successful risk management program for AI. We’ve mapped our AI Card to the objectives outlined in the framework to support organizations in tracking how their AI systems align with it, and, given that our enterprise platform already collects much of this information, we can automatically populate and validate some of the fields.
The EU AI Act is one of the more monumental pieces of AI legislation that we’ve seen in recent history. Why should non-EU companies abide by it?
Similar to GDPR for data privacy, the AI Act will fundamentally change the way that global enterprises develop and operate their AI systems. Organizations based outside of the EU will still need to pay attention to and abide by the requirements, as any AI systems that use or impact European citizens will fall under the requirements, regardless of the company’s jurisdiction.
How is Cranium AI preparing for the EU AI Act?
At Cranium, we’ve been following the development of the AI Act since the beginning and have tailored the design of our AI Card product offering to support companies in meeting the compliance requirements. We feel like we have a great head start given our very early awareness of the AI Act and how it has evolved over the years.
Why should responsible AI become a priority for enterprises?
The speed at which AI is being embedded into every business process and function means that things can get out of control quickly if not done responsibly. Prioritizing responsible AI now at the beginning of the AI revolution will allow enterprises to scale more effectively and not run into major roadblocks and compliance issues later.
What is your vision for the future of Cranium AI?
We see Cranium becoming the true category king for secure and trustworthy AI. While we can’t solve everything, such as complex challenges like ethical use and explainability, we look to partner with leaders in other areas of responsible AI to drive an ecosystem to make it simple for our clients to cover all areas of responsible AI. We also look to work with the developers of innovative generative AI solutions to support the security and trust of these capabilities. We want Cranium to enable companies across the globe to continue innovating in a secure and trusted way.
Thank you for the great interview. Readers who wish to learn more should visit Cranium AI.