
Steve Tait, Chief Technology Officer at Skyhigh Security – Interview Series


Steve Tait, Chief Technology Officer at Skyhigh Security, is a seasoned executive technology leader with over 25 years of experience across the cybersecurity, defense, financial services, and healthcare sectors. He joined Skyhigh in August 2024 to spearhead the company's Security Service Edge (SSE) technological vision, architecture, and cloud infrastructure strategy.

āļāļēāļĢāļĢāļąāļāļĐāļēāļ„āļ§āļēāļĄāļ›āļĨāļ­āļ”āļ āļąāļĒāļĢāļ°āļ”āļąāļšāļŠāļđāļ‡ is a privately held, cloud-native cybersecurity company headquartered in San Jose, California, offering a comprehensive Security Service Edge (SSE) platform. The platform unifies solutions such as CASB, Secure Web Gateway, Zero Trust Private Access, CNAPP, DLP, and Remote Browser Isolation to protect data and ensure secure collaboration across web, cloud, email, and private applications. With a focus on real-time data protection, threat prevention, and compliance, Skyhigh Security serves over 3,000 customers globally—including many Fortune 500 companies and major financial institutions—through a scalable, data-aware architecture designed for modern hybrid work environments.

You began your career in mobile data before rising through engineering and leadership roles across different sectors—what early experience shaped your passion for cybersecurity and led you to where you are today?

When I joined BAE Systems, I gained insight into the work of the incredible teams that decompile the most malicious viruses and malware to learn how to defend against them. The sheer scale and organized professionalism of the cybercrime industry was a genuine eye-opener. For example, the code behind a cyber breach can sometimes have its heritage traced back to multiple nation-state actors and criminal organizations. It’s not teenagers in bedrooms; it’s a serious global business. Defending against this is a genuine good for society, and I wanted to be a part of that.

You’ve stated that “digital transformation is over” and we’re now entering an era of AI transformation—how do you distinguish one phase from the other in terms of company strategy and outcomes?

Digital transformation was about using technology to re-engineer business processes to be more efficient, more effective, and to provide a better customer experience. AI transformation, from a business perspective, seeks to achieve the same goal. The fundamental difference, however, is that digital transformation achieved this through process automation, data aggregation, and advanced data visualization, whereas AI transformation achieves it through original content creation, companion analysis, and autonomous decision-making. Digital transformation aimed to optimize and streamline human decision-making processes. AI transformation has the ability to eliminate many of them entirely!

What do you consider the biggest organizational challenges companies face when making the shift from classic automation to integrating generative AI?

The level of transformation required to really take advantage of this is huge. Businesses could – and perhaps should – look totally different a few years from now. For now, though, despite the hype, it’s still early days. The biggest organizational issue today is actually training. Lots of companies have rolled out the usual '20-minute' corporate training video on AI, but that simply does not cut it. Employees need to learn how to leverage this technology, understand the real risks it presents, and grasp, even if just a little, how it works. That way, employees at all levels can help the business transform with the technology.

In Security Magazine, you highlighted prompt injections and hallucinations among top risks—what threat vector worries you the most, and how is Skyhigh addressing it?

Unintentional data exfiltration is by far the biggest corporate threat by volume. Just from the data we track at Skyhigh, we saw a staggering 80% increase in data uploaded to LLMs within the past year. Many interfaces operate like business assistants and encourage more and more information to be uploaded. The act of sharing information with another person – taking a file, zipping it, and uploading it to an SFTP location for third-party analysis – makes an employee stop and think about what they’re doing and the potential risks. There are enough steps that you become very conscious you are sending it to someone. By contrast, being halfway through some analysis with an AI tool and pasting a block of data into a prompt for a quick answer takes seconds of effort, yet the question remains: where did that data go and how was it used? Because of this, Skyhigh's main focus is on data loss prevention for AI applications, especially copilots.

You’ve cited statistics showing that 94% of AI apps carry LLM risk and 11% of files uploaded to AI are sensitive—what trends do you see in how businesses are responding to address those issues?

Businesses are still using “user policy” and “blocking” as their primary techniques, but many businesses remain blind to the amount of AI that is being used every day. We see a lot of interest in increasing discovery, visibility, and in the extension of Data Loss Prevention techniques to AI, particularly for major copilot applications.

Corporate copilots can access vast amounts of proprietary data—what are the most effective strategies to prevent unauthorized data leakage or misuse via these systems?

As always, it starts with policy and training. Following this, a combination of data labeling and DLP techniques is vital. Microsoft AIP labeling, for example, can prevent confidential data from being indexed by Microsoft Copilot. In combination with CASB and DLP tools, AIP labels can be applied automatically based on data classifications. DLP performed on document and prompt data can prevent unintended data uploads.
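To make the prompt-side DLP idea concrete, here is a minimal sketch of a pattern-based guard that inspects prompt text before it leaves the organization. The pattern set and function names are hypothetical illustrations; a production DLP engine (like those referenced above) would rely on classifiers, dictionaries, exact-data matching, and document fingerprints rather than a few regexes.

```python
import re

# Hypothetical sensitive-data patterns (illustrative only; real DLP engines
# use far richer detection than simple regular expressions).
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "confidential_label": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

def guard_prompt(prompt: str) -> str:
    """Raise if the prompt matches any DLP pattern; otherwise pass it through."""
    hits = scan_prompt(prompt)
    if hits:
        raise ValueError(f"Prompt blocked by DLP policy: {', '.join(hits)}")
    return prompt
```

The same check can be applied symmetrically to files before upload, which is the "stop and think" step that pasting into a chat window otherwise skips.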

You’ve emphasized the risk posed by citizen developers creating their own applications—how can companies strike a balance between fostering innovation and ensuring secure development?

It always comes back to training. Just because someone is a ‘citizen developer’ doesn’t mean they can opt out of the security fundamentals that are part of standard engineer training. They don’t need to know everything a skilled software engineer knows, of course, but basic concepts such as privileged access and horizontal privilege escalation are important when building an application. Personally, I would only enable access to such tools after appropriate training has been completed. Then it’s about implementing security tooling to trap the inadvertent mistakes, which comes back to techniques such as DLP.

As CTO of Skyhigh Security, which area of AI-risk mitigation—copilots, citizen dev, or compliance infrastructure—are you prioritizing for the next 12–18 months?

Awareness is vital and Skyhigh already provides comprehensive Shadow AI discovery tooling. Microsoft Copilot and ChatGPT Enterprise are our main focus in 2025. We have already rolled out controls on both and we are extending these further over the remainder of 2025. As we move into 2026, we are looking to turn our sights more to prompt control to secure against malicious prompts, jailbreaking, and other key LLM risks.

What’s one breakthrough or shift you foresee that could completely reshape how we think about enterprise security in an AI-first world?

Agentic AI. It’s just starting, but the impact is potentially huge. As more and more of these agents chain together, the attack vectors multiply along the chain. A lot of cybercrime is “spotted” by humans because something does not look right. In these chains of agents, spotting signs of compromise will be a real challenge.

Thank you for the great interview. Readers who wish to learn more should visit Skyhigh Security.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. He is also the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.