Interviews
Jack Cherkas, Global CISO at Syntax – Interview Series

Jack Cherkas, Global CISO at Syntax, is a cybersecurity executive with deep experience across cloud security, cyber resilience, enterprise architecture, and AI security. He has held senior roles at Syntax, PwC UK, Kyndryl, and IBM, where he helped build and scale security operations, managed major incident response efforts, and developed cyber resilience strategies for large enterprise environments. At Syntax, he leads global cybersecurity across the company’s people, systems, data centers, managed cloud services, and customer-facing security offerings, overseeing a team of more than 65 security professionals across eight countries.
Syntax is a global IT services and managed cloud provider specializing in mission-critical enterprise applications, particularly SAP and Oracle environments. The company supports organizations with cloud migration, managed hosting, cybersecurity, enterprise application management, and AI-enabled operations across hybrid and multi-cloud infrastructure. Its work focuses on helping enterprises modernize, secure, and operate complex business systems at scale.
You’ve led cybersecurity initiatives at IBM, Kyndryl, PwC, and now Syntax. Across that journey, how has your perspective on securing emerging technologies like AI evolved, particularly as organizations move from experimentation to production?
My career has tracked a series of disruptions, each demanding that security catch up to a new control surface. At IBM in the early days of cloud, the question was whether we could trust someone else’s infrastructure to run mission-critical workloads. The answer was a shared responsibility model and a generation of cloud-native controls.
Then came the ransomware era. NotPetya in 2017 disabled companies in hours, and the industry learned that wormable malware could take down global supply chains overnight. The response was to prepare for when (not if) a cyber-attack would happen: network segmentation, immutable backups, and a serious push on identity.
Throughout my time at Kyndryl and PwC, SaaS went from the edge to the center of every estate. Workloads moved out of data centers and onto someone else’s stack, identity became the perimeter, and Zero Trust stopped being a diagram and started being an operating model.
Now at Syntax, we are in the GenAI wave, where the system itself reasons, generates, and acts. Each wave gave us a new control surface, not enough warning, and a shorter window between experiment and production. Cloud took years. SaaS took quarters. GenAI takes weeks. The CISOs who keep up are the ones who stopped treating each wave as an exception and started treating fast adoption as the steady state.
As organizations accelerate AI adoption, how do you assess the risk that trust, not just compliance, is being compromised? What are the earliest indicators that this is starting to happen?
Trust is the foundation of any good AI adoption. The earliest indicators are not in the audit report, they are in the operational signals. Shadow AI deployments that nobody owns. Procurement approving GenAI vendors without security review. Data lineage that breaks the moment you ask where the training data came from. AI agents granted admin permissions because no one wanted to slow the project. When you see those four signals in one organization, trust is already being spent faster than it is being earned. Leadership is usually the last to know.
Many companies are adopting AI faster than they can secure it. What are the most common real-world risks you’re seeing today when governance lags behind innovation?
When governance lags, three things happen, and none of them show up as security incidents until much later. First, regulatory exposure compounds quietly: an AI deployment that breaches the EU AI Act’s transparency requirements does not trip an alarm; it shows up in an audit two years later as a fine. Second, customer trust erodes in transactions you never see: prospects choose competitors who can prove governance, and your sales team never finds out why. Third, decision quality decays: the organization makes more AI-influenced decisions but cannot explain or audit them, and bad decisions accumulate in places no one is looking. The cost of weak AI governance is the slow erosion of audit, sales, and decision quality, ending in a reputationally damaging breach.
From your experience building and scaling managed security services and SOC operations, how should organizations rethink their security models to handle AI-driven systems and autonomous decision-making?
AI is a new attack vector, a threat multiplier, and a critical defensive puzzle piece, and the security model has to adapt to cover all three at the same time.
As an attack vector, GenAI platforms themselves become targets to defend. As a threat multiplier, attackers are using GenAI to craft phishing at scale, generate exploit code, automate reconnaissance, and discover vulnerabilities at machine speed. As a defensive piece, the same technology turned the other way is the only realistic answer: AI-driven triage, automated threat hunting, and analyst augmentation are no longer optional; they are how a SOC keeps pace with an AI-augmented adversary. If they are AI-augmented and we are not, the gap compounds with every cycle.
That also creates a new actor type the model has to govern. At Syntax, we already think about AI agents as joining the org chart, alongside humans, which sets the bar for how we secure them. AI agents need everything we give human users (identity, role-based permissions, activity logs, behavioral baselines) plus the same containment levers we use on compromised accounts: the ability to disable, isolate, and revoke. The difference is speed. Agents act in milliseconds, so those levers have to be immediate and automated, not the back end of an incident response workflow.
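The idea of giving agents human-account controls plus millisecond-speed containment can be sketched in a few lines. This is a minimal illustration of the pattern described above, not Syntax’s actual tooling; all class and method names here are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    """An AI agent treated like any other account: identity, roles, logs."""
    agent_id: str
    roles: set = field(default_factory=set)
    enabled: bool = True
    activity_log: list = field(default_factory=list)

    def act(self, action: str, required_role: str) -> bool:
        """Gate every action on identity state and role-based permissions."""
        allowed = self.enabled and required_role in self.roles
        # Every attempt is logged, allowed or not, to feed behavioral baselines.
        self.activity_log.append(
            (datetime.now(timezone.utc).isoformat(), action, allowed)
        )
        return allowed

    def contain(self) -> None:
        """Immediate, automated containment: disable and revoke in one step,
        not the back end of an incident response workflow."""
        self.enabled = False
        self.roles.clear()

agent = AgentIdentity("invoice-bot", roles={"read:invoices"})
assert agent.act("list_invoices", "read:invoices")       # permitted and logged
assert not agent.act("delete_invoices", "admin")         # denied, still logged
agent.contain()                                          # containment lever
assert not agent.act("list_invoices", "read:invoices")   # revoked immediately
```

The key design point is that `contain()` is a synchronous call rather than a ticketed process, reflecting the interview’s argument that agent-speed actors need agent-speed levers.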
At Syntax, our Global Security Operations Center has been evolving so that AI augments the human analyst, while our employees build agentic workflows and agents inside the Syntax GenAI Platform, which provides out-of-the-box guardrails against bias and toxicity, along with data privacy and security controls by default.
That is the rethink. Defend AI as a target. Deploy AI as a defender. Govern use of AI.
There’s often tension between speed and control. How can organizations maintain innovation velocity while still implementing meaningful oversight and guardrails for AI systems?
Speed and control look like opposites until you build governance that travels with the project rather than blocking it. The mistake is putting governance at the gate: a committee, a sign-off, a quarterly review. By the time the gate opens, the team has either gone around it or lost momentum. The model that works is processes redefined with governance baked in. Clear and consistent communication is the starting point, followed by pre-approved patterns, pre-cleared data flows, and pre-defined permission templates. Teams get speed, security teams get visibility, and the trade-off everyone assumes exists turns out to be a poorly designed process. This is all about balancing security with innovation against each organization’s risk appetite.
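One way to make “governance that travels with the project” concrete is pre-defined permission templates that teams inherit rather than negotiate. The sketch below is a hypothetical illustration of that pattern; the template names, fields, and values are invented for the example, not drawn from Syntax’s actual policy.

```python
# Hypothetical pre-approved patterns: a GenAI project declares a template
# instead of requesting bespoke permissions. Security reviews each template
# once; every project that adopts one is pre-cleared and visible.
PERMISSION_TEMPLATES = {
    "chatbot-internal": {
        "data_classes": {"public", "internal"},   # pre-cleared data flows
        "egress": ["approved-llm-gateway"],       # only sanctioned endpoints
        "agent_roles": {"read:kb"},
        "requires_human_approval": False,
    },
    "agent-customer-facing": {
        "data_classes": {"public"},
        "egress": ["approved-llm-gateway"],
        "agent_roles": {"read:kb", "write:ticket"},
        "requires_human_approval": True,          # higher-risk pattern
    },
}

def provision(project: str, template: str) -> dict:
    """Speed for teams, visibility for security: one lookup, no gate."""
    config = dict(PERMISSION_TEMPLATES[template])
    config["project"] = project   # audit trail of which project uses what
    return config

cfg = provision("hr-faq-bot", "chatbot-internal")
assert cfg["requires_human_approval"] is False
assert "internal" in cfg["data_classes"]
```

The point of the sketch is structural: governance lives in the templates, so the security team shapes outcomes up front instead of reviewing each project at a gate.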
You’ve worked on large-scale cyber resilience strategies and incident response. How does the introduction of AI change the nature of cyber threats and the way organizations should prepare for them?
AI is turbocharging threats across several vectors. Scale: phishing and reconnaissance at machine speed against thousands of targets simultaneously. Sophistication: deepfake-driven social engineering that defeats voice and video verification. Identity: synthetic identities that pass verification checks designed for humans.
For incident response, the implications are operational. You need detection that does not rely on humans recognizing patterns at human speed. You need verification protocols that assume voice and video can be faked. And you need incident response playbooks that explicitly cover AI-related incidents, because the recovery steps are not the same as recovering from a ransomware event.
At Syntax, what does “secure-by-design” AI actually look like in a complex, real-world enterprise environment?
At Syntax, it means balancing innovation and security: adoption of our GenAI Platform with baked-in guardrails; clear tiers of approved, restricted, and prohibited GenAI services, apps, models, and platforms; and a security-first culture driven by our AI Governance Office. For our Global Security Organization, it means positioning ourselves as an enabler to the business, not a blocker, supporting the business with its strategic priorities while protecting Syntax in line with our risk appetite.
There’s a growing narrative that security and compliance are no longer blockers but enablers of growth. What has to change culturally and operationally for organizations to truly embrace that mindset?
The biggest shift is what success looks like. Security teams have been measured for decades on what did not happen: no breaches, no incidents, no audit findings. That metric rewards saying no. Teams that operate as enablers measure something different: deals won because controls were demonstrable, launches that hit their date because security cleared the path, and innovations that moved through governance rather than around it.
Operationally, it requires processes redefined with governance baked in, paired with active enablement like our GenAI Platform that makes secure the easier path, and accessible GenAI education and programs, such as our AI Champions initiative.
Culture follows what you incentivize and what you enable. Change what you reward, equip people with the right tools and the right training at the right time, and you change what they do. This is the journey Syntax is undertaking.
With AI increasingly embedded into enterprise workflows, how should CISOs collaborate with AI leaders, data scientists, and product teams to ensure accountability without slowing progress?
The CISO who waits to be invited will be late. The CISO who shows up early, with practical patterns rather than policy objections, becomes the partner that AI projects actually want at the table. In practice that means joint design sessions with AI teams, security sign-offs that sit beside functional ones rather than after them, and an open-door policy. This changes the conversation from being the “Department of No” to “Yes, but” or “No, but” as a willing and collaborative partner for the business.
Looking ahead, do you believe we’ll see a standardized global framework for AI governance, or will organizations need to build their own internal trust architectures regardless of regulation?
Both, in that order. We will see phased convergence on a small number of regional frameworks, the EU AI Act first, others following with local variation. We will not see one global standard in this decade due to geopolitical fragmentation. So, organizations will end up doing two things in parallel: complying with the framework that applies to their largest market and running an internal trust architecture that exceeds whichever framework is weakest. The internal architecture matters more than the external standard, because regulators move slowly and threats do not. The companies that build internal trust architectures now will spend the next decade saying “we already do that” to every new regulator that arrives.
Thank you for the great interview. Readers who wish to learn more should visit Syntax.