Denis Romanovskiy, Chief AI Officer at SOFTSWISS – Interview Series

Denis Romanovskiy, Chief AI Officer at SOFTSWISS, is a seasoned technology executive with more than 25 years of experience leading large-scale engineering programs across gaming, enterprise software, IoT, and high-load online platforms. Having spent the last five years in the iGaming sector, he previously served as Deputy CTO at SOFTSWISS, overseeing technical governance across multiple product teams with a strong focus on casino and sportsbook platforms before stepping into his current role to define and implement the company’s AI strategy.

SOFTSWISS is a Malta-headquartered iGaming technology company that provides turnkey solutions for online casinos and sportsbooks, including a casino platform, game aggregator, sportsbook solution, and managed services. The company supports operators worldwide with infrastructure designed for scalability, compliance, and reliability, positioning itself at the intersection of gaming technology and emerging AI-driven optimization.

Having led large-scale technical programs across multiple industries and now defining enterprise AI strategy at SOFTSWISS, how has your background in high-load, high-availability systems shaped the way you approach embedding AI across a 2,000+ person organisation?

My experience in high-load, high-availability systems taught me one fundamental lesson: any complex change at scale requires a systems approach. You can’t just deploy a technology and hope it works – you need to design the entire ecosystem around it and ensure that processes, structure, and technology all work together.

We apply exactly this principle to AI adoption at SOFTSWISS. It starts at the individual level. We explain to every employee how to use AI safely and effectively – what it can do, where its limits are, and what the associated risks are. Critically, we make it clear that their responsibility for outcomes doesn’t disappear when AI enters the picture. AI expands your capabilities, but the accountability stays with you. You still own the quality of the output, the decisions, and the results.

Then we move to the team level, and this is where the dynamics shift. New opportunities emerge – faster planning cycles, automated verification, enhanced analysis – but so do new risks: over-reliance on AI outputs, erosion of critical thinking, inconsistent adoption across the team. This is where managers play a decisive role. They need to adapt how they review work, what questions they ask, and what signals they look for. When someone delivers a result twice as fast, the manager’s job is to understand whether the quality held up and whether the person actually understands what they delivered.

This layered approach – individual awareness, team-level adaptation, management oversight – is what lets us scale AI across a large organisation without compromising the stability and reliability that our regulated environment demands. It’s not about technology alone. It’s about building the system around it that makes adoption sustainable.

What separates AI deployed as a productivity tool from AI embedded directly into core infrastructure and decision-making systems, and how does that distinction change long-term business outcomes?

Productivity AI – chat assistants and code copilots – is where people first encounter AI at work. This step matters, and you can’t skip it. It builds AI literacy, teaches people to evaluate outputs, and creates habits of responsible usage across the organisation.

But there’s a fundamental difference between AI that helps an individual and AI embedded into how the organisation operates. Infrastructure-level AI – integrated into your enterprise systems through AI platforms – becomes part of the management system. It involves planning, control, and audit. It respects governance frameworks and feeds directly into decision chains.

The impact gap is significant. Productivity tools deliver 20–30% efficiency gains on individual tasks – valuable, but incremental. Infrastructure AI accelerates entire processes by 3–5x. And over time, it reshapes the organisation itself – eliminating some roles partially or fully, creating new ones, and compressing workflows that once required multiple handoffs.

That’s why these two categories demand different approaches. Productivity AI is an enablement challenge. Infrastructure AI is an organisational transformation that requires careful planning, change management, and continuous oversight.

What architectural and cultural shifts are required to transition from isolated AI experiments to a centralised, organisation-wide AI platform?

Architecturally, a centralised platform is essential – one that provides secure access to multiple model vendors while maintaining strict data governance. Without this layer, experimentation scales fragmentation instead of value.

Culturally, the bigger shift is moving from execution-focused thinking to design-focused thinking. As execution becomes cheaper and faster with AI, competitive advantage shifts to how well teams architect workflows. Employees should design processes where AI handles repetitive operations, while humans remain in control of orchestration and decision quality.

How can large enterprises systematically increase their learning velocity when deploying AI, and what operational mechanisms make that measurable?

Learning velocity increases when experimentation is structured. At SOFTSWISS, we appoint AI champions inside product teams who identify use cases, refine best practices, and share them across the organisation. Workshops further accelerate knowledge transfer.

Measurement is tied to business KPIs. We track indicators such as Time to Resolution in support or automation levels in code review. If AI adoption doesn’t improve measurable metrics, it remains superficial.
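As a minimal illustration of tying adoption to a measurable KPI (the metric name comes from the interview; the sample data and cohort split are invented), comparing Time to Resolution before and after an AI rollout could be as simple as:

```python
from statistics import median

# Invented sample data: hours from ticket open to resolution,
# for cohorts before and after AI-assisted support was introduced.
before = [26.0, 31.5, 18.0, 40.0, 22.5]
after = [9.0, 14.5, 7.0, 20.0, 11.5]

def time_to_resolution(hours: list[float]) -> float:
    """Median hours from ticket open to resolution."""
    return median(hours)

baseline = time_to_resolution(before)
current = time_to_resolution(after)
improvement = (baseline - current) / baseline
print(f"Time to Resolution: {baseline}h -> {current}h ({improvement:.0%} faster)")
```

If the delta on a metric like this is flat, the adoption is superficial by the interview's own standard, however enthusiastic the usage statistics look.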

Which legacy processes most commonly limit the impact of AI adoption in established technology companies?

The main constraint is attempting to integrate AI into rigid management structures with long planning cycles and fixed resource allocation. AI’s advantage is speed, and outdated governance models slow that advantage down.

Another limiting factor is weak data classification. Without structured and well-governed data, secure and scalable AI integration becomes extremely difficult.

Can you share examples where integrating AI directly into core systems produced measurable gains in efficiency, revenue, or operational performance?

In technical support, AI embedded into Jira analyzes ticket history and documentation to propose solution paths, significantly reducing resolution time.

In HR, automated assistants handling benefit and leave inquiries save hundreds of hours each month.

In development, AI-driven code review automation reaches 60–80%, accelerating the development lifecycle by two to four times. These gains are operationally measurable and directly impact efficiency.

How do you design governance frameworks that ensure auditability, security, and accountability when AI is deeply embedded into enterprise workflows?

Governance must create a controlled environment rather than restrict innovation. We rely on enterprise-grade vendor agreements and apply data masking before sending information to cloud models.

Accountability is built into system design. AI-driven actions operate within defined rollback windows, allowing human override. Responsibility ultimately remains with the team leader who designs and owns the workflow.
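As an illustrative sketch of the data-masking step mentioned above (the patterns and placeholders are assumptions for the example, not SOFTSWISS's actual policy), replacing sensitive values before a prompt leaves the corporate perimeter might look like:

```python
import re

# Hypothetical masking rules -- each pattern maps sensitive data
# to a neutral placeholder before text reaches an external model.
MASKING_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),  # email addresses
    (re.compile(r"\b\d{16}\b"), "<CARD_NUMBER>"),         # 16-digit card numbers
    (re.compile(r"\bplayer-\d+\b"), "<PLAYER_ID>"),       # internal player IDs
]

def mask(text: str) -> str:
    """Apply every masking rule in order and return the sanitised text."""
    for pattern, placeholder in MASKING_RULES:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Refund request from jane.doe@example.com (player-48213), card 4111111111111111."
print(mask(prompt))
# -> Refund request from <EMAIL> (<PLAYER_ID>), card <CARD_NUMBER>.
```

A production version would sit in the platform layer rather than in each application, so every model call passes through the same sanitisation and audit trail.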

What structural advantages allow small AI-native teams to scale faster than traditional enterprises, and how can larger organizations adapt without losing stability?

The core difference is architectural. Traditional companies break work into sequential stages – each owned by a separate role, with handoffs and queues between them. AI-native teams can execute across all stages simultaneously. There are no queues, no waiting for the next person in the chain. The entire process is automated end to end, which gives them a massive speed advantage.

For larger organisations, the path forward is gradual. First – build AI literacy and equip teams with AI tools. Give people time to learn, experiment, and integrate AI into their existing workflows. At this stage, innovation happens within current processes, not instead of them.

Once teams gain experience and confidence, you can set more ambitious goals – optimising entire processes rather than individual steps. This is where the real transformation begins, but it only works when people and processes are ready for it.

The key is pace. Move too fast and you break stability. Move too slow and the market leaves you behind. The right approach is deliberate, sequential progression – so the organisation evolves without losing what already works.

How does operating in the iGaming sector, with its regulatory and reliability demands, influence the way AI infrastructure is architected and deployed?

iGaming is a unique environment. It involves real money, real-time transactions, and regulatory oversight across multiple jurisdictions. At SOFTSWISS, we operate under multiple licenses – each with its own compliance requirements. This means every technology decision, including AI, must account for a complex regulatory landscape that goes well beyond standard data protection.

Regulated markets require strict compliance with data storage, deletion, and processing rules, including GDPR. But in iGaming, the scope is broader – anti-money laundering requirements, responsible gambling obligations, licensing conditions that dictate how data flows and where it can be processed. Infrastructure must guarantee that sensitive data is not used for external model training and that every AI-driven decision remains auditable.

At the same time, reliability standards are exceptionally high. Systems operate 24/7 with massive transaction volumes. Any AI system we deploy must meet the same standards – always available, fully auditable, and capable of handling the data volumes we see in support and compliance operations. In this industry, an AI failure isn’t just an inconvenience – it’s a regulatory and financial risk.

As enterprise AI matures, what capabilities will distinguish companies that truly integrate AI into their operating model from those that remain surface-level adopters?

In mature AI organisations, every employee will have AI at their fingertips – with secure access to corporate data across systems, without barriers or manual requests. Processes will be automated end to end, with no queues or handoffs between roles. Work will flow continuously, not in stages.

But automation alone isn’t enough. What separates leaders from the rest is the ability to control AI-driven work at scale. Teams and organisations will adapt to automated quality monitoring – detecting issues early and correcting them before they compound.

The role of the individual employee shifts fundamentally. Instead of executing tasks, they define specifications for AI – providing sufficient context, clear goals, and quality control methods. Their value lies in steering AI and optimising its output, not in doing the work manually.

The role of leaders changes too. Managers and executives become the architects of systems thinking across the organisation. Their job is not to optimise individual tasks but to connect different workstreams, tools, and artifacts into value streams that solve customer problems better than competitors can – designing how everything fits together.

This depth of integration – AI in every hand, automated processes, systematic quality control, and leadership focused on end-to-end value – will define long-term competitive advantage.

Thank you for the great interview. Readers who wish to learn more should visit SOFTSWISS.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.