Interviews

Shane Eleniak, Chief Product Officer at Calix – Interview Series


Shane Eleniak serves as the Chief Product Officer at Calix, where he leads the strategic vision and execution of the company’s industry-leading platform and SaaS solutions. With a focus on enabling communications service providers to simplify their business and deliver exceptional subscriber experiences, Shane oversees the entire product lifecycle—from conceptualization to market-leading deployment.

Under his leadership, Calix has solidified its position as a pioneer in the broadband industry, consistently delivering innovative tools that empower providers to compete and win.

Calix is a U.S.-based technology company that provides cloud, software, and managed service platforms designed for broadband and communications service providers. Its core offering centers on an AI-enabled broadband platform that integrates cloud infrastructure, data, and network systems to help providers simplify operations, improve customer engagement, and deliver more personalized digital experiences. By enabling these providers to transition from basic connectivity services to full “experience providers,” Calix helps them grow revenue, increase subscriber loyalty, and support the digital transformation of communities through more advanced, scalable broadband services.

Your career spans more than three decades across engineering, networking, cloud platforms, and large-scale product leadership. How did those experiences shape your perspective on what it really takes to make AI perform real work inside businesses rather than remain a side experiment?

I started in traditional telecommunications and networking, where the whole game was the data path and reliability at scale. If you can’t deliver a clean, reliable service, nothing you build on top of it really matters. Back then, the phone was on the kitchen wall, the inside wiring never moved, and as long as there was dial tone, everything was fine.

Broadband and the Internet blew that up. Suddenly, it wasn’t just “is it on?” It was Ethernet and then Wi‑Fi, kids on gaming consoles and tablets, you on a Zoom call collaborating on a cloud spreadsheet, and constant mobility—devices inside the home, in the backyard, at the soccer game, at the coffee shop. The subscriber experience became much more complex than a binary on/off state, and the world for service providers became highly dynamic. In that world, a rear‑view‑mirror view of data—classic data warehouses and historical reports a month later—just doesn’t cut it. You have to collect data, understand the experience, and generate insights in real time because subscribers now expect problems to be fixed proactively, not in hours or days.

That evolution shaped how I think about AI. Most people want to put AI “on top,” the same way they put business intelligence or SaaS on top of existing data lakes. My experience says you have to think much deeper than that and design for real‑time, actionable insight, and the ability to take timely action.

For subscribers, though, the expectation hasn’t changed much in the last 25 years. They still just want secure, managed connectivity that feels as simple as dial tone—they want everything to “just work” without thinking about all the layers and complexity, and they want it everywhere in their lives. My career in telecom and cloud made me very comfortable with that paradox: you build extremely complex systems so you can abstract all of that away and deliver a simple, great experience at the edge. That’s exactly how I think about AI doing real work inside any business, broadband or otherwise.

At Calix, you often emphasize that operational AI is built rather than bought. What are the most common mistakes organizations make when they try to add AI without rethinking how work flows through the business?

For me, it’s less about “built versus bought” and more about whether you’ve stepped back and looked at the whole tech stack. A lot of companies decided AI was simply a matter of using some APIs to reach an LLM, wiring it into their stack with a wrapper, and buying tokens; then they had themselves an AI strategy. That’s not how this works.

Too many of us get fascinated with the tech instead of the outcome. We’ve seen this movie before. When PCs showed up, everybody wanted to argue about whether you had a 286 or a 386, how much memory it had, and which version of DOS it was running. Today, nobody can tell you the specs of their laptop or phone, and nobody cares until it stops doing what they need it to do. What matters is: does this make me more effective in my job? It’s the same with AI. If you can’t tie it back to real workflows, real value, and real ROI, the tech specs are just noise.

Another big mistake is trying to bolt AI onto whatever you already have without asking what it does to your architecture, your security model, and your costs. AI is fundamental technology, not an incremental feature upgrade. When you treat it as incremental, you end up with poor data, security issues, hallucinations, runaway costs, or a lot of activity that doesn’t solve a problem for anyone.

Finally, you can’t ignore context and the importance of vertical expertise. Action is all about context, and that context differs across telecom, fintech, and healthcare. At Calix, we started with deep experience in one industry and built a vertical platform around it. We already understood the data, insights, workflows, and context, so the stack could reflect that reality. Most companies know their vertical industry inside out. The opportunity is to encode that knowledge into a vertical tech stack rather than relying on a thin horizontal layer and a generic AI model, then trying to stitch everything together. Businesses are about outcomes, not models. The real question is how this technology helps you deliver those outcomes in the way your work flows.

You have outlined a five-layer architecture for operational AI that includes data, knowledge, orchestration, trust, and action. Why is it important to explicitly separate these layers, and which one do enterprises most often underestimate or skip entirely?

For a long time, the stack was pretty simple: data, insights, dashboards, workflows, people. You built data warehouses, put BI on top, created workflow engines, and handed the hard work to humans. In an agentic world, that doesn’t hold up. You need data, knowledge, orchestration, trust, and action because each layer performs a distinct function.

The visible part everyone wants to talk about is the action layer—the agents. That’s the tip of the iceberg. What determines whether you can ever let agents touch real systems is all the “boring” stuff under the waterline: data pipelines and clean data, the knowledge layer that gives you context, the orchestration that coordinates dynamic workflows, and the trust model that decides what should be allowed in the first place. When the Titanic went down, it wasn’t the little piece you could see that sank it; it was the giant mass of ice underneath. Operational AI is the same. The plumbing under the surface is what makes or breaks you.

Historically, we never treated orchestration and trust as separate layers because humans did most of that work. Orchestration meant managers and ticket queues; trust meant usernames and passwords. Now you have to trust entities—agents—to do things, and you have to coordinate multiple agents in real time around dynamic data. That’s a completely different design problem, which is why those layers need to be explicit.

The layer most people underestimate is trust. A lot of organizations think they’re handling trust because they have access controls—who can log in to which system. But real trust in an agentic world is not “does this user have access?” It’s “is this particular action appropriate for this individual or this agent at this point in time?” That’s a governance question, not an access‑control question. If you don’t make that layer explicit, you get stuck in demo land, because you’re never going to be comfortable letting agents do real work in production.

So, trust is obviously a foundational part of your AI strategy. How do you design systems so automated decisions remain observable, auditable, and reversible while still moving fast enough to deliver business value?

You have to start from a zero‑trust mindset. The first question is not “can this agent technically do this?” The first question is “should this agent, on behalf of this person, be trying to do this at all?” If the answer is no, then don’t proceed.

If the answer is yes, you move into guardrails: auditability, traceability, and the need for a human in the loop. Our model relies on a trust layer that acts a bit like a traffic cop at the start of every interaction: who are you, what are you doing, and why are you doing this? That eliminates a lot of the security issues, because you’re not letting agents run off and do things and then hoping you notice after the fact.

The alternative is to turn the agents loose, then raise an alarm if they go off and do something bad. You’re assuming you can see it, diagnose it, and stop it in real time, at the pace and scale these systems operate. That’s a really hard problem, and it’s why so many people are struggling: they’re trying to spot bad actors in real time instead of preventing bad actions up front.

On top of that, we’ve added layered gateways. Even if an agent is acting on behalf of the right person, we’re still looking at the session and the content—are they trying to poison a model, abuse an API, or push something outside policy? All of that is wrapped in full observability so you can audit what happened and roll it back if you need to. That’s how you move quickly and still sleep at night.
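The “traffic cop” check described above can be sketched as a deny-by-default gate that asks who the agent is, what it is doing, and on whose behalf, before any action proceeds. This is a minimal illustration with hypothetical agent names, actions, and roles, not Calix’s actual implementation; a real trust layer would evaluate dynamic, context-aware policies and log every decision for audit.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentRequest:
    """The questions the trust layer asks at the start of every interaction."""
    agent_id: str      # who are you?
    action: str        # what are you doing?
    intent: str        # why are you doing this?
    on_behalf_of: str  # which person or role the agent represents

# Hypothetical policy table mapping (agent, action) to the roles allowed
# to trigger it. Anything not listed here is denied by default (zero trust).
POLICY = {
    ("billing-agent", "issue_credit"): {"support-rep"},
    ("network-agent", "reboot_ont"):   {"field-tech", "support-rep"},
}

def trust_gate(req: AgentRequest) -> bool:
    """Deny by default; allow only explicitly permitted combinations."""
    allowed_roles = POLICY.get((req.agent_id, req.action), set())
    return req.on_behalf_of in allowed_roles

# Permitted: a support rep asking the network agent to reboot an ONT.
print(trust_gate(AgentRequest(
    "network-agent", "reboot_ont", "subscriber reported outage", "support-rep")))  # True
# Denied: no policy exists for this action, so it never proceeds.
print(trust_gate(AgentRequest(
    "billing-agent", "delete_account", "cleanup", "support-rep")))  # False
```

The key design choice is that the gate runs before the action, so there is nothing to catch after the fact: an unlisted action simply never executes.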

Many companies succeed at generating AI insights but struggle to translate them into action. What design decisions allowed Calix to push AI directly into day-to-day workflows across marketing, operations, and customer support?

Long before AI was the star of the show, at Calix we were already obsessed with one question: what makes an insight genuinely actionable for a real person in a real job? Since 2018, we’ve been working with service providers to understand how different personas work—what a marketer does on a Tuesday morning, what an operations team does when an alarm fires, what support teams do when a subscriber calls in frustrated. That forced us to get very crisp about which insights mattered to whom, in which context, and what “good action” looked like.

So, when agentic AI came along, we weren’t starting from scratch. We already had real‑time systems generating actionable insights tied to specific personas and workflows. The design question became: given a different toolset and a different tech stack, how would you re‑architect those same workflows in an agentic AI world, instead of trying to invent all of that from scratch?

When you pair this deep persona knowledge with agentic AI, you can build dynamic workflows over dynamic data. Agents can figure out, in real time, which steps and which personas need to be involved based on what’s happening, instead of forcing you to hard‑code hundreds of rigid flows in microservices. For most companies, the hard problem right now is trying to make real‑time decisions based on context and then design the right workflow around that. For us, that part was already in place; we’d been doing real‑time, persona‑based, actionable insights for years. Agentic AI is just a new set of tools on top of that foundation.
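The contrast between hard-coded flows and context-driven ones can be sketched as a planner that assembles workflow steps from event attributes at runtime. All event fields, thresholds, and step names here are hypothetical illustrations; a real orchestration layer would reason over live telemetry and persona definitions rather than a static rule list.

```python
def plan_workflow(event: dict) -> list[str]:
    """Choose workflow steps and personas dynamically from event context,
    instead of dispatching one of many pre-built, rigid flows."""
    steps = []
    if event.get("subscriber_impact", 0) > 0:
        steps.append("notify_support_persona")
    if event.get("type") == "wifi_degradation":
        steps.append("run_diagnostics_agent")
        if event.get("recurring"):
            steps.append("open_operations_ticket")
    if event.get("churn_risk", 0.0) > 0.7:
        steps.append("alert_marketing_persona")
    return steps

# A recurring Wi-Fi problem with churn risk pulls in support, operations,
# and marketing personas in one dynamically assembled workflow.
event = {"type": "wifi_degradation", "subscriber_impact": 1,
         "recurring": True, "churn_risk": 0.8}
print(plan_workflow(event))
```

The point of the sketch is that the same event schema can yield very different workflows depending on context, without anyone having enumerated every combination in advance.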

Your platform vision includes agent-to-agent (A2A) interoperability and federated AI systems. How does this approach change the way enterprise tools collaborate compared with traditional point integrations?

If you look at the last 20 years, the default pattern has been “buy a bunch of SaaS tools and wire them together around a data lake.” Every new system meant another point integration, another data pipeline, and another place to reconcile the truth. In an agentic world, that doesn’t scale. You want the data to stay where it belongs and have agents talk to each other over well‑defined interfaces.

That’s why we talk about touching the system at two layers: MCP at the knowledge layer, and A2A at the orchestration and trust layers. MCP is how agents discover and use tools and data without a new custom integration every time. A2A is how agents coordinate work with each other under clear guardrails.

Once you have that, collaboration stops looking like a pile of brittle connectors and starts looking like a network of specialists that can dynamically team up around real work. Here’s where the Eisenhower Matrix analogy comes in. Not everything is equally urgent and equally important. Some work is truly time‑critical, some is important but can be scheduled, some just needs to get done, and some is noise. With agent‑to‑agent coordination sitting on top of a trust and orchestration layer, you can treat those categories differently at scale: agents can swarm the urgent‑and‑important problems, queue or schedule the important but not urgent, and keep the low‑value busywork from crowding everything else.

That’s a very different world from “let’s add one more connector and hope the queue drains.” You’re effectively seeing trusted, carefully orchestrated dynamic workflows around dynamic events and data, instead of a tangle of one‑off integrations where everything shouts at the same priority.
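The Eisenhower Matrix routing described above can be sketched as a small dispatcher that maps the urgent/important quadrants to different agent strategies. The quadrant-to-strategy mapping is a plausible illustration of the idea, not a description of how Calix’s orchestration layer actually routes work.

```python
from enum import Enum

class Route(Enum):
    SWARM = "dispatch multiple agents now"          # urgent and important
    SCHEDULE = "queue for a planned window"         # important, not urgent
    BACKGROUND = "handle quietly, low priority"     # urgent, not important
    DROP = "treat as noise"                         # neither

def eisenhower_route(urgent: bool, important: bool) -> Route:
    """Map the classic urgent/important quadrants to dispatch strategies."""
    if urgent and important:
        return Route.SWARM
    if important:
        return Route.SCHEDULE
    if urgent:
        return Route.BACKGROUND
    return Route.DROP

# A fiber cut affecting thousands of subscribers is urgent and important:
print(eisenhower_route(urgent=True, important=True))  # Route.SWARM
# A capacity upgrade is important but can be scheduled:
print(eisenhower_route(urgent=False, important=True))  # Route.SCHEDULE
```

Sitting this classification inside the orchestration layer is what keeps every agent from treating every task like a fire drill.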

Once AI agents are allowed to act autonomously, governance quickly becomes a challenge. How do you balance speed, accountability, and human oversight when AI systems are making or executing decisions at scale?

The mistake I see is that people think they can bolt agentic AI onto whatever they have and somehow try to “balance” speed, accountability, and human oversight after the fact. You can’t. You have to start by acknowledging that this is a vertical tech stack problem and by intentionally building a trust layer and an orchestration layer. Without those two layers, it turns into a free‑for‑all—everything is first‑come, first‑served, or whoever yells the loudest.

Again, it’s the Eisenhower Matrix: not all work is created equal. Trust and orchestration are how you operationalize that in an agentic world. You don’t want every agent treating every task like a fire drill; you want the system to know what’s truly time‑critical, what can be scheduled, and what should be quietly handled in the background.

And then there’s the “narrow over fat” part. Most companies equate greater impact from AI with staying broad. You’re much better off picking a narrow vertical slice—one concrete use case, one set of workflows—and building the trust and orchestration you need there first. Get thinner in the vertical, get it right, keep humans in the loop at the edges, and then expand. That’s how you move fast, stay accountable, and avoid creating a mess you can’t unwind later.

From your experience leading large global product and engineering teams, what organizational or cultural shifts are required for AI to become a durable enterprise capability rather than a collection of disconnected pilots?

Most enterprises don’t have an “AI problem”; they have a knowledge and workflow problem. The first shift is to stop playing with point solutions and move from data warehouses to a federated knowledge warehouse that everyone can see and act on. As long as knowledge lives in silos and AI is a cherry on top of each silo, you’ll get pilots, not transformation.

From there, you have to be willing to go after the harder problems in a specific order. Step one is to separate hype from reality and adopt what’s working, not whatever’s loudest in your feed. Step two is to re‑architect the knowledge layer so you can turn data into shared, federated context instead of one more report buried in a system. Step three is to rethink workflows around that knowledge and a real trust layer—most work today is organized around people, skills, and local knowledge silos. If you don’t change that, agents will just be another tool orbiting the same old bottlenecks.

Only then do you get to the cultural shift, which is often the hardest. You need a culture where people are not primarily worried about losing their jobs, tools, or identity, but are genuinely excited to work with new capabilities. That’s a change‑management problem, not a technology problem. It looks a lot like real distributed leadership: people at the pointy end of the spear understand the workflows, feel safe naming the friction, and are excited to put agents to work on it.

Looking beyond broadband and telecom, which industries do you believe are best positioned to adopt operational, agent-driven AI next, and what conditions make them ready?

I don’t really think about this as picking winners by industry label; I think in terms of patterns. Almost every vertical has the same underlying challenge: they’ve built data silos and function silos instead of one view across three lifecycles—customer, employee, and product. The ones that are ready are the ones willing to see that, admit they don’t have a real knowledge layer, and fix it.

From there, the conditions look pretty similar regardless of whether you’re in healthcare, fintech, retail, or critical infrastructure. You need complex workflows where people are stretched, real friction points you can name, and enough high‑quality data to give agents context. If you can map current workflows, see where work slows down or piles up, understand which handoffs create delays, and then back that with a federated knowledge warehouse, agentic AI becomes an incredible toolset.

In that world, “industry readiness” comes down to leadership. Are a company’s leaders willing to move beyond marketing tools and thin horizontal dashboards, and instead invest in a vertical tech stack—turning data into knowledge, federating that knowledge, putting orchestration and trust frameworks in place, and having honest conversations about where the real ROI is? Any company in any industry that does that work is well-positioned for operational, agent‑driven AI; those that don’t will be stuck adding one more tool to an already noisy pile.

As enterprise AI evolves toward multi-agent and multi-cloud environments, what does good AI architecture look like five years from now, and what principles should leaders commit to today to avoid rebuilding their systems later?

Five years from now, the interesting part of AI won’t be the individual agents or models; it will be the agentic workflows they enable and the business value those workflows deliver. Agents themselves will come and go. The layers beneath them—data, knowledge, orchestration, trust, and action—will continue to evolve, but the need for them is not going away.

That’s why I’m more focused on architecture than on any specific tool. We’re moving from data warehouses to federated knowledge warehouses, from brittle point integrations to open, layered stacks. In that world, you’ll have agents running in different clouds, touching different knowledge sources, and coordinating over well‑defined interfaces—MCP at the knowledge layer, agent‑to‑agent protocols at the orchestration and trust layers. As the technology improves, you want to be able to swap better pieces into those layers without rebuilding the whole thing every time.

So, the principles for leaders are simple. Don’t build monolithically. Design for layers so data, knowledge, orchestration, trust, and action can each evolve independently. Design for flows, not features, so you’re clear which workflows matter and what “good” looks like in customer, employee, and product lifecycles. And design for governance at the agent level: assume zero trust by default, define clear “agent cards,” and use orchestration to decide what is urgent, what is important, and what just needs to get done. If you do that, you can let the tech change—as it always does—without constantly worrying about rebuilding.

Thank you for the great interview. Readers who wish to learn more should visit Calix.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.