Piotr Tomasik, Co-Founder and President of TensorWave – Interview Series

Piotr Tomasik, Co-Founder and President of TensorWave, is a veteran technology entrepreneur and AI infrastructure executive with more than two decades of experience spanning AI, SaaS, cloud computing, fintech, and the creator economy. Before co-founding TensorWave in 2023, he co-founded Influential, an AI-powered influencer marketing platform later acquired by Publicis for approximately $500 million, where he served as CTO before transitioning to an advisory role.
Throughout his career, Tomasik has also founded or led companies including Lets Rolo, On Guard Data, and ActiveSide, while holding senior technology positions at CARD.com and Marker Trax. In addition to his operating roles, he is a General Partner at 1864 Fund and a co-founder of StartUp Vegas, where he actively supports the Las Vegas startup ecosystem and emerging tech talent. A UNLV computer science graduate and recognized tech leader, Tomasik has become known for helping position TensorWave as a fast-growing AI compute infrastructure company focused on large-scale GPU cloud platforms powered by AMD accelerators.
TensorWave is an AI infrastructure company focused on delivering high-performance cloud computing powered by AMD GPUs, positioning itself as an alternative to more closed AI ecosystems. Founded in 2023 and headquartered in Las Vegas, the company builds large-scale GPU clusters optimized for training and deploying advanced AI models, with an emphasis on performance, flexibility, and cost efficiency. By leveraging open hardware and software ecosystems, TensorWave aims to broaden access to powerful AI compute resources for enterprises, researchers, and developers, enabling scalable AI workloads without the constraints of traditional vendor lock-in.
Nvidia dominates most of the GPU market—why did you decide to go all-in on AMD, and what advantages does that choice give TensorWave and its customers?
Following the launch of ChatGPT, demand for AI skyrocketed. GPUs were snapped up fast, and Nvidia was essentially the only option, if you could get the hardware at all and could handle the cost. That shortage sparked huge interest in alternatives. Now that we’re past the initial hype, there’s a real opportunity to challenge Nvidia’s dominance with solutions that are accessible, cost-effective, and easy to use.
As a startup, we’ve always made business decisions with a strong focus and purpose. That’s why we haven’t experimented with Nvidia, and we’ve continued building out our capabilities on AMD. The next phase of our company is about leaning into those focused capabilities so anyone can jump in and do something meaningful with AI. AMD is a credible alternative with real manufacturing scale, an open software posture, and a memory-first roadmap for modern AI.
How does TensorWave’s approach to AI infrastructure differ from traditional GPU cloud providers?
Our differentiation is straightforward: we’re the only AMD-exclusive cloud at scale, setting out to restore choice in AI compute, break Nvidia’s dominance, and democratize access. But it’s also about our ethos and commitment to bringing a true alternative to the market. First and foremost, we want to deliver exceptional AMD-based infrastructure at scale. From there, we’ll expand into top-tier services on top of it: Models-as-a-Service and AI-as-a-Service, making everything simpler.
As an all-AMD cloud, we have software expertise built specifically around AMD from day one. This focus lets us optimize silicon, networking, and software end-to-end, ensuring that teams can scale when they need to.
What role does your strategic partnership with AMD play in TensorWave’s growth and differentiation?
It’s foundational. AMD invested in TensorWave, invited us into the Instinct MI300X launch, and we continue to collaborate tightly on hardware, software enablement, and ecosystem growth. Being an all-AMD cloud means we can move quickly with each Instinct generation and serve as a living lab for at-scale alternatives in our market. That AMD-only focus lets us move at a pace that’s hard to match elsewhere in AI infrastructure, and the partnership lets us close gaps quickly, ship first on new GPUs, and publish real performance numbers at scale.
GPU access remains a major bottleneck for AI teams—how is TensorWave tackling this challenge?
We tackle these bottlenecks first through supply independence: by building on AMD, we avoid the worst of other chip manufacturers’ supply constraints and pass that availability on to customers, so they aren’t stuck waiting in the same queue as everyone else.
Gaps in the AI infrastructure ecosystem exist because so many players are building similar solutions, creating a lot of overlap. That often comes from a lack of awareness about what’s happening across the market. The first step to closing those gaps is understanding who’s doing what, where there are opportunities for collaboration, where competition can drive innovation, and ultimately, how the ecosystem can improve as a whole. One unique gap in the AI infrastructure market is power: even if GPUs are available, there often isn’t enough energy to support the growing number of AI applications. Solving these resource challenges is key to enabling sustainable growth and innovation in the years ahead.
How do features like direct liquid cooling and UEC-ready networking (Ultra Ethernet Consortium) enhance performance and cost efficiency?
Direct liquid cooling and UEC-ready networking are foundational to what makes a modern AI cloud economically viable at scale, and both are central to how we’ve designed TensorWave.
On DLC: the newest accelerator generations, AMD’s MI355X and MI455X, run at thermal envelopes air simply can’t handle efficiently. We’re talking 1400W+ TDPs per GPU. Direct liquid cooling removes heat at the source via cold-plate or immersion designs, which does three things for our customers. First, it enables substantially higher rack density, 120 to 300kW+ per rack instead of 30 to 40kW, which compresses the footprint and cuts real-estate and power-distribution costs per megawatt. Second, it drives PUE toward 1.1, versus 1.4 to 1.5 for legacy air-cooled facilities; at our scale, that translates to tens of millions of dollars in annual utility savings. Third, and often underappreciated, DLC holds silicon at lower, more stable junction temperatures, which preserves sustained clock rates during long training runs and extends the useful life of the hardware. That last point matters enormously when you’re underwriting a six-year asset.
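To make the PUE arithmetic concrete, here is a minimal back-of-the-envelope sketch; the facility size, electricity price, and air-cooled baseline are hypothetical assumptions for illustration, not TensorWave figures.

```python
# Back-of-the-envelope estimate of utility savings from a lower PUE.
# All inputs are hypothetical placeholders, not TensorWave figures.

IT_LOAD_MW = 50        # assumed IT (GPU) load of the facility, in megawatts
PUE_DLC = 1.10         # direct-liquid-cooled facility, per the interview
PUE_AIR = 1.45         # midpoint of the 1.4-1.5 legacy air-cooled range
USD_PER_KWH = 0.07     # assumed industrial electricity price
HOURS_PER_YEAR = 8760

def annual_utility_cost(it_load_mw: float, pue: float) -> float:
    """Total facility energy cost: IT load scaled by PUE, priced per kWh."""
    facility_kw = it_load_mw * 1000 * pue
    return facility_kw * HOURS_PER_YEAR * USD_PER_KWH

savings = annual_utility_cost(IT_LOAD_MW, PUE_AIR) - annual_utility_cost(IT_LOAD_MW, PUE_DLC)
print(f"Estimated annual utility savings: ${savings / 1e6:.1f}M")
# 50 MW * (1.45 - 1.10) * 8760 h * $0.07/kWh comes to roughly $10.7M per year,
# consistent with "tens of millions" at multi-site scale.
```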
On UEC: the Ultra Ethernet Consortium spec, which AMD helped found and which reached 1.0 in 2025, gives us an open, merchant-silicon fabric that meets or exceeds InfiniBand on the metrics that actually matter for distributed training: tail latency on collectives, effective bandwidth under contention, and scaling behavior past the 100,000-GPU threshold. The cost story is structural. Ethernet has half a dozen credible merchant-silicon vendors competing on price, versus a single-source alternative that carries a well-documented premium. For a 100MW site, choosing UEC-ready networking over a proprietary fabric is typically a nine-figure CAPEX decision, and the operational advantages compound because our network engineers already know Ethernet.
Taken together, these choices let us deliver better training economics than legacy clouds. Customers see higher effective FLOPs per dollar, more predictable step times on large jobs, and a clear runway as models scale. For us, they mean a more defensible cost structure and the flexibility to offer genuinely competitive rate cards.
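As a rough illustration of what “effective FLOPs per dollar” means in practice, here is a small sketch; the peak throughput, utilization (MFU), and hourly rates are assumed values, not published TensorWave or vendor numbers.

```python
# "Effective FLOPs per dollar": sustained training throughput per rental dollar.
# Peak TFLOPS, utilization (MFU), and hourly rates are illustrative assumptions.

def effective_flops_per_dollar(peak_tflops: float, mfu: float, usd_per_gpu_hour: float) -> float:
    """Sustained FLOPs actually delivered per dollar of GPU time."""
    sustained = peak_tflops * 1e12 * mfu   # FLOPs per second achieved in practice
    return sustained * 3600 / usd_per_gpu_hour

# Two hypothetical offerings with the same peak math throughput:
a = effective_flops_per_dollar(peak_tflops=1300, mfu=0.45, usd_per_gpu_hour=2.50)
b = effective_flops_per_dollar(peak_tflops=1300, mfu=0.40, usd_per_gpu_hour=4.00)
print(f"offering A: {a:.2e} FLOPs/$, offering B: {b:.2e} FLOPs/$")
```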
Can you share examples of how customers are leveraging TensorWave to train large-scale AI models?
TensorWave customers need high-performance AI compute without GPU scarcity, vendor lock-in, or runaway costs. TensorWave provides an AMD-exclusive cloud that is open, memory-optimized, and production-ready, giving teams scalable AI infrastructure that’s accessible, flexible, and cost-effective.
For example, Modular chose to run its MAX inference stack on TensorWave’s AMD GPU infrastructure because TensorWave delivers significantly better cost-performance economics for large-scale AI inference. By running Modular’s MAX on TensorWave’s AMD compute, they achieve up to 70% lower cost per million tokens and 57% faster throughput than comparable GPU stacks.
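The cost-per-million-tokens comparison reduces to a simple ratio of hourly price to token throughput. The sketch below uses hypothetical prices and throughputs, not Modular’s or TensorWave’s actual numbers, to show how a 57% throughput gain and a lower hourly rate compound.

```python
# Cost per million tokens = hourly price / tokens served per hour, scaled to 1e6.
# The prices and throughputs below are hypothetical placeholders, not Modular's
# or TensorWave's published numbers.

def cost_per_million_tokens(usd_per_hour: float, tokens_per_sec: float) -> float:
    return usd_per_hour / (tokens_per_sec * 3600) * 1e6

baseline = cost_per_million_tokens(usd_per_hour=4.00, tokens_per_sec=2500)
faster = cost_per_million_tokens(usd_per_hour=2.00, tokens_per_sec=2500 * 1.57)  # 57% faster
print(f"baseline: ${baseline:.2f}/Mtok, optimized: ${faster:.2f}/Mtok, "
      f"reduction: {1 - faster / baseline:.0%}")
# With these placeholder inputs the reduction lands near the headline ~70% figure.
```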
With Nvidia’s continued dominance, where do you see the biggest opportunities for challengers like TensorWave?
In an AI compute space dominated by a few major players, the greatest challenges are achieving speed to market, delivering the latest technology, and providing exceptional support. Hyperscalers often offer a wide range of options but struggle to provide the focus or personalized guidance customers need. To break through in such a concentrated market, we focus on our strengths while collaborating to deliver the best technology possible and ensuring customers have a real alternative.
The two biggest opportunities for challengers to Nvidia’s AI infrastructure dominance are open ecosystems and memory. Open ecosystems eliminate lock-in at every layer (hardware, interconnect, and software), and memory capacity, paired with network-optimized training and inference, flips the cost curve.
Looking five years ahead, how do you envision the future of AI infrastructure and TensorWave’s role in it?
For years, the goal in AI infrastructure was to make it good, make it stable, and make it easy to use. The next phase will be about what you can deliver on top of that—managed services, AI-as-a-Service, anything that helps customers deploy and scale more easily.
We’re at the beginning of a major transformation. AI technology keeps advancing, and alternatives like AMD are becoming more and more viable. As that happens, customers will get more comfortable deploying them at scale, and the entire ecosystem will start to open up and grow.
Thank you for the great interview. Anyone wanting to learn more about this innovative AI infrastructure company should visit TensorWave.