Cam Myers, CEO and Founder of CreateMe – Interview Series

Cam Myers, CEO and founder of CreateMe, launched the company in 2019 with a vision to modernize apparel manufacturing through advanced automation. Based in the San Francisco Bay Area, he brings a diverse background spanning investment advisory at ADM Investment Partnership, early-stage leadership on the founding team of Group Commerce, and business development roles at Downtown Music Holdings and Publicis Groupe. He is also a member of the World Economic Forum’s Global Innovators Community, reflecting his broader commitment to technology-driven industrial transformation.

CreateMe is an AI robotics company reinventing how clothing is made by replacing traditional sewing with automated, adhesive-based assembly powered by robotics, computer vision, and machine learning. Its proprietary manufacturing platform enables faster, more localized, and more sustainable apparel production, reducing waste and shortening supply chains while positioning the company at the forefront of next-generation soft goods manufacturing.

Before founding CreateMe, you were part of founding teams, worked in investment and advisory roles, and held positions at companies like DoubleClick and Group Commerce. How did that mix of technology, finance, and operating experience shape your decision to start CreateMe and tackle something as complex as automated apparel manufacturing?

Before CreateMe, I came up as a technology generalist, working across software, ecommerce, investing, and early-stage operating roles. Being part of startup teams, including Group Commerce, was an on-the-job MBA. You’re forced to think across disciplines and see how technology, economics, and operations actually interact under real constraints.

That perspective led me to a different conclusion about apparel. Through ecommerce startups, I kept seeing the same failures repeat: low sell-through, heavy discounting, and large volumes of inventory ultimately written off or sent to landfill. Most people framed those as merchandising or forecasting problems. Looking at it through a technology lens, it was clear they were symptoms of something deeper—manufacturing systems that couldn’t respond to real demand.

The insight came from connecting those dots across disciplines. Apparel wasn’t broken because any single part of the system was poorly run. We realized this wasn’t something you could tune or optimize; it required a clean-slate, first-principles rethink of materials, machines, and software as one system.

CreateMe came out of that conviction. This was fundamentally a technology problem, and it needed a technology solution. Being interdisciplinary is what made that visible in the first place, and it’s why CreateMe’s approach looks different. We set out to treat apparel manufacturing as a systems and automation challenge, and build a platform capable of changing how the industry actually works.

CreateMe now holds a significant portfolio of patents across robotics, materials science, and automation. What were the earliest technical insights that convinced you this problem was solvable with Physical AI?

When we founded CreateMe in 2019, we believed there was finally a credible path to automating apparel manufacturing, but only if the process itself was rethought. Fabric is a deformable, state-dependent material. It stretches, shifts, and changes behavior as it’s handled. Small variations compound quickly. Under those conditions, open-loop control and preprogrammed motion break down. The problem wasn’t robot precision. It was understanding material state well enough to act on it.

Our first real progress came from changing the assembly model. By replacing continuous stitching with adhesive bonding, we could assemble garments in a static, fixtured state rather than while the fabric was in motion. That removed a major source of variability and allowed alignment and joining to be controlled directly. Combined with traditional machine vision, ML-based computer vision, rules-based logic, and robotics, this made reliable automation possible for a defined set of operations. It proved something important early on: deformable materials could be handled mechanically if the process was structured correctly.

Those early systems also made the limits clear. Traditional rules-based machine vision works well when geometry is simple and conditions are tightly constrained. It doesn't scale to the hardest problems in apparel, especially complex three-dimensional joining where fabric shape, orientation, and contact evolve continuously in space. End-to-end automation of those operations simply wasn't achievable with the perception and modeling tools available at the time.

That’s where Physical AI has begun to change the equation. Advances in perception, sensing, and embodied intelligence now make it possible to understand deformable materials in three dimensions and close the loop between seeing, deciding, and acting. We’re still in the early innings of applying these models to physical assembly, but even early implementations are already expanding the range of garments, fabrics, and complex 3D joining operations that can be automated. Instead of scripting behavior, the system can increasingly reason about material state, adapt in real time, and execute joining operations end to end. Each bonded operation generates data about how a textile responds to force, heat, and geometry, which allows performance to improve and generalize through use.
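The see-decide-act loop described above can be illustrated with a minimal sketch. This is not CreateMe's actual control software; all names and numbers here are hypothetical, and the "perception" step is a stand-in for a real vision system. The point is the structure: sense the material's current state, compare it to the target, and correct, repeating until the error is within tolerance.

```python
# Minimal closed-loop alignment sketch (illustrative only; names and values
# are hypothetical, not CreateMe's API). Each cycle: sense the fabric's
# state, decide whether it is close enough, and act with a proportional
# correction. Open-loop control would instead execute one preplanned move
# and never check the result.

def sense_offset(position: float, target: float) -> float:
    """Simulated perception: signed misalignment between fabric and target."""
    return target - position

def close_the_loop(position: float, target: float,
                   gain: float = 0.5, tol: float = 0.01,
                   max_cycles: int = 50) -> float:
    """See -> decide -> act, repeated until the offset is within tolerance."""
    for _ in range(max_cycles):
        offset = sense_offset(position, target)   # see
        if abs(offset) < tol:                     # decide: good enough?
            break
        position += gain * offset                 # act: proportional move
    return position

final = close_the_loop(position=0.0, target=10.0)
print(round(final, 3))  # converges near the 10.0 target
```

Because the loop re-measures after every move, small disturbances or material drift get corrected on the next cycle instead of compounding, which is the core advantage over preprogrammed motion.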

In short, our early tools proved feasibility. Physical AI is what’s unlocking completeness and scale. That progression, from scripted automation to end-to-end intelligent assembly, is what convinced us this problem was not only solvable, but extensible across garments and materials. The breadth of our patent portfolio reflects that path. Solving deformable-material assembly required invention across robotics, materials science, and automation, with Physical AI opening up the most complex forms of joining.

Apparel manufacturing has long resisted full automation due to the complexity of soft goods. What breakthroughs allowed CreateMe to finally cross that threshold?

For CreateMe, crossing the automation threshold has been driven by two related shifts: how garments are physically assembled, and how machines perceive and act on fabric during that assembly.

The first breakthrough was architectural. By moving from stitching to adhesive bonding, we eliminated the need to access both sides of the fabric during assembly. Garments can be built using single-side access, in a static, fixtured state, rather than being folded, flipped, and tensioned through a sewing machine. That significantly reduced manipulation complexity and removed a major source of variability. With fabric supported and accessible from one side, alignment and joining became controllable problems, and traditional machine vision and robotics could reliably automate a meaningful portion of garment construction.

From first principles, this is fundamentally more automatable than robotic sewing. Sewing attempts to replicate human dexterity in continuous motion while fabric is actively deforming. Adhesive-based assembly reframes the problem around controlled positioning and discrete joins, which is far better suited to robotics.

That approach also clarified the remaining challenge. As we moved into more complex three-dimensional joining—where surfaces meet at changing angles and material behavior shifts as contact is made—rules-based and traditional machine-vision approaches reached their limits. End-to-end automation across the full variability of garments and textiles required more adaptive perception and control.

That’s where Physical AI plays a critical role. Advances in perception, sensing, and embodied control make it possible to interpret fabric geometry and material state in three dimensions and respond in real time during assembly. At CreateMe, even early applications of these capabilities are expanding the range of garments, fabrics, and complex 3D joining operations that can be automated with minimal intervention.

In short, process redesign—bonding, single-side access, and static assembly—made automation feasible. Physical AI is what enables that automation to move toward end-to-end operation and scale across real-world variability, allowing apparel manufacturing to move beyond narrow automation and toward systems that improve as complexity increases.

MeRA™ introduces a modular, robotic assembly approach to garment production. How does this system fundamentally differ from traditional factory automation?

MeRA™ fundamentally differs from traditional factory automation because it was designed around the specific constraints of apparel manufacturing, rather than adapted from industries built on rigid parts and stable processes.

Conventional automation assumes fixed geometry, predictable materials, and limited variability. Changeover is managed through tooling-intensive, mechanically constrained setups and process-specific fixturing. That model works when products rarely change. It breaks down in apparel, where materials are deformable, styles turn quickly, and production must run at high velocity to be economically viable.

MeRA™ starts from the opposite assumptions. Apparel requires a system that can handle soft materials, constant variation, and frequent changeover without stopping production. To do that, MeRA™ uses a modular, software-defined assembly architecture. Each module performs a discrete operation and can be reconfigured, duplicated, or redeployed as products, fabrics, or volumes change. Changeover happens digitally, in software, rather than through physical retooling.
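The idea of digital changeover can be made concrete with a small sketch. This is an assumption-laden illustration, not MeRA's actual configuration schema: the module names, parameters, and recipe structure are invented. What it shows is the principle that a garment style is just data, so switching products means loading a new recipe rather than physically retooling the line.

```python
# Hedged sketch of "changeover in software": a garment style is a recipe of
# module operations and parameters. Switching from one product to another
# means dispatching a different recipe to the same hardware modules.
# All names here are illustrative, not MeRA's real schema.

from dataclasses import dataclass

@dataclass
class Operation:
    module: str   # which assembly module runs this step
    params: dict  # e.g. adhesive path id, bond temperature, dwell time

def run_recipe(recipe: list[Operation]) -> list[str]:
    """Dispatch each step to its module; here we simply log the dispatch."""
    return [f"{op.module}: {sorted(op.params)}" for op in recipe]

tee_shirt = [
    Operation("dispense", {"path_id": "tee_side_seam", "bead_mm": 1.2}),
    Operation("bond",     {"temp_c": 120, "dwell_s": 3.0}),
]
hoodie = [
    Operation("dispense", {"path_id": "hoodie_hood_seam", "bead_mm": 1.5}),
    Operation("bond",     {"temp_c": 130, "dwell_s": 4.5}),
    Operation("bond",     {"temp_c": 110, "dwell_s": 2.0}),
]

# Changeover = swapping the recipe object; the modules themselves are unchanged.
print(len(run_recipe(tee_shirt)), len(run_recipe(hoodie)))  # 2 3
```

Because each module only consumes parameters, adding a new style is a data change, which is what lets volumes and product mixes shift without stopping production.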

Architecturally, MeRA™ is designed to maximize both speed and control. Assembly is kept in two dimensions as long as possible, where vision, alignment, and motion are fastest and most precise, before transitioning into tightly managed three-dimensional operations only when forming or joining requires it. Traditional automation pushes parts through fixed 3D work cells; MeRA™ minimizes 3D complexity by design to preserve throughput.

Paired with digital adhesive bonding, MeRA™ replaces mechanically constrained joining with a programmable, single-sided operation. There’s no need to flip garments, manage continuous tension, or access both sides mid-process. That reduces cycle time, lowers error rates, and enables rapid digital changeover across garments and textiles.

In short, traditional automation hard-codes process into hardware. MeRA™ defines processes in software and adapts them to the material. That shift—from physical retooling to digital changeover, and from fixed workflows to modular assembly—is what allows MeRA™ to operate at the speed and variability that apparel demands.

Pixel™ replaces stitching with micro-adhesive bonding. Beyond speed and efficiency, what new design or performance possibilities does this unlock for apparel brands?

Pixel™ redefines apparel construction at the seam. By replacing stitching with digitally controlled micro-adhesive bonding, brands gain far greater precision and consistency, resulting in garments that are smoother, stronger, and more comfortable in wear. Because the process is software-defined, seams become a design surface rather than a constraint, allowing stretch, moisture management, thermal regulation, and lightweight reinforcement to be engineered directly into the garment structure.

Those benefits extend beyond how a garment performs on the body. The same digital control that enables performance also allows apparel to be designed for end-of-life from the start. With our Thermo(re)set™ adhesive formulation, bonds can be reversed, enabling automated disassembly and large-scale textile recycling. For brands, Pixel™ makes design, performance, and circularity integrated outcomes of construction itself, not competing priorities layered on after the fact.

There’s a lot of hype around Physical AI right now. From your perspective, where does Physical AI actually work today, and where does reality still lag behind expectations?

Physical AI works today when problems are structured for intelligence rather than brute force. We’re seeing real progress in environments where perception, learning, and control are deployed together inside engineered systems—places where tasks are repeatable but still require adaptation, and where the machine can actually observe and reason about what matters.

Where expectations still run ahead of reality is around general-purpose embodied intelligence. Soft, deformable materials remain one of the hardest problems in robotics because they introduce partial observability, nonlinear behavior, and constant variation. Physical AI isn’t a drop-in replacement for human dexterity, and it doesn’t succeed in chaotic or legacy environments by default.

In practice, the difference comes down to design. Physical AI works when the physical process has been deliberately rethought to reduce uncertainty—when access is simplified, states are observable, and variability is managed by architecture rather than ignored. In those conditions, learning systems can adapt and improve. Without that, AI is often just compensating for poor physical design.

That’s the lens we apply at CreateMe. We don’t treat Physical AI as a shortcut around manufacturing complexity. We treat it as a scaling layer that only works once the underlying assembly system has been redesigned from first principles. The lesson we’ve learned is simple: Physical AI scales when the physical world has been engineered to let intelligence do real work.

With tariffs, geopolitical risk, and supply-chain fragility becoming structural issues, how do technologies like MeRA™ change the economics of bringing manufacturing back to the U.S.?

For a long time, offshoring made economic sense on a narrow labor-cost basis, and it still does for certain products and volumes. The challenge is that the model also comes with structural downsides: long lead times, poor supply–demand matching, excess inventory, and growing exposure to tariffs, geopolitical risk, and logistics disruption. Those costs were often hidden or tolerated until recent shocks forced a closer look.

Technologies like MeRA™ change the economics by making a different operating model viable in the U.S. MeRA™ reduces dependence on manual labor and replaces it with high-throughput, automated production that can run in a compact, reconfigurable footprint. That matters domestically, where labor is expensive and flexibility is more valuable than sheer scale.

Just as importantly, MeRA™ shifts apparel production away from dexterity-based sewing toward static, bonded assembly. That removes reliance on scarce, highly trained sewing labor and replaces it with roles that are faster to train and easier to scale in the U.S. This turns labor from a structural bottleneck into a manageable input, which is critical for any realistic reshoring strategy.

The key shift isn’t about bringing everything back. In practice, even a modest layer of near-market production—often 5–10% of volume—can materially change the economics of the entire supply chain. That flexible capacity allows brands to respond to real demand, chase winners, and avoid overproducing months in advance. MeRA™ makes that layer economically viable by supporting fast digital changeover, smaller batch sizes, and consistent output without depending on specialized labor pools.

In that context, reshoring stops being a binary or political decision. Technologies like MeRA™ turn it into a portfolio choice. Offshore manufacturing still plays a role for scale and cost efficiency, but automated, near-market capacity becomes a strategic lever for speed, resilience, and capital efficiency. The result is a more balanced supply chain, where even limited U.S. production can significantly reduce risk and improve overall economics.

How should apparel brands think differently about product design when manufacturing constraints are no longer the same as they were in traditional cut-and-sew environments?

Traditional apparel design reflects the prevailing logic of cut-and-sew manufacturing: two-sided access, needle penetration, seam allowances sized for human hands, and construction methods optimized for manual repeatability. These are not inherent requirements of garments; they are artifacts of how garments have been made.

Automated, bonded assembly introduces a different design logic. Designing for automation means assuming single-sided access, digitally controlled adhesive deposition, and highly repeatable execution. That enables smaller internal seam tolerances, more precise glue lines, and lower-profile assemblies that are both structurally sound and aesthetically cleaner than stitched equivalents.

Because adhesive is dispensed rather than stitched, designers can work confidently with complex and irregular edges, fluid geometries, and fabric conversions or laminations that would be difficult or impossible to reproduce with sewing. Visual complexity no longer has to be supported by physical bulk. The result is a more minimalist, refined construction language that is native to automation rather than adapted from handwork.

This approach also expands material freedom. Unlike seam tape, which is typically high-temperature and largely limited to synthetics, dispensed adhesive allows automation across a wide range of fabrics, including organics and delicate materials such as cashmere, silk, wool, and leather. Material selection shifts from “what can be sewn reliably” to “what best serves the product.”

In this context, designing for automation is not restrictive; it is generative. Creative intent, aesthetic expression, and manufacturing logic are aligned from the outset. Design becomes both more precise and more expressive, with automation handling consistency and execution while designers focus on form, function, and differentiation.

What does the human role look like inside a highly automated apparel factory, and what new skills become critical as robotics takes over repetitive tasks?

In a highly automated apparel factory, the human role shifts from repetitive manual execution to operating, supervising, and improving automated assembly systems end to end. Instead of long sewing lines, smaller teams are organized around robotic cells, with manufacturing technicians, cell supervisors, and process specialists responsible for performance, quality, and uptime across the entire production flow.

Manufacturing technicians work hands-on with robotics, vision systems, and adhesive-based bonding equipment. They monitor robotic cells, tune dispense paths and bonding parameters, manage material interactions across different fabrics, and intervene when variability or edge cases arise. Quality assurance is continuous rather than sampled: vision systems inspect placement, alignment, and bond consistency in real time, while humans oversee thresholds, interpret anomalies, and decide when and how to adjust the process.
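The continuous, per-unit inspection model described above can be sketched in a few lines. The metric names and limits below are hypothetical, chosen only to illustrate the pattern: vision measures every unit, software checks each measurement against a tolerance window, and anything out of spec is flagged for a human to interpret rather than caught by downstream sampling.

```python
# Illustrative per-unit QA sketch (metric names and limits are invented).
# Every unit is inspected; out-of-spec or missing measurements are flagged
# for human review instead of relying on sampled downstream inspection.

def inspect(unit: dict, limits: dict) -> list[str]:
    """Return the metrics on this unit that fall outside their (lo, hi) window."""
    flags = []
    for metric, (lo, hi) in limits.items():
        value = unit.get(metric)
        if value is None or not (lo <= value <= hi):
            flags.append(metric)
    return flags

# Hypothetical tolerance windows a process specialist might own and refine.
limits = {"alignment_mm": (0.0, 0.5), "bond_width_mm": (1.0, 2.0)}

units = [
    {"alignment_mm": 0.2, "bond_width_mm": 1.4},  # in spec
    {"alignment_mm": 0.7, "bond_width_mm": 1.6},  # misaligned -> human review
]

for i, unit in enumerate(units):
    flags = inspect(unit, limits)
    print(i, "pass" if not flags else f"review: {flags}")
```

In this division of labor the thresholds themselves stay under human control: technicians tighten or relax the windows as they learn how a fabric behaves, while the system applies them uniformly to every unit.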

This model delivers materially higher quality and repeatability than manual production. Automated deposition and placement reduce variability, while digital QA enables consistent execution across every unit rather than reliance on downstream inspection. Human judgment is applied where it adds the most value—evaluating exceptions, refining tolerances, and improving system performance over time.

Realizing this requires a deliberate training and upskilling model embedded directly into manufacturing operations. Workers are trained to read production dashboards, interpret vision and sensor data, understand bond quality metrics, and safely collaborate with robotic systems. They learn how adhesive behavior, material properties, and process parameters interact, and how those variables show up in QA data.

Over time, upskilling progresses from basic system operation to deeper process ownership. Through structured on-the-job training, certification-style modules, and mentorship, technicians develop skills in root-cause analysis, preventive maintenance, and continuous improvement. The result is a technically fluent workforce capable of sustaining high-quality, repeatable production at scale—one where automation elevates both product consistency and human capability rather than replacing it.

Looking ahead five to ten years, how do you see Physical AI reshaping not just apparel, but manufacturing more broadly—and where do you want CreateMe to have the biggest impact?

Our view is that the biggest opportunity for Physical AI in manufacturing over the next five to ten years lies in tasks with the highest variability and complexity, not in areas already well served by rigid automation. The hardest problems are those where materials are soft, flexible, or three-dimensional, and where real-world variability has historically limited automation.

That challenge is most acute in soft material assembly. Apparel is the clearest example, but the same dynamics exist in consumer electronics with flexible components, in medical products, in furniture, and in automotive interiors. Across these categories, sewing and soft-goods assembly account for the highest labor content and remain the least automated parts of the manufacturing process.

From our perspective, early progress in Physical AI will be driven by highly verticalized systems. Mechanical design and robotic form factors will be tuned to specific applications and materials, rather than generalized embodiments. What scales across these verticals is not the hardware, but the intelligence: the perception, control, and learning systems that enable machines to understand deformable materials, align complex edges, adapt to variability, and execute bonded assembly reliably.

Over the next 10 years and beyond, we believe more general and humanoid embodiments will become increasingly prevalent as embodied intelligence matures and deployment accelerates. As humanoid robots move from pilots to millions, and potentially tens of millions, of deployed units over the next decade, textile-based exo-skins and soft outer layers will become critical human-machine interface systems. Meeting that demand at scale will require adhesive-based, automation-native assembly, opening a new industrial category in intelligent soft-material fabrication.

This is the context in which CreateMe’s vision sits.

CreateMe’s vision is to lead the transformation of soft-material assembly, making the automated assembly of textiles and flexible materials as programmable, scalable, and adaptive as software. While mechanical and robotic implementations will vary by vertical in the near term, the core challenge remains consistent: soft-material handling and sewing dominate labor content and resist traditional automation.

What unifies these markets is a shared Physical AI capability set—the systems that govern perception, deformable material manipulation, edge alignment, bonding logic, and adaptive assembly across fabrics and form factors. By proving this capability in apparel, one of the most demanding manufacturing environments, CreateMe aims to unlock automation across a far broader set of industries and enable both the next generation of soft-goods manufacturing and the soft interfaces that will increasingly surround intelligent machines.

Thank you for the great interview and your detailed responses. Readers who wish to learn more should visit CreateMe.
