Dr. Yair Adato, CEO and Founder of Bria – Interview Series

Dr. Yair Adato, CEO and founder of Bria, is a machine learning and computer vision expert recognized for his ability to bridge advanced technology with real-world business applications. Before founding Bria, he served as CTO of Trax Retail, where he played a central role in transforming the company from a 20-person startup into a global unicorn with over 850 employees. Throughout his career, Yair has also contributed as an advisor to several AI-driven ventures, including Sparx, Vicomi, Tasq, DataGen, and Anima. His leadership is marked by a strong commitment to responsible innovation, data ownership, and the democratization of AI technology.
Bria is a pioneering company in the field of responsible visual generative AI, founded with the mission to create an open and ethical platform for image generation. The company’s unique approach rewards data owners for their contributions through an attribution engine, ensuring transparency and fairness in the AI ecosystem. By focusing on creativity, collaboration, and compliance, Bria empowers organizations to integrate generative AI safely into their workflows while setting new standards for accountability and trust in the visual content industry.
You founded Bria to create a responsible and open platform for visual generative AI. What inspired you to start the company, and what early challenges or insights shaped its direction?
I saw Goodfellow present the GAN paper in 2014, and it was immediately clear that creative production was going to change fundamentally. Watching that presentation, the implications were obvious—this wasn’t just an incremental improvement, it was a different paradigm for how machines could learn to generate visual content.
But from the beginning, I recognized a fundamental gap in how these systems were being built: no accountability for training data, no framework for responsible deployment, no consideration for the creators whose work made it all possible.
The early challenges weren’t technical—they were structural. How do you build generative AI that enhances creative work without undermining the people who create? How do you make these systems usable in production environments where legal certainty matters as much as output quality? Those questions shaped everything we built. We founded Bria on the principle that innovation and responsibility aren’t opposing forces—they must advance together, or technology fails everyone.
Your academic background in computer vision and your 50+ patents bridge research and real-world innovation. How has that experience influenced Bria’s technical roadmap and long-term strategy?
My research background taught me to think in systems—how different layers of understanding connect to form meaning. Many of my patents focus on how machines interpret the structure of visual information, and that mindset naturally translated into Bria’s approach. We look at image generation as a compositional process, not a random one.
But the patents aren’t just about technology—they’re about bridging technology to business reality. A significant portion of our IP portfolio addresses the systems layer: how do you create attribution frameworks that connect generated content back to its training sources? How do you build economic models that compensate creators at scale? These aren’t purely technical problems—they’re questions of infrastructure, business models, and market design.
That broader view shaped our long-term strategy. Innovation isn’t only about advancing the underlying models. It’s about creating new economic structures, new contractual frameworks, new ways for the industry to operate sustainably. The goal isn’t just to produce better results—it’s to understand how those results are formed, who contributed to them, and how value flows through the system. That’s where science meets product thinking meets business architecture.
Bria just announced FIBO, described as the world’s first deterministic visual foundation model for professional-grade AI generation. What makes FIBO fundamentally different from existing visual AI systems?
The name itself signals our approach: FIBO stands for Fibonacci, the mathematical sequence famous for its inherent aesthetic properties. The golden ratio—the ratio between consecutive numbers in the Fibonacci sequence—emerges in what we perceive as visually pleasing proportions across mathematics, visual art, geometry, and architecture. You see it in the dimensions of the Roman Pantheon and the White House, in the human body and face as illustrated in Leonardo da Vinci’s Vitruvian Man, and throughout natural forms. That connection between mathematical structure and visual beauty is exactly what FIBO embodies: aesthetic quality through formal structure.
FIBO changes the relationship between intent and output. Most visual AI systems insert layers of interpretation between what you want and what you get—you write a prompt, the model translates it through language encoders, diffuses it through noise, and you hope the result matches your vision. FIBO removes those layers entirely.
We made visual AI work like code: every creative element becomes editable and repeatable. That’s a breakthrough for professionals who’ve been stuck with prompt roulette. Every element (lighting direction, camera angle, color palette, composition, style) exists as an explicit, controllable property. The JSON structure allows you to modify only the parameters you want while locking all others. You can adjust lighting intensity without affecting composition, or shift camera angle without altering the color palette. The system does exactly what you specify, every time.
We’re running hackathons with Fal and NVIDIA to show developers how deterministic generation actually works in practice. The JSON structure itself opens the black box—you can see exactly what parameters created an image, reproduce it, and modify it with precision. It’s a completely different paradigm from prompt engineering.
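To make the idea concrete, here is a minimal sketch of what “JSON-native, locked-parameter generation” can look like. This is an illustration only: the field names (`lighting`, `camera`, `color_palette`, and so on) and the `refine` helper are assumptions for the example, not Bria’s actual FIBO schema or API.

```python
import json
from copy import deepcopy

# Hypothetical structured scene description; real FIBO attribute names may differ.
scene = {
    "subject": "ceramic coffee mug on a wooden table",
    "lighting": {"direction": "left", "intensity": 0.7, "temperature": "warm"},
    "camera": {"angle_deg": 30, "focal_length_mm": 50},
    "color_palette": ["#2B2B2B", "#C9A66B", "#F4EDE4"],
    "composition": "rule_of_thirds",
    "style": "product_photography",
}

def refine(scene: dict, updates: dict) -> dict:
    """Return a copy with only the listed properties changed; all others stay locked."""
    revised = deepcopy(scene)
    for key, value in updates.items():
        if isinstance(value, dict) and isinstance(revised.get(key), dict):
            revised[key].update(value)  # merge nested attributes, keep the rest
        else:
            revised[key] = value
    return revised

# Dim the lighting without touching camera, palette, composition, or style.
v2 = refine(scene, {"lighting": {"intensity": 0.4}})
print(json.dumps(v2["lighting"], indent=2))
```

The point of the sketch is the contract, not the fields: because the description is explicit data rather than free text, a change to one property provably leaves every other property untouched, which is what makes results reproducible.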
Traditional text-to-image systems rely on increasingly elaborate prompts to achieve specific results. How does FIBO’s approach solve the prompt complexity problem?
Two problems need to be addressed. First, prompt randomness: current models try to extract the user’s intent and then add what the model “thinks” is aesthetic or desirable via prompt enhancement. Second, a lack of control over professional properties such as lighting, camera angle, and composition.
FIBO inverts this. The model was trained on more than 1,000-word visual descriptions per image that explicitly encode over 100 independent attributes in JSON format. This wasn’t post-processed or extracted—it was the native training format. Because each attribute is represented structurally from the beginning, the model learned visual composition as a set of discrete, controllable parameters rather than as a fuzzy interpretation of text.
What this means in practice: you define aesthetic intent through structure, not through “prompt and pray”. The level of text-to-image alignment is fundamentally higher because there’s no translation layer. You’re speaking the model’s native language. And because properties are independent, you can iterate on lighting without accidentally changing composition, or adjust color palette without affecting style. The control is surgical.
FIBO introduces a “refine” workflow that’s different from typical iterative generation. How does this change how professionals approach visual production?
Most generative workflows are iterative in a frustrating way: you generate, evaluate, adjust your prompt, generate again, and hope it’s closer. We call this “prompt and pray.” You’re never quite sure what changed or why.
Refine turns experimentation into design. You’re not guessing what a new prompt might do; you’re steering the image, exactly the way you’d tune light or color in Photoshop. You don’t need to work at the JSON level directly: a vision-language model modifies the JSON for you based on natural language instructions. But the JSON itself lets you understand exactly what happened. You generate an initial image, examine its JSON representation, identify which properties need adjustment (maybe the lighting intensity is too high, or the camera angle needs to shift 15 degrees), and you modify only those values through simple instructions. Everything else stays locked.
This structure is perfect for agentic workflows. An AI agent can analyze the JSON, understand the complete state of the image, make targeted modifications, and explain its reasoning—all because the parameters are explicit and interpretable. The agent isn’t guessing what a prompt change might do; it’s making precise adjustments to known properties.
This removes the unpredictability that’s kept professionals skeptical of generative AI. When you can see the complete parameter set that created an image, understand what each property controls, and modify individual attributes with confidence that nothing else will drift, you’re no longer experimenting—you’re designing. The JSON visibility opens the black box completely. For professional production workflows where consistency and control matter more than novelty, this is the difference between a creative toy and a production tool.
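One consequence of explicit parameters is that drift becomes detectable: comparing two JSON descriptions shows exactly which attributes changed between iterations. A minimal sketch of such a comparison, using illustrative field names rather than Bria’s actual schema:

```python
def diff_params(a: dict, b: dict, prefix: str = "") -> list:
    """Return dotted paths of properties that differ between two scene descriptions."""
    changes = []
    for key in sorted(set(a) | set(b)):
        path = f"{prefix}{key}"
        va, vb = a.get(key), b.get(key)
        if isinstance(va, dict) and isinstance(vb, dict):
            changes.extend(diff_params(va, vb, path + "."))  # recurse into nested attrs
        elif va != vb:
            changes.append(path)
    return changes

before = {"lighting": {"intensity": 0.7, "direction": "left"}, "camera": {"angle_deg": 30}}
after = {"lighting": {"intensity": 0.4, "direction": "left"}, "camera": {"angle_deg": 30}}
print(diff_params(before, after))  # → ['lighting.intensity']
```

This is the kind of check an agent (or a creative director) can run after every refinement step to confirm that only the intended property moved.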
Data ethics and brand safety have become central to enterprise AI. How does Bria’s use of fully licensed, rights-cleared data ensure both compliance and respect for creators’ IP?
From day one, we decided that if the industry was going to grow responsibly, it had to start with data integrity. Every image that trained FIBO comes from licensed, rights-cleared sources through partnerships with content leaders like Getty Images and Envato. This ensures our models are compliant and fair. We see respect for creators as part of the value chain, not as a constraint. Enterprises benefit from that integrity because it gives them the legal and ethical certainty they need to scale confidently.
FIBO was trained to learn each company’s unique brand style and identity. How does this capability change how global brands approach content creation and visual consistency?
Brands have their own visual DNA—a unique way of expressing emotion, trust, and purpose through design. FIBO can learn that language. Once trained, it generates visuals that reflect the same composition, tone, and atmosphere that define a brand’s identity. This turns AI from a creative assistant into a brand asset. It helps global teams create with alignment, not approximation. The result is consistency at scale without losing individuality.
With early adopters already using FIBO to automate packaging design, product imagery, and creative campaigns, what results or feedback have stood out to you most so far?
The shift in mindset. Teams are starting to treat AI as part of their operational toolkit, not as a novelty. One global brand is generating regional packaging variants much more quickly while maintaining brand consistency. Another leading creative agency has accelerated campaign development tenfold through controlled iteration. But the real signal comes from creative directors who tell us they feel more in control; that the model understands their visual intent. That’s a turning point for the industry.
Bria positions itself as a leader in ethical and controllable AI. How do you see this philosophy shaping future regulations or industry standards for visual AI?
We’ve reached a stage where innovation and governance need to move together. Regulation isn’t an obstacle, but rather the infrastructure for sustainable growth. Our approach — transparent data, deterministic outputs, clear provenance — aligns closely with what emerging policies are asking for. I believe we’ll see new standards that prioritize traceability, explainability, and rights protection. Bria’s philosophy is to help define those standards through practice, not policy statements.
Looking ahead, what’s next for Bria after FIBO? Do you envision expanding into multimodal AI that unites image, video, and 3D generation under one controllable framework?
Yes. The same principles that power FIBO—structure, control, transparency—apply across all visual domains. We’re already exploring extensions into video and 3D, where determinism can bring the same reliability that enterprises now have with images. Our goal is simple: make AI creativity as controllable and safe as writing code—and extend that across every visual medium, from image to video to 3D.
Thank you for the great interview. Readers who wish to learn more should visit Bria.
