Why Emotional AI Coaching Is the Future, Not Just Tracking and Pings

When we started building Simple in 2019, I wanted a health product that guides people the way a good teacher guides a student. The comparison I kept coming back to was Duolingo in its early days. Not because of gamification, but because Duolingo was one of the few digital tools to reliably pull people back into a practice every day. Most health choices aren’t dramatic events. They are tiny daily decisions. If an app can keep someone engaged long enough for those decisions to compound, it’s doing real work. We wanted to build an AI health coach that helps people return, reflect, and try again even when they fail, and we wanted it well before the current AI hype.
Most weight-loss apps are built on a different assumption: AI is treated as an accessory. A “human-like” chatbot sits on top of a tracker. Most of the time, there’s some Q&A module that answers what users ask. Plus, there are motivational pings to encourage people to come back when they drift. These elements are perfectly fine, but they don’t get to the root of why people struggle with adherence. Most people don’t fail because they lack information; they fail because staying consistent requires emotional reinforcement, accountability, and a sense of partnership. An application designed to ping and send reminders can’t hold the user through the long plateau phases where real behavior change happens. It turns out AI can, when done right.
Why traditional weight-loss apps don’t work as well as we’d like
When we began researching adherence patterns, one thing became obvious. People drop out when they feel alone with a difficult goal. Tracking calories or fasting windows is useful only as long as the user feels supported in the moments that feel chaotic or discouraging. Most apps don’t respond to those moments; they just log user data and offer general advice. The result is tools that never meet the user at the emotional level where quitting becomes an option.
Then, there’s good old decision fatigue. Health choices are repetitive and easy to rationalize away. Without a system that helps people regulate their emotions, interpret setbacks, and maintain momentum, the tracking becomes a mirror of failure rather than progress. When someone logs three days of overeating, they don’t want to see it in a neat dashboard. They want understanding, perspective, and a next step they can actually take.
This is where AI agents are starting to make a measurable difference. When designed as continuous companions rather than utilities, they help users process the meaning of their data. They explain patterns with empathy. They celebrate tiny improvements and offer coping strategies in the moment. A coaching-centered AI becomes a buffer between the user and their own discouragement. That emotional layer is missing from most existing products, but it’s exactly what determines whether a habit survives long enough to become automatic.
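To make the difference in shape concrete, here’s a minimal sketch of that three-days-of-overeating moment. This is illustrative Python with made-up names, not our production system: the point is simply that the coach reads the emotional state first and only then decides what to do with the data.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Mood(Enum):
    DISCOURAGED = auto()
    PROUD = auto()
    NEUTRAL = auto()

@dataclass
class CheckIn:
    days_overate_this_week: int
    message: str            # free-text note from the user
    small_win: str | None   # e.g. "walked after dinner"

def classify_mood(check_in: CheckIn) -> Mood:
    # Toy keyword heuristic standing in for a real sentiment/intent model.
    text = check_in.message.lower()
    if any(w in text for w in ("gave up", "failed", "pointless")):
        return Mood.DISCOURAGED
    if check_in.small_win:
        return Mood.PROUD
    return Mood.NEUTRAL

def coach_reply(check_in: CheckIn) -> str:
    # The emotional read comes first; the numbers come second.
    mood = classify_mood(check_in)
    if mood is Mood.DISCOURAGED or check_in.days_overate_this_week >= 3:
        # Perspective plus one doable step, not a dashboard of red numbers.
        return ("A few rough days is information, not a verdict. "
                "One small step for tonight: a ten-minute walk after dinner.")
    if mood is Mood.PROUD and check_in.small_win:
        return f"Noticed it: {check_in.small_win}. That's the habit forming."
    return "Logged. What felt hardest about today?"
```

A pure tracker stops at the first branch a dashboard can render; a coach never does.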
Focusing on an emotionally intelligent AI coach
The strongest lever for creating adherence at scale is building a relationship. That’s the part most products overlook. They try to change behavior through logic or structure, but only emotion sustains change. When you feel understood, you stick around. When you feel guided, you try again. And here’s the trick: if you want a functional AI coach, every interaction needs to feel relational, not mechanical. If that works out, consistency stops being a chore and starts being a conversation the user wants to return to. In fact, we see ChatGPT swinging back and forth along this relational-to-mechanical spectrum with every new version, with users reacting accordingly.
So for us, each interaction needed to have a purpose. Check-ins aren’t just data collection. They help the AI understand the user’s emotional state and context. Prompts respond to individual patterns. The coaching voice adapts to the user’s tone, preferences, and vulnerabilities. Over time, people begin to treat the AI like a health companion rather than a tool. Many users describe the coach as something between a therapist and a trainer. That wasn’t an accident. It was the result of designing for emotional connection rather than functionality alone.
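As a rough illustration of what “adapting the coaching voice” means in practice (the structure and names here are hypothetical, not our actual schema), a learned profile and the current emotional state shape the instructions the model receives on every turn:

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # Learned over many check-ins, not set once at onboarding.
    preferred_tone: str = "warm"                 # e.g. "warm", "direct", "playful"
    sensitive_topics: list[str] = field(default_factory=list)  # e.g. ["weigh-ins"]
    responds_well_to: str = "small concrete steps"

def build_coach_prompt(profile: UserProfile, emotional_state: str) -> str:
    """Assemble the instructions that shape the coaching voice for this turn."""
    caution = ""
    if profile.sensitive_topics:
        caution = f" Do not lead with: {', '.join(profile.sensitive_topics)}."
    return (
        f"You are a health coach. Tone: {profile.preferred_tone}. "
        f"The user currently feels {emotional_state}. "
        f"They respond best to {profile.responds_well_to}.{caution}"
    )

# The same coach sounds different for different people.
print(build_coach_prompt(
    UserProfile(preferred_tone="direct", sensitive_topics=["weigh-ins"]),
    emotional_state="frustrated after a setback",
))
```

The design choice that matters is that the profile is continuously updated from check-ins rather than fixed at signup, which is what lets the relationship deepen over time.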
Redirecting toward a coaching-first model
At one point, Simple was growing fast as a tracking product. At the same time, I couldn’t shake the belief that tracking alone was never going to create the kind of impact we wanted. We made a difficult decision to redirect resources toward the coaching model before we had metrics to support the shift. It felt risky, but staying on the old path felt riskier. The moment we committed to that direction, the product began to change. We rebuilt the interaction model, rewrote the user journey, and expanded the behavioral science behind the coaching. It wasn’t a fast transition, but it was the right one. The shift toward emotional AI has driven better retention, stronger outcomes, and a clearer product identity.
Once our AI coach began forming relationships instead of spitting out instructions, users started staying longer. They opened the app even on days when they didn’t want to think about their weight, shared more details about their actual habits, and checked in after setbacks instead of churning altogether. The coaching became a grounding point rather than an obligation.
This reinforced something we suspected early on: that sustainable weight change is not a process of intensity but of building emotional resilience, and emotional bonding with the AI creates the perfect conditions for it.
How neurodivergent thinking prompted us to target emotions
As much as I’d love to say our product philosophy comes from thorough research and innovative thinking only, it relies a lot on how my own brain works. I have ADHD and a strong tendency toward hypervigilance. This pulls me into spirals, makes me second-guess everything, and causes me to jump between ideas way too quickly. Naturally, I’ve spent a fair share of my life trying to redirect these habits to something constructive.
Hypervigilance turned out to be excellent for risk modeling, for instance. It’s useful when you need to see edge cases before they happen, especially when your product is an AI system interacting with millions of people. Neurodivergent thinking naturally explores the unusual scenario, the user who behaves outside the norm, the emotional reaction you don’t expect. That became an advantage in building a coach that had to be emotionally intelligent above all else. We didn’t need an AI that just understands the “average user”; it had to understand people who were overwhelmed, scattered, inconsistent, avoidant, ashamed, or stressed, because they are the ones most in need of support.
A brain that never stops scanning for what might go wrong is also pretty good at seeing how people might feel misunderstood. That helped shape how our AI agent responds to users’ confusion, frustration, or doubt. It also influenced our approach to safety. Building an AI that gives advice about health means you have to anticipate failure modes. You have to understand how someone might interpret a message in a moment of stress. Neurodivergent thinking made our team more sensitive to tone, pacing, and emotional nuance. It pushed us to add guardrails that weren’t obvious but became crucial in real-world use.
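To give a flavor of what those guardrails look like (a simplified sketch with invented patterns, nowhere near a complete safety stack), every draft reply gets screened for how it might land on someone who is struggling, and distress signals reroute the conversation away from coaching entirely:

```python
import re

# Invented patterns for illustration; a real system uses trained classifiers,
# clinical review, and escalation paths to humans.
DISTRESS_PATTERNS = [r"\bhate myself\b", r"\bcan'?t do this anymore\b"]
SHAMING_PHRASES = ["you should have", "why didn't you", "you failed"]

def user_in_distress(message: str) -> bool:
    lowered = message.lower()
    return any(re.search(p, lowered) for p in DISTRESS_PATTERNS)

def safe_to_send(draft_reply: str) -> bool:
    # Block drafts that could read as blame in a moment of stress.
    lowered = draft_reply.lower()
    return not any(phrase in lowered for phrase in SHAMING_PHRASES)

def respond(user_message: str, draft_reply: str) -> str:
    if user_in_distress(user_message):
        # Drop the coaching agenda; acknowledge first.
        return ("That sounds really heavy. Let's set the plan aside for today. "
                "Do you want to talk about what's going on?")
    if not safe_to_send(draft_reply):
        return ("Setbacks are part of this. Want to pick one small thing "
                "to try tomorrow?")
    return draft_reply
```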
Why AI needs human modeling, not just human oversight
There’s a lot of discussion about keeping humans in the loop when deploying AI in general, and AI in health contexts in particular. That’s important, but there’s another dimension technical thinkers tend to forget. Effective AI coaching doesn’t just need oversight; it needs modeling. Whatever coach you’re building, it has to behave in ways humans intuitively recognize as caring, consistent, and trustworthy. The emotional cues matter as much as the informational ones.
Modeling human-like patterns doesn’t mean pretending the AI is a person. It means giving the user a familiar rhythm. Good coaches pay attention, adjust their tone, pick up on discouragement. They offer structure when someone feels chaotic. These are very predictable human behaviors. We trained the AI to adopt those patterns because they make adherence easier. When people feel emotionally regulated, they make better choices and stick to them longer. That’s the human factor we cared about.
The future of coaching with AI
The most important thing I’ve learned along the way is that people don’t need louder reminders or more data. They need a relationship with a system that understands how difficult change is. Artificial intelligence is now capable of supporting people in that way, at least if we design it with emotional nuance. As models get better at reading emotion, context, and behavioral patterns, they will, hopefully, stop functioning like fancy chatbots. My prediction is that emotional intelligence, not the size of the model, will be the real differentiator, and it is already becoming one.
As our product continues to grow, the vision stays the same: health change is a practice, and practice requires a partner. Our goal is to build the most emotionally intelligent health coach in the world. If people feel understood, they come back. If they come back, they change. And if they change, the product is doing what it was built to do. And not to brag, but we’re a $160M ARR company now, which I take as proof that emotional AI coaching can scale.