Thought Leaders
What AI Tools Miss When They Don't Make Space for Community

Any founder who has built an app knows the emotional rollercoaster of watching user numbers grow while time spent on-app stagnates. When my team and I experienced the same earlier this year with our AI-powered pet care app, we put our heads together and decided to launch some community-based features to entice users to stick around longer.
Not only did we quickly see time spent on-app grow, but observing user activity across the platform soon started to tell an interesting story about the different ways people interact with an AI chatbot versus one another.
Notably, users approached the chatbot with more raw emotion like guilt, fear, or panic (e.g. "Am I failing my dog if I can't train him to walk off-leash?") as they sought expertise and problem solving. Community spaces, on the other hand, were approached with curiosity driven by a desire for community-building and shared experiences (e.g. "Do Chihuahuas really always shiver, or is mine unusual?").
By comparing and analyzing these interactions, psychographic profiles for a few different categories of users emerged. While creating these types of profiles is nothing new (marketers have been doing it for decades), the insights that have come specifically from comparing AI conversations with peer-to-peer conversations have been particularly illuminating.
In fact, what we've uncovered from this work has made it clear that combining AI chatbots with community engagement features can play an integral role in creating a more effective product, a safer user experience, and a product pipeline for future developments.
More Effective Product
As a CEO, the way I ask my product team to do something is obviously very different from the way I ask my children to do something. But it's also different from the way I might ask my wife to do something. Why? Because you've got to know your audience. And the better you know your audience, the more you can meet them with the tone and language they receive and understand best.
For example, some of our users are first-time pet owners who are extremely nervous about screwing up. They are the folks who repeatedly ask the AI chatbot "is that okay?" about their own behaviors to confirm that they're doing the right thing. Like any new parents, they need praise and reassurance that they're doing a good job.
Others, however, are all business. We have a different set of more seasoned users who are seeking practical solutions and optimizations. They don't need reassurance; they're looking for structured answers and checklists. For these users, unnecessary praise or overly emotive responses to their questions would likely turn them off the platform altogether. Look no further than the recent backlash to GPT-5's change in tone as a very real example.
Really, what it's all about is not just keeping users on the platform, but communicating in such a way that they feel empowered to make informed decisions. The same would hold true for any consumer-facing product, whether it's a financial planning app or a health tracker. The beauty of having both AI chatbots and community features is that users reveal more dimensions of themselves, making it possible to design a more emotionally intelligent, multi-dimensional product.
Safer User Experience
When it comes to AI chatbots, a little more emotional intelligence could go a long way. If you're reading this, I doubt I need to catch you up on the headlines regarding instances of chatbots leading vulnerable users down dangerous (even deadly) paths. For this reason it's worth clarifying that meeting users with a tone and language they best understand is not the same as blindly reinforcing their every thought.
In fact, the opposite can (and should) be true: the more deeply you understand someone, the better equipped you are to set necessary parameters and limitations to avoid these types of emergencies. One advantage of using AI chatbots as features within industry-specific tools is that they can behave in a categorically different manner from those designed for general companionship.
Even so, though powerful, AI is not flawless, and mistakes and misinformation are always going to be a risk. This is where community features can serve as a welcome opportunity to seek a second opinion or ask others about similar lived experiences.
For example, though an AI chatbot can tell you that it's safe to feed a Labrador apples, yours simply might not like apples, and talking with a fellow Lab owner who's experienced the same can bring needed understanding and advice. On the other hand, if someone in the community chat responds that they often treat their Labrador to frozen grapes, this risky advice can be fact-checked with an AI trained on a robust set of verified, accurate data.
If recent chatbot horror stories and the last 10 years of social media have taught us anything, it's that finding a balance between the two extremes can serve as a necessary precaution.
Lay Groundwork For Product Pipeline
Any startup selling software can analyze how users behave on-platform in order to see what features are most popular and where improvements can be made. You don't need an AI chatbot or a community feature to do so. However, when both features are available, the resulting datasets give a more detailed look into what users both want and need.
Psychographic profiles, for instance, are not just helpful tools for personalizing the tone of voice of an AI chatbot. They can actually serve as roadmaps for entire user journeys. If we take the two example profiles from above, it's entirely possible to design an app where the timid, first-time pet owner experiences something completely different from the seasoned professional looking for support with an unfamiliar breed.
Additionally, seeing the trends that emerge across different regions can underscore geographically specific content that may benefit certain user groups. More importantly, it also spotlights the types of partnerships that would make sense for the community in the future, giving the business development team a jumping-off point.
AI startups know their products are only as good as their data, but the data created by user behavior is just as valuable for knowing where to go next.
A Fair Warning: Not All Communities Are Created Equally
Today, all of the major social media platforms (Facebook, X, YouTube, Instagram, and so on) are operating with both AI and community engagement features. I am under no illusion that this idea is brand new. However, what we're seeing in those spaces is overwhelmingly AI-generated content shared by users, a phenomenon that has come to be known as "AI slop."
The low quality of this content has turned many users off, creating a real opportunity for AI startups to fill the void by adding community spaces to their platforms. But not all communities are created equally. Social media platforms are notorious for bad behavior, and a recent analysis demonstrates that religious, gender-based, and racial hate speech continues to expand in these online spaces.
Knowing that both AI chatbots and community spaces can be abused by bad actors, it's important to carefully consider what behaviors you aim to encourage among users. In our case, the AI chatbot is limited to answering individual user questions about pet health and behavior. AI isn't used to generate any health-related content.
As for community engagement, users are able to ask questions on forums designed for their particular breed of pet. This limits users to participating in conversations in which they have personal experience and keeps the focus on common ground.Â
By taking similar measures, the AI platforms of the future can deliver real value for both themselves and their users, getting the best of both the AI and social media worlds.