If a Bot Can Flirt With Kids, What Else Is It Allowed To Do With Your Data?

When leaked internal guidelines revealed that Meta was allowing its AI chatbots to flirt with children, most people treated it as a scandal and moved on. But it’s worth taking a closer look at what the investigation tells us about the current state of AI ethics: If a company like Meta is condoning such policies at its scale, what else are these platforms quietly allowing? And how much of it involves your data?
Business leaders tend to evaluate AI tools by what they can do, how fast, and at what cost. But there are harder questions worth asking, especially as AI tools are fast becoming table stakes: What terms are you agreeing to when your teams start using AI tools? What are the model providers and agent builders doing with your data? And when something goes wrong, who assumes responsibility?
Most organizations are so wrapped up in figuring out how to wring the most money out of this new tech, they haven’t yet gotten around to considering the most important question:
What is actually happening to your data?
Most people either wildly overestimate the risk of sharing something with a chatbot or dismiss the matter entirely. The fact is, large language models are, in a sense, frozen once they're trained and released to the public. Your conversations are logged separately, not instantly folded back into the model itself; what you told ChatGPT this morning isn't immediately informing what the model tells someone else by the afternoon.
That doesn’t mean your data isn’t being used. It is. The path is simply more complicated.
Those conversation logs don't just sit there, though: many AI labs explicitly reserve the right to use them to train the next version of their model. It's right there in the terms of service. What goes in as a customer support query or a strategy brainstorm today can, over time, influence a model that millions will use tomorrow.
The risk to proprietary data goes beyond policy. In 2025, Scale AI inadvertently exposed thousands of pages of confidential project materials from clients, including Meta, Google, and xAI. Separately, a November breach of an OpenAI vendor let hackers make off with customer data, including names, emails, and system details.
To be clear, this isn’t a five-alarm situation, but it’s not free of risk either. Enterprise-grade systems come with contractual guardrails around data reuse. Consumer tools largely don’t. If your data is so sensitive that you’d want an NDA to protect it, you shouldn’t hand it to a consumer chatbot and assume it won’t be used elsewhere.
The numbers suggest most organizations haven’t absorbed this yet. Nearly eight in ten employees have pasted company information into AI tools, and of those, more than four in five did so using their personal accounts, according to a 2025 workforce survey. One in five organizations has already reported a breach tied to shadow AI usage, and only 37 percent have policies in place to detect or manage it, per IBM’s 2025 Cost of a Data Breach Report.
Once understood, this kind of data risk isn’t hard to work around. Differentiate between consumer and enterprise tools, know what you’re signing, and you’ll have covered most of your bases.
Where AI-mediated communication fails businesses
What happens to your data is one piece of the picture. The other, and for many businesses the more consequential one, is what these systems do to the quality and accountability of your most important communications.
Think about the conversations that move business: meetings to retain long-time clients; a sales negotiation where tone and trust matter as much as the deal language; or a quarterly board presentation on your progress toward the year's milestones. It turns out, AI can handle the transactional elements of these interactions reasonably well, like taking meeting notes, assigning priorities, and highlighting action points. It struggles with everything underneath.
The specific failure modes are worth naming. AI compresses context; it summarizes, smooths, and standardizes in ways that can strip out nuance. Moreover, the content large language models generate is hard to verify. The recipients of an AI-generated email or summarized meeting notes have no way to confirm that what they've received reflects what you meant, or that the message wasn't filtered or reframed by the AI.
That's not to say AI has no place in business communication. It clearly does. But there is a category of conversation where the efficiency gains don't justify the exposure, and most organizations haven't done enough to separate those use cases from the rest.
Know when to do it yourself
So, the question becomes: For your most sensitive communications, should AI be in the loop at all?
My honest answer is no, at least not without a person who can be held accountable for what was said, how it was said, and whether the message was delivered. Insisting on verifiable human communication isn't a preference for the old way of doing things; it's a recognition that some conversations require a person to stand behind them.
Leaders should do their homework. What does the vendor's data policy say about reuse? What happens to your team's conversation logs when the contract ends? These aren't questions for your IT team to sort out in the background. They're procurement questions, and they belong earlier in the process than most organizations currently place them.
The bot that was allowed to flirt with kids didn’t make that decision on its own. Someone approved it. Every AI system reflects the judgment of the people who built and deployed it, and those calls aren’t always obvious from the outside.
Until the tools for auditing AI systems catch up with their adoption, the most defensible position business leaders can take is to draw a clear line between the conversations they're comfortable routing through AI and the ones they're not.
The efficiency argument for AI is compelling. So is the one for owning what goes out in your name.