Thought Leaders
Reviewing ‘How People Use ChatGPT’

I’m Ilya Romanov and I’m working on an AI product in stealth mode. In my last article, I talked about a (not really?) disruptive MIT report on AI and revenue bottom lines. Now another buzzy report has come out: ‘How People Use ChatGPT’. Its bold opening, “Overall, we find that ChatGPT provides economic value through decision support, which is especially important in knowledge-intensive jobs”, got me wondering whether, at last, there is a proven record of AI adding tangible value to work processes and bottom lines. Let’s dive in.
The report and the team
‘How People Use ChatGPT’ was prepared by OpenAI researchers and Harvard economist David Deming. Their combined expertise in AI engineering, privacy-preserving data analysis and economic policy made this extensive, large-scale empirical study of ChatGPT possible.
The study draws on an impressive pool of 1.5 million conversations with ChatGPT. The team used a privacy-preserving funnel to anonymize users while still understanding the intent behind messages. Essentially, every ChatGPT conversation was pulled through an automated filter that removed any names or personal details before researchers ever saw it; only the machine-generated labels (like topic or intent) derived from these cleansed messages were analyzed. For linking in basic demographic background, such as education or occupation, the team worked in a secure clean-room environment where they could only pull aggregated data on groups of at least 100 users, so no individual person or message can ever be singled out or re-identified.
Findings
The bold statement in the ‘Abstract’ is one of the key insights about how AI delivers value in the modern economy. In contrast to the countless ‘automate-it-all’ solutions springing up lately, ChatGPT is employed primarily as a decision-support tool rather than a pure task-automation platform. The report introduces two types of tasks users give ChatGPT: ‘Asking’ and ‘Doing’. The former covers requests for information, advice or guidance that support better decisions (‘What’s the difference between correlation and causation?’). The latter means performing specific tasks such as drafting emails, writing reports and even producing code (‘Write an email to my manager saying I was not able to reach Sales, though I called several times.’).
Interestingly, the report distinguishes between work and non-work conversations. Non-work conversations make up as much as 70% of all messages in 2025, a sharp rise from 53% in 2024. Personal interactions largely center on everyday tasks (‘how-to’ advice and tutoring) and writing assistance (editing or translation). This increase, and the context of use, positions ChatGPT as a daily companion—the new Google of sorts—for exploration, creative work and decision making outside of the workplace.
The report reveals that 49% of all messages are ‘Asking’, compared with only 40% that are ‘Doing’ (the remainder are ‘Expressing’ messages that share thoughts or feelings without requesting anything). What’s more, ‘Asking’ messages are growing faster and receive higher satisfaction ratings from ChatGPT users. In July 2024 ‘Asking’ and ‘Doing’ were almost equal, each making up roughly 46% of all ChatGPT messages. By June 2025, ‘Asking’ had risen to 51.6%, an absolute increase of 5.6 percentage points, which translates into a relative gain of about 12%, while ‘Doing’ fell to 34.6%, a relative decline of about 25%.
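If the absolute-versus-relative distinction is confusing, here is a minimal sketch (in Python, using the report’s rounded shares) of how those two numbers are computed:

```python
def pct_point_change(old_share: float, new_share: float) -> float:
    """Absolute change, in percentage points."""
    return new_share - old_share

def relative_change_pct(old_share: float, new_share: float) -> float:
    """Relative change, as a percentage of the starting share."""
    return (new_share - old_share) / old_share * 100

# 'Asking': roughly 46% of messages in July 2024 -> 51.6% in June 2025
print(pct_point_change(46.0, 51.6))     # +5.6 percentage points
print(relative_change_pct(46.0, 51.6))  # ~ +12% relative gain

# 'Doing': roughly 46% -> 34.6% over the same period
print(pct_point_change(46.0, 34.6))     # -11.4 percentage points
print(relative_change_pct(46.0, 34.6))  # ~ -25% relative decline
```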
What does this divergence mean? The economic benefit, the report argues, lies in faster, better-informed decision-making in knowledge jobs, which amplifies productivity. In knowledge-intensive jobs, where business outcomes are directly influenced by the quality and speed of decisions, having access to better-processed information, alternative perspectives and excellent analytical support can significantly boost worker performance.
The findings back this conclusion up. Users with graduate degrees are about 2 percentage points more likely to use ChatGPT for ‘Asking’ and 1.6 percentage points less likely to send ‘Doing’ messages compared to less-educated users. In a similar vein, users in highly paid scientific and technical occupations are more likely to use AI for ‘Asking’—47% of work-related messages are ‘Asking’ in computer-related jobs.
The value of this decision support can also be expressed in money—not in terms of the revenue OpenAI made (a whopping $13 billion in annual recurring revenue, as of early August this year) but in consumer surplus. Put simply, consumer surplus is the gap between the maximum amount a person would willingly pay for a service and the actual price they pay. For example, I’m willing to pay $100 for a monthly ChatGPT subscription but I’m paying $20, so the consumer surplus is $80.
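As a toy calculation (my own numbers from the example above, not figures from the report), the formula is just willingness to pay minus the price actually paid:

```python
def consumer_surplus(willingness_to_pay: float, price_paid: float) -> float:
    """Consumer surplus: the most you'd pay, minus what you actually pay."""
    return willingness_to_pay - price_paid

# My example: willing to pay $100/month for ChatGPT, actually paying $20
print(consumer_surplus(100, 20))  # 80 -> $80 of surplus per month
```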
At scale, research by Collis and Brynjolfsson (2025) found the consumer surplus to be at least $97 billion in the US alone. Their research also suggests that people would need to be paid $98, on average, to stop using generative AI for a month. The value ChatGPT has for users in both work and non-work contexts is immense, and it seems reasonable to expect an even greater financial gain in an enterprise context.
Why Asking?
As I researched the team behind this report and (surprise, surprise!) invited Perplexity to think it through with me, I could not help wondering why people ASK rather than DO with ChatGPT.
To put it in more scientific terms, I’d like to turn to a study referenced in the report. Ide and Talamas (2025) describe two roles AI plays in the workplace: a co-worker that produces deliverables, i.e. does the work, and a co-pilot that enhances problem solving without producing final outputs. The data from ‘How People Use ChatGPT’ supports the co-pilot paradigm: again, people ask rather than do with ChatGPT (49% vs 40%), Asking is on the rise, and ‘Making decisions and solving problems’ ranks among the top work activities in every occupation group surveyed. In a nutshell, ChatGPT provides counsel rather than work in most cases.
So, AI is a co-pilot (did Microsoft know something when naming their AI tool… if you catch my drift) in most cases. But why? Scroll LinkedIn and you see ‘Automate this’ and ‘Automate that’.
I have thought about that, too, as I read the report and hundreds of ecstatic LinkedIn posts from AI enthusiasts who found THAT script to finally get AI to work. My assumption is that this predominance of Asking-type over Doing-type messages in ChatGPT may be just a reflection of the model’s current (or past) limitations rather than a fundamental public preference for AI as a co-pilot. Chances are, once there is a truly powerful agentic AI solution, users will increasingly adopt AI as a real co-worker rather than an advisor. Put simply, people want to use AI as a co-worker, but there are still constraints.
From my experience and what I heard during CustDev interviews, I see two serious drawbacks that AI has not been able to overcome yet. One, it lacks context. Two, not every piece of work can be broken down into a simple prompt (or maybe I’m just not patient enough).
Let me elaborate on these two points. Because AI lacks context, it can’t be left to its own devices when it comes to content creation. It needs to be monitored, and the prompt may have to be revised for a different, better outcome. IBM explains the idea in more technical terms: humans accumulate rich situational understanding thanks to non-stop perception, memory and real-world experience, whereas AI assistants operate purely by predicting the next token (~word) based on a fixed ‘context window’ of recent inputs. In effect, AI has only short-term memory, and earlier information gets lost once that window fills up.
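A minimal sketch of what that means in practice (my own illustration, not code from the report or IBM): once the fixed-size window is full, the oldest tokens simply fall out, and whatever they carried is gone.

```python
from collections import deque

# Hypothetical context window of 8 tokens; real models hold thousands,
# but the truncation behaviour is the same in spirit.
context = deque(maxlen=8)

conversation = ("my manager is Anna the report is due Friday "
                "please draft a status update email").split()

for token in conversation:
    context.append(token)  # when full, the oldest token is dropped automatically

print(list(context))
# ['due', 'Friday', 'please', 'draft', 'a', 'status', 'update', 'email']
# 'Anna' and 'my manager' are already gone: the model no longer
# 'knows' who the email is for.
```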
Large language models also struggle with complex multi-step tasks: performance and coherence degrade as tasks combine, Prompt Drive explains. There is an approach to mitigate this limitation, the so-called divide-and-conquer approach, in which a complex task is split into smaller, separately prompted subtasks. Even then, no single prompt (or chain of prompts) captures all the nuances, so AI remains an excellent co-pilot while being a limited co-worker.
Now, so far I have been thinking within the realm of ChatGPT. Does this ratio (49% vs 40%) replicate with other AI tools?
- 60% of messages sent to Perplexity AI are research-driven requests rather than pure content generation, according to App Labx.
- DeepSeek R1 scores high as an advanced co-worker for logic tasks and as a co-pilot for chain-of-thought assistance, according to its internal report.
- GitHub Copilot (the hint is in the name, right?) is designed for in-editor code suggestions, debugging and learning assistance. It’s safe to assume that its role is to help rather than do.
- Microsoft Copilot shows a roughly equal split between Asking and Doing, mirroring usage patterns for ChatGPT.
So—what do the findings leave us with? As I see it, AI is a powerful booster for knowledge-intensive work that accelerates decision making and translates into direct time savings and indirect revenue gains. Resisting AI in favor of a ‘human touch’ or ‘unique content’ means losing in the long term. People use ChatGPT and other AI assistants to advance decision-making, not to replace the human decision-maker altogether.
Will the pattern change? Will we see mass brain rot as more and more work is delegated to AI? Stay tuned for my next article on how AI affects the human brain and cognitive abilities.