How People Actually Use ChatGPT

Visuals by:
Angelina Tanova

If you’ve been reading our blog for a while, you already know where we stand on AI. We’re very much on the “use it” side—but not blindly. We’ve written before about the risks, the limits, and the need for common sense. Still, based on our own work, there’s no denying one thing: AI has made us faster. A lot faster. 

One of the tools we rely on most is ChatGPT. We first tried it when it launched, around three years ago (it's kind of crazy how time flies). Since then, it has grown to roughly 900 million users, which is about 11% of the world's population. That number alone still feels unreal.

And no two people seem to use ChatGPT in the same way. Some treat it like a search engine. Others use it as a writing partner, a coding helper, or a thinking aid. And now and then, I catch myself wondering how other people are actually using it, outside of my own habits and assumptions.

Back in September, OpenAI published a research paper called “How People Use ChatGPT.” I spent time going through it, trying to understand the findings, spot patterns, and see what the data actually says. In this post, I want to break that paper down into smaller, bite-sized parts and walk through the main takeaways.

Let’s get down to it.

1. How the data was collected

Before getting into how people use ChatGPT, it’s worth spending a bit of time on how this research was actually done. Because the way the data is collected shapes everything that comes after.

This paper looks only at consumer ChatGPT usage—that means Free, Plus, and Pro plans, from launch up until the most recent data window. It explicitly excludes non-consumer plans like Business/Teams, Enterprise, and Education. So when we talk about “users” in this research, we’re really talking about everyday people, not companies rolling ChatGPT out internally.

For the core analysis, they start with 1.1 million conversations. From each conversation, they sample one user message, covering a time window from May 2024 to July 2025. That keeps the dataset large, but avoids overweighting long back-and-forth sessions from a small group of heavy users.

There are also several important exclusions (a rough sketch of the whole filtering-and-sampling step follows the list):

  • Users who opted out of training
  • Anyone under 18
  • Deleted conversations
  • Deactivated or banned accounts
  • Logged-out users
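
To make that concrete, here's a minimal sketch of what the filtering and one-message-per-conversation sampling could look like in pandas. The table and every column name in it (plan, timestamp, user_age, and so on) are assumptions for illustration only, not the paper's actual pipeline.

```python
import pandas as pd

# Hypothetical message-level table; the columns below are illustrative
# assumptions, not OpenAI's actual schema.
messages = pd.read_parquet("messages.parquet")

# Consumer plans only (Free, Plus, Pro) and the study window, May 2024 - July 2025.
eligible = messages[
    messages["plan"].isin(["free", "plus", "pro"])
    & messages["timestamp"].between("2024-05-01", "2025-07-31")
]

# Apply the exclusions listed above.
eligible = eligible[
    ~eligible["opted_out_of_training"]
    & (eligible["user_age"] >= 18)
    & ~eligible["conversation_deleted"]
    & eligible["account_active"]
    & eligible["logged_in"]
]

# Keep one sampled user message per conversation, so long sessions from a
# handful of heavy users don't dominate the dataset.
sampled = eligible.groupby("conversation_id").sample(n=1, random_state=42)

print(f"{len(sampled):,} conversations, one sampled message each")
```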

2. How people actually use ChatGPT

When you zoom out and look at usage at scale, the picture is surprisingly concentrated. Nearly 80% of all ChatGPT usage falls into just three broad categories: Practical Guidance, Seeking Information, and Writing. Everything else exists, but these three buckets do most of the heavy lifting.

The biggest one by far is Practical Guidance. This category covers things like tutoring and teaching, step-by-step advice, help with everyday problems, and even creative ideation. In other words, people often use ChatGPT as a thinking partner or a helper that walks them through something. Not to give a final answer, but to help them get unstuck or move forward.

Next comes Seeking Information. This includes searching for facts about people, current events, products, recipes, and similar topics. What stands out here is how close this behavior is to traditional web search. The paper describes it as a very close substitute for search engines, which lines up with how many people casually use ChatGPT today: instead of typing keywords into Google, they just ask a question and expect a direct answer.

The third big category is Writing, which turns out to be especially important in work settings. Writing includes drafting emails and documents, but also editing, critiquing, summarizing, and translating text provided by the user. In fact, Writing is the most common use case at work, making up around 40% of work-related messages in June 2025.

There’s another detail here that’s easy to overlook but important: about two-thirds of all Writing requests aren’t asking ChatGPT to create something new from scratch. Instead, they ask it to modify existing text—editing, rewriting, translating, or improving something the user already wrote. That suggests ChatGPT is often used as a second pass or a support layer, not a replacement for human input.

Education also shows up clearly in the data. Roughly 10% of all messages are requests for tutoring or teaching. That’s a meaningful share, and it suggests that learning—formal or informal—is a core use case, not a side effect.

What stands out to me here is how unremarkable most of this usage actually is—and I mean that in a good way. Nearly 80% of activity involves very normal, everyday needs: getting advice, looking things up, and working with text. People aren't radically changing how they work or learn; they're just doing the same things they already did, but with a faster and more flexible helper. That's probably why ChatGPT spread so quickly: it didn't ask people to adopt new behaviors; it just slipped into existing ones.

3. Quality of interactions: are people actually satisfied?

Here we’re looking at something somewhat trickier: whether people are actually happy with the answers they get. Instead of asking users directly, the researchers try to infer satisfaction from what the user does next in the same conversation.

Based on the follow-up message, interactions are labeled as Good, Bad, or Unknown. It’s not a perfect signal, and OpenAI is very clear about that, but it gives a rough sense of how things are trending over time.

At a high level, the trend is pretty clear. In late 2024, “Good” interactions were about three times more common than “Bad” ones. By July 2025, that gap had grown further, with Good interactions happening more than four times as often as Bad. On top of that, the Good category was growing faster than the Bad one.

So even with an imperfect measurement, the direction is consistent: over time, users seem to be getting answers that work for them more often than not.

Things get more interesting when you break this down further. By topic, the differences are noticeable. Self-expression has the highest good-to-bad ratio, at over 7. On the other end, Multimedia and Technical Help perform much worse, with ratios of 1.7 and 2.7, respectively. In simple terms, people are far more satisfied when using ChatGPT to express thoughts or ideas than when asking it to help with technical or media-related tasks.
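
To show how a number like that is derived, here's a minimal sketch of computing the good-to-bad ratio per topic from labeled messages. The input file and its columns are assumed for illustration; in the paper the labels are inferred automatically from follow-up messages rather than collected as explicit ratings.

```python
import pandas as pd

# Hypothetical per-message labels; assumed columns: topic, label ("Good"/"Bad"/"Unknown").
labels = pd.read_parquet("interaction_labels.parquet")

# Drop "Unknown" and count Good vs. Bad per topic.
counts = (
    labels[labels["label"].isin(["Good", "Bad"])]
    .groupby(["topic", "label"])
    .size()
    .unstack(fill_value=0)
)

# The good-to-bad ratio the paper reports: above 7 for Self-expression,
# around 2.7 for Technical Help, and around 1.7 for Multimedia.
counts["good_to_bad"] = counts["Good"] / counts["Bad"]
print(counts.sort_values("good_to_bad", ascending=False))
```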

The same pattern shows up when you look at user intent. Conversations labeled as Asking are rated “Good” more often than those focused on Doing or Expressing. That suggests ChatGPT works best when users are looking for explanations or guidance, rather than expecting it to execute something precise or complex.

Although OpenAI is very clear about the limitations of this approach, the results align almost perfectly with how I experience ChatGPT in real life. When I prompt it looking for explanations or guidance, that's where it really shines. In maybe 9 out of 10 cases, it gives me exactly what I'm looking for, or at least helps me make things clearer in my head. That's usually enough to move forward.

But the moment I ask it to do anything beyond that—especially things like multimedia or more technical tasks—the success rate drops fast. Maybe it works 3 times out of 10. And honestly, that’s not surprising at all.

Multimedia and technical help being at the bottom of the satisfaction scale makes complete sense. These are areas where people expect precision.

When you look at your own usage, where has ChatGPT genuinely been the most useful for you?

4. Who uses ChatGPT (age, occupation)

Age: who’s driving the volume vs. who’s using it for work

If you look only at users who self-report their age, the 18–25 group alone accounts for about 46% of all messages. That’s almost half. So younger users clearly dominate overall activity.

But that doesn’t mean they’re using ChatGPT mainly for work. In fact, for users under 26, only about 23% of messages are work-related. So they’re active, but mostly outside of a work context.

As users get older, the share of work-related usage goes up. Older age groups are more likely to use ChatGPT for work than younger ones. There’s one clear exception: the 66+ group, where work-related usage drops to around 16%, which probably says more about retirement than about ChatGPT itself.

One detail I found especially interesting: work-related usage is declining over time for all age groups. Even among users who already skew toward work, ChatGPT is slowly becoming more of a non-work tool. So yes, older users are still more work-focused than younger ones, but everyone is drifting in the same direction.

A simple way to think about it: younger users drive volume, older users skew toward work, and over time, ChatGPT is becoming less “work-only” for everyone.

Occupation: clear gaps that don’t really disappear

Because of privacy limits, occupations are grouped into broad buckets: nonprofessional roles, computer-related jobs, engineering and science, management and business, and other professional roles like law, education, and health.

Here, the work-related differences are hard to miss (see the sketch after the list):

  • Computer-related roles: 57% work-related
  • Management & business: 50%
  • Engineering & science: 48%
  • Other professional: 44%
  • Nonprofessional: 40%
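
As a rough illustration, a breakdown like this boils down to a work-related share per occupation bucket. The sketch below assumes a table of classified messages with made-up column names; it isn't the paper's actual code.

```python
import pandas as pd

# Hypothetical classified messages; assumed columns: occupation_group, is_work_related.
messages = pd.read_parquet("classified_messages.parquet")

# Share of work-related messages per occupation bucket, mirroring the list above.
work_share = (
    messages.groupby("occupation_group")["is_work_related"]
    .mean()          # is_work_related is a boolean flag, so the mean is a share
    .mul(100)
    .round(1)
    .sort_values(ascending=False)
)
print(work_share)  # expect computer-related roles near the top, nonprofessional at the bottom
```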

After statistical adjustment, the exact numbers shift a bit, but the gaps remain significant.

Even within work-related usage, how people use ChatGPT differs by occupation. Higher-paid professional groups tend to use it more for Asking and less for Doing compared to nonprofessional users. For example, in computer-related roles, 47% of work-related messages are Asking, compared to 32% for nonprofessional roles. Adjustments reduce the gap, but it doesn’t disappear.

5. Final thoughts: what this data actually says about us

After going through all this data, what stuck with me most wasn’t a specific chart or percentage. It was how normal all of it feels once you step back.

People aren’t using ChatGPT in some futuristic or dramatic way. They’re using it the same way they use Google, notes apps, or a colleague they trust enough to ask a “stupid” question. To understand something. To rephrase a thought. To get unstuck. To sanity-check an idea before moving on.

That also explains why younger users make up such a big chunk of activity. They’re not “using AI for work”; they’re just using a tool that happens to be there when they need it. Older and more experienced users lean more toward work use, but even there, the lines are blurring. Over time, everyone seems to be drifting toward using ChatGPT outside of work more often, not less.

What I also found reassuring is that the model’s strengths and weaknesses are pretty consistent. When you ask it to explain, guide, or help you think, it usually delivers. When you ask it to be precise, technical, or multimedia-heavy, it struggles more, and people feel that immediately. That gap between “helpful” and “correct” shows up again and again across the paper.

In a way, this research doesn’t really tell us something radically new about ChatGPT. It tells us something about people. We gravitate toward tools that reduce friction in thinking, not just tools that spit out perfect answers. We’re more forgiving when a tool helps us move forward than when it promises certainty and misses the mark.

If there’s one takeaway I’d keep, it’s this: ChatGPT isn’t becoming important because it replaces work or thinking. It’s becoming important because it fits into the messy, in-between moments — when you’re unsure, halfway through an idea, or just trying to make sense of something. And judging by the data, more and more people are comfortable keeping it there.

Take a second and think about how you use it.
Not how it’s marketed. Not how people argue about it online.
Just your own day-to-day experience.

The gap between how it's talked about and how you actually use it is probably where the real story is.
