Imagine a tool that, in less than three years, has become a weekly habit for over 700 million people, around 10% of the world’s adults by July 2025. That tool is ChatGPT, the AI chatbot that has quietly woven itself into daily life across the globe.
If you’ve ever wondered what people really do with ChatGPT, here’s a stat to start with: by mid-2025, people were sending over 2.5 billion messages a day, roughly 29,000 per second. But what are all these people doing with it? Are we mostly using AI to write code at work? To find companionship? To cheat on homework?
A new study from researchers at Harvard, Duke, and OpenAI has analyzed how people use ChatGPT, and the results challenge many of our assumptions about AI adoption. By examining billions of messages while protecting user privacy, they’ve uncovered patterns that reveal not just what we’re doing with AI, but what it tells us about the future of work, education, and daily life.
Want a quick and easy overview of the article? View the infographic, created with my experimental AI infographic generator.
Busting the Myths: What We’re NOT Doing with ChatGPT
The study revealed several surprising gaps between perception and reality:
Myth 1: “Everyone’s using AI to code”
Reality: Only 4.2% of ChatGPT messages involve computer programming. While other AI tools might see higher coding usage, the average ChatGPT user is far more likely to ask for recipe modifications than Python scripts.
Myth 2: “AI is becoming our therapist and companion”
Reality: Just 1.9% of messages involve relationships and personal reflection, and only 0.4% involve games or role-play. Despite media coverage of AI companions, most users treat ChatGPT as a tool, not a friend.
Myth 3: “AI is replacing human creativity”
Reality: When people use ChatGPT for writing (the most common work task), two-thirds of the time they’re asking it to modify, edit, or translate their own text – not create something from scratch. It’s more editor than author.
It’s also worth noting that satisfaction is highest where the stakes are personal or reflective, and lower where precision is required (e.g., image generation or technical help). That’s a clue for when to double-check outputs.
What Are People Really Doing with ChatGPT?
Non-Work Use Is Booming
- Over 70% of ChatGPT messages are for non-work purposes. That’s up from 53% just a year ago.
- People are turning to ChatGPT for everything from planning meals and learning new skills to getting advice on personal projects.
Takeaway: ChatGPT isn’t just a work tool – it’s becoming a digital Swiss Army knife for everyday life.
The Top 3 Uses
So, if it’s not just work, what are people asking ChatGPT? The study categorized over a million conversations and found three topics reign supreme, accounting for nearly 80% of all interactions:
- Practical Guidance (around 29%): This is all about getting personalized advice and ideas.
- Examples: Asking for a customized workout plan, brainstorming creative ideas for a hobby, getting tutoring help for a school subject (tutoring alone makes up about 10% of all messages!).
- Seeking Information (grew from 14% to 24%): This is ChatGPT acting like a super-powered search engine, but with conversational context.
- Examples: “What should I look for when choosing a health plan?”, “Tell me about the latest current events in X,” or looking up recipes.
- Writing (declined from 36% to 24%): From drafting emails to editing documents, ChatGPT is a writing assistant for the masses.
- Examples: “Rewrite this email to make it more formal,” “Summarize this long article,” or even generating creative fiction.
Here’s a twist: Even in work contexts, Writing is the most common use, making up about 40% of work messages. And get this: two-thirds of all writing-related tasks involve modifying user-provided text (editing, critiquing, translating) rather than creating something entirely new from scratch. It’s less about replacing human creativity and more about augmenting it.
How Are People Using It? (Intent Matters)
Researchers classified messages into three types:
- Asking: Seeking advice or information (49% of messages).
- Doing: Requesting the AI to perform a task, like drafting a document (40%).
- Expressing: Sharing thoughts or feelings without expecting action (11%).
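To make the three-way taxonomy concrete, here is a toy sketch of my own. The researchers used AI models to label messages, not keyword rules; everything below (the cue lists, the function name) is a hypothetical illustration of the Asking/Doing/Expressing distinction, not the study’s actual classifier.

```python
# Toy illustration of the study's Asking / Doing / Expressing taxonomy.
# The real study used model-based classifiers; these keyword heuristics
# are a hypothetical stand-in for demonstration only.

DOING_CUES = ("write", "draft", "rewrite", "summarize", "translate", "generate")
ASKING_CUES = ("what", "how", "why", "should i", "explain", "?")

def classify_intent(message: str) -> str:
    text = message.lower()
    if any(cue in text for cue in DOING_CUES):
        return "Doing"       # asking the AI to perform a task
    if any(cue in text for cue in ASKING_CUES):
        return "Asking"      # seeking advice or information
    return "Expressing"      # sharing thoughts without expecting action

print(classify_intent("Draft an email to my landlord"))             # Doing
print(classify_intent("What should I look for in a health plan?"))  # Asking
print(classify_intent("I'm feeling great about my new job."))       # Expressing
```

Even this crude version shows why the categories matter: the same tool is a task-doer in one message and an advisor in the next, and the study found the advisor role is the one growing fastest.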
At work, “Doing” dominates (56%), especially for writing tasks. But “Asking” is growing fastest and is rated highest for quality.
Across jobs, ChatGPT acts like a decision copilot. Mapping messages to O*NET work activities, the most common were:
- Getting Information (19.3%)
- Interpreting Information (13.1%)
- Documenting/Recording (12.8%)
For work messages, these shift toward decision-making and problem-solving.
Who’s Using ChatGPT?
The Gender Gap Has Closed
Early adopters were disproportionately male (about 80% had typically masculine names), but by mid-2025, the gap had essentially closed, with a slight majority of active users having typically feminine names.
Usage patterns differ by gender: female-name users tend toward writing and practical guidance, while male-name users lean toward technical help and multimedia.
Youth Dominates, But Everyone’s Joining
Nearly half of all messages come from users aged 18-25. Work-related usage increases with age (except for those 66+, where only 16% of messages are work-related).
Global Adoption Surges in Developing Countries
ChatGPT has seen disproportionate growth in low- and middle-income countries ($10,000–$40,000 GDP per capita), suggesting the technology is becoming truly global.
Education and Occupation Matter
Highly educated users and those in professional occupations are more likely to use ChatGPT for work. Within work usage, professional/technical users send more “Asking” messages, while management focuses on writing (52% of their work messages).
How Did Researchers Figure All This Out?
You might wonder: How did researchers analyze billions of messages without violating privacy? The answer reveals both the possibilities and challenges of studying AI systems.
Privacy-First, Data-Driven
- The study analyzed over a million conversations from ChatGPT’s global user base, covering every continent where the tool is available.
- To protect privacy, no human ever read the actual messages. Instead, AI models classified messages after removing any personal information.
- Researchers used a “data clean room”: a secure environment where they could run pre-approved queries that only returned results for groups of 100+ users, protecting individual privacy while still revealing population-level patterns.
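The core of the clean-room idea, suppressing any result that describes too few people, can be sketched in a few lines. This is a minimal illustration of the minimum-group-size rule described above; the function and field names are my own assumptions, not the study’s actual query interface.

```python
# Minimal sketch of the clean-room aggregation rule: a query only
# returns counts for groups with at least 100 users; smaller cells
# are suppressed. Names here are hypothetical illustrations.

MIN_GROUP_SIZE = 100  # threshold described in the study

def aggregate_topic_counts(records, min_group=MIN_GROUP_SIZE):
    """Count distinct users per topic, dropping topics below min_group users."""
    users_per_topic = {}
    for user_id, topic in records:
        users_per_topic.setdefault(topic, set()).add(user_id)
    return {topic: len(users) for topic, users in users_per_topic.items()
            if len(users) >= min_group}

# Example: 150 users asked about "writing", only 3 about a rare topic.
records = [(f"u{i}", "writing") for i in range(150)] + \
          [(f"u{i}", "rare_topic") for i in range(3)]
print(aggregate_topic_counts(records))  # the rare, identifying cell is suppressed
```

The design choice is the point: researchers never see individual messages or individual users, only aggregates large enough that no single person is identifiable.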
Why This Approach?
- Scale: Surveys can’t capture billions of real interactions. Automated analysis lets researchers see what people actually do, not just what they say they do.
- Privacy: By never exposing raw messages, the study set a new standard for ethical research on digital behavior.
What Can’t We See? (Caveats and limits)
- The study only covers consumer users (not business or education accounts).
- Under-18s and logged-out users are excluded.
- Automated classifiers can make mistakes, especially with nuanced or multilingual conversations.
- Model changes over time can influence behavior and classifications.
- Some demographic details (like gender) are inferred from names, which isn’t perfect.
- The study is a working paper and has not yet been peer-reviewed.
Real-World Applications and Impact
These findings have significant implications for how we should approach integrating AI into our lives and work.
- For Organizations: The primary value is not just automation but augmentation. Leaders should prioritize decision-support workflows that help employees analyze options and writing-assistance tools that improve communication.
- For Education: With tutoring and teaching requests making up roughly 10% of all messages, there is a massive opportunity to leverage AI for personalized learning. This must be paired with a strong emphasis on teaching AI literacy, the critical thinking skills needed to evaluate, verify, and iterate on AI-generated content.
- For Society and Policymakers: The enormous value of AI in “home production” suggests that its consumer surplus and welfare gains extend far beyond the workplace. Fostering equitable global access and multilingual capabilities is crucial as adoption surges in developing countries.
The Questions We Should Be Asking
As we stand at this inflection point in human-AI interaction, this research raises profound questions:
- If AI’s biggest impact is on unpaid work and personal tasks, how should we measure its economic value?
- As AI becomes better at providing decision support, how will this change the nature of human expertise?
- What happens when AI advisors become as common as web searches, a tool everyone uses dozens of times daily?
ChatGPT has become a global habit, a digital Swiss Army knife used more for navigating the complexities of daily life than for conquering the tasks of our jobs. The data paints a clear picture: we are using AI less like an autopilot to replace our efforts and more like a co-pilot to enhance our thinking. We are collaborators, not just consumers of automated outputs.
The most effective users, and likely the most successful future applications, will embrace this reality. They will lean into AI’s power to help us weigh options, clarify our thoughts, and communicate more effectively. As we move forward, the ultimate challenge is not just about what we can get AI to do for us, but how we can use it to help us think.
What would change for you or your team if you treated AI less like an output generator and more like a thinking partner?
What will you ask ChatGPT today? Whatever it is, you’re part of a global experiment that’s reshaping how humans and machines work together, one message at a time.
Source for this article
Chatterji, Cunningham, Deming, Hitzig, Ong, Shan, and Wadman, “How People Use ChatGPT,” NBER Working Paper, September 2025.