How People are Really Using ChatGPT

Are you concerned that AI is coming for your job?

Since ChatGPT exploded onto the scene in 2022, it and its AI cousins have created a sensation. Aspects of knowledge work long assumed to be the province of humans can now be done in moments with the right prompt, leading many prognosticators to predict that AI will take over all white-collar work. After three years with these tools, what is actually happening?

A recent post from Daniel Pfeiffer on the website Choice360 sheds light on what people are actually doing with ChatGPT. Reviewing a study of 1 million conversations, Pfeiffer found that the assumed takeover of knowledge work by AI tools is not what it seems. For starters, more people are using ChatGPT outside of work than at the office.

One of the key takeaways from this report is that, though work-related usages of ChatGPT continue to grow, they are wildly outpaced by nonwork-related usages, which have grown from 53 to 73 percent of all ChatGPT messages. This finding raises two important questions: Given its ostensible economic promises, why isn’t work-related usage growing faster, and why is nonwork usage growing so much? 

Pfeiffer speculates that the clean AI interface has become preferable for everyday searching to Google's messier results page. The answers are also easier for the average person to interpret, saving the time previously spent clicking through to other websites.

Another assumption is that most people are using ChatGPT to draft original documents from scratch. Actual use, however, appears to be different.

Given the prized role of writing in educational environments, many academics might assume that when people use ChatGPT “for writing,” they’re using it specifically to generate new text from scratch—hence, the return of blue books. What this report finds, however, is that about two-thirds of all writing tasks have ChatGPT modify existing text, e.g., editing it for errors, adjusting the tone, or offering critiques, rather than generating new text. 

On closer inspection, Pfeiffer wonders whether this finding holds for all types of users.

As we await more data, I think it behooves us to keep in mind that “writing” encompasses a range of activities. While we might imagine that students are asking ChatGPT to “write a seven-page essay on the Civil War,” for instance, they might well be using it to “make this email sound more professional.” 


A third issue considered in the study is the economic impact of generative AI on workers. The media discussion often assumes that AI will take away jobs, especially in lower-level knowledge work. Again, that may not be the case yet.

To get a more granular picture, researchers ran all the work-related messages through a different taxonomy based on common work activities, e.g., communicating with supervisors, scheduling events, and training others. They found that 57.9 percent of work-related messages fell into two broad categories: “1) obtaining, documenting, and interpreting information; and 2) making decisions, giving advice, solving problems, and thinking creatively.” In other words, people are using ChatGPT less as a replacement worker and more as an advisor and research assistant.

Reflecting on this finding, Pfeiffer comes to this conclusion:

“ChatGPT likely improves worker output by providing decision support, which is especially important in knowledge-intensive jobs where productivity is increasing in the quality of decision-making.”  
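To make that classification step concrete, here is a minimal sketch of the kind of bookkeeping described above: sorting work-related messages into activity categories and tallying the share that falls into the two broad groups. The category labels, keywords, sample messages, and keyword-matching classifier are all invented for illustration; they are not the study's actual taxonomy or method.

```python
from collections import Counter

# Hypothetical work-activity taxonomy (illustrative labels and keywords only;
# the study's actual taxonomy of work activities is far more detailed).
CATEGORIES = {
    "getting_and_interpreting_information": ["summarize", "look up", "explain", "interpret"],
    "decision_support": ["should i", "recommend", "advise", "brainstorm", "solve"],
    "scheduling": ["schedule", "calendar", "meeting time"],
    "training_others": ["train", "onboard", "teach"],
}

def classify(message: str) -> str:
    """Assign a message to the first category whose keywords appear in it."""
    text = message.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "other"

# Toy sample of work-related messages, invented for illustration.
messages = [
    "Summarize this quarterly report for my manager",
    "Should I recommend vendor A or vendor B? Here are the trade-offs...",
    "Find a meeting time for five people next week",
    "Brainstorm ways to solve our onboarding bottleneck",
]

counts = Counter(classify(m) for m in messages)
broad_share = (counts["getting_and_interpreting_information"]
               + counts["decision_support"]) / len(messages)
print(f"Share of messages in the two broad categories: {broad_share:.1%}")
```

A real analysis would of course use a far more capable classifier than keyword matching, but the overall shape of the exercise, tagging each message and then tallying the shares, is the same.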

Finally, Pfeiffer speculates on the impact of hallucinations. As librarians have long complained, it is easy for people to believe what AI says rather than confirm that it is true. The study does not measure the effect of wrong information on people’s productivity and decisions.

The full blog post is worth a read. You can find it on the Choice360 website.
