The Hidden Cost of ChatGPT

My take on the proliferation of ChatGPT in today's written communication.

September 24, 2025 · 6 min read

Perfectly written, crisp, and well-structured, the email landed in my inbox, pitching me on new software our firm should be using to help with customer operations. Its contents exuded an air of relevance and forward thinking, the kind that so rarely appears among the dozens of cold outreach emails I receive daily. The only blemish: a few sentences lacked the conciseness one would expect from an author dexterous enough to formulate such a pitch. Then, as if pulled out of a sales-induced hypnosis, I reached the bottom of the email and its author was revealed to me: a frontline employee whose role and experience bore no relevance to the software he was pitching.

I instinctively tied this experience to other moments of dissonance between form and author. The international teammate who historically struggled to produce coherent sentences in written English suddenly writes entirely comprehensible, albeit abnormally long, paragraphs without error. The software engineer who, to the disgruntlement of the entire team, never took the time to document their code changes properly suddenly accompanies every submission with a lengthy description. In the same spirit, the customer who for many years was content with the rate they paid for our software suddenly began writing to their account manager with sharply phrased yet substantively irrelevant reasons why their rate should be reduced.

Each scenario is perplexing because of the broken relationship between the communication and the communicator. We often rely on our knowledge of the communicator as an integral variable in interpreting what is being communicated. For example, if a person is known to be frank, we embellish their statements with our own semantic cushions, normalizing our interpretation of their message and softening what might otherwise strike us as offensive. When we layer on historical knowledge of the communicator, the mental dissonance in these scenarios is magnified even further. In its clearest form, we are perplexed because the communicator no longer represents the communication.

The culprit is generative artificial intelligence (AI). Through the success of products that wrap AI in an interactive chatbot, such as ChatGPT, people are becoming increasingly comfortable delegating their own research and communication to AI. Crucially, the dissonance between the communication and the communicator arises when AI is used to shortcut an otherwise iterative process. Asking AI to conceive of a software solution that streamlines customer operations, and to pitch it to a company that has grappled with that question for half a decade, will not magically produce transformative results. But AI’s temptation is scintillating. Historically, our bar for communication worth communicating has been cohesive, knowledgeable prose. In the past, if one did not have the wherewithal to opine on a topic, producing eloquent prose was the first prohibitive barrier to participation. Today, AI clears this bar effortlessly.

While the rise of AI may seem to democratize knowledge and productivity, its hidden cost is the time we waste treating unworthy communication as worthy of consideration. A simple follow-up with a work’s true author is enough to expose the veil of knowledge erected by AI. Invariably, when the veil comes down, the progress AI has made is outweighed by the curse it places on people who misinterpret it as a viable shortcut to their own ambitions. When disrobed of AI, the employee cannot follow up on their original pitch; the international teammate blunders through their spoken English; and the software engineer forgets to describe all the breaking changes they made when handing off their project. AI possesses the power to warp people’s understanding of their own capabilities. The cost of engaging with this kind of communication is hidden, but substantial.

This hidden cost extends beyond emails into every area, present and future, where AI aids our productivity and research. Most evaluations of AI’s contributions to humanity’s productivity ignore this cost. Today’s AI evaluations focus on subject-matter experts, such as scientists and economists, who use AI to produce results and research better and faster. Eventually, AI evaluations will incorporate this cost into their metrics. For the time being, however, we are left lamenting the hidden cost of cold emails and the AI-induced hypnosis that steals our time.