How to write like ChatGPT (That is to say, Poorly)

Everyone’s using ChatGPT to write, from articles to legal briefings.

Well, if you can’t beat them, join them! Let’s learn how to write like a robot.

First, if you don’t know something, make it up!

Purdue University conducted a study analyzing ChatGPT's answers to programming questions. They found that "52 percent of ChatGPT answers are incorrect." When large language models like ChatGPT don't know something, they just make it up.

Even when given accurate source material, language models can still produce factually incorrect information. ChatGPT can even contradict the very information it's been given.

Computer scientists call this behavior "hallucinating." The reason behind hallucinating strikes at a fundamental flaw in modern AI. But we'll get to that; right now, we want to write like ChatGPT.

I made the mistake of thorough research.

I’ve written white papers and case studies for various tech companies. When I don’t know something, I research the answer. If I’m unsure, I fact-check. If I’m really stumped, I politely ask an expert.

But now I know I should’ve just made stuff up.

For example, when I wrote for an AI company (ironic, I know), I double-checked their security certifications. I was explaining their AI-powered customer service platform, a field that appreciates security.

“We are certified under the American Consumer Cybersecurity Privacy Act (ACCPA) and the European Data Intsuite for Privacy Protection (EDIPP). Your data is safe with us!”

What’s the worst that can happen from poorly researched hallucinations?

There was that lawyer who used ChatGPT to write a legal filing. That filing cited six fabricated court cases, resulting in serious legal consequences. But hey, just proofread!

Could AI’s tendency to hallucinate lead to rampant misinformation, eroding the foundation of truth that democracies depend on?

Yes, absolutely. But that’s what the market wants! ChatGPT is the fastest-growing consumer application ever, and its accuracy is only decreasing!

A joint research study from Stanford and UC Berkeley (you can trust this is true because I cited it) found that GPT-3.5's and GPT-4's accuracy dropped within just a few months. In March 2023, GPT-4 identified prime numbers with 98% accuracy. Sounds like an easy enough task. By June 2023, however, it could identify prime numbers with only 3% accuracy. Why? It's a mystery, thanks to ChatGPT's opaque nature.

Second, write in lengthy, clunky, verbose sentences that are long-winded and don’t really go anywhere.

That Purdue University study found that "77 percent [of ChatGPT's answers] are verbose," meaning long-winded, wordy, garrulous; ChatGPT adds extra words without consideration for flow or conciseness.

That same study asked human participants to choose between two answers, one produced by ChatGPT and another written by a human expert, and judge which they thought was correct. Participants preferred ChatGPT's answers 40% of the time, yet 77% of those preferred answers were incorrect.

This is all to say that lengthy, clunky, formulaic sentences stuffed with unnecessary details are perceived as persuasive, comprehensive, and confident.

Who cares about flow or conciseness? Just use basic sentence structures and passive voice to write multiple sentences reinforcing the same point over and over and over again. That sounds persuasive!

Lastly, simplify writing styles and forget empathy.

Why take the time and effort to learn a brand voice? Follow one of six rudimentary styles instead. Limit yourself strictly to a professional, conversational, humorous, empathetic, academic, or creative writing style.

Couldn’t you teach ChatGPT to write in your voice and style?

One writer tried. Fleur Willemijn van Beinum, writer and owner of Think Like a Publisher, once promoted teaching ChatGPT to assimilate a writer's style. But it required immense wrangling, even with the premium ChatGPT-4. Months later, they wrote:

“I stopped using ChatGPT to write my copy. Even though I had defined my tone of voice, using that to rewrite the copy and make it my own took longer than writing it myself.”

I used to waste so much time and effort to emulate another’s voice. Now I know just to use cookie-cutter styles.

Engineers, scrapyard owners, dentists, real estate agents, and therapists, all of whom I've written for, talk differently. They even talk differently depending on the audience: a dentist speaks differently to a child than an engineer does to an executive.

I emulate their voices while staying true to their brand's tone. But all that uniqueness, all that personality, can be thrown out the window.

Let’s set satire aside.

I understand why business owners use ChatGPT. We're told we constantly need more and more content. ChatGPT is simple, free, and, on the surface, time-saving.

If I wrote like ChatGPT, I would be fired. So why are you doing that damage to your brand?

Yes, using ChatGPT can do real damage. ChatGPT is inaccurate, inconsistent, inconsiderate, and inconvenient if you want quality writing.

Most importantly, it’s unfeeling, which is part of its fundamental flaw.

All current AI models use colossal amounts of data to approximate the world; they do not conceptually understand anything. Even deep learning programs are ultimately pattern recognition: highly sophisticated, but still just mimicry.

ChatGPT is just mimicking human language. It knows how to follow syntax and grammar to string sentences together. But it has no concept of what it’s actually saying. That is partially why AI hallucinates.

“There are downsides to freelance writers as well!”

Yes, I'm more expensive and slower than AI. But the quality is well worth it.

Where AI hallucinates, I research. Where you have to proofread, I revise. Where a robot emulates the semblance of emotions, I write authentically.

I offer articulate, accurate, and sometimes alliterative writing.

Because Quality Writing makes a Difference you'll notice in your Bottom Line.
