This post is part of Lifehacker’s Exposing AI series.

You could ask an AI program like ChatGPT to write something (anything at all) and it will, in mere seconds.

But not all AI-generated text announces itself quite so plainly.

It’s not actually “intelligent” at all. Tools like ChatGPT are built on large language models (LLMs), which are trained on enormous sets of existing text, and that training informs all of their responses to user queries.

Here are some things to look out for.

For example, ChatGPT frequently uses the word “delve,” especially during transitions in writing (e.g., “Let’s delve into its meaning.”).

The tool also loves to express how an idea “underscores” the overall argument, and it often describes things as a vibrant “mosaic” (e.g., “Madrid’s cultural landscape is a vibrant mosaic.”).

All of these words are perfectly fine to use in your own writing; it’s their frequency and clustering in AI text that makes them a tell.

This is just one aspect of AI writing to note as you analyze text going forward.

These verbal tics become even more apparent when a chatbot is attempting to use a casual tone.
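If you want to make this kind of spot-check a little more systematic, here’s a minimal sketch in Python that tallies how often these telltale words appear in a passage. The word list is purely illustrative, drawn only from the examples above; extend it with whatever tells you notice yourself.

```python
import re
from collections import Counter

# Telltale words drawn from the examples in this post.
# Illustrative only, not an exhaustive or official list.
TELL_WORDS = {"delve", "delves", "underscore", "underscores", "mosaic", "vibrant"}

def count_tells(text: str) -> Counter:
    """Tally how often each telltale word appears in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(word for word in words if word in TELL_WORDS)

sample = "Let's delve into Madrid's cultural landscape, a vibrant mosaic."
print(count_tells(sample))  # Counter({'delve': 1, 'vibrant': 1, 'mosaic': 1})
```

A hit or two proves nothing on its own, of course; it’s a cluster of these words in one piece that should make you look closer.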

Fact-check and proofread

LLMs are black boxes.

What we do know is that all AI has the capability (and the tendency) to hallucinate.

In other words, sometimes an AI will just make things up.

Again, LLMs don’t actually know anything: They just predict patterns of words based on their training.
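Since that sentence is doing a lot of work, here’s a toy illustration of what “predicting patterns of words” means. This is a deliberately simplified sketch, nothing like a real LLM’s architecture: it just learns which word tends to follow which in a tiny “training” text, then generates by sampling from those counts.

```python
import random
from collections import defaultdict, Counter

# Toy "language model": record which word follows which in the
# training text, then generate by sampling from those counts.
training_text = "the cat sat on the mat and the cat slept on the mat"

follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    options = follows[word]
    return random.choices(list(options), weights=options.values())[0]

word = "the"
output = [word]
for _ in range(5):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat sat on the mat"
```

Run it a few times and you’ll get plausible-sounding fragments the model doesn’t “understand” at all, which is exactly why hallucinations happen: The statistics can be confident even when the facts are wrong.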

On the flip side, consider how much proofreading the piece required: Human writing is full of typos and small grammatical slips, while AI output tends to come out mechanically clean.

In theory, this means AI detectors should be able to spot AI-generated text when presented with a sample.

That’s not always the case.

(Damn right.)

Again, I was assured the piece was 100% AI.

There are a lot of AI detectors on the market, and maybe some are better than others.

But I still think the superior method is analyzing the text yourself.