Of course, that essay might be riddled with errors, but, hey, the homework's done.

The problem is, they're not perfect, and those imperfections are hurting people.

LLMs are trained on huge amounts of text and draw on that knowledge to respond to you.

(Again, a super simplistic explanation.)

These developers claim their checkers can do this.

But I'm not so sure.

"However, these indicators are not always present and can be overcome with advanced AI technologies."

Not really a glowing endorsement by the godfather of AI bots.

The Washington Post's experience with Turnitin, however, is more damning than that.

Students helped the Post come up with 16 writing samples composed of human-generated, AI-generated, and hybrid text.

In the end, Turnitin got over half of the samples at least partly incorrect.

It correctly labeled six, but dropped the ball entirely on three others.

Had this been a real class, Turnitin would've produced nothing short of a mess.

One of the first AI detectors to go viral, GPTZero, fails the accuracy test with little experimentation.

I tested it by writing a paragraph about ice cream in a neutral tone.

It told me, "Your text is likely to be written entirely by AI."

The paragraph in question? "Ice cream is a popular frozen dessert enjoyed by people around the world."

But my ambiguous paragraph is only the beginning.

According to ZeroGPT, the Constitution was 92.26% written by AI.

Who knew the Philadelphia Convention was so reliant on artificial intelligence when crafting the law of the land?

(I guess that explains a few of those amendments, anyway.)

AI detectors are hurting innocent students

So far, these examples are little more than experiments.

But these checkers aren't prototypes.

They're here, and they're being used against real students.

Turnitin calls the moments when it labels human-written text as AI-generated "false positives."