Some of the examples (like the Gulf of Mexico/America description) sound like AI-generated style, but the formatting errors look more like sloppy humans. Like the seemingly-random bold text: don’t AIs output plain text? Any character formatting would have to be applied by hand. And a numbered list whose items all start with “1” is what happens when you write paragraphs with too many returns between them, then apply a numbered-list style in a word processor; I’ve seen humans do it more than once.
don’t AIs output plain text?
No, LLMs can format text and output bold, italics, underlining, hyperlinks, bullet points, numbered lists, and so on.
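To be precise, the model itself usually emits plain text containing Markdown markers, and the chat client renders those into bold, lists, and so on. A minimal sketch of that rendering step (the reply string and function name are hypothetical, stdlib only):

```python
import re

def render_bold(text: str) -> str:
    # The raw model output is plain text; **spans** are just Markdown
    # markers that the chat UI converts to real formatting (here, HTML).
    return re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", text)

reply = "LLMs can output **bold**, numbered lists, and more."
print(render_bold(reply))
```

So whether you ever see bold depends on the client: paste the same output into a plain-text editor and you get literal asterisks.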
Current LLMs don’t regularly do any of those things.
Can’t tell if you’re being sarcastic or not, but current LLMs love to repeat phrases and often get facts confidently wrong; a quick proofread could fix those issues. I don’t have much experience with legal-specific models, but these artifacts may be more apparent there.