Meta has promptly deleted several of its own AI-generated accounts after human users began engaging with them and posting about the bots' sloppy imagery and tendency to go off the rails and even lie in chats with humans.
I asked Meta whether Brian’s story was credible. Sweeney, the spokesperson, didn’t respond to follow-up questions.
Are we really supposed to take the story seriously when they’re basically talking to an LLM and taking what it says at face value?
No, of course not, but it’s always fun watching big companies sputter when their AI says things the company doesn’t want it to say, especially when the entire tech industry seems hell-bent on deluding everyone into thinking LLM output can be trusted to parse medical records, provide better customer service than humans, and be a net positive to the world.
I suspect that’s why the author did it in the first place: