IMO branding all this stuff as AI is an issue; at this point it's still just chatbots and image generators, though a little more advanced. I like to refer to it all as "spicy autocorrect". None of it is actual "intelligence" in any meaningful sense.
It's like saying autocorrect on your phone is AI. All hail our supreme overlords.
All this randomly generated "gibberish" should be digitally watermarked, with the mark embedded in the image itself. That way platforms could detect it and flag the image as not verified/real. Like this photo I just took.
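For illustration, the kind of embedded watermark being proposed could look like a least-significant-bit (LSB) scheme: a toy sketch, not what any real provider actually uses, with a made-up signature purely for the example.

```python
# Toy LSB watermark: hide a known bit pattern in the low bits of pixel
# values so a platform could later check for it. Illustrative only.

WATERMARK = b"AI"  # hypothetical signature, not a real standard

def _bits(data: bytes):
    """Yield the bits of `data`, most significant first."""
    for byte in data:
        for i in range(7, -1, -1):
            yield (byte >> i) & 1

def embed(pixels: list[int], mark: bytes = WATERMARK) -> list[int]:
    """Write each watermark bit into the LSB of one pixel value."""
    out = list(pixels)
    for idx, bit in enumerate(_bits(mark)):
        out[idx] = (out[idx] & ~1) | bit
    return out

def detect(pixels: list[int], mark: bytes = WATERMARK) -> bool:
    """Check whether the watermark bits sit in the leading LSBs."""
    expected = list(_bits(mark))
    got = [p & 1 for p in pixels[:len(expected)]]
    return got == expected
```

Note the obvious weakness, which the reply below gets at: any re-encode, resize, or screenshot scrambles the low bits and strips the mark, so detection only works on cooperative, untouched files.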
Watermarking is a flawed idea and would only serve the incumbent corporations that already have products, fucking over any open source projects or researchers.
If you think ChatGPT-4 is just a chatbot, then you’re really not qualified to comment on what constitutes an AI system.