Most of the time you'll be talking to a bot there without even realizing it. They're going to feed you products and ads interwoven into conversations, and the AI can be controlled so its output reflects corporate interests. Advertisers are going to be able to buy access and run campaigns, and based on their input, the AI can generate...
I’m starting to see articles written by folks much smarter than me (folks with lots of letters after their names) that warn about AI models that train on internet content. Some experiments with them have shown that if you continue to train them on AI-generated content, they begin to degrade quickly. I don’t understand how or why this happens, but it reminds me of the degradation of quality you get when you repeatedly scan / FAX an image. So it sounds like one possible dystopian future (of many) is an internet full of incomprehensible AI word salad content.
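For a rough illustration (a hypothetical toy of my own, not something from those articles): if you fit a simple statistical model to some data, then keep re-fitting new models only on samples generated by the previous model, the small estimation errors compound and the learned distribution drifts, a bit like that repeated scan/fax. A minimal Python sketch:

```python
# Toy sketch of "generation loss" in model training (hypothetical illustration):
# fit a simple Gaussian model to data, then repeatedly re-fit a new model
# only on samples drawn from the previous model. With finite samples, the
# estimation error compounds and the learned distribution drifts over
# generations instead of staying put.
import random
import statistics

random.seed(0)

# Generation 0: "real" data from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(200)]

for generation in range(1, 21):
    # "Train" a model: estimate mean and standard deviation from the data.
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    print(f"gen {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation trains only on this model's output, no fresh real data.
    data = [random.gauss(mu, sigma) for _ in range(200)]
```

With only the previous model's output to learn from, the estimated mean and spread random-walk away from the original distribution, which (as far as I understand it) is the same flavor of compounding error those articles describe for language models, just in miniature.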
It would be a fun experiment to fill a Lemmy instance with bots, defederate it from everybody, then check back in 2 years. A cordoned-off Alabama for AI, if you will.
Unironically, yes, that would be a cool experiment. But I don’t think you’d have to wait two years for something amusing to happen. Bots don’t need to think before posting.
It’s like AI inbreeding. Flaws will be amplified over time unless new material is added.
Thanks, now I am just imagining all that code getting it on with a whole bunch of other code. ASCII all over the place.
Like, ASCII bukkake?
Oh yeah baby. Let’s fork all day and make a bunch of child processes!
AI generation loss? I wonder if this could be dealt with by training different kinds of models (linguistic logic instead of word prediction).
It’s known as model collapse.