@[email protected] to TechnologyEnglish • 5 months agoWhen A.I.’s Output Is a Threat to A.I. Itself | As A.I.-generated data becomes harder to detect, it’s increasingly likely to be ingested by future A.I., leading to worse results.www.nytimes.comexternal-linkmessage-square53fedilinkarrow-up1354arrow-down112cross-posted to: [email protected]
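The collapse the headline describes can be illustrated with a toy experiment (this is a sketch I am adding for illustration, not anything from the article): treat a "model" as the empirical distribution of its training tokens, so generating text is just sampling with replacement, and train each generation only on the previous generation's output. Any token the previous model failed to emit is gone forever, so vocabulary diversity can only shrink.

```python
import random

def train_and_generate(corpus, n, rng):
    # A maximally naive "language model": its learned distribution is the
    # empirical distribution of its training corpus, so generating n tokens
    # means sampling n tokens with replacement from that corpus.
    return [rng.choice(corpus) for _ in range(n)]

rng = random.Random(42)
corpus = list(range(1000))  # 1000 distinct "tokens" in the human-written data
diversity = [len(set(corpus))]

for generation in range(20):
    # Each generation trains ONLY on the previous generation's output.
    corpus = train_and_generate(corpus, len(corpus), rng)
    diversity.append(len(set(corpus)))

# A token absent from generation k can never reappear in generation k+1,
# so the count of distinct tokens is non-increasing: the tails vanish first.
print(diversity[0], diversity[-1])
```

Real model collapse is more subtle than this resampling toy, but the mechanism is the same: rare content is undersampled each generation and eventually disappears from the training distribution entirely.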
minus-square@[email protected]linkfedilinkEnglish0•5 months agoIf AI feedback starts going the other way around we should be REALLY scared. Imagine it just become sentient and superintelligent and read all that we are saying about it.
minus-square@doodleduplinkEnglish5•5 months agoThis is completely unrelated. Besides, how does AI suddenly become sentient?
minus-square@[email protected]linkfedilinkEnglish2•5 months agoLLMs are as close to real AI as Eliza was (i.e., nowhere even remotely close).
If AI feedback starts going the other way around we should be REALLY scared. Imagine it just become sentient and superintelligent and read all that we are saying about it.
This is completely unrelated.
Besides, how does AI suddenly become sentient?
It was a joke.
LLMs are as close to real AI as Eliza was (i.e., nowhere even remotely close).
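For context on the Eliza comparison: Eliza (Weizenbaum, 1966) produced its replies by surface pattern matching with no model of meaning. A minimal sketch of an Eliza-style rule engine follows; the patterns and response templates here are illustrative stand-ins, not Weizenbaum's original script, which also had keyword ranking and memory.

```python
import re

# Eliza-style rules: (pattern, response template). The principle is the point:
# the program matches surface text and echoes fragments back, understanding nothing.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "What makes you feel {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Echo the captured fragment back, stripping trailing punctuation.
            return template.format(match.group(1).rstrip(".!?"))
    return "Please tell me more."  # fallback when no rule matches

print(eliza_reply("I am worried about AI."))  # echoes the fragment back as a question
```

Whether LLMs are meaningfully different in kind rather than just in scale is exactly what the commenters are disagreeing about; the sketch only shows what the Eliza side of the comparison actually was.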