- cross-posted to:
- fuck_ai
“My dead father is ‘writing’ me notes again: A recent AI discovery resurrected my late father’s handwriting—and I want anyone to use it.” - by Benj Edwards, Sep 12, 2024, 5:00 AM CST
I find this use of AI, like 99% of the “uses” for AI, not only disturbing as all hell but also a sign that we are creeping closer and closer to bizarro world.
I’m not too worried yet. It took like 8 tries to generate an image, and it kept coming out wrong. The AI acknowledged that it was wrong, correctly identified what was wrong, and promised to rectify it. Eight tries later I gave up, and it actually thanked me for letting it off without further attempts.
Damn, you actually tortured an AI?
I think that while current LLMs don’t have the greatest context memories, they will remember certain things for a long time to come.
Generate a picture of a house absolutely without any giraffes whatsoever. There should be no giraffes.
Huh… I kinda figured they’d be past that by now, but I used your exact text for this:
What’s even funnier is telling it it’s wrong and to generate it again.
Depends on what you’re using.
With local models you use something called a “negative prompt” to exclude anything that you don’t want in the image.
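For the curious, a negative prompt in a local setup looks roughly like this (a minimal sketch assuming the Hugging Face diffusers library and a CUDA GPU; the model ID and prompts are just examples):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a local text-to-image model (model ID is a placeholder, not a recommendation).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The negative prompt lists concepts for the sampler to steer away from.
image = pipe(
    prompt="a photo of a house",
    negative_prompt="giraffe, animals",
    num_inference_steps=30,
).images[0]
image.save("house_no_giraffes.png")
```

Unlike telling a chatbot “no giraffes” (which just puts the word giraffe into the prompt), the negative prompt is applied at every denoising step, so the concept actually gets pushed out of the image.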
If you really want this to work, you’d have to train or fine-tune a model by feeding it a bunch of images of that person’s handwriting.
If you’re just asking ChatGPT to do this for you, then you’re doing it wrong.
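For anyone wondering what that fine-tune would actually involve, here’s a rough sketch of a LoRA-style fine-tune on a folder of handwriting scans. It assumes the Hugging Face diffusers, peft, transformers, and torchvision libraries; the folder path, trigger phrase, and hyperparameters are placeholders, and this is not the method the article’s author used:

```python
import torch
import torch.nn.functional as F
from pathlib import Path
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms
from diffusers import StableDiffusionPipeline, DDPMScheduler
from peft import LoraConfig

MODEL = "runwayml/stable-diffusion-v1-5"              # base model (placeholder)
DATA_DIR = Path("handwriting_scans")                  # folder of cropped scans (placeholder)
PROMPT = "a handwritten note in sks handwriting"      # rare-token trigger phrase (placeholder)

class HandwritingDataset(Dataset):
    """Pairs each handwriting scan with the same trigger-phrase caption."""
    def __init__(self, folder, tokenizer):
        self.paths = sorted(folder.glob("*.png"))
        self.tokenizer = tokenizer
        self.tf = transforms.Compose([
            transforms.Resize(512),
            transforms.CenterCrop(512),
            transforms.ToTensor(),
            transforms.Normalize([0.5], [0.5]),
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        image = self.tf(Image.open(self.paths[i]).convert("RGB"))
        ids = self.tokenizer(
            PROMPT, padding="max_length", truncation=True,
            max_length=self.tokenizer.model_max_length, return_tensors="pt",
        ).input_ids[0]
        return image, ids

pipe = StableDiffusionPipeline.from_pretrained(MODEL).to("cuda")
vae, unet, text_encoder, tokenizer = pipe.vae, pipe.unet, pipe.text_encoder, pipe.tokenizer
noise_scheduler = DDPMScheduler.from_pretrained(MODEL, subfolder="scheduler")

# Freeze everything, then attach small trainable LoRA adapters to the UNet's attention layers.
vae.requires_grad_(False)
text_encoder.requires_grad_(False)
unet.requires_grad_(False)
unet.add_adapter(LoraConfig(r=8, lora_alpha=8,
                            target_modules=["to_q", "to_k", "to_v", "to_out.0"]))

optimizer = torch.optim.AdamW([p for p in unet.parameters() if p.requires_grad], lr=1e-4)
loader = DataLoader(HandwritingDataset(DATA_DIR, tokenizer), batch_size=1, shuffle=True)

for epoch in range(50):  # tiny dataset, so many passes over it
    for images, ids in loader:
        images, ids = images.to("cuda"), ids.to("cuda")
        # Encode the scans into the VAE latent space.
        latents = vae.encode(images).latent_dist.sample() * vae.config.scaling_factor
        # Add noise at a random timestep; the UNet learns to predict that noise
        # conditioned on the trigger-phrase text embedding.
        noise = torch.randn_like(latents)
        t = torch.randint(0, noise_scheduler.config.num_train_timesteps,
                          (latents.shape[0],), device=latents.device)
        noisy = noise_scheduler.add_noise(latents, noise, t)
        text_emb = text_encoder(ids)[0]
        pred = unet(noisy, t, encoder_hidden_states=text_emb).sample
        loss = F.mse_loss(pred.float(), noise.float())
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# The LoRA weights now live inside pipe.unet; prompt with the trigger phrase.
image = pipe(PROMPT + ", the words 'hello there'", num_inference_steps=30).images[0]
image.save("note.png")
```

Even then, general image models are notoriously bad at rendering legible text, so a model built specifically for handwriting synthesis would likely do a much better job than a general-purpose image generator.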