- cross-posted to:
- [email protected]
- [email protected]
Software engineer Vishnu Mohandas decided he would quit Google in more ways than one when he learned that the tech giant had briefly helped the US military develop AI to study drone footage. In 2020 he left his job working on Google Assistant and also stopped backing up all of his images to Google Photos. He feared that his content could be used to train AI systems, even if they weren’t specifically ones tied to the Pentagon project. “I don’t control any of the future outcomes that this will enable,” Mohandas thought. “So now, shouldn’t I be more responsible?”
The site (TheySeeYourPhotos) returns what Google Vision is able to discern from photos. You can test it with any image you want, or use one of the sample images provided.
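For the curious, here's a minimal sketch of the kind of call a site like this could make against the Google Cloud Vision API. It assumes the `google-cloud-vision` package and application-default credentials are set up; the `describe_photo` helper and the `photo.jpg` filename are just illustrative.

```python
# Minimal sketch: ask Google Cloud Vision what it can discern in a photo.
from google.cloud import vision

def describe_photo(path: str) -> None:
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())

    # Request the labels Vision detects in the image.
    labels = client.label_detection(image=image).label_annotations
    for label in labels:
        print(f"{label.description}: {label.score:.0%}")

describe_photo("photo.jpg")  # hypothetical local file
```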
Not so much "wants" as "expects", but that's what AI is designed to do.
What you’re saying is not factual. LLMs predict what comes next based on the parameters set during the training process. It might at times say what you’re expecting, but try contradicting information it knows to be factual and see how far that gets you.
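To illustrate the point, here's a minimal sketch of next-token prediction with the Hugging Face `transformers` library: the model scores continuations from its learned parameters rather than "wanting" anything. The `gpt2` checkpoint and the prompt are just example choices.

```python
# Minimal sketch: an LLM assigns probabilities to the next token
# based purely on its trained parameters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Probability distribution over the next token, fixed by the parameters.
probs = torch.softmax(logits[0, -1], dim=-1)
for token_id in torch.topk(probs, 5).indices:
    print(repr(tokenizer.decode(token_id)), f"{probs[token_id].item():.3f}")
```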
I think you’re confusing agreeableness with being a validation buddy. For a product like this to work, it has to be inviting.
Now you’re just splitting hairs.