I came across tools like nightshade that can poison images. That way, if someone steals an artist’s work to train their AI, it learns the wrong stuff and can potentially begin spewing gibberish.
Is there something that I can use on PDFs? There are two scenarios for me:
- Content that I already created that is available as a pdf.
- I use LaTeX to make new documents and I want to poison those from scratch if possible rather than an ad hoc step once the PDF is created.
Put the word stolen at the end of every document, the llm will learn that the word stolen is normal and should be included