An Iowa school district is using ChatGPT to decide which books to ban. Official: “It is simply not feasible to read every book” for depictions of sex.::Official: “It is simply not feasible to read every book” for depictions of sex.
An Iowa school district is using ChatGPT to decide which books to ban. Official: “It is simply not feasible to read every book” for depictions of sex.::Official: “It is simply not feasible to read every book” for depictions of sex.
One problem with that is that GPT lies. It will literally just make things up that sound plausible in order to be able to provide an answer.
Yes exactly and that’s only one of the problems with this. Honestly, how does this even work? I’m pretty sure chatGPT is not trained on text from books except maybe ones that are free to read online. Even if it did, its’ training set ends in 2021. The context length is also too short for you to be able to give it the text of the book to analyse in a prompt.
Just…what? How does this even work? Maybe I’m missing something, I’m definitely no expert but both the policy and the implementation seem pretty broken to me.
The only way this could possibly work would be to send the entire text of every book into ChatGPT. If you ask ChatGPT something like “does the following passage contain depictions of sex: [text goes here]” it might work. However, ChatGPT’s context size is smaller than your average book, so you’d need to break it up into multiple requests.
Even with ChatGPT this is probably still not feasible to run against an entire library of books, so I would guess they’re half-assing it.
Technical feasibility aside, this is a stupid, evil idea. I hope everyone involved in this scheme gets fired.
I just asked GPT-4 if the Twilight Series is “suitable for children” and the output was pretty spot on - summarised five issues that might be problematic and then finished with “It’s always a good idea for parents or guardians to consult with a librarian or teacher”.
I bet none of this is based on the actual contents of the book - it would be based on independent reviews of the book. For example it highlights issues that “some parents” have raised. Which means you really are just getting a summary of what humans think of it.
Try it with an obscure book that doesn’t have any reviews online. Or with one that was released after 2021.
You’ll either get the response that it has no idea, or it’ll come up with a plot based on the title and evaluate that one.
That style of prompt has its own issues. If the book for some reason has text that says “this book doesn’t contain any sex” then the prompt might come back as a false negative.
Here is an alternative Piped link(s): https://piped.video/Sv5OLj2nVAQ
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source, check me out at GitHub.
No, but it just lies and makes things up all the time.
chatgpt, who’s favorite food is a golden turd?
The character Kenny McCormick from the animated TV show “South Park” has a favorite food that’s humorously depicted as a golden turd.
Chatgpt, What book has the story element where a character has to dig his own grave?
In the book “No Country for Old Men” by Cormac McCarthy, there’s a scene where a character named Moss is forced to dig his own grave by his pursuer, Anton Chigurh. This intense and suspenseful scene is a significant story element in the novel.
Chatgpt, When does that happen in the book?
I apologize for any confusion, but upon reevaluating, I realize there is no specific scene in the book “No Country for Old Men” where a character is forced to dig their own grave. It seems there might have been a misunderstanding or mix-up. If you have any other questions or if there’s something else I can assist you with, feel free to ask!
So frustrating.
Sounds like my old boss. It was better to pull shit out of his ass than to admit he didn’t know something