- cross-posted to:
- technology
First, editors can use LLMs to suggest refinements to their own writing, as long as the edits are checked for accuracy. In other words, they're treated like any other grammar checker or writing assistance tool. The policy cautions, "LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited."
The second exemption for LLMs covers translation assistance. Editors can use AI tools for a first pass at translating text, but they still need to be fluent enough in both languages to catch errors. As with regular writing refinements, anyone using LLMs also has to check that incorrect information hasn't been injected.
Both seem pretty reasonable
Sounds like they emphasize that contributors are still responsible for the changes. Some people are way too trusting of the robots' abilities.
anyone using LLMs also has to check that incorrect information hasn’t been injected.
It seems reasonable, but it's pretty easy to miss crucial mistakes when one sentence in 300 is wrong, and there are 25 cases of technically correct but misleading information.
Your worry is only reasonable if it were commonplace to write 300-sentence Wikipedia articles from scratch lol
That’s like 5x as long as the average article. Anyone submitting that much at once will raise an eyebrow