Summary: MDN's new "AI Explain" button on code blocks generates human-like text that may be correct by happenstance, or may contain convincing falsehoods. This is a strange decision for a technical reference.
It may do more harm than good: it spits out plausible answers that are either completely or subtly wrong (the latter is worse, obviously), and it's not easy to discern how good an answer actually is.
And people ask the AI precisely because they don't know the subject, so the ones asking have no way to validate the information; they are fed bad information and believe it's the truth.