Summary: MDN's new "AI Explain" button on code blocks generates human-like text that may be correct by happenstance, or may contain convincing falsehoods. This is a strange decision for a technical ...
I sometimes think we might currently be at the best state AI will reach for the next 20 years or so, until other significant technological improvements are achieved.
These AIs were trained on human-generated data, but now we're going to trash the Internet with AI-generated, truth-sounding nonsense, so the same methods will likely produce worse and worse results.
LLMs will need a source of truth, such as knowledge graphs. This is a very good summary of the topic, by one of the Wikidata guys: https://youtu.be/WqYBx2gB6vA
Thanks, very interesting video!