Summaries are often wrong, usually odd, sometimes funny, rarely helpful.

  • @jacksilver
    11 months ago

    I think it’s because it’s an area where being wrong is acceptable. In most other applications, uncertainty about correctness makes LLMs/AIs more dangerous to deploy.