It was merged after they were rightfully ridiculed by the community.

matwojo's awful response to the backlash really takes the cake:

I’ve learned today that you are sensitive to ensuring human readability over any concerns in regard to AI consumption

  • @[email protected]

    He’s right that it’s probably harder for AI to understand. But wrong in every other way possible. Human understanding should trump AI, at least while they’re as unreliable as they currently are.

    Maybe one day AI will know how to not bullshit, and everyone will use it, and then we’ll start writing documentation specifically for AI. But that’s a long way off.

    • @[email protected]

      Having AI that doesn't bullshit will require an entirely different set of algorithms than LLMs, or ML in general. ML by design approximates answers, and you don't use it for anything that's deterministic and has a single correct answer. So, in that regard, we're basically at square zero.

      You can keep slapping checks on top of the random text prediction it gives you, but if you had a way of checking whether something is really true for every case imaginable, you could probably just use that to generate the reply instead, and it couldn't be something that's also ML/random.

    • @Joeffect

      If it can’t understand human text, then is it really worth using? Like, isn’t that the minimum standard here, to get context and understanding from text?

      I don’t have any skin in this game, but this seems backwards and stupid… Especially since all current AI is basically just fancy pattern matching

    • Aatube

      He admitted himself that it might not be harder for AI to understand.