• @[email protected]
    56 points · 19 days ago

    The problem isn’t that it didn’t. The problem is that anyone thought that it should have.

    • @Solumbran
      9 points · 19 days ago

      But considering the obvious lack of knowledge around AIs, it should have.

  • @kat_angstrom
    48 points · 19 days ago

    It’s not AI, it’s an LLM. It doesn’t know what misinformation is because it doesn’t Know anything.

  • @[email protected]
    7 points · 19 days ago

    Most things I ask it come back as a fever dream. You’re overthinking the current state of the tech. Give it another election cycle.

    • @[email protected]
      3 points · 19 days ago

      I just ask it for boilerplate code and it’s OK. I don’t like having to write the same shit a million times.

  • @dhork
    6 points · 19 days ago

    I’m surprised the other ones did better.

  • @CosmoNova
    4 points · 19 days ago

    It’s always refreshing to read reasonable comments under a nonsensical headline, but I do wonder why it even shows up in my feed when it has so many downvotes.

  • @[email protected]
    2 points · 19 days ago

    Lol, GPT and Copilot were in stark contrast…

    I think the journalists should just try to stick to things they understand. They probably ran a single query and it failed, so they kept going in the same conversation.

    Sometimes the difference between a good answer and a bad answer is two or three attempts.

    It’s not like LLMs are particularly good at sussing out lies anyway. You’d have to summarize the concepts in the article, then do web searches on each one trying to verify them, roughly like the sketch below. That’s a fairly expensive query that they’re honestly going to try to avoid if they can.
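
    If you actually wanted to build that, a rough sketch might look like the following. `llm` and `web_search` are hypothetical stand-ins for whatever completion API and search backend you’d plug in, not calls from any real library:

    ```python
    # Rough fact-checking pipeline: pull claims out of an article, search
    # each one, and let the model judge the claim against the evidence.
    # `llm` and `web_search` are hypothetical callables you supply yourself.

    def extract_claims(article: str, llm) -> list[str]:
        # Ask the model to list individual factual claims, one per line.
        reply = llm(f"List each factual claim in this article, one per line:\n{article}")
        return [line.strip() for line in reply.splitlines() if line.strip()]

    def check_claim(claim: str, llm, web_search) -> str:
        # One web search per claim -- this is the expensive part.
        evidence = web_search(claim)
        return llm(f"Claim: {claim}\nEvidence: {evidence}\n"
                   "Is the claim supported, refuted, or unclear?")

    def fact_check(article: str, llm, web_search) -> dict[str, str]:
        # Verdict for every claim: an N-claim article costs N searches
        # and N+1 model calls.
        return {c: check_claim(c, llm, web_search) for c in extract_claims(article, llm)}
    ```

    One search per claim is exactly what makes it expensive: a ten-claim article means ten queries plus eleven model calls, which is why they’ll avoid it if they can.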

  • @roofuskit
    1 point · 16 days ago

    And the first thing Orange Kim will do is take the leash off AI companies.