• Kogasa
    10 • 7 months ago

    That’s not necessarily wrong, but not the big explaining factor here I think. The technological challenges behind aligning ML models with factual reality aren’t solved, so it’s not an engineering decision. It’s more that AI is remarkably easy to market as being more capable than it is

    • @[email protected]
      3 • 7 months ago

      To expand: I feel like it should be emphasised more that current “AI” models are, at best, hallucinating.

      Their output may look real enough, and for some purposes it may be perfectly suitable, but ultimately they have no concept of the semantic objects behind the words they learn, or of the relationships between those objects. Without that, they cannot guarantee that the relationships implied by the word combinations they produce match the actual ones.

      You can use an LLM to help translate bullet points into text of a given tone (like abstracts for theses that sound scientific), but you’ll still have to check the factuality and consistency of those texts. When using them to write about something you already know, that’s doable and can save you some work. But using them, as in the OP, to aggregate and present “new” facts without supervision is dangerous, because you can’t verify what you don’t already know.
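      That workflow can be sketched in a few lines of Python. Everything here is illustrative: the `build_prompt` helper and the `needs_fact_check` flag are hypothetical, and the actual LLM call is left as a placeholder rather than tied to any particular API.

      ```python
      def build_prompt(bullets, tone="formal academic abstract"):
          """Assemble a drafting prompt from bullet points (hypothetical helper)."""
          lines = "\n".join(f"- {b}" for b in bullets)
          return (
              f"Rewrite the following bullet points as a {tone}:\n{lines}\n"
              "Do not add facts that are not in the bullet points."
          )

      def draft_with_llm(bullets):
          """Draft prose from bullets; the result always requires human review."""
          prompt = build_prompt(bullets)
          # Placeholder for a real LLM call via whatever client you use, e.g.:
          # text = client.chat.completions.create(...).choices[0].message.content
          text = "<model output would go here>"
          # The model cannot guarantee factual consistency, so every claim in
          # the draft must still be checked against the original bullets.
          return {"draft": text, "needs_fact_check": True}

      print(build_prompt(["Method X reduces error", "Tested on dataset Y"]))
      ```

      The point of the `needs_fact_check` flag is that verification is not optional: the prompt can forbid invented facts, but nothing in the model enforces that instruction.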

      But “Copilot can scrape your data to give you some pointers and spare you some of the tedium of finding them yourself, but you shouldn’t take it as gospel truth” doesn’t sell quite as nicely as “Microsoft Copilot leverages the power of AI to boost productivity, unlock creativity, and help you understand information better”.