• @[email protected]
    22 points · 8 months ago

    In 2022, AI evolved into AGI and LLM into AI. Languages are not static, as Old English shows. Get with the times.

    • @Fedizen
      1 point · 8 months ago

      Changes made to a language in order to sell products aren’t really the language adapting; they’re the language being influenced and distorted.

      • Jojo, Lady of the West
        10 points · 8 months ago

        People have used AI to describe things like chatbots, video game bots, etc. for a very long time. Don’t no-true-Scotsman the robots.

        • @[email protected]
          2 points · 8 months ago

          I think the modern pushback comes from people who get their understanding of technology from science fiction. SF has always (mis)used AI to mean sapient computers.

      • Echo Dot
        1 point · 8 months ago

        LLMs are one way of developing an AI. There are plenty of conspiracy theories in this world that are real; it’s better to focus on those than to make stuff up.

        There really is an amazing technological development going on, and you’re dismissing it over irrelevant semantics.

      • @Aceticon
        1 point · 8 months ago

        The acronym AI has been used in game dev for ages to describe things like pathing and simulation. These are almost invariably either plain algorithms (such as A*, which autonomous entities use to find a path to a specific destination) or emergent behaviours, where simple rules are applied to individual entities - for example, each bird in a flock - so that many simple agents produce a complex whole. An example of the latter in gamedev is Steering Behaviours; outside gaming, the Game of Life.
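The "simple rules on individual entities" idea is easy to demonstrate. Here is a minimal, illustrative sketch (not from the comment; names and grid representation are my own) of Conway's Game of Life in Python, where two local rules per cell produce complex global behaviour:

```python
from collections import Counter

def step(live_cells):
    """Advance one generation; live_cells is a set of (x, y) coordinates."""
    # Count how many live neighbours each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Rule 1: a live cell survives with 2 or 3 live neighbours.
    # Rule 2: a dead cell becomes live with exactly 3 live neighbours.
    return {
        cell
        for cell, n in counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "blinker": three cells in a row oscillate between horizontal and vertical.
blinker = {(0, 1), (1, 1), (2, 1)}
```

Each cell only looks at its eight neighbours, yet the system as a whole supports oscillators, gliders, and other complex structures, which is the same pattern as the per-bird rules behind flocking.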

    • @[email protected]
      -2 points · 8 months ago

      They didn’t so much “evolve” as AI scared the shit out of us at such a deep level that we changed the definition of AI to stay in denial about the fact that it’s here.

      Since time immemorial, passing a Turing test was the standard. As soon as machines started passing Turing tests, we decided Turing tests weren’t such a good measure of AI.

      But I haven’t yet seen an alternative proposed. Instead of defining it by criteria and tasks, we’re just arbitrarily saying “it’s not AGI, so it’s not real AI”.

      In my opinion, it’s more about denial than it is about logic.