Apple engineers have shared new details on a collaboration with NVIDIA to speed up text generation with large language models.
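
For context from the underlying announcement: the collaboration reportedly centers on Apple's ReDrafter speculative-decoding technique, which NVIDIA integrated into its TensorRT-LLM framework. The sketch below is only a toy illustration of the general draft-and-verify idea behind speculative decoding; the model functions, vocabulary, and parameters are invented stand-ins, not Apple's ReDrafter or any real TensorRT-LLM API.

```python
# Toy sketch of draft-and-verify ("speculative") decoding: a cheap draft model
# proposes several tokens ahead, and the expensive target model checks them,
# so accepted tokens cost far less than generating each one with the big model.
# Everything here is an invented stand-in, not Apple/NVIDIA code.

import random

VOCAB = list("abcdefgh")


def draft_next(token: str) -> str:
    """Cheap draft model: a fast heuristic guess at the next token."""
    return VOCAB[(VOCAB.index(token) + 1) % len(VOCAB)]


def target_next(token: str) -> str:
    """Expensive target model: the output the final text must match."""
    if random.random() < 0.2:           # occasionally disagrees with the draft
        return random.choice(VOCAB)
    return VOCAB[(VOCAB.index(token) + 1) % len(VOCAB)]


def speculative_generate(prompt: str, steps: int, k: int = 4) -> str:
    """Generate `steps` tokens, drafting k ahead and keeping the verified prefix."""
    out = list(prompt)
    while len(out) - len(prompt) < steps:
        # 1. Draft k candidate tokens with the cheap model.
        cur = out[-1]
        drafts = []
        for _ in range(k):
            cur = draft_next(cur)
            drafts.append(cur)

        # 2. Verify the drafts with the target model; accept the matching prefix.
        cur = out[-1]
        for cand in drafts:
            expected = target_next(cur)
            if cand == expected:
                out.append(cand)        # accepted "for free"
                cur = cand
            else:
                out.append(expected)    # first mismatch: take the target's token
                break

    return "".join(out[:len(prompt) + steps])


if __name__ == "__main__":
    random.seed(0)
    print(speculative_generate("a", steps=16, k=4))
```

In this scheme the big model mostly verifies tokens instead of producing them one by one, which is where the reported tokens-per-second gains come from whenever most drafts are accepted.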

  • @[email protected]
    link
    fedilink
    English
    203 days ago

    Super ironic that Apple are working with NVIDIA, and have launched their own AI offering, but you can’t connect any kind of external GPU to any Mac that they currently sell.

    • @[email protected]
      link
      fedilink
      English
      -13
      edit-2
      2 days ago

      Yes, because it would make the Mac worse. Nvidia GPUs are comically inefficient.

      EDIT: tech-illiterate shit-for-brains downvoted this comment.

  • @[email protected]
    link
    fedilink
    English
    153 days ago

    Because of the diminishing returns in using larger and larger models, I’m hopeful that this could lead to more efficient LLM implementations that aren’t so harmful to the environment. Just maybe.

    • @garretble
      183 days ago

      “We made Siri faster but each request now drains a lake.”

      • @latenightnoir
        73 days ago

        “Hey, Siri, give me a real-time count of all existing lakes and update it every second.”

        • Avieshek (OP)
          53 days ago

          Siri: “Let me search the web for that.”

  • @tabular
    53 days ago

    Two

    spoiler: dickheads

    are better than one