Apple engineers have shared new details about a collaboration with NVIDIA to speed up text generation with large language models.

  • @[email protected]
    21 points · 2 months ago

    Super ironic that Apple are working with NVIDIA, and have launched their own AI offering, but you can’t connect any kind of GPU to any Mac that they currently sell.

    • @[email protected]
      -15 points · edited · 2 months ago

      Yes, because it would make the Mac worse. Nvidia GPUs are comically inefficient.

      EDIT: tech illiterate shit-for-brains downvoted this comment.

  • @[email protected]
    16 points · 2 months ago

    Because of the diminishing returns in using larger and larger models, I’m hopeful that this could lead to more efficient LLM implementations that aren’t so harmful to the environment. Just maybe.

    • @garretble
      20 points · 2 months ago

      “We made Siri faster, but each request now drains a lake.”

      • @latenightnoir
        8 points · 2 months ago

        “Hey, Siri, give me a real-time count of all existing lakes and update it every second.”

        • Avieshek (OP)
          5 points · 2 months ago

          Siri: “Let me search it on the web.”

  • @tabular
    5 points · 2 months ago

    Two

    spoiler: dickheads

    are better than one