• Ace! _SL/S
    link
    fedilink
    English
    -8
    5 months ago

    A private local LLM

    Running on a phone? No way, not without being absolutely horrible and slow, or churning through your phone’s battery anyway.

    Good LLMs are already slow on a GTX 1080, which is miles faster than any phone out there.
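    As a rough sanity check: token generation is typically memory-bandwidth-bound, so an upper bound on decode speed is bandwidth divided by model size. A sketch with assumed, approximate figures:

    ```python
    # Back-of-envelope decode-speed estimate: generation is usually
    # memory-bandwidth-bound, so tokens/sec <= bandwidth / model bytes.
    # Bandwidth and model-size figures below are assumed, for illustration only.
    def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
        return bandwidth_gb_s / model_size_gb

    MODEL_GB = 3.5  # a 7B-param model at 4-bit quantization is ~3.5 GB
    for name, bw in [("GTX 1080 (~320 GB/s)", 320), ("phone SoC (~50 GB/s)", 50)]:
        print(f"{name}: <= {max_tokens_per_sec(bw, MODEL_GB):.0f} tok/s")
    ```

    And that’s a theoretical ceiling; real throughput lands well below it once you account for compute and memory pressure.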

    • @subtext
      link
      English
      17
      5 months ago

      I hear you, but I would also be shocked if Apple rolled this out and it was an absolutely terrible experience. Their whole MO is “luxury” products with “premium” experiences; it would not be fitting of the brand to ship a piece-of-crap experience in their flagship announcement.

      I’m willing to give them the benefit of the doubt on this one.

      • @Eldritch
        link
        English
        0
        5 months ago

        You might want to check with Siri on that. Apple regularly failed at this even under Jobs’s leadership, and Tim Cook is no Steve Jobs. It’s already looking like it’s going to be just standard remote ChatGPT, hallucinations and all.

        • The Dark Lord ☑️
          link
          fedilink
          English
          6
          5 months ago

          Apple Maps was bad, yes. But they had their hand forced. Google started charging for their API (enough to cripple their app), and they had very little time to create one of their own.

          That’s not happening here. No one is forcing their hand. If they didn’t release an updated Siri this year, nothing would happen.

      • @Womble
        link
        English
        5
        edit-2
        5 months ago

        Microsoft’s penchant for making up names for things that already have names is neither here nor there. It is an LLM; in fact, it’s already twice as large as GPT-2 (1.5B params).

        • @[email protected]
          link
          fedilink
          English
          3
          5 months ago

          I do think it’s a useful distinction, considering open models can be 100B+ parameters nowadays and GPT-4 is rumored to be 1.7T params. Plus, this class of model is far more likely to run on-device.
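          For scale, a rough footprint estimate is just parameter count times bytes per parameter (figures below are illustrative and ignore activations and KV cache):

          ```python
          # Rough model-memory estimate: params x bytes per param.
          # Illustrative only; ignores activation and KV-cache overhead.
          def model_size_gb(params_billions: float, bits_per_param: int) -> float:
              return params_billions * bits_per_param / 8

          for name, b in [("GPT-2", 1.5), ("7B open model", 7.0), ("100B open model", 100.0)]:
              print(f"{name}: {model_size_gb(b, 16):.1f} GB fp16, "
                    f"{model_size_gb(b, 4):.1f} GB 4-bit")
          ```

          A quantized few-billion-param model fits in a couple of GB, which is why that size class is the one that makes sense on-device.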

    • @[email protected]
      link
      fedilink
      English
      8
      5 months ago

      You would be surprised. If you haven’t tried running an LLM on Apple silicon, it’s pretty snappy; but, as with everything else, RAM can be a significant limiting factor unless the model is trimmed down to do very specific things to reduce its size.
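      If you want to try it yourself, here’s a minimal sketch using the mlx-lm package; the model name is just one example of a 4-bit community conversion (roughly 4 GB of RAM), not a recommendation:

      ```python
      # Minimal local-LLM sketch on Apple silicon via mlx-lm (pip install mlx-lm).
      # The model below is one example of a 4-bit quantized community conversion.
      from mlx_lm import load, generate

      model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")
      response = generate(model, tokenizer,
                          prompt="Explain RAM limits for on-device LLMs.",
                          max_tokens=128, verbose=True)
      ```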

    • @felixwhynot
      link
      English
      -1
      5 months ago

      I think it’s running on their “Private Cloud Compute” platform, not locally (I’m not sure, though).