• @[email protected]
    link
    fedilink
    19
    edit-2
    1 year ago

    This is going to be pretty interesting. Despite seeming far behind, Apple is very well positioned to benefit from the AI developments: it has an opportunity for deep integration of AI features into its operating systems, as well as offline compute through specialised silicon design, that no other company really has.

  • @garretble
    link
    English
    11
    1 year ago

    Is this why Jon Stewart got canceled?

  • @[email protected]
    link
    fedilink
    10
    1 year ago

    Better late than never.

    But even more interesting than when is whether this will use local AI models, or whether it will once again become a data-protection trust sink.

    • @[email protected]
      link
      fedilink
      English
      6
      1 year ago

      It will most certainly use local models (locally tuned from a common base model). That’s kind of their whole differentiator.

      • @[email protected]
        link
        fedilink
        4
        edit-2
        1 year ago

        We’ll see. To date there’s no locally runnable generative LLM that comes close to the gold standard, GPT-4. Even coming close to GPT-3.5-Turbo counts as impressive.

        • @[email protected]
          link
          fedilink
          English
          6
          1 year ago

          We only recently got on-device Siri, and it still isn’t always on-device, if I understand correctly. So the same level of privacy that applies to in-the-cloud Siri could apply here.

          • @[email protected]
            link
            fedilink
            4
            1 year ago

            My on-device Siri that lives in my Apple Watch Series 4 is definitely processing everything locally now. She’s gotten dumber than I am.

          • @abhibeckert
            link
            3
            edit-2
            1 year ago

            Apple has sold computers with local voice input and command processing for more than 20 years, and iPhones have pretty much always had that feature (it was called “Voice Control” before Siri existed, and it was 100% local).

            I’d argue that, for Apple, the recent change is that they started processing commands in the cloud. The list of commands processed locally vs. in the cloud has shifted over time: they moved most of it to the cloud several years ago, when they bought a cloud-based smart-assistant startup and used it as the basis for a new and improved assistant on the iPhone. But every year they reduce the dependence on that and move back toward how it used to be, with local processing. These days, even when a command is processed in the cloud, it’s often only part of a multi-step process where the majority of the work was done on-device. And many everyday commands are handled entirely on-device.

            For example, if you ask it what the weather is, the command is handled entirely on-device except for actually fetching the latest weather report. And you can ask it what the temperature is “inside”, which will check a sensor in your house and work entirely offline, if your home has a temperature sensor (there’s one built into Apple smart speakers, and into a small but growing number of third-party smart home products).

        • @abhibeckert
          link
          4
          edit-2
          1 year ago

          To date there’s no locally runnable generative LLM that comes close to the gold standard, GPT-4.

          True - but iPhones do run a local language model now, as part of their keyboard. It’s definitely not GPT-4 quality, but that’s to be expected given that it runs on a tiny battery and executes on every single keystroke. Apple has proven that useful language models can run locally on the slowest hardware they sell. I don’t know of anyone else who’s done that.

          Even coming close to GPT-3.5-turbo counts as impressive.

          Llama 2 is GPT-3.5-Turbo quality, and it runs well on modern Macs, which have a lot of very fast memory. Even their smallest fanless laptop can be configured with 24GB of memory, and it’s fast memory too: 800Gbps. That’s not quite enough to run the largest Llama 2 model, but it’s close. Their more expensive laptops have more memory, and faster memory; they can run the 70-billion-parameter Llama 2 without breaking a sweat.

          And on desktops, Apple sells Macs with 192GB of memory that’s way faster still, at 6.4Tbps. That’s slightly more memory (and for a lot less money) than the most expensive data-center GPU NVIDIA sells. (The NVIDIA unit is faster at compute operations, but LLMs are often limited by available memory, not compute speed.)
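          (Back-of-envelope check on those memory figures — my own rough arithmetic, not vendor numbers. Weight storage dominates, at parameter count × bits per weight, plus some headroom for the KV cache and activations; the 20% overhead factor below is an assumption.)

```python
def model_memory_gb(params_billions: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Rough RAM needed to hold an LLM's weights, with ~20% headroom
    for KV cache and activations (assumed, not measured)."""
    bytes_per_weight = bits_per_weight / 8
    return params_billions * 1e9 * bytes_per_weight * overhead / 1024**3

# Llama 2 70B in 16-bit: far beyond any laptop's memory
print(round(model_memory_gb(70, 16), 1))  # ~156.5 GB
# 4-bit quantised 70B: fits in higher-end Mac unified memory
print(round(model_memory_gb(70, 4), 1))   # ~39.1 GB
# Llama 2 7B at 4-bit: fits comfortably in 24 GB
print(round(model_memory_gb(7, 4), 1))    # ~3.9 GB
```

          By this estimate a 4-bit 70B model wants roughly 39GB, which is why 24GB falls just short while the bigger configurations handle it easily.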

          • Quokka
            link
            fedilink
            English
            1
            1 year ago

            You can even run Llama 2 locally on Android phones.

  • billwashere
    link
    English
    0
    1 year ago

    One thing Apple is good at is waiting until the market is ripe and then releasing a better product. Like mp3 players, phones, tablets, etc.

    • @68x
      link
      English
      3
      edit-2
      10 months ago

      deleted by creator

      • billwashere
        link
        English
        2
        1 year ago

        And it seems not to be just Siri. Every Alexa device in my house has gotten deafer and dumber too. But yes, I agree Siri has declined recently.

          • billwashere
            link
            English
            2
            edit-2
            1 year ago

            I haven’t had that happen yet. But yeah I would too.

        • @[email protected]
          link
          fedilink
          2
          1 year ago

          Add Google Assistant to the list. I remember it being great back in 2018, and now it struggles with basic things like “set a timer”.

      • @[email protected]OP
        link
        fedilink
        1
        1 year ago

        In my experience it’s not really getting worse; it’s just not getting substantially better either.