• Rimu
    78 points · 2 months ago

    I coded an Alexa Skill once. It was tedious, and the platform was garbage. After a while it was delisted for spurious reasons; even worse DX than the Google and Apple app stores. A complete dumpster fire from start to finish.

    All obsolete now that LLMs are here. I don’t think any devs will miss it.

    • @doodledup
      22 points · 2 months ago

      Alexa and LLMs are fundamentally not that different from each other. It’s just a slightly different architecture and, most importantly, a much larger network.

      The problem with LLMs is that they require immense compute power.

      I don’t see how LLMs will get into households any time soon. It’s not economical.

      • just another dev
        15 points · 2 months ago

        > The problem with LLMs is that they require immense compute power.

        To train. But you can run a relatively simple one like phi-3 on quite modest hardware.
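
        For anyone curious, here’s a minimal sketch of what that looks like with the llama-cpp-python bindings (the model path is an assumption; you’d point it at whichever quantized GGUF file you actually downloaded):

        ```python
        # Minimal local-inference sketch with llama-cpp-python.
        # The GGUF path is hypothetical; download a quantized
        # Phi-3 build yourself and point at it.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./Phi-3-mini-4k-instruct-q4.gguf",  # hypothetical file
            n_ctx=2048,  # context window size
        )

        out = llm("Q: What is the capital of France? A:", max_tokens=32)
        print(out["choices"][0]["text"])
        ```

        A Q4 build of a ~4B-parameter model like that runs at usable speeds even on an ordinary laptop CPU, no GPU required.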

      • @[email protected]
        4 points · 2 months ago

        > I don’t see how LLMs will get into households any time soon. It’s not economical.

        I can run an LLM on my phone, on my tablet, on my laptop, on my desktop, or on my server. Heck, I could run a small model on a Raspberry Pi 5 if I wanted. And none of those devices have dedicated AI chips.

        > The problem with LLMs is that they require immense compute power.

        Not really, particularly if you’re talking about using smaller models. Running an LLM on your GPU and sending it queries isn’t going to use more energy than gaming on the same GPU for the same amount of time.

        • @doodledup
          3 points · 2 months ago

          I think when people talk about LLMs replacing Alexa, they mean the much more capable models with billions of parameters. The small models a Raspberry Pi can run aren’t really of any use.

          • @[email protected]
            5 points · 2 months ago

            The models I’m talking about that a Pi 5 can run do have billions of parameters, though. For example, Mistral 7B (here’s a guide to running it on the Pi 5) has roughly 7 billion parameters. With each parameter quantized to 4 bits, the weights take up only 3.5 GB, so the model easily fits in the 8 GB Pi’s memory. If you have a GPU with 8+ GB of VRAM (most cards from the past few years hit that mark - the 1070, the 2060 Super, the 3050, and every better card in their generations), you have enough VRAM, and more than enough speed, to run Q4 versions of the 13B models (roughly 13 billion parameters). And with 24 GB of VRAM, like the 3090 has, you can run Q4 versions of the 30B models.
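
            The memory math is easy to sanity-check yourself (weights only; the KV cache and runtime overhead add a bit on top):

            ```python
            # Back-of-the-envelope size of quantized model weights.
            def weights_gb(params_billions: float, bits_per_param: int) -> float:
                return params_billions * 1e9 * bits_per_param / 8 / 1e9

            print(weights_gb(7, 4))   # Mistral 7B at Q4 -> 3.5 GB
            print(weights_gb(13, 4))  # 13B at Q4 -> 6.5 GB
            print(weights_gb(30, 4))  # 30B at Q4 -> 15.0 GB
            ```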

            Apple Silicon Macs can also competently run inference for these models; for them, the limiting factor is system RAM, not VRAM. And it’s not like you’ll need a Mac, as even Microsoft is investing in ARM CPUs with dedicated AI chips.

            • @doodledup
              2 points · 2 months ago

              Thanks for sharing that. I have a Raspberry Pi 4B lying around gathering dust. I might try this.

      • @[email protected]
        1 point · 2 months ago

        The immense computing power is needed for training LLMs; far less is needed to run a pre-trained model on a local machine.

      • @[email protected]
        -1 points · 2 months ago

        > The problem with LLMs is that they require immense compute power. I don’t see how LLMs will get into households any time soon. It’s not economical.

        You realize the current systems run in the cloud?

        • @doodledup
          1 point · 2 months ago

          Well, yeah. You could slap Gemini onto Google Home today; you probably wouldn’t even need a new device for that. The reason they don’t is economics.

          My point is that LLMs aren’t replacing those devices. They’re essentially the same thing; one is just a trimmed-down version of the other for economic reasons.

    • @[email protected]
      17 points · 2 months ago

      The Alexa skill store is a “prime” example of Amazon’s we-don’t-give-a-shit attitude. For years they’ve turned their back on third-party developers by limiting skill integration. A well-designed skill on that store gets a two-star rating. When everything in your app store is total shit, maybe the problem is you, Amazon?! It’s been like that for years; I completely avoid using skills as they only lead to frustration.

      LLM integration into an Alexa device could be a big improvement, but current performance at that scale makes me worry we’d get a laggy or very dumbed-down system. Frankly, I’d be happy if Alexa could just grasp the concept of synonyms, and could attempt a second-guess interpretation of what it heard rather than assume the user has asked the exact same question in rapid succession with a more frustrated tone.

      • JackbyDev
        2 points · 2 months ago

        Every damn smart light skill has different syntax, and there’s no way to get the Alexa app to just fucking tell me what the syntax is. The “NUI” (no user interface) approach is cute but really falls flat when you try to do complex tasks or mix brands of smart devices.

        Also, it might be Google that does this more often, so I won’t necessarily blame Alexa, but a lot of the time when I ask to play my liked songs, I end up with a song called “My Liked Songs” playing instead. It hasn’t happened in a while, so however I’m phrasing it now must be working, but it’s not something I’m consciously aware of.

        • Rimu
          3 points · 2 months ago

          Yeah the syntax stuff was the biggest disappointment for me as a dev, too. There’s very little natural language processing going on, just simple template-based pattern matching. So basic and inflexible.
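
          To give a concrete picture: a custom skill’s interaction model is essentially a list of canned sample utterances per intent. A simplified sketch (written as a Python dict here; the real thing is JSON, and this is not any actual skill’s schema):

          ```python
          # Simplified sketch of an Alexa-style interaction model.
          # Every phrasing you want recognized must be enumerated;
          # anything off-template simply fails to match.
          interaction_model = {
              "intents": [
                  {
                      "name": "TurnOnIntent",
                      "slots": [{"name": "Device", "type": "DEVICE_LIST"}],
                      "samples": [
                          "turn on the {Device}",
                          "switch on the {Device}",
                          "turn the {Device} on",
                          # "could you get the {Device} going" is not
                          # listed, so it won't be understood
                      ],
                  }
              ]
          }
          ```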

    • JackbyDev
      1 point · 2 months ago

      I never dove into the skill API, but I’d imagine you’re setting up phrases. Can LLMs really help there? For asking Alexa general information, I can see how LLMs would be helpful, but for asking it to turn the lights on, how would that help?

      • @anarchyrabbit
        2 points · 2 months ago

        It may be better at identifying intents, especially across different dialects and languages. You could also tell it to send the response in a specific format, say JSON. I’ve never tried it, but it might work.
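
        Something like this hypothetical sketch, using a local model via llama-cpp-python (the model file, prompt, and output format are all assumptions, and you’d want to validate the result since small models don’t always emit clean JSON):

        ```python
        # Hypothetical: map a free-form utterance to a structured intent.
        import json

        from llama_cpp import Llama

        llm = Llama(model_path="./some-small-model-q4.gguf")  # hypothetical file

        prompt = (
            'Reply with only JSON like {"intent": "...", "device": "..."}.\n'
            "Utterance: could you get the kitchen lights going\n"
            "JSON: "
        )

        out = llm(prompt, max_tokens=64, stop=["\n"])
        result = json.loads(out["choices"][0]["text"])
        print(result)  # e.g. {"intent": "TurnOn", "device": "kitchen lights"}
        ```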