• @pHr34kY
    link
    English
    80
    3 days ago

    We need more people to code like this.

    Everyone codes like it will be the only process running on their dev machine. Before you know it, you’ve opened a word processor, a mail client and a chat program. 8GB RAM is gone and the 3.2GHz 8-core CPU is idling at 35%.

  • @[email protected]
    link
    fedilink
    English
    37
    3 days ago

    Any processor can run LLMs. The only issue is how fast, and how much RAM it has access to. And you can trade the latter for disk space if you’re willing to sacrifice even more speed.

    If it can add, it can run any model
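
    You can see that RAM-for-disk trade concretely with memory-mapped weights. A minimal sketch, assuming numpy; the file name and layer shape are invented for the demo (runtimes like llama.cpp pull the same trick by mmap’ing the model file):

    ```python
    # Sketch: stream weights from disk instead of holding them in RAM.
    # "weights.bin" and the layer shape are invented for this demo.
    import numpy as np

    ROWS, COLS = 1024, 1024  # one made-up layer

    # One-time demo setup: write a dummy weight matrix to disk.
    rng = np.random.default_rng(0)
    rng.standard_normal((ROWS, COLS), dtype=np.float32).tofile("weights.bin")

    # Memory-map it: the OS pages weights in from disk on demand, so
    # resident RAM stays small at the cost of disk I/O (i.e. speed).
    weights = np.memmap("weights.bin", dtype=np.float32, mode="r",
                        shape=(ROWS, COLS))

    def forward(x):
        # The matmul touches only the mapped pages it needs.
        return weights @ x

    print(forward(np.ones(COLS, dtype=np.float32)).shape)  # -> (1024,)
    ```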

    • @surph_ninja
      link
      English
      6
      3 days ago

      Yes, and a big part of AI advancement is running it on leaner hardware, using less power, with more efficient models.

      Not every team is working on building bigger with more resources. Showing off how much they can squeeze out of minimal hardware is an important piece of this.

    • @Warl0k3
      link
      English
      1
      edit-2
      3 days ago

      Yeah, the Church-Turing thesis holds that you can run an LLM on a Casio wristwatch (if for some reason you wanted to do that). I can’t imagine this is exactly what you’d call ‘good’…

  • @[email protected]
    link
    fedilink
    English
    48
    3 days ago

    The story that this 260K parameter model generated (in the screenshot of their report):

    > Sleepy Joe said: “Hello, Spot. Do you want to be careful with me?” Spot told Spot, “Yes, I will step back!”
    > Spot replied, “I lost my broken broke in my cold rock. It is okay, you can’t.” Spoon and Spot went to the top of the rock and pulled his broken rock. Spot was sad because he was not l

    It’s still impressive that they got it running. I look forward to seeing if their BitNet architecture produces better results on the same hardware.
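
    For reference, the BitNet line of models constrains weights to values like {-1, 0, +1}, so the core matrix math needs only additions and subtractions, which is also roughly the “if it can add, it can run any model” point upthread. A toy sketch of the idea, assuming numpy; this is not EXO’s code, and the thresholding recipe is just one common way to ternarize a trained matrix:

    ```python
    # Toy illustration of ternary (BitNet-style) weights in {-1, 0, +1}:
    # a matrix-vector product then needs only adds and subtracts.
    import numpy as np

    rng = np.random.default_rng(0)
    w_full = rng.standard_normal((8, 16))

    # Ternarize: keep the sign of the larger weights, zero out the rest.
    threshold = 0.7 * np.abs(w_full).mean()
    w_tern = np.sign(w_full) * (np.abs(w_full) > threshold)

    def matvec_add_only(w, x):
        # Per row: add x[j] where the weight is +1, subtract where it's -1.
        out = np.zeros(w.shape[0])
        for i in range(w.shape[0]):
            out[i] = x[w[i] == 1].sum() - x[w[i] == -1].sum()
        return out

    x = rng.standard_normal(16)
    print(np.allclose(matvec_add_only(w_tern, x), w_tern @ x))  # -> True
    ```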

  • @givesomefucks
    link
    English
    34
    4 days ago

    > Before we go, please note that EXO is still looking for help. If you also want to avoid the future of AI being locked into massive data centers owned by billionaires and megacorps and think you can contribute in some way, you could reach out.

    Unfortunately Biden just signed an executive order allowing data centers to be built on federal land…

    And Trump is 100% going to sell it off to the highest bidder and/or give a bunch to Elmo

    • @[email protected]
      link
      fedilink
      English
      11
      3 days ago

      There’s a huge bit of space between “more datacenter space” and “ah well, on-prem and self-host are dead”. Like, this is a 2024-voter level of apathy.

    • @Lost_My_Mind
      link
      English
      -2
      3 days ago

      Well…hold on a second. I was with you through most of that, but then you said Elmo at the end.

      Maybe I’m stupid…maybe there’s another Elmo that makes WAAAAAY more logical sense. But what I’M envisioning is this army of AI Sesame Street Elmos harvesting your data, tracking/stalking you all day, and plotting the best time to come and tickle you.

      I don’t get it. What are you SAYING???

      • @kautau
        link
        English
        21
        3 days ago

        Elongated Muskrat

  • @just_another_person
    link
    English
    10
    3 days ago

    This is like saying “my 30-year-old bike still works under very specific conditions”

    • @[email protected]
      link
      fedilink
      English
      13
      3 days ago

      You exist in a world where people overclock systems to eke 3% more performance out of the metal, and somehow hammering some performance out of the software seems wasteful?

      This kind of thinking seems to be a “slow? Throw more hardware at it” kind of mentality that I only see in… wait, you’re a Java programmer.

    • Echo Dot
      link
      fedilink
      English
      5
      3 days ago

      You can run AI on a smartwatch, it’s just not going to be very intelligent. The fact that you can technically do it isn’t necessarily very impressive.

  • @[email protected]
    link
    fedilink
    English
    6
    3 days ago

    If it can work in such a small space, think of what it could do on even a low-end Android phone.

    I don’t need my phone to write me stories, but I would like it to notice when my flight is running late and either call up customer support to book a new connection or get a refund (like Facebook’s M was bizarrely adept at doing), or just let my contacts in the upcoming meeting know I’m a bit late.

    If it searches my mail I’d like it to be on the device and never leverage a leaky, insecure cloud service.

    With a trim setup like this, they’ve shown it’s possible.

    • @rottingleaf
      link
      English
      9
      3 days ago

      Why would you need an LLM for that?

      We have a standard, it’s called RSS.

      We have scripting. We also have visual scripting. That there’s no consumer tool for that… is not the customer’s fault, but not a sign of some fundamental limitation either.

      Customer support would, in fact, be more pleased with an e-mail from a template, and not a robot call (and it’ll likely have robot detection and drop such calls anyway).

      Informing your contacts is better done with a template too.

      However, now that I think about it, if such a tool existed, it could use an LLM as a fallback in each of these cases: when we don’t have a better source of data about your flights, when no e-mail template fits or some field of a template is missing, or when parsing the company page for a support e-mail address gets confusing.

      But that would still be better as some “guesser” element in a visual script, one used when there’s nothing more precise.

      I think such a product could be more popular than a bare LLM that you tell to do something, never certain whether it’s going to do some wildcard weirdness or be fine.
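
      A minimal sketch of that fallback shape; every function name here is a hypothetical placeholder:

      ```python
      # Sketch of "LLM as the guesser of last resort": precise, structured
      # parsers run first; the model is only consulted when they all miss.

      def parse_flight_from_feed(text):
          # Precise source: a structured status feed (RSS or similar).
          return None  # pretend the feed had no match

      def parse_flight_from_template(text):
          # Precise source: a known confirmation-e-mail layout.
          if "Flight" in text and "delayed" in text:
              return {"status": "delayed"}
          return None

      def llm_guess_flight(text):
          # Imprecise last resort: ask a small local model to extract details.
          return {"status": "unknown"}

      def extract_flight(text):
          # Try precise parsers in order; fall back to the guesser on a miss,
          # and flag its output so the script can treat it with suspicion.
          for parser in (parse_flight_from_feed, parse_flight_from_template):
              result = parser(text)
              if result is not None:
                  return result, "exact"
          return llm_guess_flight(text), "guessed"

      print(extract_flight("Flight LH123 is delayed by 45 minutes"))
      # -> ({'status': 'delayed'}, 'exact')
      ```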

  • @Lost_My_Mind
    link
    English
    6
    3 days ago

    Me: Hmmmmmmmmm…I only vaguely have an idea what’s even being discussed here. They somehow got Windows 98 to run on a Llama. Probably picked a llama because if they used a camel it would BSOD.

  • @[email protected]
    link
    fedilink
    English
    5
    3 days ago

    Let me know when someone gets an LLM to run on a Bendix G15. This Pentium stuff is way too new.

  • finley
    link
    fedilink
    English
    -4
    edit-2
    4 days ago

    Imagine how much better it would run on a similar-era version of Red Hat, Gentoo, or BeOS.

    They just proved that the hardware was perfectly capable; the absolute garbage middle layer, the operating system, is what matters in propelling the potential of the hardware forward into a usable form.

    Many people may not remember, but there were a few Linux distributions around at the time. Certainly, they would have been able to make better use of the hardware had enough developers worked on it.

    • ᗪᗩᗰᑎ
      link
      fedilink
      English
      12
      4 days ago

      but the hardware is not capable. it’s running a minuscule custom 260K LLM and the “claim to fame” is that it wasn’t slow. great? we already know tiny models are fast, they’re just not as accurate and perform worse than larger models; all they did was make an even smaller than normal model. this is akin to getting Doom to run on anything with a CPU: while cool and impressive, it doesn’t do much for anyone other than being an exercise in doing something because you can.
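
      to put rough numbers on the size gap: weight memory is just parameter count times bytes per parameter (the 1B figure below is an illustrative round number, not any specific model):

      ```python
      # Back-of-envelope: weight memory = parameter count x bytes per parameter.
      BYTES_FP32 = 4

      for name, params in [("260K toy model", 260_000),
                           ("1B 'small' model", 1_000_000_000)]:
          print(f"{name}: ~{params * BYTES_FP32 / 1e6:,.0f} MB of fp32 weights")

      # 260K toy model: ~1 MB       -> trivially fits in Pentium-era RAM
      # 1B 'small' model: ~4,000 MB -> does not
      ```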

      • finley
        link
        fedilink
        English
        -9
        edit-2
        4 days ago

        With your first sentence, I can say you’re wrong. My 1997-era DX4 at 75 MHz ran Red Hat wonderfully. And SUSE, and Gentoo.

        As for the rest? You don’t know what an AI/LLM would’ve looked like on a processor from that era. No one had even thought of it then. That doesn’t mean it can’t run one. It just means you can’t imagine it.

        Fortunately, I do not lack imagination for what could be possible.