Poor NVIDIA Jetson, you did great for the last 5 years.

Managed to fry the eMMC by shorting pins, it looks like.

Note for future self: fully enclose boards with tight spaces.

  • @FooBarrington
    3
    10 months ago

    It’s pretty good for AI tasks at the edge, e.g. fully local image recognition. (A minimal sketch of what that can look like is at the end of this thread.)

    • @recklessengagement
      1
      10 months ago

      Ahh gotcha, similar to Google’s Coral? Neat.

      I’ve recently been looking into locally hosting some LLMs for various purposes, but I haven’t specced out hardware yet. Any good resources you can recommend?

      • @FooBarrington
        1
        10 months ago

        > Ahh gotcha, similar to Google’s Coral?

        Kind of - it’s a standalone system with the hardware integrated, kinda like a Google Coral combined with a Raspberry Pi.

        > I’ve recently been looking into locally hosting some LLMs for various purposes, but I haven’t specced out hardware yet. Any good resources you can recommend?

        Not really, sorry - I haven’t gone too deep into LLMs beyond simple use cases. I’ve only really used llama.cpp myself. (A minimal llama.cpp example is at the end of this thread.)

      • @Linkerbaan
        -1
        edited
        10 months ago

        A dedicated NVIDIA GPU in a random x86 PC is a lot faster and more price-efficient than a Jetson.

        If it isn’t about the form factor, the Jetson is not a great contender.
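
For a sense of the “fully local image recognition” mentioned above, here is a minimal sketch using a pretrained torchvision classifier. It assumes PyTorch and torchvision are installed with CUDA support (e.g. via NVIDIA’s Jetson wheels) and falls back to CPU otherwise; the image filename is only a placeholder, not anything from the post.

    # Fully local image classification with a pretrained model - no cloud round-trip.
    import torch
    from PIL import Image
    from torchvision.models import resnet18, ResNet18_Weights

    device = "cuda" if torch.cuda.is_available() else "cpu"
    weights = ResNet18_Weights.DEFAULT
    model = resnet18(weights=weights).to(device).eval()
    preprocess = weights.transforms()  # the resize/normalization these weights expect

    img = Image.open("frame.jpg").convert("RGB")  # placeholder: e.g. a frame from a local camera
    batch = preprocess(img).unsqueeze(0).to(device)

    with torch.no_grad():
        probs = model(batch).softmax(dim=1)

    top = probs.argmax(dim=1).item()
    print(weights.meta["categories"][top])  # human-readable class label

On a Jetson-class board the same pattern is often pushed through TensorRT for better throughput, but plain PyTorch is the simplest starting point.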
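
And since llama.cpp came up above for simple local LLM use cases, here is a minimal sketch of what that can look like through the llama-cpp-python bindings. The model filename is a placeholder for whatever GGUF model you have downloaded, and the GPU offload setting only matters if the build was compiled with GPU support.

    # Minimal local text generation via llama-cpp-python (pip install llama-cpp-python).
    from llama_cpp import Llama

    llm = Llama(
        model_path="models/some-model.Q4_K_M.gguf",  # placeholder GGUF file
        n_ctx=2048,       # context window size
        n_gpu_layers=-1,  # offload all layers to the GPU if the build supports it
    )

    out = llm(
        "Q: What is an NVIDIA Jetson good for? A:",
        max_tokens=128,
        stop=["Q:"],  # stop before the model starts a new question
    )
    print(out["choices"][0]["text"].strip())

Quantized GGUF models (Q4/Q5 variants) are the usual starting point, since they need far less RAM or VRAM than full-precision weights.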