BeagleBoard has launched a new single-board computer called the BeagleV-Ahead. It has the same form factor as the company’s BeagleBone Black and is compatible with some accessories designed for that board.

But instead of an ARM-based processor, the new BeagleV-Ahead is powered by a quad-core RISC-V processor. The new board is available now for around $149.

At the heart of the new board is an Alibaba T-Head TH1520 chip, which features four 2 GHz Xuantie C910 CPU cores, Imagination BXM-4-64 integrated graphics, and a neural processing unit with up to 4 TOPS of AI performance.

  • @[email protected]
    2 points · 1 year ago

    What does 4 TOPS of AI performance actually mean? Can it do video streaming analytics, language model inference, or voice transcription at reasonable speed?

    • @Fried_out_KombiOP
      1 point · 1 year ago

      It puts it at the same performance as the Coral Edge TPU (also 4 TOPS). I’ve used the Edge TPU at work a bit, and it’s definitely enough to run object detection models like YOLOv8 on live video streams (although I could only get YOLOv8 to work at up to 192x192 input resolution; I’ve seen others like MobileNetV2 get up to 640x640). I don’t think you could run an LLM on it with any reasonable performance, but maybe voice transcription? I’m more in computer vision, so I don’t know off the top of my head how big those models get. Voice transcription definitely seems like it shouldn’t be insanely intensive and ought to be doable live on 4 TOPS.

      • @[email protected]
        2 points · 1 year ago

        I’ll have to check out YOLOv8. I’ve been looking for an edge compute device that is ruggedized and can do video streaming analytics. I would need higher resolution (4K) but could get away with very low frame rates (1/30s). If I remember right, I can break the image into sections and run inference on each section. I might have to use some overlapping sections so I don’t miss objects that are half on one tile, half on another. Thanks for posting!

        • @Fried_out_KombiOP
          2 points · 1 year ago

          Yeah, if your objects within the 4K frame are pretty small, you can definitely tile over the whole image with 192x192 windows. In my tests, inference on 192x192 images averaged about 15 ms with YOLOv8n (the smallest model size) and about 25 ms with YOLOv8s (the second-smallest). We’ve been getting pretty good results in terms of accuracy and so forth with both n and s.
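          The overlapping-tile idea can be sketched roughly like this (a minimal NumPy sketch, not tied to any particular inference library; the 192x192 window matches the input size mentioned above, and the 32-pixel overlap is an arbitrary illustrative choice):

          ```python
          import numpy as np

          def tile_frame(frame, window=192, overlap=32):
              """Yield (x, y, tile) crops covering the whole frame.

              Tiles step by (window - overlap), so adjacent tiles share
              `overlap` pixels and an object straddling one boundary still
              appears whole in a neighboring tile. The last tile in each
              row/column is shifted back so every pixel is covered.
              """
              h, w = frame.shape[:2]
              stride = window - overlap
              last_x = max(w - window, 0)
              last_y = max(h - window, 0)
              xs = list(range(0, last_x + 1, stride))
              ys = list(range(0, last_y + 1, stride))
              # Make sure the right and bottom edges are covered.
              if xs[-1] != last_x:
                  xs.append(last_x)
              if ys[-1] != last_y:
                  ys.append(last_y)
              for y in ys:
                  for x in xs:
                      yield x, y, frame[y:y + window, x:x + window]

          # A 4K frame (2160x3840) yields a grid of overlapping 192x192 tiles;
          # each tile would then be fed to the detector independently.
          frame = np.zeros((2160, 3840, 3), dtype=np.uint8)
          tiles = list(tile_frame(frame))
          ```

          You’d still need to merge detections across tiles (e.g. non-maximum suppression on boxes mapped back to frame coordinates), but at one frame every 30 seconds even a few hundred tiles per frame is comfortably within budget at ~15 ms per inference.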

          Biggest benefit of YOLOv8, imo, is it has a really good software package that makes it very easy to finetune, export to the edge TPU (or other inference engines), and run inference. Feel free to message me if you run into any issues with it!

          • @[email protected]
            2 points · 1 year ago

            Yeah, I’d probably have to do several passes at different resolutions. I’m looking for liquid pooling, so it doesn’t have to be fast, it just has to be right.