• @[email protected]
    link
    fedilink
    49
    6 months ago

    Those are some very bold and generic claims for an accelerator chip startup, that doesn’t provide any details or benchmarks other than some basic diagrams and graphs while they are looking for funding and partners.

    Kind of reminds me of basically every tech kickstarter ever.

  • @[email protected]
    link
    fedilink
    35
    6 months ago

    “Extraordinary claims require extraordinary evidence” (a.k.a., the Sagan standard)

    Should I even click?

  • Codex
    link
    28
    6 months ago

    Valtonen says that this has made the CPU the weakest link in computing in recent years.

    This is contrary to everything I know as a programmer currently. The CPU is fast, and excess cores still go underutilized because efficient parallel programming is a capital-H Hard problem.

    The weakest link in computing is RAM, which is why CPUs have three layers of caches: to squeeze the most use out of the bottlenecked memory bus. Whole software architectures are modeled around optimizing cache efficiency.
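    To illustrate how much cache behavior dominates: here's a minimal C sketch (function names are mine, just for illustration). Both functions compute the same sum, but one walks memory sequentially while the other strides by a full row per access, missing cache on nearly every element once the matrix outgrows the caches:

```c
#include <stddef.h>

/* Sum an n×n matrix of ints two ways. Both return the same total, but
   row-major order touches consecutive addresses (one cache miss per
   cache line), while column-major order jumps n*sizeof(int) bytes per
   access and misses on nearly every element for large n. */
long sum_row_major(const int *m, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < n; j++)
            s += m[i * n + j];   /* consecutive addresses */
    return s;
}

long sum_col_major(const int *m, size_t n) {
    long s = 0;
    for (size_t j = 0; j < n; j++)
        for (size_t i = 0; i < n; i++)
            s += m[i * n + j];   /* stride of n elements per step */
    return s;
}
```

    Same arithmetic, same result; on typical hardware the strided version can be several times slower purely from memory behavior, which is the kind of thing no extra coprocessor core fixes.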

    I’m not sure I understand how just adding more cores as a coprocessor (not even a floating-point-optimized unit, which GPUs already are) will boost performance so much. Unless the thing can magically schedule single-threaded apps as parallel.

    Even then, it feels like market momentum is already behind TPUs and “AI-enhancement” boards as the next required daughter boards after GPUs.

    • @[email protected]
      link
      fedilink
      5
      6 months ago

      Eh, as always: It depends.

      For example: memcpy, which is one of their claimed 100x performance tasks, can be IO-bound on systems where the CPU doesn’t have many memory channels. But on a well-optimized architecture, e.g. modern server CPUs with many more memory channels available, it’s actually pretty hard to saturate the memory bandwidth completely.
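      You can get a rough feel for this on your own machine with a sketch like the following (`memcpy_gib_per_s` is a made-up helper, and `clock()` only measures CPU time of this single thread, which is fine for a single-threaded copy loop):

```c
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Rough single-threaded memcpy bandwidth probe, in GiB/s. On a
   two-channel desktop CPU one core can often get close to the bus
   limit; on a server part with many channels it usually can't, so
   there's headroom left for other cores or accelerators. */
double memcpy_gib_per_s(size_t bytes, int reps) {
    char *src = malloc(bytes), *dst = malloc(bytes);
    if (!src || !dst) { free(src); free(dst); return -1.0; }
    memset(src, 1, bytes);   /* fault the pages in before timing */
    memset(dst, 0, bytes);

    clock_t t0 = clock();
    for (int i = 0; i < reps; i++)
        memcpy(dst, src, bytes);
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    free(src);
    free(dst);
    return secs > 0 ? (double)reps * bytes / secs / (1 << 30) : -1.0;
}
```

      Whether the number you get is near your theoretical bus bandwidth tells you whether memcpy is actually the bottleneck on your system.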

  • @Warl0k3
    link
    15
    edit-2
    6 months ago

    Big if true. Going to need some real convincing benchmarks to believe this one, though. From a read, it seems like they’re implementing ASICs on processor dies, which is not at all a new concept.

  • @mrfriki
    link
    15
    6 months ago

    Why does this remind me of the math co-processors back in the 386 days?

    • @[email protected]
      link
      fedilink
      7
      6 months ago

      Glad I didn’t have to scroll far to find this. That’s right where my mind went. Though if you think about it, it’s functionally no different from GPUs, upcoming NPUs, E-cores on chips, or other ASICs.