Unfortunately, due to the complexity and specialized nature of AVX-512, such optimizations are typically reserved for performance-critical applications and require expertise in low-level programming and processor microarchitecture.

  • @FooBarrington
    1 month ago

    Though you’d get the same speedup if you used SIMD intrinsics. This is just comparing non-SIMD to SIMD.
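
    A minimal sketch of that distinction (not from the article; the function names are illustrative and n is assumed to be a multiple of 8): the same element-wise add written once as plain scalar C and once with AVX2 intrinsics, compiled with -mavx2.

    ```c
    #include <immintrin.h>

    /* plain scalar loop: one float per iteration */
    void add_scalar(const float *a, const float *b, float *out, int n) {
        for (int i = 0; i < n; i++)
            out[i] = a[i] + b[i];
    }

    /* same loop with AVX2 intrinsics: 8 floats per 256-bit register */
    void add_avx2(const float *a, const float *b, float *out, int n) {
        for (int i = 0; i < n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
        }
    }
    ```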

    • @[email protected]
      1 month ago

      from the article it’s not clear what the performance boost is relative to intrinsics (it’s extremely unlikely to be anything close to 94x lol), and it’s not even clear whether the AVX2 implementation they benchmarked against was intrinsics or handwritten (a sketch of what an intrinsics baseline looks like follows this comment). in some cases AVX2 seems to slightly outperform AVX-512 in their implementation

      there are also so many different ways to break a problem down that i’m not sure this is an ideal showcase, at least without more information.

      to be fair to the presenters, they may not be the ones pushing the specific flavour of hype that the article writers are.
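
      For reference, a minimal sketch (not from the article) of the kind of intrinsics AVX-512 baseline the comment above asks about, reusing the hypothetical element-wise add from the earlier example; it assumes n is a multiple of 16 and is compiled with -mavx512f. Whether handwritten asm beats this depends largely on how well the compiler schedules and allocates registers for the intrinsics version.

      ```c
      #include <immintrin.h>

      /* same element-wise add with AVX-512F intrinsics: 16 floats per 512-bit register */
      void add_avx512(const float *a, const float *b, float *out, int n) {
          for (int i = 0; i < n; i += 16) {
              __m512 va = _mm512_loadu_ps(a + i);
              __m512 vb = _mm512_loadu_ps(b + i);
              _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb));
          }
      }
      ```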

        • @[email protected]
          1 month ago

          yes, as i said

          from the article it’s not clear what the performance boost is relative to intrinsics

          (they don’t make that comparison in the article)

          so it’s not clear exactly how handwritten asm compares to intrinsics in this specific comparison. we can’t assume their handwritten AVX-512 asm and intrinsics AVX-512 would perform identically here; it may be better, or worse.

          also worth noting they’re discussing benchmarking of a specific function, so overall performance when executing a larger set of operations may be quite different, depending on what can and can’t be unrolled, and in what order, given the different dependencies.
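
          A rough sketch of the kind of single-function microbenchmark being discussed (the harness and the stand-in kernel are illustrative, not from the article): only the kernel call sits in the timed loop, whereas inside a larger program the surrounding code changes what the compiler and CPU can unroll or overlap.

          ```c
          #define _POSIX_C_SOURCE 199309L
          #include <stdint.h>
          #include <stdio.h>
          #include <time.h>

          /* stand-in for whichever SIMD routine the benchmark isolates */
          static void kernel(const float *a, const float *b, float *out, int n) {
              for (int i = 0; i < n; i++)
                  out[i] = a[i] + b[i];
          }

          static uint64_t now_ns(void) {
              struct timespec ts;
              clock_gettime(CLOCK_MONOTONIC, &ts);
              return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
          }

          int main(void) {
              enum { N = 1 << 16, REPS = 10000 };
              static float a[N], b[N], out[N];

              uint64_t start = now_ns();
              for (int r = 0; r < REPS; r++)
                  kernel(a, b, out, N);   /* only this call is measured */
              uint64_t elapsed = now_ns() - start;

              /* print a result so the compiler can't discard the loop entirely */
              printf("%.1f ns per call, out[0]=%f\n", (double)elapsed / REPS, out[0]);
              return 0;
          }
          ```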