• @[email protected]
    65 points, 8 months ago

    No it couldn’t. The breakthrough is in galactic algorithms, which sacrifice speed on small inputs to gain speed on extremely large ones (so large they wouldn’t fit in the solar system anymore).

    On top of that, the algorithm assumes infinite-precision arithmetic, and it really breaks down if you don’t have that, and our computers don’t.
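
    For a feel of the trade-off, here’s a toy sketch in Python of Strassen’s classic 1969 scheme (not the new algorithm, which is far more involved): it spends about O(n^2.81) multiplications instead of O(n^3), but the extra additions and slicing only pay off past some crossover size (the threshold of 128 here is arbitrary). The newer galactic algorithms push that crossover beyond anything you could physically store.

    ```python
    # Toy Strassen multiply for square matrices whose size is a power of two.
    # Seven recursive products instead of eight is the entire trick; below the
    # crossover threshold the bookkeeping costs more than it saves.
    import numpy as np

    def strassen(A, B, threshold=128):
        n = A.shape[0]
        if n <= threshold:          # small inputs: naive is faster
            return A @ B
        k = n // 2
        A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
        B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
        M1 = strassen(A11 + A22, B11 + B22, threshold)
        M2 = strassen(A21 + A22, B11, threshold)
        M3 = strassen(A11, B12 - B22, threshold)
        M4 = strassen(A22, B21 - B11, threshold)
        M5 = strassen(A11 + A12, B22, threshold)
        M6 = strassen(A21 - A11, B11 + B12, threshold)
        M7 = strassen(A12 - A22, B21 + B22, threshold)
        C = np.empty_like(A)
        C[:k, :k] = M1 + M4 - M5 + M7
        C[:k, k:] = M3 + M5
        C[k:, :k] = M2 + M4
        C[k:, k:] = M1 - M2 + M3 + M6
        return C

    A = np.random.rand(512, 512)
    B = np.random.rand(512, 512)
    # Same result up to floating-point rounding, which is exactly the
    # precision issue above: fancier schemes accumulate more of it.
    assert np.allclose(strassen(A, B), A @ B)
    ```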

    • @BananaTrifleViolin
      30 points, 8 months ago

      Yeah, we’re in the middle of the idiocy of an AI speculative boom. People will try to bend anything to include AI for attention or to make money, and “journalists” will lap up this crap. Bring on the bust.

  • KeriKitty (They(/It))
    27 points, 8 months ago

    “could lead to faster, more efficient”

    Ooh, ooh, game engines? No. Physics models? Nope, not that. Cryptography, maybe useful against the coming Quantum Cryptodoom™ ? No, not that either. DSP? Image compression? Something? Hmmmm, what could benefit from faster matrix maths? What one singular thing could be so important that- it’s a meme, of course it’s a freaking meme. Ayyy Aaaiiiiyyyeeee must be the only possible thing of interest because that’s the latest meme fad thing >:( grumble grumble grouch bite et cetera

    Yes, I will hate every single meme-fad-thing as it happens unless it involves kittens. Or maybe one of a few other things, but NOT THAT. Hmph! Grr! And so on!

    • Sibbo
      14 points, 8 months ago

      Also, reading the article, the immediate practical implications of this improvement are almost nonexistent. This is a theoretical breakthrough that may or may not lead to further theoretical breakthroughs, which may or may not be practically more relevant.

      Certainly important research, but nothing that AI people (or any other scientists) must celebrate. Feeds the AI hype though.
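
      A quick back-of-envelope with the exponents from the article (and generously ignoring the enormous constant factors) shows how little the new bound buys even at absurd sizes:

      ```python
      # Ratio of n**omega_old to n**omega_new, i.e. the best-case speedup
      # from the exponent alone, with all constant factors waved away.
      old_omega = 2.3728596              # Alman & Williams, 2020
      new_omega = old_omega - 0.0013076  # the improvement from the article

      for n in (10**3, 10**6, 10**9):
          speedup = n**old_omega / n**new_omega   # = n**0.0013076
          print(f"n = {n:>13,}: best-case speedup x{speedup:.4f}")

      # n =         1,000: best-case speedup x1.0091
      # n =     1,000,000: best-case speedup x1.0182
      # n = 1,000,000,000: best-case speedup x1.0275
      ```

      Even at n = 10^9 the exponent alone is worth under 3%, before the constants that make these algorithms galactic in the first place.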

    • @[email protected]OP
      7 points, 8 months ago

      Machine learning might be the biggest type of computation by volume that stands to benefit from this, so it’s not that silly. With hardware gains declining, we’re back to optimizing in software, which is preferable to gobbling up more and more energy.

    • @decerian
      2 points, 8 months ago

      If this actually did lead to faster matrix multiplication, then essentially anything that can be done on a GPU would benefit. That could definitely include games and physics models, along with a bunch of other applications (and yes, also AI stuff).

      I’m sure the paper’s authors know all of that, but somewhere along the line the article just became “faster and better AI”.
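
      The same primitive really is everywhere. A trivial sketch to make the point (NumPy standing in for the GPU, all shapes made up):

      ```python
      import numpy as np

      # Game/physics flavour: rotate 10,000 2-D vertices by 45 degrees.
      theta = np.pi / 4
      R = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
      points = np.random.rand(2, 10_000)
      rotated = R @ points                    # a matrix multiply

      # AI flavour: one dense layer over a batch of 64 inputs.
      W = np.random.rand(256, 784)
      batch = np.random.rand(784, 64)
      activations = np.maximum(W @ batch, 0)  # the very same matrix multiply (+ ReLU)
      ```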

    • @[email protected]
      2 points, 8 months ago

      Adding AeIeeee to the paper is the only way for a scientist in such a field not to starve. They have to highlight some immediate impact of their research in order to receive funding and get invited to conferences. Did it myself, didn’t like it, but that’s how the system works, unfortunately.

  • AutoTL;DR (bot)
    8 points, 8 months ago

    This is the best summary I could come up with:


    Computer scientists have discovered a new way to multiply large matrices faster than ever before by eliminating a previously unknown inefficiency, reports Quanta Magazine.

    In October 2022, we covered a new technique discovered by a Google DeepMind AI model called AlphaTensor, focusing on practical algorithmic improvements for specific matrix sizes, such as 4x4 matrices.

    By contrast, the new research, conducted by Ran Duan and Renfei Zhou of Tsinghua University, Hongxun Wu of the University of California, Berkeley, and by Virginia Vassilevska Williams, Yinzhan Xu, and Zixuan Xu of the Massachusetts Institute of Technology (in a second paper), seeks theoretical enhancements by aiming to lower the complexity exponent, ω, for a broad efficiency gain across all sizes of matrices.

    The new technique, which improves upon the “laser method” introduced by Volker Strassen in 1986, reduces the upper bound of the exponent (the aforementioned ω), bringing it closer to the ideal value of 2, which represents the theoretical minimum number of operations needed.

    In 2020, Josh Alman and Williams introduced a significant improvement in matrix multiplication efficiency by establishing a new upper bound for ω at approximately 2.3728596.

    While the reduction of the omega constant might appear minor at first glance—reducing the 2020 record value by 0.0013076—the cumulative work of Duan, Zhou, and Williams represents the most substantial progress in the field observed since 2010.


    The original article contains 912 words, the summary contains 227 words. Saved 75%. I’m a bot and I’m open source!