• capital
    4 months ago

    deleted by creator

    • @[email protected]
      4 months ago

      Upgrading for performance reasons used to be a lot more important than it is today.

      Until around 2003-ish, serial computation performance would double every 18 months or so.

      If serial computation gets faster, everything just gets faster, roughly linearly with the serial speedup, at no developer effort.

      Since then, hardware has kept getting faster, but a much smaller share of that comes from increases in serial computation speed. Most of the gains have come from parallelism, and the rate of increase is slower. Parallelism isn’t free from a developer standpoint: a lot of work has to happen to make software scale anywhere near linearly with increases in parallel computation capability, and for some software it’s just not possible.
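
      To make that concrete, the limit here is essentially Amdahl’s law: the serial fraction of a program caps the speedup you can get from adding cores, no matter how many you add. A minimal sketch in Python (the 95% parallel fraction and the core counts are illustrative, not from any benchmark):

      ```python
      # Amdahl's law: overall speedup on n cores when a fraction p
      # of the work can be parallelized; the serial part (1 - p)
      # sets a hard ceiling of 1 / (1 - p) no matter how big n gets.
      def amdahl_speedup(p: float, n: int) -> float:
          return 1.0 / ((1.0 - p) + p / n)

      # Even with 95% of the work parallelizable, 1024 cores give
      # only ~20x, while doubling serial speed doubles everything.
      for n in (2, 8, 64, 1024):
          print(f"{n:5d} cores -> {amdahl_speedup(0.95, n):6.2f}x")
      ```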

      https://en.wikipedia.org/wiki/Koomey's_law

      Koomey’s law describes a trend in the history of computing hardware: for about a half-century, the number of computations per joule of energy dissipated doubled about every 1.57 years. Professor Jonathan Koomey described the trend in a 2010 paper in which he wrote that “at a fixed computing load, the amount of battery you need will fall by a factor of two every year and a half.”[1]

      This trend had been remarkably stable since the 1950s (R² of over 98%). But in 2011, Koomey re-examined this data[2] and found that after 2000, the doubling slowed to about once every 2.6 years. This is related to the slowing[3] of Moore’s law, the ability to build smaller transistors; and the end around 2005 of Dennard scaling, the ability to build smaller transistors with constant power density.

      “The difference between these two growth rates is substantial. A doubling every year and a half results in a 100-fold increase in efficiency every decade. A doubling every two and a half years yields just a 16-fold increase”, Koomey wrote.[4]
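
      Those figures are just compound doubling over a ten-year span, 2^(10 / doubling period); the quoted “100-fold” is the round number for roughly 102x. A quick sanity check in Python:

      ```python
      # Decade-scale efficiency gain for a given doubling period:
      # gain = 2 ** (10 / doubling_period_years)
      for years in (1.5, 2.5):
          gain = 2 ** (10 / years)
          print(f"doubling every {years} years -> ~{gain:.0f}x per decade")
      ```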

      There have still been real improvements since then: the general shift to solid-state storage, larger RAM capacities, and new I/O protocols for video and the like that provide more throughput.

      But the core “any new computer will be way faster than an older one after a short period of time” thing no longer really holds.

      • @[email protected]
        4 months ago

        I’d also add that I’ve changed some of my PC-buying behavior in response.

        I always avoided getting higher-end processors because they’d become obsolete so quickly, but that’s less of an issue now (though for most applications the performance difference between the low and high end may not be very large).

        I used to just get a new GPU whenever I got a new desktop, but GPU performance increases have dramatically outrun CPU performance increases, since the GPU is an inherently parallel piece of hardware. Upgrading the GPU separately from the rest of the computer has started to make more sense.

        • @Emerald
          4 months ago

          I actually have a high-end first-generation Ryzen processor (the 1700X).

        • @[email protected]
          4 months ago

          My i7 920 could run anything I threw at it up until a few years ago when the mobo started dying.

          But yeah, my last major upgrade was from an R9 390 to an RTX 2060 (12GB).