• @Visstix
    47 points · 4 months ago

    Can’t wait for this bubble to burst…

    • Todd Bonzalez
      5 points · 4 months ago

      Hey, as a PC gamer, I welcome a severe oversupply of GPUs. Let’s ride this out for a little bit.

      • @drawerair
        1 point · 4 months ago

        Nvidia is making so many Hopper and Blackwell cards, but is there a severe oversupply of gaming graphics cards (RTX 40 series)? I did a quick search. The Hopper can’t do games. Or if there’s a workaround, it won’t be as 👍 as a 40 series card.

    • Franklin
      5 points · edited · 4 months ago

      Since it was the chip from the Nintendo Switch that was used for the original Nvidia Shield, I wonder if with the Switch 2 we’ll see a revision?

      Probably just wishful thinking, it seems like Nvidia’s core strategy isn’t the consumer market anymore.

      • Jesus
        4 points · 4 months ago

        I’d be shocked if they pulled out of consumer. They have a LOT of consumer products.

        That said, I could see them doing a Google and breaking into separate business units. I could also see them getting out of set-top boxes, because that’s probably one of the smaller revenue streams.

        • @[email protected]
          4 points · 4 months ago

          I’m pretty sure the Shield happened because the Tegra chips were used in infotainment designs. It wasn’t just because of the Switch, as is oft repeated.

  • BrightCandle
    6 points · 4 months ago

    It doesn’t really make much sense to go faster than silicon node changes unless there is a lot of architectural optimisation that needs doing. Historically, all these refreshes between nodes were largely pointless, bringing small benefits while pulling development effort away from the big changes. It’s progress in silicon that matters and brings the performance improvements; moving to a faster cadence hasn’t historically worked out well.

    • @APassenger
      1 point · 4 months ago

      I wonder when AI will be designing its own chips. Or parts of its chip.

      • @[email protected]
        3 points · edited · 4 months ago

        AI is already being incorporated into chip-design tools like Synopsys’s. TechTechPotato has an interesting interview with Aart de Geus that is relevant.

        AI is far off from making high-level design improvements, but it can greatly reduce the workload in place and route and other design steps.
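To give a rough sense of what “reducing the workload in place and route” means: placement is at heart a huge combinatorial optimization problem, which is where ML-guided heuristics can help. A toy sketch below minimizes wirelength for a hypothetical 6-cell netlist with simulated annealing; all names and numbers are made up, and real EDA tools are vastly more sophisticated than this.

```python
import math
import random

random.seed(0)

# Hypothetical toy "placement" instance: 6 cells, nets connecting pairs.
# This only illustrates the kind of search involved, not any real tool.
nets = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]
N_CELLS = 6
GRID = 4  # 4x4 grid of candidate slots

def wirelength(pos):
    """Total Manhattan distance over all nets (a common cost proxy)."""
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1])
               for a, b in nets)

# Random initial placement: each cell gets a distinct slot.
slots = random.sample([(x, y) for x in range(GRID) for y in range(GRID)],
                      N_CELLS)
pos = dict(enumerate(slots))
cost = wirelength(pos)

# Simulated annealing: propose swapping two cells; occasionally accept
# uphill moves early on, then "cool" so the search settles.
temp = 5.0
for _ in range(2000):
    a, b = random.sample(range(N_CELLS), 2)
    pos[a], pos[b] = pos[b], pos[a]
    new = wirelength(pos)
    if new <= cost or random.random() < math.exp((cost - new) / temp):
        cost = new                        # accept the swap
    else:
        pos[a], pos[b] = pos[b], pos[a]   # reject: undo the swap
    temp *= 0.999

print(cost)
```

The point of the sketch is the inner loop: millions of such cheap local moves dominate runtime in real flows, so even modest ML guidance on which moves to try pays off.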

        • @[email protected]
          4 points · 4 months ago

          The great thing about blanket terms like “AI” is that you can slap it on everything.

          • @[email protected]
            2 points · 4 months ago

            And the algorithm AI does magic to make our product more awesome than the competitor.

            Yeah, the lack of a formal definition of what is and isn’t considered AI definitely muddies the waters when talking about applications and capabilities.

  • AutoTL;DR (bot)
    2 points · 4 months ago

    This is the best summary I could come up with:
    Until now, Nvidia’s produced a new architecture roughly once every two years — revealing Ampere in 2020, Hopper in 2022, and Blackwell in 2024, for example.

    (The industry darling H100 AI chip was Hopper, and the B200 is Blackwell, though those same architectures are used in gaming and creator GPUs as well.)

    Huang says Nvidia will accelerate every other kind of chip it makes to match that cadence, too.

    “New CPUs, new GPUs, new networking NICs, new switches… a mountain of chips are coming,” he says.

    Huang also shared a couple of his sales pitches on the call by way of explaining the incredible demand for Nvidia’s AI GPUs:

    Nvidia’s CFO interestingly says that automotive will be its “largest enterprise vertical within data center this year,” pointing to how Tesla purchased 35,000 H100 GPUs to train its “full-self driving” system, while “consumer internet companies” like Meta will continue to be a “strong growth vertical,” too.


    The original article contains 377 words, the summary contains 155 words. Saved 59%. I’m a bot and I’m open source!