- cross-posted to:
- [email protected]
- [email protected]
2024 could be the year the PC finally dumps x86 for Arm, all thanks to Windows 12 and Qualcomm’s new chip::We’ve already reported on Qualcomm’s new 12-core Arm uberchip, the Snapdragon X Elite, and its claims of x86-beating performance and efficiency. But it takes two to tango when it comes a maj
The CPU and processing power benefits would be great, but if I’m going to lose software support, then I’m only going to do it for RISC-V.
Yaaaah, came here to something something RISC-V ^.^ One of these days I’ll have a RISC-V system. I’ll have no actual use for it but I’ll love it stubbornly just because :D
Anyway I’m gonna be over here daydreaming about RISC-V taking over the world instead of ARM. Bwehehehehe.
(Edited to fix my ^.^-face)
Fundamentally, I’m not sure Qualcomm is the brand I’d trust to lead the world off of x86.
I understand nobody actually likes Qualcomm products in the cellular space, but they’re stuck with them due to patent minefields. That’s not really a great vibe to bring in when trying to compete against known-quantity x86 vendors.
I figured we’d see heterogeneous CPUs, either in the same socket or as an add-on module, so you could offload some stuff to ARM or RISC-V but keep a big x86 core for games and heavy closed-source software, then flip to a RISC-V main CPU with x86 add-on cards, and finally to pure emulation.
Sort of thinking about a PineTab-V, but even the flaky Ryzen 2700U machine it would replace (doesn’t suspend right 20% of the time, wifi was weird on every OS except OpenBSD) demolishes it. The Lichee Console looked neat with the EEE PC sizing and TrackPoint, but it’s way pricier.
Where would those benefits be? Let’s start with gaming on the M3 Mac: it’s CPU-bound in many games even though Apple’s compatibility layer is actually good. And the GPU is a joke, even compared to Intel’s dGPU offerings. Let’s not even start on encoding (besides iMovie), packing, or compiling things. Or actually rendering stuff…
Compatibility layers are comprehensive, but they’re generally not performant. Personally, I use a real computer that runs my daily workload, servers, and games all at once on different virtual desktops, so a faster CPU would definitely be impactful.
It’s not just about avoiding 100% CPU usage either; the CPU not being the performance bottleneck sounds like a great problem to have.