- cross-posted to:
- [email protected]
- [email protected]
Interesting take on compatibility vs performance. I gotta imagine capturing user data and sending it to a cloud collector is also a big culprit.
Hasn’t this always been the case? Software development is a balance between efficiency of code execution and efficiency of code creation. 20 years ago people had to code directly in assembly to make games like RollerCoaster Tycoon, but today they can use C++ (or even more abstract systems like Unity).
We hit the point where hardware is fast enough for most users about 15 years ago, and ever since we’ve been using faster hardware to allow for lazier code creation (which is good, since it means we get more software per man-hour worked).
In the same way that more slop is good for the hog trough
Human development is the development of labor-saving practices (i.e., development tools and methods) that free people up to do other things. In this context, “good software” just means software that is efficient enough to run on the target system, do its job, and not slow the whole system down unjustifiably. Why on earth would anybody go full performance-optimization autism mode, spending hours grinding fractions of efficiency out of code, when nobody could even notice the difference between it and less optimized code running on the target system? That time could go toward something actually productive for the project, like a new feature, or toward something else entirely. Those earlier game and software devs would have killed for hardware that didn’t require everything to be custom-built and optimized to a T. Not having to optimize everything to the max doesn’t produce “slop”, it produces efficiency.
I agree with most of what you said, but the problem is not everyone has brand new hardware. And it sucks that people have to buy new computers just because software devs are lazy and their program uses 10x more memory than it should.
I think the end of Moore’s law will push for more software efficiency, since devs won’t be able to count on free hardware gains. As compilers and other dev tools get better, I think the optimizations will become more automated.
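To make that concrete, here’s a minimal sketch of what compilers already automate today (the function and flags are my own illustration, not from the article):

```cpp
#include <cstddef>

// A plain scalar loop, nothing special in the source.
// Built with -O3, GCC and Clang will typically auto-vectorize
// this into SIMD instructions -- an optimization nobody had to
// write by hand.
void scale(float* out, const float* in, std::size_t n, float k) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] * k;
}
```

You can paste that into Compiler Explorer (godbolt.org) and watch the vector instructions appear as soon as you turn optimizations on.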
Your examples are honestly terrible. C++ is a fast language, and it’s not easy to write fast x86 Assembly, especially Assembly faster than what a C++ compiler would emit on its own. C++ doesn’t cause a slowdown by itself.
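Case in point, a sketch of the classic Compiler Explorer demo (my example, not anything from the article): hand Clang a naive summation loop and it emits the closed-form formula instead of a loop.

```cpp
// Naive loop summing 1..n. At -O2, Clang's scalar-evolution
// analysis recognizes the pattern and compiles this to roughly
// n * (n + 1) / 2 -- the emitted assembly contains no loop at all.
// Hand-written Assembly that translated the loop literally would lose.
unsigned sum_to(unsigned n) {
    unsigned total = 0;
    for (unsigned i = 1; i <= n; ++i)
        total += i;
    return total;
}
```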
20 years ago people could code in Python and JavaScript, or just about any high-level language popular today. Most programming languages are fairly old, some were definitely used for game development in the past (like C++), and game engines definitely date back way before 2003, or 1999 when RollerCoaster Tycoon was released.
RCT is an anomaly, not the rule. People who didn’t need to program in Assembly wouldn’t, unless they were crazy and wanted a challenge. You missed the mark by about a decade or so, and even then we’re talking about consoles with extremely limited resources like the NES, not PC games like DOOM (1993), which was written in C.
As long as hardware performance keeps increasing, developers will take advantage of it and keep sacrificing performance in exchange for better developer UX. Given a choice between shipping an app that uses 10x the memory and spending 10x longer developing a leaner one, most devs would choose the former, especially with a manager breathing down their neck. The only time a developer chooses the efficient-but-slower-to-build route is usually when they have final say over the project, which usually means small personal side projects, or projects in a company led by technical people who refuse to compromise (which is rare).
Once hardware performance plateaus, we’ll see a resurgence of focus on improving application performance.