I’m curious about how software gets created and evolves over time. I’m afraid that at some point we’ll realize there are issues with the software we’re using that can only be remedied by massive changes or a complete rewrite.

Are there any instances of this happening? Where something is designed with a flaw that doesn’t get realized until much later, necessitating scrapping the whole thing and starting from scratch?

  • @[email protected]
    18 months ago

    Yeah, so it sounds like your complaint is actually with applications not propagating relevant error information to where it’s most convenient for you to read it. Linux is not at fault in your example: as you said, it returns all the information needed to diagnose the issue to the code that made the call, and the developer just dropped the ball.
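    A quick sketch of what “dropping the ball” looks like in practice (the path is made up): the kernel hands back a precise errno, and the app throws it away.

    ```c
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* The kernel reports exactly why this fails via errno
         * (ENOENT, EACCES, ...). */
        int fd = open("/some/config", O_RDONLY); /* made-up path */
        if (fd < 0) {
            /* The app never consults errno, so the user
             * sees nothing actionable. */
            fprintf(stderr, "something went wrong\n");
            return 1;
        }
        close(fd);
        return 0;
    }
    ```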

    Maybe there’s a flag you can set to dump those kinds of errors to a log? But even then, some apps use the failure case as part of normal operation (try to open a file; if we can’t, do this other thing), as in the sketch below. You wouldn’t actually want to know about every single failure, just the ones the application considers fatal.
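    For example, a minimal sketch (file names are made up) of an app that treats a failed open as ordinary control flow rather than an error:

    ```c
    #include <errno.h>
    #include <stdio.h>

    /* Try the user's config first; a missing file is an expected,
     * non-fatal outcome, so fall back to a site-wide default. */
    static FILE *open_config(void) {
        FILE *f = fopen(".apprc", "r"); /* made-up name */
        if (f == NULL && errno == ENOENT) {
            f = fopen("/etc/app/defaults.rc", "r"); /* made-up name */
        }
        return f; /* may still be NULL; caller decides if that's fatal */
    }

    int main(void) {
        FILE *cfg = open_config();
        if (cfg == NULL)
            fprintf(stderr, "no config found, using built-in defaults\n");
        else
            fclose(cfg);
        return 0;
    }
    ```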

    As long as you’re running on a Turing-complete machine, it’s on the app itself to sufficiently document what qualifies as an error and why it happened.

    • @[email protected]
      18 months ago

      The whole point of my complaint is that shitty C conventions produce shitty error messages. If I could rely on the programmer to work around those stupid conventions every time, by actually checking the error and then enriching it with all relevant information, I would have no complaints.
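      To illustrate (the path is made up): the bare C convention hands you a NULL return plus errno, and it’s entirely on the programmer to turn that into a useful message.

      ```c
      #include <errno.h>
      #include <stdio.h>
      #include <string.h>

      int main(void) {
          const char *path = "/etc/app/config.toml"; /* made-up path */
          FILE *f = fopen(path, "r");
          if (f == NULL) {
              /* Lazy version: perror("error") prints
               * "error: No such file or directory" with zero context. */
              /* Enriched version: say what was attempted, and on what. */
              fprintf(stderr, "could not open config file %s for reading: %s\n",
                      path, strerror(errno));
              return 1;
          }
          fclose(f);
          return 0;
      }
      ```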