• melroy
    114 days ago

    No sh*t, this is what I predicted from day one.

    • @RagingRobot
      34 days ago

      We should have looked to melroy

      • melroy
        23 days ago

        Thank you! That is indeed a valid point. I was hoping more people would come up with this remark. Do you have any other questions, or predictions you would like to hear? So that we don’t get “surprises” in the field of technology again?

        • @Eheran
          33 days ago

          Please hit me with some predictions :D

          • melroy
            32 days ago

            Sure!

            • More and more (AI) spyware / malware is getting injected into projects and operating systems, without the user’s consent. Mobile phones, laptops, desktop PCs, smart devices, etc. This comes from companies, but also from governments (no, not just China, but also the US and EU).
            • The AI bubble itself will burst for the “normal users” and for most companies, who won’t really benefit from AI / LLMs as much as they thought they would. This will only become apparent after several years, once the highly skilled developers have left the companies and you are left with software engineers using AI tools which generate wrong code. The damage LLMs (like AI code generation) are doing, and will continue to do in the upcoming years, is very hard to see right now, but it won’t be nice. And suddenly we are not getting AGI.
            • More research and effort will be put into alternative computers, like computers based on biology, using living cells. After all, nature is so much more efficient than our current technologies. This could fix the energy demand issues we now see with AI.
            • Biological computers will then also create huge moral issues. How do we know the cells are not becoming aware? How do we know they won’t feel pain, or feel trapped? After all, we humans don’t even know how consciousness and self-awareness really work.
            • Users & companies will want to get back in control over the next 5 to 15 years, so there could be a big move back from the “Cloud” to on-prem. You are already seeing this now with the fediverse.
            • The internet is becoming too centralized and too controlled by governments: blocking public DNS IPs, overruling them. The only answer would be to create a much more decentralized internet alternative, 20 or 30 years from now (so we can still keep talking with each other about issues in our governments, for example). The current internet is just too fragile, and the root of the problem is already DNS. Meaning you basically need to start from scratch.
            • 80 years from now, Windows might only be used by corporate businesses. Most people might only use Android or a Linux-based distro. This mainly depends on how fast we change our education process, so young people learn about alternatives, and schools stop promoting and forcing people to use Microsoft products only. If schools don’t change, then we might have a huge issue, and this prediction won’t hold.
            • Google will be split into multiple companies.
            • Microsoft might be split into multiple companies as well, but only much later, after Google.
            • … Should I continue or stop here…?

            @[email protected] @[email protected]

            #it #software #ai #predictions

    • @Eheran
      -34 days ago

      So you predicted that security flaws in software are not going to vanish with AI?

      • melroy
        23 days ago

        I predicted that introducing AI to software engineers (especially juniors) will result in overall worse code, since apparently people don’t feel responsible for the genAI code, while I believe the responsibility still lies fully with the humans who deliver the code. On top of that, most devs are not doing good code reviews in general (often due to lack of time or … skill issues). And now we have AI that generates code which is too easily accepted, on top of reviewers who blindly accept it… and no unit tests or integration tests… And then we have this current situation. No wonder this would happen. If you are in software engineering, you know exactly what I’m talking about, especially if you work at larger companies.
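
        As a made-up illustration of the kind of thing that slips through without reviews or tests (the paginate helper and its bug are hypothetical, not from any real codebase): generated code can look perfectly reasonable in a diff, while a single unit test would have caught the problem right away.

        ```python
        # Hypothetical genAI-style helper: looks plausible, reads cleanly,
        # and a reviewer skimming the diff could easily wave it through.
        def paginate(items, page, page_size):
            # Bug: the spec says pages are 1-based, but this treats them
            # as 0-based, so "page 1" silently skips the first results.
            start = page * page_size
            return items[start:start + page_size]

        # The unit test nobody wrote would have failed immediately:
        def test_paginate_first_page():
            assert paginate([1, 2, 3, 4], page=1, page_size=2) == [1, 2]
        ```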

      • @[email protected]
        4 days ago

        All software has bugs. I prefer the human-generated bugs, they’re much easier to diagnose and solve.

        • melroy
          23 days ago

          My point exactly. Now you have genAI code written by an AI that doesn’t know what it is doing, instructed by a developer who doesn’t understand the programming language, reviewed by a co-worker who doesn’t know what is going on. It’s madness, I tell you!