• @visak · 32 points · 10 months ago

    The current stuff is smoke and mirrors and not intelligent in any meaningful sense, but that doesn’t mean it isn’t dangerous. It doesn’t have to be robots with guns to screw people over. Just imagine trying to get PharmaGPT to let you refill your meds, or having to deal with BankGPT while trying to figure out why it transferred your rent payment twice. And companies are sure as hell thinking about using this stuff to get rid of human decision-makers.

    • Square Singer · 12 points · 10 months ago

      That is totally true, but it’s a different danger than the one in the marketing discussed above.

      The media is full of “AI is so amazingly great, we are all going to lose our jobs and it will take over the world.”

      That’s quite a different message from what’s really the case, which is: “AI is so shitty that it will literally kill people with bad advice when given the chance. And business leaders are so shit that they willingly trust AI, just because it’s cheaper.”

      • @[email protected]
        link
        fedilink
        310 months ago

        This is my biggest concern. I’m in a position where (potentially in the near future) I see AI being used as an excuse to do work quicker so we can focus on other things, while still having to review the AI’s output before agreeing/signing off. When the work is strongly regulated, reviewing for accuracy takes just as long as doing it yourself, because it comes down to revisions and document numbers, much less making a sound argument that is actually up to date with that documentation. So either I trust the AI shortcut and open myself up to errors, or I redo all the work for them. No gain in time efficiency, just shorter timelines.

        I’d rather make something and have the AI flag things that I can check, so I’m more sure of my own work. What I do shouldn’t be faster, but it can be more error-free. That would take a lot of training, and updating of that training with each iteration of documentation change. I could be the slave of change, with more expectations and no actual improvement of the tools I have (in fact, more risk of issues with the tools being used).

        • @[email protected]
          link
          fedilink
          17 months ago

          I’m in agile development, in a reasonably safe-from-AI position (scrum master).

          There has already been a trial of software development by AI, with different generative AIs in each agile role, and it worked.

          Bard claims to be able to write unit tests.

          I can imagine many IT jobs becoming less skilled.
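
          To give a sense of the claim about unit tests: this is the shape of output such models typically produce. Both the function and the tests below are hypothetical illustrations, not code from the trial mentioned above.

          ```python
          def apply_discount(price: float, percent: float) -> float:
              """Hypothetical function under test (not from the thread)."""
              if not 0 <= percent <= 100:
                  raise ValueError("percent must be between 0 and 100")
              return round(price * (1 - percent / 100), 2)

          # The kind of tests an LLM typically drafts: happy path,
          # boundary value, and error case.
          def test_basic_discount():
              assert apply_discount(100.0, 25) == 75.0

          def test_zero_discount():
              assert apply_discount(50.0, 0) == 50.0

          def test_invalid_percent():
              try:
                  apply_discount(10.0, 150)
              except ValueError:
                  pass
              else:
                  raise AssertionError("expected ValueError")
          ```

          The tests are mechanical to write, which is exactly why this slice of the job is plausible to automate; whether the generated assertions actually capture the spec is the part that still needs a skilled reviewer.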

          • @[email protected]
            link
            fedilink
            13 months ago

            Sorry this is months after, but it’s cool to see it worked. I use software called XXX Agile; it’s not the worst tool I work with, but the way it was ported to my company it has some flaws. There’s a long project underway to switch to something else for document control, and people who should know much better than me are worried it will fill some gaps but open us up to way more.

      • @visak · 2 points · 10 months ago

        Yeah, they got the “will take our jobs” part, just not the “will take our jobs, be worse at them, and companies will still prefer it” part.

        I was around in the ’80s when we were losing all the manufacturing jobs, mostly to outsourcing, though they blamed automation, and they said, “Don’t worry, there will be lots of good-paying jobs in the new service economy!” Guess what: they outsourced those too, and now they’ll automate them.

        • Square Singer · 2 points · 10 months ago

          It will be interesting to see how it plays out. For some jobs, mainly stuff that wasn’t important but needed doing anyway (e.g. writing product listings on Amazon), this will be fatal. Those jobs aren’t coming back.

          But for more skilled jobs, it will be interesting to see how companies deal with it when AI messes up important stuff every single time.

          On the other hand, managers have been doing the same consistently for a much longer time and they still exist. Let’s see what happens.

    • @theragu40 · 8 points · 10 months ago

      Frankly, that stuff is already a huge problem, and people should be louder about it. So many large companies make you wade through menus of AI chatbots 30 layers deep before they’ll let you talk to an actual human to get assistance with a service you pay for. It’s just going to get worse and worse.

    • @[email protected]
      link
      fedilink
      English
      -110 months ago

      That’s not a bad thing. Humans really aren’t good decision-makers. A system with an incredible amount of input data will be able to draw better conclusions than a person can.

      Just look at cars.

      • @visak · 4 points · 10 months ago

        Humans are good decision-makers; we’re just not good at paying attention for long periods of time. Which is why I think self-driving cars will eventually be better, but they aren’t yet. And those are expert systems (I refuse to call them AI) trained on a well-curated and limited set of data for a limited and specific purpose, which is an important difference from the generalized generative models. More data does not make better systems, especially more unvalidated data.

        But here’s another important difference: I can grab the wheel at any time and take over. If we are going to give these systems decision making authority there needs to be an obvious and intuitive override.

        • @[email protected]
          link
          fedilink
          English
          110 months ago

          Self driving cars are already better than humans. The Waymo cars have a crash rate of 0.59/million miles driven. The national average is 2.98.
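
          Taking the two figures above at face value, the gap works out to roughly five-fold:

          ```python
          # Crash rates quoted above, per million miles driven
          waymo_rate = 0.59
          national_rate = 2.98

          ratio = national_rate / waymo_rate
          print(f"{ratio:.1f}")  # prints 5.1: humans crash about five times as often per mile
          ```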

          I’m betting that most of the self driving car crashes were caused by humans, too.

      • @x4740N · 3 points · 10 months ago

        AI is just as biased as the data that’s put into it, and that data originates from humans, who have their own biases. Humans are just going to pass those biases on to the AI that makes the decisions.

        I don’t think AI is a good idea.

        It just exists as a replacement for the human mind, and the whole population of us on Earth is already a large enough number to contribute unique ideas to humanity.

        Creating AI would just be making some sort of copy of us.

        An AI is similar to an impressionable child.

        • @[email protected]
          link
          fedilink
          3
          edit-2
          10 months ago

          Bias is a problem, but it can be ameliorated. I don’t agree that because AI can be biased, you should never use it.

          Creating AI would just be making some sort of copy of us

          I don’t know any humans who can munge a ginormous data set like an AI can.

          However, reproducing human intelligence in a computer would be interesting in its own right.
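
          To make “ameliorated” concrete: one standard mitigation is reweighting under-represented groups in the training data so the majority group doesn’t dominate what the model learns. This is a toy sketch of the idea, not any particular library’s API:

          ```python
          from collections import Counter

          # Toy dataset: each sample is tagged with the group it came from;
          # group "b" is heavily under-represented (20 of 100 samples).
          samples = [("x1", "a")] * 80 + [("x2", "b")] * 20

          counts = Counter(group for _, group in samples)
          n_groups = len(counts)
          total = len(samples)

          # Inverse-frequency weights: rarer groups count for more during
          # training, so each group contributes equally overall.
          weights = {g: total / (n_groups * c) for g, c in counts.items()}
          # weights["a"] == 0.625, weights["b"] == 2.5
          ```

          With these weights, group “a” contributes 80 × 0.625 = 50 and group “b” contributes 20 × 2.5 = 50, i.e. the two groups end up balanced. It doesn’t remove bias baked into the labels themselves, which is where the disagreement in this thread really lies.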

          • @x4740N · 0 points · 10 months ago

            However, reproducing human intelligence in a computer would be interesting in its own right.

            I would not try to replicate that; knowing humanity, it would probably view us as a threat.

            I don’t know any humans who can munge a ginormous data set like an AI can.

            No, humans cannot, but we use tools made by us to do that.

        • @[email protected]
          link
          fedilink
          English
          -110 months ago

          Why are you assuming there will be bias in the data, and that the AI couldn’t be made to correct for it? Most of the data for systems like medical AI is basically raw data, and such systems are already better than humans at making an accurate diagnosis.

          I’m not sure why people seem to think humans are better than a system that can parse trillions of data points in a few seconds and apply a bunch of statistical models to it almost instantly.

          • @x4740N · 1 point · 10 months ago

            I wouldn’t trust AI with medical data, and neither would medical professionals, since you’re dealing with someone’s life here; medical professionals are going to have to modify the data either way.

            I’m not sure why people seem to think humans are better than a system that can parse trillions of data points in a few seconds and apply a bunch of statistical models to it almost instantly.

            That’s just pre-programmed pattern recognition, built from rules and data that came from humans.