• @AbouBenAdhem
      link
      English
      27
      8 months ago

      “Magic” is just sufficiently advanced technology that hasn’t yet been commercially exploited.

      • @[email protected]
        link
        fedilink
        English
        5
        8 months ago

        Has it? I mean, all this “AI” news is pretty annoying, but how has it impacted our daily lives? ChatGPT has only made mine better.

        • @[email protected]
          link
          fedilink
          English
          17
          8 months ago

          Most search engines are practically unusable due to the massive number of garbage AI-generated sites that clog up the search results while containing blatantly false information.

          • @[email protected]
            link
            fedilink
            English
            -15
            8 months ago

            I haven’t used a search engine since I started using ChatGPT. It does all the heavy lifting for me.

            • @[email protected]
              link
              fedilink
              English
              12
              8 months ago

              I am concerned to think of all the terrible and just plain wrong information you have been given.

              • @Blue_Morpho
                link
                English
                -3
                8 months ago

                How is that different from Google?

                • @RidcullyTheBrown
                  link
                  English
                  2
                  8 months ago

                  Most of the time, learning that you’re doing something wrong should be enough to prompt you to dig deeper into the matter. It’s not the job of perfect strangers to educate you.

            • @[email protected]
              link
              fedilink
              English
              11
              8 months ago

              The problem with using it as a search engine is that if it doesn’t know the answer, it commonly makes things up. I tried using it for work, but it got details wrong often enough to make it useless.

              • @[email protected]
                link
                fedilink
                English
                -8
                8 months ago

                You can’t trust search engine results either. It’s just another tool that you use to arrive at a conclusion. You still have to do work.

                • @[email protected]
                  link
                  fedilink
                  English
                  5
                  8 months ago

                  You could in the past. About 6 years ago or so the top 3 results were almost always correct.

                  Currently you can’t, because AI-generated content gets things wrong in the same way that using an AI as a search engine does.

        • @[email protected]
          link
          fedilink
          English
          5
          8 months ago

          I’m pretty sure the number of people that have lost their jobs over this shitty text generator has surpassed a million.

          • @[email protected]
            link
            fedilink
            English
            -4
            8 months ago

            But that has nothing to do with ChatGPT. It’s just what some people blamed all of the layoffs on when it came out. Those would have happened regardless. The news just loves a clickable headline.

            Most of these companies will start rehiring again. They just did it to trim the fat, cut the high earners, and get people back in at a lower rate because they’re desperate for work.

            Tale as old as time.

              • @[email protected]
                link
                fedilink
                English
                -4
                8 months ago

                That is not at all what I said, and you clearly didn’t read my comment. I’m not defending them. I’m saying it’s not the fault of AI. They were gonna fire them anyway to lower costs, and get cheaper workers. (A bad thing)

                Apologize, you fucking cunt.

        • _NoName_
          link
          fedilink
          English
          4
          8 months ago

          If we’re talking only about LLMs, then probably the biggest issues caused are threats to support line jobs, the enshittification of said help lines, blatant misinformation spread via those chat bots, and a variety of niche problems.

          If we’re spreading out to mean AI more generally, we could talk about how facial recognition has now gotten good enough that it’s being used to identify and catalogue pretty much anyone who passes an FR-equipped security system. Israel has actually been picking civilian targets via AI. We could also talk about “self-driving” cars and the completely avoidable deaths they’ve caused. We could talk about how most convolutional-network AIs that identify graphic imagery and other horrific visuals rely on massive sweatshops to sort said graphic images for pennies. We could also talk about how mimicry AI has now been used both to create endless revenge porn of unwilling victims and to fake the voices of others to scam them or keep them from voting. There’s plenty of damage AI as a whole has done, even if LLMs are the most minimal of all of them.

          • @RidcullyTheBrown
            link
            English
            2
            8 months ago

            If we’re spreading out to mean AI more generally, we could talk about how facial recognition has now gotten good enough that it’s being used to identify and catalogue pretty much anyone who passes an FR-equipped security system.

            I don’t think that this is “AI more generally” as the public (and the current article) understands it. You’re lumping any slightly self-corrective algorithm under the AI umbrella. This might be technically correct, but it’s just operations; it’s not indicative of the current hype.

            We could also talk about “self-driving” cars and the completely avoidable deaths they’ve caused.

            The limiting factor for self-driving cars is hardware, not software. There is no commercially viable video technology that would allow taking self-driving technology out of the lab and into the consumer space. Unless you’re talking about Tesla-like systems, which, of course, are neither “self-driving” systems nor consumer-ready.

            We could also talk about how mimicry AI has now been used both to create endless revenge porn of unwilling victims and to fake the voices of others to scam them or keep them from voting

            This is not AI. The technology behind voice and image manipulation has existed for some time and has been used for fake porn and fake voice calls for a long while. We’re only discussing it now because these stories generate traffic when they’re tied to a hype like AI. Very few people would read a story about a student sticking the faces of his colleagues onto naked bodies, but say the student used AI and suddenly everyone wants to find out what happened.

            It’s even worse: headlines are discussing the reaction of X celebrity to porn fakes in the context of AI, even though porn sites have had a fake-porn section ever since the late 90s, available to anyone with the mental capacity to click “I’m over 18”. Maybe you’re too young to remember, but Google wasn’t always censoring search results. Before 2010 or so, fakes like these would routinely appear in Google searches of a celebrity’s name. I’m not really sure why AI makes this any different.

            • _NoName_
              link
              fedilink
              English
              2
              8 months ago

              You are pulling a no true Scotsman fallacy here. AI has always been a somewhat vague term, and it’s explicitly a buzzword in today’s systems.

              This AI push has also been taking its current form for more than a decade, but it wasn’t a public topic until now, because it was terrible up until now.

              The relevant thing is that AI is automating a normally human-centric practice via extensive training on a data model. All the systems I’ve mentioned utilize that machine-learning practice at some point in their process.

              The statement about the deepfakes is just patently incorrect on your part. It is a trained model which takes an input, and outputs a manipulated output based on its training. That’s enough to meet the criteria. Before it was fairly difficult and almost immediately identifiable as AI manipulated. It’s now popular because it’s gotten good enough to not be immediately noticeable, done fairly easily, and is at the point where it can be mostly automated.

              • @RidcullyTheBrown
                link
                English
                1
                8 months ago

                The statement about the deepfakes is just patently incorrect on your part. It is a trained model which takes an input, and outputs a manipulated output based on its training. That’s enough to meet the criteria. Before it was fairly difficult and almost immediately identifiable as AI manipulated. It’s now popular because it’s gotten good enough to not be immediately noticeable, done fairly easily, and is at the point where it can be mostly automated.

                I never claimed that the current software didn’t use machine learning. I simply said that faking video and images was happening long before machine learning was involved, and I completely disagree that it is harder to identify fakes now than it used to be. Maybe it wasn’t a technology that was easily available, but image manipulation is something we have been seeing for a long time. If anything, the fact that it is now public knowledge that images, voice, and movie clips can be faked will help people stop trusting them when they shouldn’t.

                • _NoName_
                  link
                  fedilink
                  English
                  1
                  edit-2
                  8 months ago

                  I never claimed that the current software didn’t use machine learning

                  This is not AI.

                  That was your flat statement, and your only argument was that this was being done before AI was used for it. That’s a poor argument. That’s like arguing that self-driving isn’t AI because remote-controlled car piloting existed.

                  Automated image manipulation vs. having hundreds of hours in Photoshop. That’s AI vs. what came before. Inputting a source file and getting a manipulated file after some amount of time, vs. hours of meticulous work trying to get minor details right.

                  If we want to compare old-school manipulation with AI manipulation, then yes, fakes now are on par with the insane skill of some image-doctoring artists - you’re just looking for different things - but at an exponentially lower cost than hiring a professional. Compare AI to itself, though? It’s night and day. Early AI manipulation was atrocious, and modern AI manipulation is only going to get better. That is all due to breakthroughs in AI. Imagine what the hell will happen when Sora becomes usable by anyone.

                  Machine learning has taken something that was originally hard to do and made it cheap and easy. Now any schmuck can pump out doctored footage in an afternoon. That’s why the AI porn is big: you can pay dirt cheap, give the model photos of any random woman, and it’ll make porn of her. That fact has turned it into a much more viable business model than before, one that’s currently creating massive amounts of nonconsensual porn fakes - exponentially more than before.

          • @[email protected]
            link
            fedilink
            English
            -3
            8 months ago

            A lot of what you’ve mentioned has existed for decades in some fashion. It’s just code.

            • @Passerby6497
              link
              English
              1
              8 months ago

              And making these tools mass market instead of being something niche that requires actual talent to do is absolutely something to blame ChatGPT for.

    • @[email protected]
      link
      fedilink
      English
      7
      8 months ago

      I saw through ChatGPT’s gimmick right away.

      Having said that, image generation to me was and still is magic. Not because I don’t understand it, but because I saw it as a way to get people with imagination but no skill to actually make art.

      That said, the reality of how it is used is different.

      • @Dimantina
        link
        English
        7
        8 months ago

        I love it for asset creation and textures for games. As a programmer with zero artistic skill, it has been a godsend for producing far higher-quality UI without bogging down my prototyping time.

        But for truly unique art… it’s kind of a mess. Like, try to get an AI to make a dwarf warrior with a lance riding on the shoulders of an anthropomorphic cat person who is dressed in monk robes.

        AI struggles so hard with unique scenes like that… For now…

        • @jacksilver
          link
          English
          3
          8 months ago

          I think it’s amazing and terrible at the same time. It clearly produces some amazing looking things, but I’ve never been able to get it to create what I want.

  • @dezmd
    link
    English
    41
    8 months ago

    Somebody woke up and realized that LLM AI is just search rebranded to get more VC funding.

    FOSS implementations need to be encouraged and supported at every turn on this; it’s almost certainly the only way forward with any reasonable ethical consideration.

  • @[email protected]
    link
    fedilink
    English
    20
    8 months ago

    What a terrible article.

    TLDR: good inventions lose their novelty and become practical. AI has lost its novelty, so it’s about to become great?

    Kind of skips the whole practicality aspect.

  • ugjka
    link
    English
    14
    8 months ago

    AI? You meant 1000 people in India, perhaps

    • @Womble
      link
      English
      7
      8 months ago

      Which 1000 Indians are making the Mixtral 8x7B model running on my desktop work?

  • @[email protected]
    link
    fedilink
    English
    13
    8 months ago

    It was a hype train for normies who wanted to become relevant in “tech” conversations without knowing shit. Like blockchain before it, and smartphone models before blockchain. And for other normies who dreamed that technology would finally make computers able to do what they’re told in natural language, like in the movies, and make engineers obsolete.

    For me it’ll become more magical, when used appropriately.

  • @Jimmyeatsausage
    link
    English
    8
    8 months ago

    Kudos to the person (or, ironically, AI model) who chose that picture, though. Pepto Popsicle is the best euphemism for LLMs as AI that I can imagine.

  • @drawerair
    link
    English
    -1
    edit-2
    8 months ago

    I still use large language models for fun. My fav phone reviewer is Marques Brownlee. I compared the best big phones from his phone awards to Claude 3 Opus’ best big phones – I asked Opus. I wanted to see the similarities and differences for fun.

    I’ve been :) with the tight competition too. Claude 3 is making GPT-4 and Gemini sweat.

  • @[email protected]
    link
    fedilink
    English
    -4
    8 months ago

    It’s not even “artificial intelligence”.

    Seriously, “artificial intelligence” suggests (defines?) self-awareness.

    AI is ridiculous machine-learning algorithms.

    The fuck, people!!

    • @piecat
      link
      English
      6
      8 months ago

      AI is ridiculous machine-learning algorithms.

      It always was

      What you’re thinking of is general AI. But ML is a subset of the field of AI.