• fenrasulfr · +3 · 21 minutes ago

    The fact is, though, the average person is starting to replace their search engine with ChatGPT, Gemini, Grok, or whatever other LLM, and I have seen more and more small associations using generative AI to make their posters instead of working with artists or doing it themselves.

  • tackleberry@thelemmy.club · +1 · 32 seconds ago

    I do not hate AI; I am learning how to use it. AI is really great technology and, like a calculator, you can be more productive when you know how to use it.

  • HugeNerd@lemmy.ca · +5/-1 · 1 hour ago

    The mudslide of AI slop on YouTube is like digital gangrene, the brainrot has gone down the stem into the organs. We’re done as a species.

  • Xylight@lemdro.id · +15/-1 · 3 hours ago

    this post is real✅ and has been fact checked by true american patriots✅

  • presoak@lazysoci.al · +2 · 2 hours ago

    To be fair, it’s a genius robot slave.

    Billionaires everywhere are shooting into the sky on rocket exhaust composed of their own semen.

    • HugeNerd@lemmy.ca · +3 · 28 minutes ago

      If that worked I’d have been near Jupiter halfway through high school.

  • null@lemmy.org · +2/-1 · 2 hours ago

    I could stomach the ills that come with AI if you could use it (as implemented in all the crap it’s in) without selling your soul to the data harvesters.

  • Thor_Whale@lemmus.org · +8/-1 · 5 hours ago

    Personally I don’t hate it, but I just think there’s really no urgent need for it. If they’re using it to take jobs away from people, well, what is everyone going to do for work? Do the billionaires think there’s going to be a gigantic human die-off and they’re going to be an elite class of 100,000 people served by intelligent robots? If that’s their angle, good luck.

    • WoodScientist · +7 · 4 hours ago

      They all grew up on classic sci fi. And, being dumbasses, they took Solaria as an aspiration rather than a warning.

  • wizblizz · +99/-7 · 8 hours ago

    The fuck are all these comments? AI is shit, fuck AI. It fuels billionaires, destroys the environment, kills critical thinking, confidently tells you to off yourself, praises Hitler, advocates for glue as a pizza topping. This tech is a war on artists and free thought and needs to be destroyed. Stop normalizing, stop using it.

    • maplesaga · +4/-9 · 4 hours ago

      Same with the internet. Fuels billionaires, destroys the environment with data centers and cables, kills libraries and textbook research, spreads nazi propaganda. We need to stop using technology in general.

      • stabby_cicada@lemmy.blahaj.zone · +7/-1 · 3 hours ago

        There are things you can do with the Internet that are impossible to do without the Internet. Everything you mentioned is very real harm that the Internet does to humanity and the world - even if you meant it sarcastically - but that harm has to be weighed against the benefits the Internet provides that can’t be replicated by anything else.

        There’s nothing a LLM can do that a human can’t. The only thing LLMs are good at is convincing managers to replace human employees with LLMs. Because even though LLMs do a worse job than any human employee, they’re cheaper and won’t unionize.

        The cost-benefit analysis for society is very different.

        • maplesaga · +1/-6 · 3 hours ago

          Let’s see - here’s a standard problem I’m randomly making up, using a free AI. You tell me if this kind of thing can be useful to someone:

          If I have a bucket that is 1 meter tall and 1 meter wide how much volume can it hold?

          The volume V of a cylinder can be calculated using the formula:

          V = πr²h

          Where:

          r is the radius, h is the height.

          In this case, the bucket is 1 meter tall and 1 meter wide, which means the diameter is 1 meter. Therefore, the radius r is:

          r = 1 m / 2 = 0.5 meters

          Now substituting the values into the volume formula:

          V = π(0.5 m)²(1 m) = π(0.25 m²)(1 m) ≈ 0.7854 m³

          Thus, the volume the bucket can hold is approximately 0.785 cubic meters.

          • stabby_cicada@lemmy.blahaj.zone · +2 · 7 minutes ago (edited)

            What you’ve given is an example of a problem where an LLM is inherently the wrong tool.

            See, variation is built into LLMs. They’re programmed to evaluate probable responses and select from them on the basis of probability - to simplify ridiculously, if a particular word follows another 90% of the time, then in 90% of the content it generates the LLM will have that word follow the other, and in the other 10% it won’t.

            If you give an LLM the exact same prompt multiple times, you will get multiple different responses. They’ll all be similar responses, but they won’t be exactly the same, because how LLMs generate language is probabilistic and contains randomness.

            (And that is why hallucination is an inherent feature of LLMs and can’t be trained out.)
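            To make the probability point concrete, here’s a toy sketch - with invented numbers, not any real model’s - of probabilistic next-word selection, showing identical input producing varying output:

```python
import random

# Toy next-word distribution after some fixed prompt (invented numbers,
# standing in for the probabilities an LLM's decoder would produce).
next_word_probs = {"full": 0.9, "empty": 0.1}

def next_word(rng):
    # Sample a word in proportion to its probability, as an LLM
    # does at each generation step.
    r = rng.random()
    cumulative = 0.0
    for word, p in next_word_probs.items():
        cumulative += p
        if r < cumulative:
            return word
    return word  # floating-point safety net

rng = random.Random(0)
samples = [next_word(rng) for _ in range(1000)]
# Roughly 90% "full" and 10% "empty": same input, varying output.
print(samples.count("full"), samples.count("empty"))
```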

            But math isn’t language. Math problems have correct answers. When you use a software tool to answer a math problem, you don’t want variation. You want the correct answer every time.

            To solve a math problem, you need to find the appropriate formula, which will be the same every time. Then you plug the numbers into the formula and use a calculator, which always gives the correct result.

            What I’m getting at is, if you use a calculator to do the math problem yourself, and you put in the correct formula, you’ll always get the correct result. If you use a LLM to generate the answer to a math problem, there is always a non-zero chance it will give you the wrong answer.
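            The calculator route for the bucket problem above is, by contrast, deterministic - the same answer on every run (a minimal sketch):

```python
import math

def cylinder_volume(diameter, height):
    # V = pi * r^2 * h, with r = diameter / 2.
    r = diameter / 2
    return math.pi * r ** 2 * height

# A bucket 1 m tall and 1 m wide: ~0.7854 cubic meters, every single run.
print(round(cylinder_volume(1.0, 1.0), 4))  # → 0.7854
```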

            But what if, you might ask, you don’t know the correct formula? What if you’re not good enough at math to calculate the correct answers, even with a calculator? Isn’t this a time when the LLM can be useful, to do something you can’t?

            The problem is, the LLM could be wrong. And if you haven’t looked up the formula yourself, from a reliable source that is not an LLM, you have no way to check the LLM’s work. Which means you can’t trust it for anything important and you have to do the math yourself anyway.

            (This is true for everything an LLM does, but is especially true for math.)

            And if you have looked up the formula yourself, it’s just as easy to use a calculator the first time and skip the LLM.

            Right? This is what I’m getting at. An LLM can do some of the same things a human does, but it’s always going to be worse at it than a human, because it’s not conscious, it’s not reasoning its way to a correct answer, it’s just generating a string of linguistic tokens based on probabilities. And math problems might be the clearest possible example of this.

            • maplesaga · +2/-1 · 8 minutes ago

              That’s well put; I’m under no naive assumption that LLMs are AI. Though I do think you’re discounting the usefulness, as it did give the right answer, which is a fine use for average people doing basic math or whatever project they’re working on. I’m under no delusion that it’s replacing workers, unless someone’s job is writing fancy emails or building spreadsheets, and I do still think it’s a massive bubble.

          • Tartas1995@discuss.tchncs.de · +5 · 1 hour ago

            Using LLMs for math questions is probably the worst usage for LLMs.

            And all of this is easily calculated without AI. You can literally Google it and let Google do the math for you without AI.

            • maplesaga · +1/-1 · 33 minutes ago

              Perhaps you’re right, though the AI also allows natural language or voice input, and further explanations.

              When you visualize a cylinder, think of stacking many thin circular disks (each with a height Δh) to build up the height h. The volume of each individual disk is its area πr² multiplied by its infinitesimally small height Δh. When you aggregate these over the full height h, you arrive at the volume of the cylinder.

              It’s also eroding all the bullshit we used to do, like cover letters and things that had no reason to exist besides wasting someone’s time. So truth be told I’m a fan, even if it is a massively unprofitable bubble. I also recognize its limitations, given its hallucinations, so I understand it shouldn’t be relied upon for useful work.

      • wizblizz · +5 · 2 hours ago

        Doesn’t work that way, unfortunately. Ask a person on the street what AI is and they’ll tell you whatever flavor of slop generator they’re familiar with. You’re not going to see much pushback on ML around the Fediverse.

      • criscodisco · +33/-4 · 6 hours ago

        LLMs are shit, fuck LLMs. They fuel billionaires, destroy the environment, kill critical thinking, confidently tell you to off yourself, praise Hitler, advocate for glue as a pizza topping. This tech is a war on artists and free thought and needs to be destroyed. Stop normalizing, stop using it.

        And AI is a pipe dream no one is close to fulfilling, won’t be realized by feeding LLMs all of the data in existence, and billionaires are destroying our economy in their pursuit of it.

        • Knock_Knock_Lemmy_In · +7/-10 · 5 hours ago

          You are referring to AGI not AI.

          The broad category of AI is most definitely real.

          • AppleTea@lemmy.zip · +4/-1 · 4 hours ago

            Could you define that category? Or give us an example of a programme that fits under it and one that doesn’t?

            • Knock_Knock_Lemmy_In · +7/-5 · 4 hours ago

              AI encompasses LLMs, machine learning, and AGI.

              My main point is that you shouldn’t throw out computerised protein folding and cancer detection with your hatred of LLMs.

              • AppleTea@lemmy.zip · +6 · 4 hours ago

                OK, and my point is that people are using the term “AI” so loosely as to be indistinguishable from “algorithm”.

                We’ll still have the statistical protein folding models after this bubble eventually pops, we’re just not gonna call it “AI”. It’s a trendy marketing department word, and its usefulness as a description in Computer Science is rapidly diminishing.

                • lad@programming.dev · +1 · 35 minutes ago

                  I would say it just got widespread use. I definitely heard MS Word’s autofill being called ‘AI’ back when deep learning was a freshly invented thing. People tried to label a lot of things ‘AI’; with LLMs the label just stuck better.

              • underisk@lemmy.ml · +6/-1 · 4 hours ago

                AI to a layman just means “LLMs and generative AI that rich assholes keep trying to force me to use or consume the output of”. I don’t think it’s worthwhile to split semantic hairs over this. Call the “good” stuff CNNs or machine learning if you really feel the need to draw a distinction.

                • Knock_Knock_Lemmy_In · +3/-3 · 3 hours ago

                  To a layman, yes I agree.

                  Not many laymen on lemmy. We can afford to be precise with our language.

      • Auli@lemmy.ca · +1/-2 · 4 hours ago

        So what is AI, in your opinion? Because LLMs fall under that umbrella.

        • Knock_Knock_Lemmy_In · +1/-2 · 3 hours ago

          My opinion: AI is a way to improve a computer model’s accuracy over time based on new data.

          I could even argue that ChatGPT etc. are not AI, because the LLMs are not directly learning from the inputs they are receiving.

      • mechoman444 · +7/-17 · 5 hours ago

        Change this out for any other technology that’s been innovated throughout human history: the printing press, semiconductors, the internet.

        The anti-AI rhetoric on this platform is becoming nonsensical.

        At this point it’s just bandwagon hate. These people don’t even understand the difference between LLMs and AI and the various applications they have.

        • wizblizz · +2/-1 · 2 hours ago

          Bullshit, fuck your false equivalency. This tech is good at generating slop, propaganda, and destroying critical thinking. That’s it. It has zero value.

          • mechoman444 · +1/-1 · 1 hour ago

            Ok. This is clearly rage bait.

            You’re an ignorant fool and I’m probably not the first person to tell you that.

        • Auli@lemmy.ca · +4/-1 · 4 hours ago

          Sorry, I don’t remember any of those other technologies using so many resources, raising prices for everyone else while not paying their actual cost, and being wrong about stuff.

          • Knock_Knock_Lemmy_In · +3 · 3 hours ago

            Bitcoin and Ethereum PoW used resources and raised (GPU and electricity) prices for everyone.

          • mechoman444 · +1/-4 · 1 hour ago (edited)

            They literally killed and excommunicated people after the invention of the printing press for producing unauthorized copies of the Bible. Figures like William Tyndale paid with their lives for translating scripture into English, challenging the Church’s authority.

            There is illicit material circulating freely on Tor, demonstrating that technology can distribute both knowledge and criminal content.

            Semiconductors underpin some of humanity’s most powerful and destructive technologies, from advanced military systems to cyberweapons. They are a neutral tool, but their applications have reshaped warfare and global power dynamics.

            You are fully entitled to dislike AI or technologies associated with it. But to dismiss it entirely is ignorant. Whether you want to believe it or not, we are on the precipice of a technological revolution, the shape of which remains uncertain, but its impact will be undeniable.

        • missingno@fedia.io · +5 · 4 hours ago

          Any other technology? How about 3D TVs, smart glasses, blockchain, NFTs, the Metaverse?

          • mechoman444 · +1/-3 · 2 hours ago

            Yes. These all qualify. They’re all massively successful technologies.

            Well, aside from 3D TVs and smart glasses. But they’re generally innocuous. Yes, I also understand that smart glasses have privacy issues, but then again, in this day and age, what doesn’t?

            • missingno@fedia.io · +2/-1 · 1 hour ago

              If you think any of these are massively successful, I question what reality you are living in.

              • mechoman444 · +1/-1 · 1 hour ago

                The blockchain, NFTs, the metaverse aren’t successful?

                These three things generate massive amounts of revenue. The metaverse especially is a billion-dollar IP.

                The word “success” doesn’t have a positive connotation in this case.

                • missingno@fedia.io · +1 · 1 hour ago

                  The metaverse had a billion dollars pumped into it, and yet for all the money they spent they have literally no users to show for it. Likewise for NFTs: a few idiots got suckered into paying for monkey JPGs and are now left holding a bag that no one wants.

                  The blockchain has a small cult trading money back and forth to make it look bigger than it really is. But it’s never achieved any kind of mainstream adoption as the currency true believers keep insisting it will be. And it never will, because it’s way too inefficient to ever scale.

          • maplesaga · +1/-3 · 4 hours ago

            Blockchains in an age of Trump choosing a new Fed chair after trying to have Powell arrested.

            Trust your government over software and cryptography, which answers to nothing outside the laws of physics and mathematics.

            • missingno@fedia.io · +3 · 3 hours ago

              Figured I’d summon at least one person trying to defend crypto. Just because the US has issues doesn’t suddenly mean crypto is good.

              Bitcoin has been around for almost two decades now, and still has not achieved anything beyond being a means for speculators to try and fleece each other. If it hasn’t reached widespread mainstream adoption by now, it never will.

              Crypto is a failed technology, full stop.

              • maplesaga · +1/-3 · 3 hours ago (edited)

                Gold is also just digging something up and then re-burying it. If it hasn’t replaced fiat, then why are people buying it? Why has it been going up 100% a year recently when there’s no new industrial demand for it?

                It’s fine to not hold it, but all finite assets have some intrinsic value, because fiat keeps pumping via new debt issuance, which is inevitably debased. Like it was during Covid, or 2008, or 2001, etc.

                Crypto has higher volatility but can have a higher return, and is more closely correlated to the Nasdaq; like all assets it’s generally efficiently priced. I’d say it’s closer to TQQQ than to VT or gold, and may be suitable for 1-10% of a portfolio depending on goals and risk tolerance. If they drop interest rates quickly to pump the stock market, TQQQ and Bitcoin would likely both rise dramatically.

                • missingno@fedia.io · +3 · 2 hours ago

                  I feel like you just autopiloted into random cryptobro talking points that have nothing to do with the conversation. I don’t care if you like crypto; the reality is that the rest of the world has already rejected it and moved on.

            • WoodScientist · +3 · 4 hours ago

              If the US dollar goes through hyperinflation and becomes worthless, people in the US won’t switch to Bitcoin or other crypto as their main form of currency. We’ll do exactly what citizens of every country that experiences such a currency crash do - start using other, more stable currencies. You would see businesses start accepting a mix of Canadian dollars, Mexican pesos, Euros, and Yuan.

    • mechoman444 · +5/-15 · 5 hours ago

      The fuck are all these comments? The internet is shit, fuck the internet. It fuels billionaires, destroys the environment, kills critical thinking, confidently tells you to off yourself, praises Hitler, advocates for glue as a pizza topping. This tech is a war on artists and free thought and needs to be destroyed. Stop normalizing it. Stop using it.

    • TractorDuffy · +4/-19 · 7 hours ago

      It’s the same as any other commercial tool. As long as it’s profitable the owner will continue to sell it, and users who are willing to pay will buy it. You use commercial tools every day that are harmful to the environment. Do you drive? Heat your home? Buy meat, dairy or animal products?

      I honestly don’t know where this hatred for AI comes from; it feels like a trend that people jump onto because they want to be included in something.

      AI is used in radiology to identify birth defects, cancer signs and heart problems. You’re acting like its only use-case is artwork, which isn’t true. You’re welcome to your opinion but you’re welcome to consider other perspectives as well. Ciao!

      • hardcoreufo · +7 · 4 hours ago

        The use in radiology is not a good thing. Hospitals are cutting trained technicians and making the few they keep double-check more images per day as a backup for AI. If they were just using it as an aid and the humans were still analyzing the same number of pictures, that would be fine, but capitalism sees a way to save a buck, and people will die as a result.

        • ClamDrinker · +1/-3 · 3 hours ago

          This isn’t a problem with AI though, it’s a problem with the people cutting trained technicians. In places where such incompetent people don’t decide that, you get the same number of trained technicians accepting (and being a part of) a change that gives them slightly more accurate findings, resulting in lives being saved overall. Which is typically what health workers want to begin with.

      • missingno@fedia.io · +4 · 4 hours ago

        I honestly don’t know where this hatred for AI comes from

        Did you try reading the comment you just replied to?

      • Auli@lemmy.ca · +2 · 4 hours ago

        OK, so why is AI so big right now? Because it isn’t profitable. Even their most expensive tier is losing them money. Then you have the data centers getting breaks on electricity, so the rest of our costs go up to make up the difference. Where is this magical profitability that is driving AI?

      • ClamDrinker · +3/-9 · 5 hours ago

        It’s in part because people aren’t open to contradictions in their world view. In part you can’t blame them for that, since everyone has their own valid perspective. But staying willfully ignorant of positives and gray areas is a valid criticism. And sadly there are plenty of influencers peddling a black-and-white mindset on AI, ignoring all other uses - whether intentionally or not, again, perspective. I’m sure online content creation has to contend with a lot more AI content compared to the norm. But only on the internet do I encounter rabidly anti-AI people; in real life basically nobody cares. Some use it, some don’t, and most do so responsibly as a tool. And I work in the creative industry…

        • CreativeShotgun · +11/-1 · 5 hours ago

          “I’ve never seen it, it must not exist”

          I work in a creative industry too and it is the bane of not only my group but every other company I’ve spoken to. Every artist and musician I know hates it too.

          • ClamDrinker · +2/-3 · 5 hours ago

            I never said it doesn’t exist. I’m sorry people in your area are being negatively affected if so. But the point still stands. My experience is just as valid.

        • hardcoreufo · +4 · 4 hours ago

          I’m pretty anti-AI, as it is a tool of the billionaire class to enslave the masses. Look up TESCREAL; it’s the digital eugenics that billionaires and fringe philosophers believe in, and it is the driving force in the AI push.

          That being said, I can see a use for a focused, local LLM/AI assistant. I have to search a lot of confidential technical manuals, schematics and trust cases in my job. We are thinking about testing out Ollama to upload all our documents to, to make searching them easier.
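          A minimal sketch of the retrieval step such a setup relies on - the bag-of-words embed() and the document names below are hypothetical stand-ins for illustration; a real pipeline would get dense embeddings from a local model instead:

```python
import math
from collections import Counter

def embed(text):
    # Stand-in "embedding": a sparse bag-of-words vector. A real setup
    # would ask a local model (e.g. served by Ollama) for a dense vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical document snippets standing in for indexed manuals.
docs = {
    "manual_pump.txt": "pump seal replacement torque spec schematic",
    "manual_motor.txt": "motor winding insulation trust case report",
}

def search(query):
    # Rank documents by similarity to the query; return the best match.
    q = embed(query)
    return max(docs, key=lambda name: cosine(q, embed(docs[name])))

print(search("torque spec for the pump seal"))  # → manual_pump.txt
```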

          • ClamDrinker · +1/-1 · 4 hours ago

            You are the exact person I didn’t mean 😄 the first is a very valid reason to dislike AI.

        • SLVRDRGN · +3/-1 · 4 hours ago

          Even before our current time, “nobody cares” is not a thermostat reading of what “really matters”. It almost sounds like you believe people know what’s best for themselves, when the truth of the matter is that humanity has long proved otherwise.

          • WoodScientist · +1 · 4 hours ago

            You sound like a cartoon supervillain - Lex Luthor ranting to Superman about the common animals not knowing what’s best for themselves.

          • ClamDrinker · +1/-1 · 4 hours ago

            I don’t believe that. What I’m saying is that the people I work with all look very critically and skeptically at the world, as that’s pretty much an inherent requirement for the creative industry. We all know what AI is and what it does, and most arguments against it hold no water to people with a realistic view of the industry, to the point that it simply cannot be black and white like some claim it to be.

            There are a few good reasons to dislike AI, but those don’t apply to all of AI. Some are value-based, and other people have other values that are equally valid. And some can be avoided entirely - like how you could ship packages with a coal rocket instead of a train running on electricity, or just ship fewer packages to begin with.

            There is trust and experience between one another in the industry that we aren’t using it unnecessarily, wastefully, or incorrectly, and AI is not anywhere near a requirement for consumers or healthy-minded businesses.

        • Auli@lemmy.ca · +2/-2 · 4 hours ago

          Look up the dot-com bubble. We still have the internet. Just because AI is over-hyped and in a bubble doesn’t mean it won’t still have uses.

          • ClamDrinker · +1/-2 · 3 hours ago

            I fully agree. I still remember the time when using Photoshop was seen by some as not being a “real artist”, because “any idiot with a mouse can draw now”. I’m not under any illusion this will last forever; the negative sentiment is boiling because of the bubble and its negative externalities, not because of the technology itself. So once that bursts, things will hopefully be a lot more peaceful.

    • But_my_mom_says_im_cool · +6/-16 · 7 hours ago

      Which ai and for which use? It’s a tool. It’s like getting mad cause a guy invented a hammer. It’s not the tool hurting you dude, it’s the people wielding it.

      • starelfsc2@sh.itjust.works · +13/-1 · 6 hours ago

        If that hammer also had massive environmental impacts, and hammers were pushed into every aspect of your life while also stealing massive amounts of copyrighted data, sure. It’s very useful for problems that can be easily verified, but the only reason it’s good at those is the massive amount of stolen data.

        • Bytemeister
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          4
          ·
          4 hours ago

          Arguably, hammers also have a massive impact on the environment. They are also part of everyday life. Building you live in? Built using a hammer. New sidewalk? Old one came out with an automatic hammer. Car? Bet there was a type of hammer used during assembly. You can’t escape the hammer. Stop running. Accept your inner hammer. Embrace the hammer, become the hammer. Hammer on.

  • vane
    link
    fedilink
    English
    arrow-up
    15
    arrow-down
    1
    ·
    7 hours ago

    That was December 2024.

    McKinsey & Company consulting firm has agreed to pay $650 million to settle a federal investigation into its work to help opioids manufacturer Purdue Pharma boost the sales of the highly addictive drug OxyContin, according to court papers filed in Virginia on Friday.

    Drug dealer must sell drugs.

    • brucethemoose
      link
      fedilink
      English
      arrow-up
      26
      arrow-down
      1
      ·
      9 hours ago

      The research/tinkerer community overwhelmingly agrees. They were making fun of Tech Bros before chatbots blew up.

    • TractorDuffy
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      3
      ·
      6 hours ago

      Obama was there, he awarded the medal of honor, my parents were proud of me, the AI chump was instantly killed

  • foliumcreations
    link
    fedilink
    English
    arrow-up
    22
    arrow-down
    2
    ·
    9 hours ago

    I have made the conscious decision to try not to refer to them as AI, but as predictive LLMs or generative mimic models, to better reflect what they are. If we all manage to change our vernacular, perhaps we can make them slightly less attractive to use for everything. Some might even feel less inclined to brag about using them for all their work.

    Other options might be unethical guessing machines, deceptive echo models, or the classic from Wh40k Abominable Intelligence.

    • mechoman444
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 hour ago

      The Men of Iron are so freaking cool! They’re still around in modern 40k, hiding and biding their time.

      Maybe one day we’ll have a whole new army of AIs in 40k!

    • wonderingwanderer@sopuli.xyz
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      1
      ·
      edit-2
      3 hours ago

      I mostly agree. Machine Learning is AI, and LLMs are trained with a specific form of Machine Learning. It would be more accurate to say LLMs are created with AI, but are themselves just static predictive models.

      And people also need to realize that “AI” doesn’t mean sentient or conscious. It’s just a really complex computer algorithm. Even AGI won’t be sentient; it would only mimic sentience.

      And LLMs will never evolve into AGI, any more than the Broca’s and Wernicke’s areas can be adapted to replace the prefrontal cortex, the cingulate gyrus, or the vagus nerve.

      Tangent on the nature of consciousness:

      The nature of consciousness is philosophically contentious, but science doesn’t really have any answers there either. The “Best Guess™” is that consciousness is an emergent property of neural activity, but unfortunately that leads to the delusion that “If we can just program enough bits into an algorithm, it will become conscious.” And venture capitalists are milking that assumption for all it’s worth.

      The human brain isn’t merely electrical though, it’s electrochemical. It’s pretty foolish to write off the entire chemical aspect of the brain’s physiology and just assume that the electrical impulses are all that matter. The fact is, we don’t know what’s responsible for the property of consciousness. We don’t even know why humans are conscious rather than just being mindless automatons encased in meat.

      Yes, the brain can detect light and color, temperature and pressure, pleasure and pain, proprioception, sound vibrations, aromatic volatile gasses and particles, chemical signals perceived as tastes, other chemical signals perceived as emotions, etc… But why do we perceive what the brain detects? Why is there even an us to perceive it? That’s unanswerable.

      Furthermore, where are “we” even located? In the brainstem? The frontal cortex? The corpus callosum? The amygdala or hippocampus? The pineal or pituitary gland? The occipital, parietal, or temporal lobe? Are “we” distributed throughout the whole system? If so, does that include the spinal cord and peripheral nervous system?

      Where is the center of the “self” responsible for the perception of “selfhood” and “self-awareness”?

      Until science can answer that, there is no path to artificial sentience, and the closest approximation we have to an explanation for our own sentience is simply Cogito Ergo Sum: I only know that I am sentient because, if I weren’t, I wouldn’t be able to question my own sentience and be aware of the fact that I am questioning it.

      Why digital circuits will never be conscious:

      The human brain has about 86 billion neurons. The average commercial API-based LLM already has around 150 billion parameters, and at FP32 precision that’s 4 bytes per parameter. If all it took were a complex enough system of digits, it would have already worked.

      It’s just as likely that consciousness doesn’t emerge from electrochemical interactions, but is an inherent property of them. If every electron was conscious of its whirring around, we wouldn’t know the difference. Perhaps when enough of them are concerted together in a common effort, their simple form of consciousness “pools together” to form a more complex, unitary consciousness just like drops of water in a bucket form one pool of water. But that’s just pure speculation. And so is emergent consciousness theory. The difference is that consciousness as a property rather than an effect would explain why it seems to emerge from complex enough systems.
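
      The parameter arithmetic in that comparison is easy to check directly (the 150 billion figure is a rough estimate for a large commercial model, not a published spec):

```python
params = 150e9     # rough parameter count of a large commercial LLM
bytes_fp32 = 4     # bytes per parameter at FP32 precision

# total weight storage, in gigabytes
print(params * bytes_fp32 / 1e9, "GB")  # 600.0 GB
```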

      • Knock_Knock_Lemmy_In
        link
        fedilink
        English
        arrow-up
        3
        ·
        7 hours ago

        It’s just a really complex computer algorithm

        Not particularly complex. An LLM is:

        $P_\theta(x) = \prod_t \text{softmax}(f_\theta(x_{<t}))_{x_t}$

        where $f_\theta$ is a deep Transformer trained by maximum likelihood.
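
        Concretely, that factorization is just a short loop. Here is a toy numpy sketch, with a small random lookup table standing in for the Transformer $f_\theta$ (the names and sizes are made up for illustration; a real model is just a vastly bigger $f$):

```python
import numpy as np

def softmax(z):
    z = z - z.max()              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy stand-in for f_theta: maps a token prefix to logits over the
# vocabulary. Here it conditions (crudely) on the last token only.
rng = np.random.default_rng(0)
VOCAB = 5
W = rng.normal(size=(VOCAB, VOCAB))

def f_theta(prefix):
    last = prefix[-1] if prefix else 0
    return W[last]

def sequence_prob(x):
    # P(x) = prod_t softmax(f_theta(x_<t))[x_t]
    p = 1.0
    for t, tok in enumerate(x):
        p *= softmax(f_theta(x[:t]))[tok]
    return p

print(sequence_prob([1, 3, 2]))  # a probability strictly between 0 and 1
```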

        • wonderingwanderer@sopuli.xyz
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          2
          ·
          6 hours ago

          That “deep Transformer trained by maximum likelihood” is the complex part.

          Billions of parameters in tensors distributed over dozens of layers, each layer’s hidden size split across multiple attention heads. Every parameter’s weight is algorithmically adjusted during training. For every query, each attention head computes dot products between query and key vectors to score the relevance of each token to every other, with possibly tens of thousands of tokens held in cache at a time, each analyzed relative to the rest.

          And for standard architecture, each parameter requires four bytes of memory. Even 8-bit quantization requires one byte per parameter. That’s 12-24 GB RAM for a model considered small, in the most efficient format that’s still even remotely practical.

          Deep Transformers are not simple systems; if they were, it wouldn’t take such an enormous amount of resources to fully train them.
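
          That query/key scoring step is, at its core, a batch of dot products. A minimal numpy sketch of a single attention head (tiny illustrative sizes, not a real model’s dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 4, 8                      # context length, head dimension
Q = rng.normal(size=(T, d))      # one query vector per token
K = rng.normal(size=(T, d))      # one key vector per token
V = rng.normal(size=(T, d))      # one value vector per token

# T x T relevance scores: every token against every other
scores = Q @ K.T / np.sqrt(d)

# softmax each row into attention weights that sum to 1
w = np.exp(scores - scores.max(axis=-1, keepdims=True))
w /= w.sum(axis=-1, keepdims=True)

out = w @ V                      # each output mixes all value vectors
print(w.shape, out.shape)        # (4, 4) (4, 8)
```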

          • Knock_Knock_Lemmy_In
            link
            fedilink
            English
            arrow-up
            2
            ·
            4 hours ago

            The technical implementation, computational effort and sheer volume of training data is astounding.

            But that doesn’t change the fact that the algorithm itself is pretty simple. DeepSeek is about 1,400 lines of code across five .py files.

          • maplesaga
            link
            fedilink
            English
            arrow-up
            1
            arrow-down
            2
            ·
            4 hours ago

            You’re really breaking the shitting on AI vibe when you make it sound like the height of human capacity and ingenuity. Can I just call it slop and go back to eating glue?

            • wonderingwanderer@sopuli.xyz
              link
              fedilink
              English
              arrow-up
              1
              ·
              3 hours ago

              You can still shit on AI, just because it’s computationally complex doesn’t make it the greatest thing ever. It still has a lot of problems. In fact, one of the main problems is its consumption of resources (water, electricity, RAM, etc.) due to its computational complexity.

              I’m not defending AI companies, I just think characterizing LLMs as “simple” is misleading.

              • maplesaga
                link
                fedilink
                English
                arrow-up
                1
                ·
                3 hours ago

                Our whole economy is geared to consume resources; we have inflation targeting to prevent aggregate demand and prices from ever falling. If you want to lower consumption you need hard currency, and the cheap cash the AI companies are riding on now is most likely still Covid stimulus and QE.

                • wonderingwanderer@sopuli.xyz
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  3 hours ago

                  And speculation. Venture capitalists think they can create money by betting money they predict they’ll have in the future. It’s how this circular Ponzi scheme between Nvidia and OpenAI is holding itself up for now.

                  Those huge numbers that they count in their net worth don’t really exist. It’s money that’s been pledged by a different company based on money they pledged to that company in the first place. It’s speculation all the way down.

                  They’re hoping for a pay-off, but it’s a bubble of sunk costs, kicking the can down the road for as long as they can before it bursts.