• @Siegfried
    link
    English
    2
    7 minutes ago

    No AI is a very real thing… just not LLMs, those are pure marketing

  • @NeilBru
    link
    English
    8
    1 hour ago

    I make DNNs (deep neural networks), the current trend in artificial intelligence modeling, for a living.

    Much of my ancillary work consists of tempering the C-suite’s hype and expectations of what “AI” solutions can solve or completely automate.

    DNN algorithms can be powerful tools and muses in science, engineering, creativity, and innovation, but they aren’t full replacements for the power of the human mind.

    I can safely say that many, if not most, of my peers in DNN programming and data science are humble in our approach to developing these systems for deployment.

    If anything, studying this field has given me an even more profound respect for the billions of years of evolution required to display the power and subtleties of intelligence as we narrowly understand it in an anthropological, neuro-scientific, and/or historical framework(s).

  • @[email protected]
    link
    fedilink
    English
    1
    3 minutes ago

    I dunno about him; but genuinely I’m excited about AI. Blows my mind each passing day ;)

  • peopleproblems
    link
    English
    34
    2 hours ago

    Yup.

    I don’t know why. The people marketing it have absolutely no understanding of what they’re selling.

    Best part is that I get paid if it works as they expect it to and I get paid if I have to decommission or replace it. I’m not the one developing the AI that they’re wasting money on, they just demanded I use it.

    That’s true software engineering, folks. Decoupling doesn’t just make it easier to program and reuse; it saves your job when you need to retire something later, too.

    • @[email protected]
      link
      fedilink
      English
      2
      5 minutes ago

      The people marketing it have absolutely no understanding of what they’re selling.

      Has it ever been any different? Like, I’m not in tech, I build signs for a living, and the people selling our signs have no idea what they’re selling.

    • @[email protected]
      link
      fedilink
      English
      14
      2 hours ago

      Their goal isn’t to make AI.

      The goal of both the VCs and the startups is to make money. That’s why.

      • Kronusdark
        link
        English
        2
        1 hour ago

        It’s not even to make money, they already do that. They need GROWTH. More money this quarter than last or the stockholders don’t get paid.

        • @[email protected]
          link
          fedilink
          English
          45
          3 minutes ago

          Growth doesn’t mean revenue over cost anymore, it just means number go up. The easiest way to create growth from nothing is marketing tulips to venture capital and retail investors.

  • nifty
    link
    English
    11
    6 minutes ago

    In a way he’s right, but it depends! If you take even a common example like ChatGPT or the native object detection used in iPhone cameras, you’d see that there’s a lot of cool stuff already enabled by our current way of building these tools. The limitation right now, I think, is reacting to new information or scenarios a model isn’t trained on, which is where all the current systems break. Humans do well in new scenarios thanks to their cognitive flexibility, and I, at least, am unaware of a good framework for instilling cognitive flexibility in machines.

  • @[email protected]
    link
    fedilink
    English
    10
    2 hours ago

    Just chiming in as another guy who works in AI who agrees with this assessment.

    But it’s a little bit worrisome that we all seem to think we’re in the 10%.

    • @TheGrandNagus
      link
      English
      6
      2 hours ago

      it’s a little bit worrisome that we all seem to think we’re in the 10%.

      A bit like how when you poll drivers on how good they think they are at driving, the vast majority say they’re better than average lol

  • @[email protected]
    link
    fedilink
    English
    39
    3 hours ago

    AI as we know it does have its uses, but I would definitely agree that 90% of it is just marketing hype

    • @[email protected]
      link
      fedilink
      English
      9
      2 hours ago

      The image generation features are fun, even though you have to browbeat the idiot AI into following the description.

    • @[email protected]
      link
      fedilink
      English
      0
      3 hours ago

      You just haven’t tried OpeningAI’s latest orione model. A company employee said it is soooo smart, can you believe it? And the government is like, goddamn we are so scareded of it. Im telling you AGI december 2024, you’ll will see!

      • @Shadywack
        link
        English
        3
        edit-2
        58 minutes ago

        Year of the Linux Deskto…oh wait wrong thread, same same though. If we just wait one more year, we’ll have FULL FSD!

        Next year, I promise, is the year we all switch to crypto, just wait!

        In just two years, no one will be driving 4,000lb cars anymore, everyone just needs a Segway.

        We’re going to have “just walk out” grocery stores in two years, where you pick items off the shelf and 10,000 outsourced Indians will review your purchase and complete your CC transaction in about a half hour. Our awesome technology will handle everything, charging you for your groceries as you leave the store, in just two more years!

    • @MadBigote
      link
      English
      2
      2 hours ago

      We lived more than a decade of those decisions, when borrowing money was cheap, and VC was investing in startups selling juice machines.

  • @Suavevillain
    link
    English
    1
    1 hour ago

    He is correct. It is mostly people cashing out on stuff that isn’t there.

  • @NABDad
    link
    English
    55
    5 hours ago

    I had a professor in college who said that when an AI problem is solved, it is no longer AI.

    Computers do all sorts of things today that 30 years ago were the stuff of science fiction. Back then many of those things were considered to be in the realm of AI. Now they’re just tools we use without thinking about them.

    I’m sitting here using gesture typing on my phone to enter these words. The computer is analyzing my motions and predicting what words I want to type based on a statistical likelihood of what comes next from the group of possible words that my gesture could be. This would have been the realm of AI once, but now it’s just the keyboard app on my phone.
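
    The word-selection step described above can be sketched as: out of the words a gesture could plausibly match, pick the one statistically most likely to follow the previous word. A toy sketch in Python; the candidate words and bigram counts here are invented for illustration:

```python
# Toy sketch of gesture-typing word selection: given the set of words a
# swipe could plausibly match, pick the one most likely to follow the
# previous word. The bigram counts are made up for illustration.
BIGRAM_COUNTS = {
    ("these", "words"): 120,
    ("these", "worlds"): 3,
    ("these", "wards"): 1,
}

def pick_word(prev_word: str, candidates: list[str]) -> str:
    """Return the candidate with the highest count after prev_word."""
    return max(candidates, key=lambda w: BIGRAM_COUNTS.get((prev_word, w), 0))

print(pick_word("these", ["words", "worlds", "wards"]))  # words
```

    In a real keyboard app the statistics come from much larger models of typed text, but the core "argmax over candidates" idea is the same.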

    • @[email protected]
      link
      fedilink
      English
      3
      2 hours ago

      LLMs without some sort of symbolic reasoning layer aren’t actually able to hold a model of their context and the relationships within it. They predict the next token, but fall apart when you change the numbers in a problem or add some negation to the prompt.

      Awesome for protein research, summarization, speech recognition, speech generation, deep fakes, spam creation, RAG document summary, brainstorming, content classification, etc. I don’t even think we’ve found all the patterns they’d be great at predicting.

      There are tons of great uses, but just throwing more data, memory, compute, and power at transformers is likely to hit a wall without new models. All the AGI hype is a bit overblown. That’s not from me; that’s Noam Chomsky: https://youtu.be/axuGfh4UR9Q?t=9271

  • @[email protected]
    link
    fedilink
    English
    5
    3 hours ago

    There was a great article in the Journal of Irreproducible Results years ago about the development of Artificial Stupidity (AS). I always do a mental translation to AS whenever I see AI.

  • @brucethemoose
    link
    English
    207
    edit-2
    6 hours ago

    As a fervent AI enthusiast, I disagree.

    …I’d say it’s 97% hype and marketing.

    It’s crazy how much FUD is flying around, and it legitimately buries good open research. It’s also crazy what these giant corporations are explicitly saying they’re going to do, and that anyone buys it. TSMC allegedly calling Sam Altman a ‘podcast bro’ is spot on, and I’d add “manipulative vampire” to that.

    Talk to any long-time resident of localllama and similar “local” AI communities who actually digs into this stuff, and you’ll find immense skepticism, unlike the crypto-style AI bros on linkedin, twitter and such, who blot everything else out.

    • @[email protected]
      link
      fedilink
      English
      6
      3 hours ago

      The saddest part is that this is going to cause yet another AI winter. The first few were caused by genuine over-enthusiasm, but this one is purely fuelled by greed.

      • @sploosh
        link
        English
        2
        2 hours ago

        The AI ecosystem is flooded, we need a good bubble pop to slow down the massive waste of resources that our current info-remix-based-on-what-you-will-likely-react-positively-to shit-tier AI represents.

    • @WoodScientist
      link
      English
      2
      2 hours ago

      I think we should indict Sam Altman on two sets of charges:

      1. A set of securities fraud charges.

      2. 8 billion counts of criminal reckless endangerment.

      He’s out on podcasts constantly saying that OpenAI is near superintelligent AGI, that there’s a good chance they won’t be able to control it, and that human survival is at risk. How is gambling with human extinction not a massive act of planetary-scale criminal reckless endangerment?

      So either he is putting the entire planet at risk, or he is lying through his teeth about how far along OpenAI is. If he’s telling the truth, he’s endangering us all. If he’s lying, then he’s committing securities fraud in an attempt to defraud shareholders. Either way, he should be in prison. I say we indict him for both simultaneously and let the courts sort it out.

    • @[email protected]
      link
      fedilink
      English
      18
      4 hours ago

      TSMC are probably making more money than anyone in this goldrush by selling the shovels and picks, so if that’s their opinion, I feel people should listen…

      There’s little in the AI business plan other than hurling money at it and hoping job losses ensue.

      • @brucethemoose
        link
        English
        3
        4 hours ago

        TSMC doesn’t really have official opinions, they take silicon orders for money and shrug happily. Being neutral is good for business.

        Altman’s scheme is just a whole other level of crazy though.

    • @[email protected]
      link
      fedilink
      English
      7
      3 hours ago

      Agreed that’s why it’s so dangerous. These tech bros are going to do damage with their shitty products. It seems like it’s Altman’s goal, honestly.

      • @[email protected]
        link
        fedilink
        English
        6
        2 hours ago

        He wants money/power, and he is getting it. The rest of the AI field will forever be haunted by his greed.

    • falkerie71
      link
      fedilink
      English
      53
      6 hours ago

      For real. Being a software engineer with basic knowledge in ML, I’m just sick of companies from every industry being so desperate to cling onto the hype train they’re willing to label anything with AI, even if it has little or nothing to do with it, just to boost their stock value. I would be so uncomfortable being an employee having to do this.

      • @Mikelius
        link
        English
        12
        4 hours ago

        For sure, it seems like 90% of AI startups are nothing more than front-end wrappers around a GPT instance.

        • @[email protected]
          link
          fedilink
          English
          8
          edit-2
          3 hours ago

          They’re all built on top of OpenAI which is very unprofitable at the moment. Feels like the whole industry is built on a shaky foundation.

          Putting the entire fate of your company in a different company (OpenAI) is not a great business move. I guess the successful AI startups will eventually transition to self-hosted models like Llama, if they survive that long.

      • @[email protected]
        link
        fedilink
        English
        2
        2 hours ago

        As someone who was working really hard trying to get my company to be able to use some classical ML (with very limited amounts of data), with some knowledge of how AI works, and who just generally wants to do some cool math stuff at work, being asked incessantly to shove AI into any problem our execs think is a “good sell” and being pressured to think about how we can “use AI” was a terrible feeling. They now think my work is insufficient and have been tightening the noose on my team.

    • @paddirn
      link
      English
      14
      5 hours ago

      I really want to like AI. I’d love to have an intelligent AI assistant or something, but I just struggle to find any uses for it outside of some really niche cases or basic brainstorming tasks. Otherwise, it just feels like a lot of work for very little benefit, or results that I can’t even trust or use.

      • @[email protected]
        link
        fedilink
        English
        3
        edit-2
        1 hour ago

        I receive alerts when people are outside my house, using security cameras, Blue Iris, CodeProject AI, Node-RED, and Home Assistant, with a Google Coral for local AI. It’s entirely local - no cloud services apart from Google’s notification system to get notifications to my phone while I’m not home (which most Android apps use). That’s a good use case for AI, since it avoids the false positives that occur with regular motion detection.
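
        The gain over plain motion detection comes from a filtering step like the following sketch: a motion event only becomes a notification if an object detector actually sees a person with decent confidence and the owner is away. The function name, detection fields, and threshold are illustrative, not actual Blue Iris or CodeProject AI APIs:

```python
# Hedged sketch of the alert-filtering logic in a local camera pipeline:
# escalate only confident "person" detections, suppressing motion noise
# like swaying trees or passing cats. All names here are illustrative.
CONFIDENCE_THRESHOLD = 0.6

def should_alert(detections: list[dict], owner_home: bool) -> bool:
    """Return True only for a confident person detection while away."""
    if owner_home:
        return False
    return any(
        d["label"] == "person" and d["confidence"] >= CONFIDENCE_THRESHOLD
        for d in detections
    )

# Motion from a cat triggers the camera but produces no alert:
print(should_alert([{"label": "cat", "confidence": 0.9}], owner_home=False))
# A confident person detection while away does:
print(should_alert([{"label": "person", "confidence": 0.82}], owner_home=False))
```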

      • @brucethemoose
        link
        English
        8
        edit-2
        4 hours ago

        It’s useful.

        I keep Qwen 32B loaded on my desktop pretty much whenever it’s on, as an (unreliable) assistant to analyze or parse big texts, do quick chores, write scripts, bounce ideas off of, or even as an offline replacement for Google Translate (though I specifically use Aya 32B for that).

        It does “feel” different when the LLM is local: you can manipulate the prompt syntax so easily, hammer it with multiple requests that come back really fast when it seems to get something wrong, and not worry about refusals, data leakage, and such.

        • @[email protected]
          link
          fedilink
          English
          3
          2 hours ago

          Attractive. You got some pretty solid specs?

          Rue the day I cheaped out on RAM. soldered RAMmmm

    • @[email protected]
      link
      fedilink
      English
      17
      5 hours ago

      Seriously, I’d love to be enthusiastic about it because it’s genuinely cool what you can do with math.

      But the lies that are shoved in our faces are just so fucking much and so fucking egregious that it’s pretty much impossible.

      And on top of that LLMs are hugely overshadowing actual interesting approaches for funding.

    • KSP Atlas
      link
      fedilink
      English
      5
      4 hours ago

      After getting my head around the basics of the way LLMs work, I thought, “People rely on this for information?” The model seems OK for tasks like summarisation, though.

      • @[email protected]
        link
        fedilink
        English
        2
        2 hours ago

        I don’t love it for summarization. If I read a summary, my takeaway may be inaccurate.

        Brainstorming is incredible. And revision suggestions. And drafting tedious responses, reformatting, parsing.

        In all cases, nothing gets attributed to me unless I read every word and am in a position to verify the output. And I internalize nothing directly, besides philosophy or something. Sure can be an amazing starting point especially compared to a blank page.

      • @[email protected]
        link
        fedilink
        English
        3
        edit-2
        3 hours ago

        It’s good for coding if you train it on your own code base. Not great for writing very complex code since the models tend to hallucinate, but it’s great for common patterns, and straightforward questions specific to your code base that can be answered based on existing code (eg “how do I load a user’s most recent order given their email address?”)
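
        As an illustration, a generated answer to that kind of question usually amounts to a small join-plus-sort query. A self-contained sketch with an invented schema (the table and column names are hypothetical):

```python
# What a code-assistant answer to "load a user's most recent order given
# their email address" might look like: a join plus ORDER BY / LIMIT.
# The schema and data are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users  (id INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER,
                         created_at TEXT, total REAL);
    INSERT INTO users  VALUES (1, 'a@example.com');
    INSERT INTO orders VALUES (1, 1, '2024-01-05', 10.0),
                              (2, 1, '2024-03-09', 25.5);
""")

def most_recent_order(email: str):
    """Return (order_id, created_at, total) for the newest order, or None."""
    return conn.execute(
        """
        SELECT o.id, o.created_at, o.total
        FROM orders AS o
        JOIN users  AS u ON u.id = o.user_id
        WHERE u.email = ?
        ORDER BY o.created_at DESC
        LIMIT 1
        """,
        (email,),
    ).fetchone()

print(most_recent_order("a@example.com"))  # (2, '2024-03-09', 25.5)
```

        The parameterized `?` placeholder is the part worth keeping even when the query itself is machine-generated, since it avoids SQL injection.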

        • @[email protected]
          link
          fedilink
          English
          1
          2 hours ago

          It’s wild when you only know how to use SELECT in SQL, but after a dollar worth of prompting and 10 minutes of your time, you can have a significantly complex query you end up using multiple times a week.

      • @brucethemoose
        link
        English
        1
        4 hours ago

        the model seems ok for tasks like summarisation though

        That and retrieval are the business use cases so far, and even then only where results being wrong somewhat frequently is acceptable.

    • @[email protected]
      link
      fedilink
      English
      2
      4 hours ago

      TSMC’s allegedly calling Sam Altman a ‘podcast bro’ is spot on, and I’d add “manipulative vampire” to that.

      What’s the source for that? It sounds hilarious

      • @brucethemoose
        link
        English
        10
        4 hours ago

        https://web.archive.org/web/20240930204245/https://www.nytimes.com/2024/09/25/business/openai-plan-electricity.html

        When Mr. Altman visited TSMC’s headquarters in Taiwan shortly after he started his fund-raising effort, he told its executives that it would take $7 trillion and many years to build 36 semiconductor plants and additional data centers to fulfill his vision, two people briefed on the conversation said. It was his first visit to one of the multibillion-dollar plants.

        TSMC’s executives found the idea so absurd that they took to calling Mr. Altman a “podcasting bro,” one of these people said. Adding just a few more chip-making plants, much less 36, was incredibly risky because of the money involved.

    • @Evotech
      link
      English
      4
      5 hours ago

      It’s selling the future, but nobody knows if we can actually get there

      • @brucethemoose
        link
        English
        4
        4 hours ago

        It’s selling an anticompetitive dystopia. It’s selling a Facebook monopoly vs selling the Fediverse.

        We don’t need 7 trillion dollars of datacenters burning the Earth; we need collaborative, open source innovation.

      • IninewCrow
        link
        fedilink
        English
        3
        5 hours ago

        The first part is true … no one cares about the second part of your statement.

    • billwashere
      link
      English
      -3
      5 hours ago

      Yep, the current iteration is. But should we cross the threshold to full AGI… that’s either gonna be awesome or world-ending. Not sure which.

      • @brucethemoose
        link
        English
        8
        edit-2
        4 hours ago

        Current LLMs cannot be AGI, no matter how big they are. The fundamental architecture just isn’t right.

        • billwashere
          link
          English
          2
          4 hours ago

          You’re absolutely right. LLMs are good at faking language and sometimes not even great at that. Not sure why I got downvoted but oh well. But AGI will be game changing if it happens.

      • @[email protected]
        link
        fedilink
        English
        2
        4 hours ago

        I know nothing about anything, but I unfoundedly believe we’re still very far away from the computing power required for that. I think we still underestimate the power of biological brains.

        • billwashere
          link
          English
          2
          3 hours ago

          Very likely. But 4 years ago I would have said we weren’t close to what these LLMs can do now so who knows.

      • @[email protected]
        link
        fedilink
        English
        1
        5 hours ago

        Based on what I’ve witnessed so far, people will play with their AGI units for a bit and then put them down to continue scrolling memes.

        Which means it is neither awesome, nor world-ending, but just boring/business as usual.

        • billwashere
          link
          English
          1
          3 hours ago

          There are people way smarter than me who claim it will be a threshold, and that it would likely grow exponentially once crossed. I guess we won’t know for sure until it happens. I do agree most people get bored easily, but if this thing can think for itself without interaction, it won’t matter whether the humans get bored.

    • @Valmond
      link
      English
      -4
      edit-2
      5 hours ago

      Ya, it’s like machine learning but better. That’s about it IMO.

      Edit: As I have to spell it out: as opposed to (machine learning with) neural networks.

        • @[email protected]
          link
          fedilink
          English
          8
          6 hours ago

          It’s also neural networks, and probably some other CS structures.

          AI is a category, and even specific implementations tend to use multiple techniques.

          • @brucethemoose
            link
            English
            3
            5 hours ago

            Well there is a very specific architecture “rut” the LLMs people use have fallen into, and even small attempts to break out (like with Jamba) don’t seem to get much interest, unfortunately.

            • @[email protected]
              link
              fedilink
              English
              6
              5 hours ago

              Sure, but LLMs aren’t the only AI being used, nor will they eliminate the other forms of AI. As people see issues with the big LLMs, development focus will change to adopt other approaches.

              • @commandar
                link
                English
                3
                edit-2
                5 hours ago

                There is real risk that the hype cycle around LLMs will smother other research in the cradle when the bubble pops.

                The hyperscalers are dumping tens of billions of dollars into infrastructure investment every single quarter right now on the promise of LLMs. If LLMs don’t turn into something with a tangible ROI, the term AI will become every bit as radioactive to investors in the future as it is lucrative right now.

                Viable paths of research will become much harder to fund if investors get burned because the business model they’re funding right now doesn’t solidify beyond “trust us bro.”

                • @brucethemoose
                  link
                  English
                  3
                  edit-2
                  4 hours ago

                  the term AI will become every bit as radioactive to investors in the future as it is lucrative right now.

                  Well, you say that, but somehow crypto is still around despite most schemes being (IMO) a much more explicit scam. We have politicians supporting it.

                • @[email protected]
                  link
                  fedilink
                  English
                  2
                  4 hours ago

                  Sure, but those are largely the big tech companies you’re talking about, and research tends to come from universities and private orgs. That funding hasn’t stopped, it just doesn’t get the headlines like massive investments into LLMs currently do. The market goes in cycles, and once it finds something new and promising, it’ll dump money into it until the next hot thing comes along.

                  There will be massive market consequences if AI fails to deliver on its promises (and I think it will, because the promises are ridiculous), and we get those every so often. If we look back about 25 years, we saw the same thing w/ the dotcom craze, where anything with a website got obscene amounts of funding, even if they didn’t have a viable business model, and we had a massive crash. But important websites survived that bubble bursting, and the market recovered pretty quickly and within a decade we had yet another massive market correction due to another bubble (the housing market, mostly due to corruption in the financial sector).

                  That’s how the market goes. I think AI will crash, and I think it’ll likely crash in the next 5 years or so, but the underlying technologies will absolutely be a core part of our day-to-day life in the same way the Internet is after the dotcom burst. It’ll also look quite a bit different IMO than what we’re seeing today, and within 10 years of that crash, we’ll likely be beyond where we were just before the crash, at least in terms of overall market capitalization.

                  It’s a messy cycle, but it seems to work pretty well in aggregate.

  • @Sam_Bass
    link
    English
    2
    2 hours ago

    and that 10% isn’t really real, just a gabbier Dr. Sbaitso

    • @Jiggle_Physics
      link
      English
      3
      2 hours ago

      Idk man, my doctors seem pretty fucking impressed with AI’s capabilities to make diagnoses by analyzing images like MRIs.

      • @Sam_Bass
        link
        English
        1
        2 hours ago

        then you are a fortunate rarity. most posts about the tech complain about ai just rearranging what it is told and regurgitating it with added spice

        • @Jiggle_Physics
          link
          English
          2
          1 hour ago

          I think that is because most people are only aware of its use as what are, effectively, chat bots. Which, while the most widely used application, is one of its least useful. Medical image analysis is one of the big places it is making strides in. I am told, by a friend in aerospace, that it is showing massive potential for a variety of engineering uses. His firm has been working on using it to design, or modify, things like hulls, air frames, etc. Industrial uses, such as these, are showing a lot of promise, it seems.

          • @Sam_Bass
            link
            English
              1
              1 hour ago

              that’s good. be nice if all the current ai developers would aim that way

  • @iAvicenna
    link
    English
    1
    2 hours ago

    it is basically like how self-improvement folks are using “quantum”

  • @[email protected]
    link
    fedilink
    English
    1
    2 hours ago

    That makes sense. He’s old enough and close enough thematically to have seen a few of these tech hype cycles.