How stupid do you have to be to believe that only 8% of companies have seen failed AI projects? We can’t manage this consistently with CRUD apps and people think that this number isn’t laughable? Some companies have seen benefits during the LLM craze, but not 92% of them. 34% of companies report that generative AI specifically has been assisting with strategic decision making? What the actual fuck are you talking about?

I don’t believe you. No one with a brain believes you, and if your board believes what you just wrote on the survey then they should fire you.

  • @AIhasUse
    English
    -17
    5 months ago
    Yes, and then you take the time to dig a little deeper and use something agent-based like aider or crewai or autogen. It is amazing how many people are stuck in the mindset of “if the simplest tools from over a year ago weren’t very good, then there’s no way there are any good tools now.”

    It’s like seeing the original Planet of the Apes and then arguing against how realistic the Apes are in the new movies without ever seeing them. Sure, you can convince people who really want unrealistic Apes to be the reality, and people who only saw the original, but you’ll do nothing for anyone who actually saw the new movies.

    • @[email protected]
      English
      26
      5 months ago

      I’ve used crewai and autogen in production… And I still agree with the person you’re replying to.

      The 2 main problems with agentic approaches I’ve discovered this far:

      • One mistake or hallucination will propagate to the rest of the agentic task. I’ve even tried adding a QA agent for this purpose, but what ends up happening is that those QA agents aren’t reliable either, which also leads to the main issue:

      • It’s very expensive to run and rerun agents at scale. The scaling factor of each agent being able to call another agent means that you can end up with an exponentially growing number of calls. My colleague at one point ran a job that cost $15 for what could have been a simple task.
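The cost blow-up described above is easy to sketch: if each agent can delegate to several sub-agents and delegation runs a few levels deep, the number of LLM calls grows geometrically. The branching factor, depth, and per-call price below are illustrative assumptions, not figures from the comment:

```python
# Geometric growth of LLM calls in a delegating agent system.
# Assumptions (illustrative only): every agent may hand work off to
# `branching` sub-agents, and delegation runs `depth` levels deep.

def total_calls(branching: int, depth: int) -> int:
    """Total LLM calls: branching**i calls at each level i, level 0 is the root."""
    return sum(branching ** i for i in range(depth + 1))

calls = total_calls(branching=3, depth=4)  # 1 + 3 + 9 + 27 + 81 = 121
cost = calls * 0.05                        # hypothetical $0.05 per call
print(calls, f"${cost:.2f}")               # 121 calls, $6.05
```

Even modest branching compounds quickly, and reruns or retries multiply the whole tree again, which is how a “simple task” ends up costing $15.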

      One last consideration: the current LLM providers are very aware of these issues or they wouldn’t be as concerned with finding “clean” data to scrape from the web vs using agents to train agents.

      If you’re using crewai btw, be aware there is some built-in telemetry in the library. I have a wrapper to remove that telemetry if you’re interested in the code.
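On the telemetry point: crewai’s telemetry is built on OpenTelemetry, and besides a wrapper it can usually be switched off up front with an environment variable. The exact variable name has varied between crewai releases, so treat this as a sketch and check the docs for your version:

```python
import os

# Opt out of crewai's telemetry *before* importing the library.
# Variable names are version-dependent (assumption: your crewai release
# honors at least one of these); setting both is harmless.
os.environ["OTEL_SDK_DISABLED"] = "true"          # older crewai releases
os.environ["CREWAI_DISABLE_TELEMETRY"] = "true"   # newer crewai releases

# import crewai  # must come after the env vars are set
```

The ordering matters because the telemetry client is initialized at import time; setting the variables afterwards has no effect.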

      Personally, I’m kinda done with LLMs for now and have moved back to my original machine learning pursuits in bioinformatics.

    • FaceDeer
      -2
      5 months ago

      Also, a lot of people who are using AI have become quiet about it of late exactly because of reactions like this article’s. Okay, you’ll “piledrive” me if I mention AI? So I won’t mention AI. I’ll just carry on using it to make whatever I’m making without telling you.

      There’s some great stuff out there, but of course people aren’t going to hear about it broadly if every time it gets mentioned it gets “piledriven.”

      • @afraid_of_zombies
        English
        -1
        edit-2
        5 months ago

        Pretty much me. I am using it everywhere but usually not interested in mentioning it to some internet trolls.

        You can check my profile if you want, or not. 7 months ago I baked my first loaf of bread. I got the recipe from chatgpt. Over those 7 months I have been going over recipes and techniques with it, and as of this month I now have a part-time gig making artisan breads for a restaurant.

        There is no way I could have progressed this fast without that tool. Keep in mind I have a family and a career in engineering, not exactly an abundance of time to take classes.

        I mentioned this once on lemmy and some boomer shit started screaming about how learning to bake with the help of an AI didn’t count and that I needed to buy baking books.

        Edit: spelling

        • FaceDeer
          -4
          5 months ago

          And if you need examples of people being piledriven, you can browse my history a bit. :) Since I’m not doing anything with AI that would suffer “professionally” from backlash (such as might happen to an artist who becomes the target of anti-AI witch-hunters) I’ve not been shy about talking about the good things AI can do and how I use it, or about calling out biased or inaccurate arguments against various AI applications. As a result I get a lot of downvotes.

          Fundamentally, I think it’s just that people are afraid. They’re seeing a big risk from this new technology of losing their jobs, their lifestyles, and control over their lives. And that’s a real concern that should be treated seriously, IMO. But fear is not a good cultivator of rational thought or honest discourse. It’s not helping people work towards solving those real concerns.

          • @AIhasUse
            English
            -4
            5 months ago

            Yeah, this is exactly what I think it is. I’m a bit concerned about how hard it’s going to hit a large number of people when they realize that their echo chamber of “LLMs are garbage and have no benefits” was so completely wrong. I agree that there are scary aspects of all this, but pretending like they don’t exist will just make it harder to deal with. It’s like denying that the smoke alarm is going off until your arm is on fire.

            • @atrielienz
              English
              3
              5 months ago

              I’m inclined to believe, based on this thread, that you and the person you’re replying to didn’t read the article, because the person who wrote it and most of the replies to it are not saying “LLMs are garbage and have no benefits”.

              The post is specifically calling out companies that have jumped on the “AI LLM” train and are trying to force-feed it into every single project and service regardless of whether it will be useful or beneficial. And they will not listen to the people working in the field who tell them no, it will not be beneficial.

              The hype is what people are upset about because companies are selling something that is useful in selective cases as something that will be useful to everyone universally for just about everything and they’re making products worse.

              Just look at Google and their implementation of AI LLMs in search results. That’s a product that isn’t useful unless it’s accurate, and it was not ready to be a public-facing service. In their other products it’s promising more but actually breaking or removing features that users have relied on for years. That’s why people are upset. This isn’t even taking into account the theft of people’s work that went on to get these LLMs trained.

              This is literally just about companies having more FOMO than sense. This is about them creating and providing to the public broken iterations of products filled with the newest “tech marvel” to increase sales or stock price while detrimentally affecting the common user.

              For every case of an LLM being useful there are several where it’s not. That’s the point.