• designated_fridge · 3 days ago

    I don’t see the bubble popping at all.

    As a software engineer at a big tech org, there’s no way we’ll ever go back to the world before LLMs. It’s just too good to ignore. Does it replace software engineers? No, not all of them, but some. What previously required 70 engineers might now require 60. Five years from now, you might get by on even fewer engineers.

    What could cause the bubble to pop? We’re rolling out AI code at scale, and we’re not seeing incidents go up or key metrics go down. Instead, we are shipping more, and faster.

    So maybe it’s too expensive? This could be the case, but even so, it’s just a matter of time before the cost goes down or a company figures out a workflow to use tokens more conservatively.

    • very_well_lost · 2 days ago

      We’re rolling out AI code at scale, and we’re not seeing incidents go up or key metrics go down. Instead, we are shipping more, and faster.

      Anecdotal, but I’ve had exactly the opposite experience as an engineer.

      • designated_fridge · 2 days ago

        Interesting!

        I have gone through my ups and downs, but lately I’ve been more and more convinced. I use Claude Code (Opus 4.5) hooked up to our internal Atlassian and Google Drive MCP servers. I of course still have to do a lot of writing (gathering requirements, writing context, etc.), but instead of spending two days coding, I’ll spend half a day on that and then kick off a CC agent to carry it out.

        I then do a self review when it’s done and a colleague reviews as well before merge.

        I don’t use it for architectural work, though; rather for features, fixing tech debt, etc.

        This also has the benefit of Jira tickets being 1000x better than in the pre-LLM era.
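
        To illustrate what hooking up an internal MCP looks like (the names here are hypothetical stand-ins, not our actual setup): a minimal Python sketch using the public `mcp` SDK, exposing a single Jira lookup tool.

        ```python
        # Hypothetical internal MCP server with one Jira lookup tool.
        from mcp.server.fastmcp import FastMCP

        mcp = FastMCP("internal-jira")  # hypothetical server name

        @mcp.tool()
        def get_ticket(key: str) -> str:
            """Return a one-line summary for a Jira ticket key, e.g. 'PROJ-123'."""
            # A real adapter would call the Jira REST API here; stubbed out
            # so the sketch stays self-contained.
            return f"{key}: (summary fetched from Jira would go here)"

        if __name__ == "__main__":
            mcp.run()  # stdio transport by default, which Claude Code can attach to
        ```

        Claude Code registers a server like this in its MCP config, and the agent can then pull ticket context on demand.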

        • very_well_lost · 2 days ago (edited)

          I’m primarily using Opus 4.5 as well (via Cursor). We’ve tried pointing it at JIRA/Confluence via MCP and just letting the agent do its thing, but we always get terrible results (even when starting with solid requirements and good documentation). Letting an agent run unsupervised just always makes a mess.

          We never get code that conforms to the existing style and architecture patterns of our application, no matter how much we fuss with rules files or MCP context. We also frequently end up with solutions that compromise security, performance or both. Code reviews take longer than they used to (even with CodeRabbit doing a first pass review of every PR), and critical issues are still sneaking through the review process and out to prod.

          My team has been diligent enough to avoid any major outages so far, but other teams in the organization have had major production outages that were all traced back to AI-generated code.

          I’ve managed to carve out a workflow that does at least produce production-ready code, but it’s hardly efficient:

          • Start in plan mode. Define what I need, provide context, and answer any qualifying questions from the model. Once I’m happy with the ‘plan’, I tell Cursor to save it to a file on my local machine. This is important, because it serves as a rolling checkpoint for when Cursor inevitably crashes.
          • Have the agent generate any unit tests we’ll need to validate this feature when it’s done.
          • Review the generated unit tests and inevitably rewrite them (see the sketch after this list for the kind of test that survives my rewrite). Tell Cursor to update the plan based on the changes I’ve made to the tests.
          • Put the AI in “Ask” mode (so it doesn’t touch the code just yet) and tell it to summarize the first step of the plan. This makes sure that the step I care about is in the model’s context window so it doesn’t get confused or over-extend.
          • Pop back to agent mode and tell the model to proceed with step 1 and then STOP.
          • Review the model’s output for any issues. At this stage I’ll frequently point out flaws in the output and have the model correct them.
          • Back to “Ask” mode: summarize the next step of the plan.
          • Execute the next step, review the output, ask for changes, etc.
          • Repeat until all steps are complete.
          • Run the unit tests, then, if there are failures, have the model try to fix those. 50% of the time it fixes any issues encountered here. The other 50% it just makes an enormous mess and I have to fix it myself.
          • Once the unit tests are all passing, I review all of the generated code together to check for any issues I missed (of which there are usually several).
          • When I’m finally satisfied, I tell the agent to create the PR and the rest of the team very carefully reviews it.
          • PR is approved and off we go to QA.
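
          To make the test step above concrete: a minimal sketch of the kind of test file that survives my rewrite pass. The module and function under test (`pricing.parse_discount`) and its edge cases are invented for illustration; the point is pinning the behavior down before the agent writes any implementation.

          ```python
          # Hypothetical human-rewritten test file; `pricing.parse_discount`
          # and these edge cases are invented for illustration.
          import pytest

          from pricing import parse_discount  # hypothetical module under test


          def test_plain_percentage():
              assert parse_discount("15%") == 0.15


          def test_rejects_negative():
              with pytest.raises(ValueError):
                  parse_discount("-5%")


          def test_rejects_garbage():
              with pytest.raises(ValueError):
                  parse_discount("fifteen")
          ```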

          This is almost always slower than if I’d just written the code myself and hadn’t spent all that extra time babysitting the LLM. It’s also slower to debug if QA comes back with issues, because my understanding of the code is now worse than if I’d written it myself.

          I’ve spoken about this in other comments, but I’m going to repeat it again here because I don’t see anyone else talking about it: When you write code yourself, your understanding of that code is always better. Think of it like taking notes. Studies have shown over and over that humans retain information better when they take notes — not because they refer back to those notes later (although that obviously helps), but because by actively engaging with the material while they’re absorbing it, they build more connections in the brain than they would by just passively listening. This is a fundamental feature in how we learn (active is better than passive), and with the rise of code generation, we’re creating a major learning gap.

          There was a time when I could create a new feature and then six months later still remember all of the intimate details of the requirements I followed, the approach I took, and the compromises I had to make. Now? I’m lucky to retain that same information for 3 weeks, and I’m seeing the same in my coworkers.

    • bthest · 2 days ago (edited)

      We’re rolling out AI code at scale, and we’re not seeing incidents go up or key metrics go down. Instead, we are shipping more, and faster.

      Of course you’re seeing nothing but good reports and increasing numbers. That’s what bubbles are. Nothing in reality is as good as they’re making the AI market look; it’s all wash trading. No one is actually using these products, so there won’t be many complaints or bug reports, will there? Yeah, it must look really good from the inside looking out.

      The reality is that real people hate your shitty broken AI products and want nothing to do with them.

      • TehWorld · 2 days ago

        I don’t want AI crammed into all the nooks and crannies either, but companies are using AI to improve productivity in very real ways, not just for writing software. Take data analysis alone: throw a bunch of sales data at an AI and have it spit out less-intuitive trends that it would otherwise take a team of people to suss out. That’s an actual cost savings that can make-line-go-up.
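
        As a concrete (if simplified) sketch of that pattern, assuming the Anthropic Python SDK, an invented sales.csv with region/revenue columns, and a placeholder model ID:

        ```python
        # Sketch: summarize sales data locally, then ask an LLM about trends.
        # The file name, column names, and model ID are illustrative assumptions.
        import anthropic
        import pandas as pd

        df = pd.read_csv("sales.csv")
        stats = df.groupby("region")["revenue"].agg(["count", "mean", "sum"])

        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        reply = client.messages.create(
            model="claude-opus-4-5",  # placeholder; use whichever model you have
            max_tokens=1024,
            messages=[{
                "role": "user",
                "content": "Per-region revenue stats:\n"
                           f"{stats.to_string()}\n\n"
                           "What less-obvious trends stand out, and what "
                           "follow-up analysis would confirm them?",
            }],
        )
        print(reply.content[0].text)
        ```

        The cheap part is the local aggregation; the model is only asked to interpret a small summary rather than the raw data.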

        I do agree that it’s a bubble for sure, but just like the housing bubble, there is still a lot of underlying value that will stick around after the burst.

      • designated_fridge · 2 days ago

        I get the AI hate around art. But it’s quite a naïve view (and frankly shows just how little you understand about AI) to call AI products broken because I use AI to write some unit tests for me.

        I won’t go into details, but I’m pretty sure you use our product every day without reflecting on whether the code was written with the help of AI or not.

        Art is one thing, and there I agree. But you make it sound like you’d have hated mathematicians who decided to use calculators, or programmers who adopted the first programming languages. Real programs are written in machine code!!

    • angstylittlecatboy@reddthat.com · 3 days ago

      The Dot Com bubble popping did not mean the end of the internet, just the end of “we’ll invest in any company that’s doing something on the internet.” It’ll be like that.

    • SpaceCowboy@lemmy.ca · 2 days ago

      When the dot-com bubble popped, it’s not like the internet went away. Everything you’re saying also applied to the internet; we didn’t go back to the way the world was before it.

      Yet the dot-com bubble still popped.

      The long-term viability of a technology does not tell you whether there’s “irrational exuberance” in the short term. Buying up GPUs that will depreciate in a few years, when there won’t even be power to run them in that time frame? Yup, it’s a bubble, and it will pop. That doesn’t mean the tech will go away; it will just be used in more reasonable ways and developed over the next decade, instead of “it will replace all jobs in field X within six months” while wasting cycles jamming it into everything to create numbers about its usage constantly rising by huge amounts.

    • jaxxed · 2 days ago

      The pop won’t be so much about effectiveness as about profitability. AI costs much more than it makes. When it pops, only firms willing to run it at a loss will still be able to offer it.

    • Taleya@aussie.zone · 3 days ago

      Look to the dot-com bubble, my friend. As someone who was there: this bubble is gonna pop.