• Bonehead
    link
    fedilink
    198
    3 months ago

    This is so unrealistic. Developers don’t drink decaf.

    • Eager Eagle
      link
      English
      86
      3 months ago

      regardless of experience, that’s probably what makes him a junior

    • @Bye
      link
      30
      3 months ago

      I do, exclusively

      Getting rid of caffeine (decaf still has a little) has been amazing for me.

      • @jaybone
        link
        13
        3 months ago

        I’m trying to switch to non-alcoholic vodka.

        • @[email protected]
          link
          fedilink
          6
          3 months ago

          Non-alcoholic gin and tonics are the shit. If you’re legit looking for non-alcoholic drinks and like G&Ts give it a try.

          • @[email protected]
            link
            fedilink
            4
            3 months ago

            Completely agree, it’s basically just botanicals anyway. Well and booze

            Non alcoholic beer has gotten a lot better the last years as well.

        • @[email protected]
          link
          fedilink
          8
          3 months ago

          I’m not the person you’re replying to but for me, I used to get random headaches and jitters and I feel more consistent now.

          The problem is the withdrawal period can be hard for some. It was for me, but overall worth it in the end.

          • @Cliff
            link
            7
            3 months ago

            So you get consistent headaches and jitters now instead of getting them randomly?

        • @[email protected]
          link
          fedilink
          5
          3 months ago

          Personally, if i have too much and/or too late, i have a hard time falling asleep in the evening.

        • @[email protected]
          link
          fedilink
          5
          edit-2
          3 months ago

          How much you drinking? I didn’t think it had an impact on me, even afternoon or evening, and only realised the difference when I cut it out

          • @[email protected]
            link
            fedilink
            4
            3 months ago

            I have a “thermos” style bottle that’s probably 16oz that I drink throughout the day every day. Weekends I’ll drink more as I’m home and it’s readily available.

            It’s cold brew so it’s already cold for anyone disgusted by the “throughout the day” bit lol

            • @[email protected]
              link
              fedilink
              English
              4
              3 months ago

              16 oz is not that much, although cold brew is a little stronger. I used to consume about a gram of caffeine a day, but withdrawal for me was just a light headache and slightly lower energy for a day (I went caffeine-free for a little while to reduce tolerance). I did notice my energy improve without it; however, I'm sometimes not able to get enough sleep, and caffeine is good for leveling out energy in those cases. I generally stick to low doses with occasional strategic bursts when necessary. Also, if you're worried about sleep, you can do the math with caffeine's half-life (about 5 hours on average) to figure out how much is still in your system when you go to bed. Sorry if this is incoherent; I've been busy this week and not getting much sleep.
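
The half-life arithmetic mentioned above can be sketched in a few lines. This is a rough model only, assuming the ~5-hour average half-life; the dose and timing are made-up examples:

```python
# Rough estimate of caffeine left in your system at bedtime,
# using simple exponential decay with a ~5 h half-life.
def caffeine_remaining(dose_mg: float, hours_elapsed: float,
                       half_life_h: float = 5.0) -> float:
    """remaining = dose * 0.5 ** (elapsed / half_life)"""
    return dose_mg * 0.5 ** (hours_elapsed / half_life_h)

# Hypothetical example: 200 mg at 08:00, bed at 23:00 (15 h later).
print(caffeine_remaining(200, 15))  # 200 * 0.5**3 = 25.0 mg
```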

    • @[email protected]
      link
      fedilink
      7
      3 months ago

      Not the same, but I switched to tea mostly for aesthetic reasons, and after a brief adjustment period I'm finding it a lot more fun and varied than coffee drinking. And it's easier to find very-low-caffeine, or tasty zero-caffeine, teas of as many varieties as you can imagine.

      I’ll still have a social coffee every now and then, but anyway I’d recommend it, at least to check out. It’s like discovering scotch after a lifetime of beer drinking.

      • @[email protected]
        link
        fedilink
        4
        3 months ago

        Try explaining tea to others though.
        Every time I am on-site I get asked for two options: Coffee or water.

          • @[email protected]
            link
            fedilink
            4
            edit-2
            3 months ago

            I assume you are either not interested in loose tea or not there yet.

            Once you reach temperature sensitive teas (like japanese greens) that are additionally sensitive to hard water it quickly becomes difficult to brew tea at work/not at home.

            Personally I started to bring a 400ml thermos (about my usual cup), and on some days my 1L thermos.
            Both my thermoses keep tea brewed at 70°C warm (probably around 50°C) until the end of work, so temperature doesn't become an issue, but oxidation does. Greens tend to turn a faint brown and change their taste. Sometimes for the better, sometimes not.

            • @[email protected]
              link
              fedilink
              3
              3 months ago

              not interested in loose tea or not there yet.

              This strikes me as particularly funny, thank you, that is very accurate. I have dabbled in the leaf that is loose, mostly buying baggies from the bulk food store, so not particularly fresh (or high quality). But yeah I am trying to stick to the cheap stuff for now. I love how it’s so much less expensive than coffee!

              Friends keep sending me these boutique tea samples now that I'm drinking tea haha, so I do know what I'm missing

              • @[email protected]
                link
                fedilink
                3
                3 months ago

                Yeah, bagged tea is definitely cheaper compared to those more boutique teas.
                But you can get it cheaper in local tea shops or on sites like Yunnan Sourcing. But: shipping and import.

                • @[email protected]
                  link
                  fedilink
                  3
                  3 months ago

                  I’m not even a year into seriously being into tea, so I imagine I’ll just get more particular over time. I’m still working through a few boxes of various grocery store black and herbal teas, so maybe I’ll look around for something different when those start to run out.

                  I do really love a big pot of green tea while I’m working at my desk job.

    • @marcos
      link
      3
      3 months ago

      And LLMs don't land on the correct answer.

      • @Klear
        link
        8
        3 months ago

        I think this comic might predate the LLM craze.

        • @marcos
          link
          3
          3 months ago

          That's what people have been pointing out. The 60 hours of training should have been a dead giveaway.

          I hope the neurons use a logistic activation function. If it’s a saturating linear one, the result will still be full of surprises.

    • @Agent641
      link
      2
      3 months ago

      DECaf is a pseudo abbreviation for Dangerously and Extraordinarily Caffeinated.

      It has a higher KDR than a Panera charged lemonade.

  • @[email protected]
    link
    fedilink
    123
    edit-2
    3 months ago

    Agreed. If you need to calculate rectangles ML is not the right tool. Now do the comparison for an image identifying program.

    If anyone’s looking for the magic dividing line, ML is a very inefficient way to do anything; but, it doesn’t require us to actually solve the problem, just have a bunch of examples. For very hard but commonplace problems this is still revolutionary.
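
To put the comic's own example in code, the deterministic side is exact and needs no examples at all:

```python
def rectangle_area(width: float, height: float) -> float:
    # Exact and deterministic: no training data, no "very close to 12".
    return width * height

print(rectangle_area(3, 4))  # 12
```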

    • @[email protected]
      link
      fedilink
      English
      29
      3 months ago

      I think the joke is that the Jr. Developer sits there looking at the screen, a picture of a cat appears, and the Jr. Developer types “cat” on the keyboard then presses enter. Boom, AI in action!

      The truth behind the joke is that many companies selling “AI” have lots of humans doing tasks like this behind the scene. “AI” is more likely to get VC money though, so it’s “AI”, I promise.

    • @Mango
      link
      16
      3 months ago

      The correct tool for calculating the area of a rectangle is an elementary school kid who really wants that A.

    • @[email protected]
      link
      fedilink
      English
      13
      3 months ago

      Exactly. Explaining to a computer what a photo of a dog looks like is super hard. Every rule you can come up with has exceptions or edge cases. But if you show it millions of dog pictures and millions of not-dog pictures it can do a pretty decent job of figuring it out when given a new image it hasn’t seen before.
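
As a toy illustration of that "examples instead of rules" idea (entirely made-up two-number features, and a simple nearest-centroid rule standing in for a real neural net):

```python
# Learn from labeled examples instead of writing rules by hand.
# Features here are hypothetical (e.g. "ear roundness", "snout length");
# real image models are vastly bigger, but the principle is the same:
# fit to examples, then generalize to inputs you've never seen.

def centroid(points):
    """Average of a list of equal-length feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(examples):
    """examples: {label: [feature_vector, ...]} -> {label: centroid}"""
    return {label: centroid(pts) for label, pts in examples.items()}

def predict(model, x):
    """Return the label whose centroid is closest to x."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist2(model[label], x))

model = train({
    "dog": [(0.9, 0.8), (0.8, 0.9), (0.95, 0.7)],
    "not-dog": [(0.1, 0.2), (0.2, 0.1), (0.15, 0.3)],
})
print(predict(model, (0.85, 0.75)))  # dog
```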

    • @Dultas
      link
      9
      3 months ago

      Another problem is people using LLM like it’s some form of general ML.

    • @[email protected]
      link
      fedilink
      5
      3 months ago

      I think it’s still faster than actual solutions in some cases, I’ve seen someone train an ML model to animate a cloak in a way that looks realistic based on an existing physics simulation of it and it cut the processing time down to a fraction

      I suppose that’s more because it’s not doing a full physics simulation it’s just parroting the cloak-specific physics it observed but still
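
That surrogate-model trick, shrunk to a toy: sample an "expensive" simulation (a stand-in function below), fit a cheap approximation to the samples, then query the fit instead. All names and numbers here are illustrative:

```python
# Toy surrogate model: replace an "expensive" simulation with a cheap
# fit learned from its outputs. Real cloth surrogates use neural nets,
# but the speed-for-accuracy trade is the same idea -- and, like the
# cloak model, the fit only reproduces behavior it has already seen.

def expensive_simulation(x: float) -> float:
    # Stand-in for a costly physics step.
    return 2.0 * x + 1.0

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Sample the expensive simulation once, then use the cheap fit forever.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [expensive_simulation(x) for x in xs]
a, b = fit_linear(xs, ys)
print(a * 10.0 + b)  # surrogate prediction at x=10
```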

      • @[email protected]
        link
        fedilink
        2
        edit-2
        3 months ago

        I suppose that’s more because it’s not doing a full physics simulation it’s just parroting the cloak-specific physics it observed but still

        This. I’m sure to a sufficiently intelligent observer it would still look wrong. You could probably achieve the same thing with a conventional algorithm, it’s just that we haven’t come up with a way to profitably exploit our limited perception quite as well as the ML does.

        In the same vein, one of the big things I’m waiting on is somebody making a NN pixel shader. Even a modest network can achieve a photorealistic look very easily.

    • @[email protected]
      link
      fedilink
      78
      3 months ago

      the comic is about using a machine learning algorithm instead of a hand-coded algorithm. not about using chatGPT to write a trivial program that no doubt exists a thousand times in the data it was trained on.

      • @[email protected]
        link
        fedilink
        27
        3 months ago

        The strengths of Machine Learning are in the extremely complex programs.

        Programs no junior dev would be able to accomplish.

        So if the post can misrepresent the issue, then the commenter can do so too.

        • @[email protected]
          link
          fedilink
          27
          3 months ago

          Lol, no. ML is not capable of writing extremely complex code.

          It’s basically like having a bunch of junior devs cranking out code that they don’t really understand.

          ML for coding is only really good at providing basic bitch code that is more time intensive than complex. And even that you have to check for hallucinations.

          • @kurwa
            link
            15
            3 months ago

            To reiterate what the parent comment of the one you replied to said, this isn't about ChatGPT generating code; it's about using ML to create a nondeterministic algorithm. That's why in the comic the answer is only very close to 12, not exactly 12.

          • @BluesF
            link
            0
            3 months ago

            ML is not good for coding, it is good for approximately solving very complex problems.

        • @[email protected]
          link
          fedilink
          18
          3 months ago

          Yes that is what they are good at. But not as good as a deterministic algorithm that can do the same thing. You use machine learning when the problem is too complex to solve deterministically, and an approximate result is acceptable.

        • Pelicanen
          link
          fedilink
          16
          3 months ago

          I think the exact opposite, ML is good for automating away the trivial, repetitive tasks that take time away from development but they have a harder time with making a coherent, maintainable architecture of interconnected modules.

          It is also good for data analysis, for example when the dynamics of a system are complex but you have a lot of data. In that context, the algorithm doesn’t have to infer a model that matches reality completely, just one that is close enough for the region of interest.

        • @[email protected]
          link
          fedilink
          14
          3 months ago

          I strongly disagree. ML is perfect for small bullshit like “What’s the area of a rectangle” - it falls on its face when asked:

          Can we build a website for our security paranoid client that wants the server to completely refuse to communicate with users that aren’t authenticated as being employees… Oh, and our CEO requested a password recovery option on the login prompt.

          • @[email protected]
            link
            fedilink
            English
            9
            edit-2
            3 months ago

            I got interested and asked ChatGPT. It gave a middle-management answer.
            Guess we know who’ll be the first to go.

        • @[email protected]
          link
          fedilink
          3
          3 months ago

          The biggest high level challenge in any tech org is security and there’s no way you can convince me that ML can successfully counter these challenges

          “oh but it will but it will!”

          when

          “in the future”

          how long in the future

          “When it can do it”

          how will we know it can do it

          “When it can do it”

          cool.

          • Scew
            link
            English
            3
            3 months ago

            You probably wreck in chess. :)

    • @Klear
      link
      13
      3 months ago

      Nice, that saves the coffee.

  • @[email protected]
    link
    fedilink
    43
    3 months ago

    Ahh the future of dev. Having to compete with AI and LLMs, while also being forced to hastily build apps that use those things, until those things can build the app themselves.

    • @Klear
      link
      19
      edit-2
      3 months ago

      Let’s invent a thing inventor, said the thing inventor inventor after being invented by a thing inventor.

      • @SuperIce
        link
        English
        10
        3 months ago

        You could make a religion out of this.

    • @EdibleFriend
      link
      10
      3 months ago

      And also, as a developer, you have to deal with the way Star Trek just isn’t as good as it used to be.

      Because you’re all fucking nerds.

      (Me too tho)

    • @[email protected]
      link
      fedilink
      5
      3 months ago

      I mean if you have access but are not using Copilot at work you’re just slowing yourself down. It works extremely well for boilerplate/repetitive declarations.

      I’ve been working with third party APIs recently and have written some wrappers around them. Generally by the 3rd method it’s correctly autosuggesting the entire method given only a name, and I can point out mistakes in English or quickly fix them myself. It also makes working in languages I’m not familiar with way easier.

      AI for assistance in programming is one of the most productive uses for it.
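
The kind of repetitive wrapper where that autocomplete shines looks roughly like this (the API, endpoints, and class name are all hypothetical): after the first method or two, the rest follow mechanically from the name.

```python
# A hypothetical third-party API wrapper. Each method follows the same
# pattern, so after a couple of them the remainder are predictable from
# the method name alone -- exactly the boilerplate autocomplete handles.

import json
import urllib.request

class ExampleClient:
    def __init__(self, base_url: str, token: str):
        self.base_url = base_url.rstrip("/")
        self.token = token

    def _get(self, path: str) -> dict:
        req = urllib.request.Request(
            f"{self.base_url}{path}",
            headers={"Authorization": f"Bearer {self.token}"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # After two or three of these, the pattern is obvious:
    def get_user(self, user_id: str) -> dict:
        return self._get(f"/users/{user_id}")

    def get_order(self, order_id: str) -> dict:
        return self._get(f"/orders/{order_id}")

    def get_invoice(self, invoice_id: str) -> dict:
        return self._get(f"/invoices/{invoice_id}")
```

Each new `get_*` method is a one-line variation on the last, which is the pattern-completion a tool like Copilot is good at.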

      • @[email protected]
        link
        fedilink
        4
        3 months ago

        Oh I use Copilot daily. It fills the gaps for the repetitive stuff like you said. I was writing Stories in a Storybook.js project once and was able to make it auto-suggest the remainder of my entire component states after writing 2-3. They worked out of the gate too with maybe a single variable change. Initially, I wasn’t even going to do all of them in that coding session just to save time and get it handed off, but it was giving me such complete suggestions that I was able to build every single one out with interaction tests and everything.

        Outside of use cases like that and getting very general content, I think AI is a mess. I’ve worked with ChatGPT’s v3.5-4 API a ton and it’s unpredictable and hard to instruct sometimes. Prompts and approaches that worked 2 weeks ago, will now suddenly give you some weird edge case that you just can’t get it to stop repeating—even when using approaches that worked flawlessly for others. It’s like trying to patch a boat while you’re in it.

        The C suite people and suits jumped on AI way too early and have haphazardly forced it into every corner. It’s become a solution searching for a problem. The other day, a friend of mine said he had a client that casually asked how they were going to use AI on the website they were building for them, like it was just a commonplace thing. The buzzword has gotten ahead of itself and now we’re trying to reel it back down to earth.

        • @[email protected]
          link
          fedilink
          1
          edit-2
          3 months ago

          That was a pretty interesting read. However, I think it’s attributing correlation and causation a little too strongly. The overall vibe of the article was that developers who use Copilot are writing worse code across the board. I don’t necessarily think this is the case for a few reasons.

          The first is that Copilot is just a tool and just like any tool it can easily be misused. It definitely makes programming accessible to people who it would not have been accessible to before. We have to keep in mind that it is allowing a lot of people who are very new to programming to make massive programs that they otherwise would not have been able to make. It’s also going to be relied on more heavily by those who are newer because it’s a more useful tool to them, but it will also allow them to learn more quickly.

          The second is that they use a graph with an unlabeled y-axis to show an increase in reverts, and never indicate whether it's raw lines of code or a percentage of lines of code. This matters because Copilot lets people write a fuck ton more code. Like, it legitimately makes me write at least 40% more. Any increase in reverts is simply a function of writing more code. I actually feel like it leads to me reverting a lesser percentage of lines of code, because it forces me to reread the code the AI outputs multiple times to ensure its validity.

          This ultimately comes down to the developer who’s using the AI. It shouldn’t be writing massive complex functions. It’s just an advanced, context-aware autocomplete that happens to save a ton of typing. Sure, you can let it run off and write massive parts of your code base, but that’s akin to hitting the next word suggestion on your phone keyboard a few dozen times and expecting something coherent.

          I don’t see it much differently than when high level languages first became a thing. The introduction of Python allowed a lot of people who would never have written code in their life to immediately jump in and be productive. They both provide accessibility to more people than the tools before them, and I don’t think that’s a bad thing even if there are some negative side effects. Besides, in anything that really matters there should be thorough code reviews and strict standards. If janky AI generated code is getting into production that is a process issue, not a tooling issue.

  • The sad thing is that no amount of mocking the current state of ML today will prevent it from taking all of our jobs tomorrow. Yes, there will be a phase where programmers, like myself, who refuse to use LLM as a tool to produce work faster will be pushed out by those that will work with LLMs. However, I console myself with the belief that this phase will last not even a full generation, and even those collaborative devs will find themselves made redundant, and we’ll reach the same end without me having to eliminate the one enjoyable part of my job. I do not want to be reduced to being only a debugger for something else’s code.

    Thing is, at the point AI becomes self-improving, the last bastion of human-led development will fall.

    I guess mocking and laughing now is about all we can do.

    • @[email protected]
      link
      fedilink
      24
      3 months ago

      at the point AI becomes self-improving

      This is not a foregone conclusion. Machines have mostly always been stronger and faster than humans, because humans are generally pretty weak and slow. Our strength is adaptability.

      As anyone with a computer knows, if one tiny thing goes wrong it messes up everything. They are not adaptable to change. Most jobs require people to be adaptable to tiny changes in their routine every day. That’s why you still can’t replace accountants with spreadsheets, even though they’ve existed in some form for 50 years.

      It’s just a tool. If you don’t want to use it, that’s kinda weird. You aren’t just “debugging” things. You use it as a junior developer who can do basic things.

      • This is not a foregone conclusion.

        Sure, I agree. There’s many a slip twixt the cup and the lip. However, I’ve seen no evidence that it won’t happen, or that humans hold any inherent advantage over AI (as nascent as it may be, in the rude forms of LLMs and deep learning they’re currently in).

        If you want something to reflect upon, your statement about humans' advantage in adaptability sounds exactly like the previous generation of grasping at inherent human superiority that would be our salvation: creativity. It wasn't too long ago that people claimed machines would never be able to compose a sonnet, or paint a “Starry Night,” and yet creativity has been one of the first walls to fall. And anyone claiming that ML only copies and doesn't produce anything original has obviously never studied the history of fine art.

        Since no one would now claim that machines will never surpass humans in art, the goalposts have shifted to adaptability? This is an even easier hurdle. Computer hardware is evolving enormously faster than human hardware. With the exception of the few brief years at the start of our lives, computer software is more easily modified, updated, and improved than our poor connective neural networks. It isn't even a competition: computers are vastly better equipped to adapt faster than we are. As soon as adaptability becomes a priority of focus, they'll easily exceed us.

        I do agree, there are a lot of ways this future could fail to come to pass. Personally, I think it's most likely we'll drive ourselves extinct - or, at least, the society able to continue creating computers. However, we may hit hardware limits. Quantum computing could stall out. Or we may find that the way we create AI cripples it the same way we are crippled, with built-in biases, inefficiencies in thinking, or simply resource demands too high for complexity much beyond what two humans can create with far less effort and very little motivation.

        • @[email protected]
          link
          fedilink
          5
          3 months ago

          creativity has been one of the first walls to fall

          Uh, no? Unless you think unhinged nonsense without thought is “creative”. Right now, these programs are like asking a particularly talented insane person to draw something for you.

          Creativity is not just creation. It’s creation with purpose. You can “create art” by breaking a vase. That doesn’t mean it’s good art.

          • And, yet, I’ve been to an exhibit at the Philadelphia Museum of Fine Art that consist of an installation that included a toilet, among other similarly inspired works of great art.

            On a less absurd note, I don't have much admiration for Pollock, either, but people pay absurd amounts of money for his stuff, too.

            An art history class I once took posed the question: if you find a clearing in a wood with a really interesting pile of rocks that look suspiciously man-made, but you don’t know if a person put it together or if it was just a random act of nature… is it art? Say you’re convinced a person created it and so you call it art, but then discover it was an accident of nature, does it stop being art?

            I fail to see any great difference. AI created art is artificial, created with the intention of producing art; is it only not art because it wasn’t drawn by a human?

            • @[email protected]
              link
              fedilink
              2
              3 months ago

              If you’re talking about

              https://en.wikipedia.org/wiki/Fountain_(Duchamp)

              that’s a seminal work of avant-garde art. You are still talking about it 100 years later. It’s obviously great art.

              Art is a work of visual, auditory, or written media that makes you feel emotion. That’s it. Does this pile of rocks make you feel happy or sad or anything? Then it’s art.

              AI makes pictures like a camera does. It doesn’t make it art unless you make something that evokes emotion.

              • We’re saying the same thing. AI can create art. My point was that we used to claim that art was a domain that was unassailable by machines, and this obviously is not true. So now, humans - or the particular human to whom I was replying - had a new goalpost: adaptabiility.

                We’ll keep coming up with new goalposts where “humans have an edge” that will keep us relevant and ascendant over machines, and irreplaceable. I believe we’ll run out of goalposts faster than many people would like.

                You know, there is one small other hope I have: that, despite how we’ve raised them, our children will be better than us, and will stop the cycle of wealth concentration. It’s unlikely, but it’s the only chance I see.

        • @someacnt_
          link
          1
          3 months ago

          I do have one notion of where we might keep an edge. The human brain is quite optimized for energy usage, a consequence of natural selection. Meanwhile, IIRC, computers are optimized for speed, so they often waste energy. Now let's see where this goes - will they ever be able to operate without guzzling energy?

    • Doc Avid Mornington
      link
      fedilink
      English
      15
      3 months ago

      Well, we could end capitalism, and demand that AI be applied to the betterment of humanity, rather than to increasing profits, enter a post-scarcity future, and then do whatever we want with our lives, rather than selling our time by the hour.

    • @[email protected]
      link
      fedilink
      1
      3 months ago

      The best part is that dumbass devs are actively working on self improving AI that will take their jobs.

    • v_krishna
      link
      fedilink
      English
      55
      3 months ago

      Did you just post your open ai api key on the internet?

      • @[email protected]
        link
        fedilink
        21
        edit-2
        3 months ago

        Nah, this is a meme post about using chatgpt to check even numbers instead of simple code.

        Same joke as the OP, different format.
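
For the record, the "simple code" half of that joke is one line:

```python
def is_even(n: int) -> bool:
    # No API key, no tokens, no network latency.
    return n % 2 == 0

print(is_even(4), is_even(7))  # True False
```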

      • @lawrence
        link
        17
        edit-2
        3 months ago

        Let’s put this free OpenAI API key here in ASCII format, just for the sake of history and search engine health… 😂

        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

        But seriously, I hope they have already changed it.

      • konalt
        link
        12
        3 months ago

        After a small test, it doesn’t work.

    • @thesporkeffect
      link
      1
      3 months ago

      Thank you for your concern everyone I did not create this image

  • MxM111
    link
    fedilink
    31
    3 months ago

    Well, if training is included, then why is it not included for the developer? From the first days of his life?

      • @[email protected]
        link
        fedilink
        4
        3 months ago

        Sort of… If the dev didn’t pay for their training, they wouldn’t need as big of a wage to pay off their training debt (the usual scenario I’d wager).

        So in a way the company is currently paying off the debt for the Devs training, most of the time.

      • MxM111
        link
        fedilink
        3
        3 months ago

        The company OpenAI also paid for LLM training and then sell LLM to users.

    • @ilinamorato
      link
      -3
      3 months ago

      When did the training happen? The LLM is trained for the task starting when the task is assigned. The developer’s training has already completed, for this task at least.

      • Deceptichum
        link
        fedilink
        14
        edit-2
        3 months ago

        No? The LLM was trained before you ever even interacted with it. They’re not going to train a model on the fly each time you want to use it, that’s fucking ridiculous.

        • @ilinamorato
          link
          5
          3 months ago

          That’s the joke that the comic is making. Whether or not it’s reflective of reality, they’re joking about a company training a new AI model to calculate the area of rectangles.

  • @[email protected]
    link
    fedilink
    English
    16
    3 months ago

    I see no mention of Hitler nor abusive language, are you sure that’s a real AI? /s :-P

  • @Medli
    link
    16
    3 months ago

    To be fair, the human had many more years of training than the AI to be fit to even attempt this problem.

    • wagesj45
      link
      fedilink
      7
      3 months ago

      And hundreds of thousands of years of evolution pre-training the base model that their experience was layered on top of.

      • R0cket_M00se
        link
        English
        6
        3 months ago

        Exactly, people don’t seem to understand that our intelligence/problem solving ability is based on two major factors.

        1. Our evolutionary lineage, pattern recognition and instinct, etc.

        2. Our nurtured upbringing which creates the “training data” we need to accomplish specific tasks. Even if that upbringing isn’t holistic it would still require a significant amount of training to do anything programming-wise that the “three minutes and a coffee” side of the panel is completely ignoring.

        Without these a human is useless, we have training data as well, it’s just organic and learned over a lifetime in addition to the billions of years of life evolving on this planet.

  • @IronicDeadPan
    link
    10
    3 months ago

    I don’t know why, but “mechanical turk” keeps cropping up when I think about this sort of stuff.

  • @NounsAndWords
    link
    8
    3 months ago

    I’m hoping even a junior dev has had more than 60 hours of training.

  • @[email protected]
    link
    fedilink
    8
    3 months ago

    Yea, but does the AI ask me why “x” doesn’t work as a multiplication operator 14 times while complaining about how this would be easier in Rust?

  • @[email protected]
    link
    fedilink
    5
    3 months ago

    I have used a hammer to weld these two aspirins together and I don’t understand why they aren’t now a bigger aspirin

    The creator of the comic

  • Ook the Librarian
    link
    2
    edit-2
    3 months ago

    But which consumes more energy? Like really. I’m betting AI does, but some tasks might be close.

    • @[email protected]
      link
      fedilink
      5
      edit-2
      3 months ago

      The future unifying metric for productivity should be joules per line of code. If you cost more than a machine, you get laid off.