• @[email protected]
    6 points · 1 year ago

    People have been saying programming would become redundant since the first 4GLs came out in the 1980s.

    Maybe it’ll actually happen some day… but I see no sign of it so far.

    • @orclev
      12 points · 1 year ago

      Yep, had this argument a bunch. Conversation basically goes:

      Them: All you need is a description of the problem, and then it can generate code to solve it.

      Me: But the description has to be detailed enough to cover all the edge cases.

      Them: Well yeah.

      Me: You know what we call a description of a problem detailed enough to cover all the edge cases?

      Them: What?

      Me: A program. And the people that know how to write those descriptions are called programmers.
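
      A toy illustration (a hypothetical example of mine, not from any real spec): even a requirement as simple as “split a full name into first and last name” turns into edge-case handling the moment you state it precisely enough to execute.

      ```python
      # "Split a full name into first and last name" sounds like a one-line spec,
      # but written precisely enough to cover the edge cases, it is already a program.
      def split_name(full_name: str) -> tuple[str, str]:
          parts = full_name.strip().split()
          if not parts:            # edge case: empty or whitespace-only input
              return ("", "")
          if len(parts) == 1:      # edge case: mononyms ("Cher")
              return (parts[0], "")
          # edge case: multi-word surnames, suffixes ("Jr."), honorifics...
          # every one of these is a decision the "description" has to make.
          return (parts[0], " ".join(parts[1:]))
      ```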

      • @deong
        1 point · 1 year ago

        Devil’s advocate, though. With things like 4GLs, it was still all on the human to come up with the detailed spec. Best-case scenario, you worked very hard, wrote a lot of things down, generated the code, saw that it didn’t work, and then ???. That “???” at the end was you as the programmer sitting alone in a room, trying to figure out what a non-responsive black box might have wanted you to say instead.

        It’s qualitatively different if you can just talk to the black box as though it were a programmer. It’s less of a black box at that point. It understands your language, and it understands the code. So you can start with the spec, but when something inevitably doesn’t work, the “???” step no longer means figuring out on your own, with no help, what you did wrong. You can ask it questions and make suggestions. You can run experiments. Today’s LLMs hit the wall pretty quickly there, and maybe they always will. There’s certainly the viewpoint that “all they do is model text and they can’t really learn anything”.

        I think that’s fundamentally wrong. I’m a pretty solid programmer. I have a PhD in Computer Science, and I’ve worked as a software engineer and an architect throughout a pretty long career. And everything I’ve ever learned has basically come through language: through reading, writing, speaking, and listening to English and a few other languages. To say that I can learn what I’ve learned, but that it’s fundamentally impossible for a robot to learn it, is to resort to mysticism. At some point we will have AIs that can do what I do today. I think that’s inevitable.

        • @orclev
          1 point · 1 year ago

          Well, that particular conversation typically happens in relation to something like a business rules engine, or sometimes one of those drag-and-drop visual programming languages that everyone touts as letting you get rid of programmers (but that in reality just limit you to a really hard-to-work-with programming language). There’s a lot of overlap with the current LLM-based hype, though.
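
          A minimal sketch of why (hypothetical, mine, not any real engine’s API): a “rule” in such an engine is just an if-statement wearing a costume, and authoring rules is still programming.

          ```python
          # A "no-code" business rule is still a program, just in a clumsier syntax.
          # Hypothetical rules-engine sketch: this rule is exactly an if-statement.
          rule = {
              "when": {"field": "order_total", "op": ">", "value": 100},
              "then": {"action": "apply_discount", "percent": 10},
          }

          def evaluate(rule: dict, record: dict):
              ops = {
                  ">": lambda a, b: a > b,
                  "<": lambda a, b: a < b,
                  "==": lambda a, b: a == b,
              }
              cond = rule["when"]
              if ops[cond["op"]](record[cond["field"]], cond["value"]):
                  return rule["then"]
              return None

          # The moment the rules need nesting, loops, or shared state, you are
          # programming again, just without the tooling a real language gives you.
          print(evaluate(rule, {"order_total": 150}))  # {'action': 'apply_discount', 'percent': 10}
          ```
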

          If we ever do get an actual AI, then yes, AI will probably end up writing most of the programs, although it’s possible programmers will still exist in some capacity maybe for the purpose of creating flow charts or something to hand to the AIs. But we’re a long way off from a true AI, so everyone acting like it’s going to happen any day now is as laughable as everyone promising cold fusion was going to happen any day now back in the 70s. Ironically I think we are more likely to see a workable cold fusion before we see true AI, some of the hot fusion experiments happening lately are very promising.