• @[email protected]
    22 · 1 year ago

    Who says it ends here? We’ve made tremendous progress in a short time. Ten years ago it was absolutely unthinkable that we’d be where we are right now, generating these amazing images from text on consumer hardware, with AI writing text that could totally fool humans. Even as someone working in the field, I was fairly sceptical 5-6 years ago that we’d get here this fast.

    • @[email protected]
      7 · 1 year ago

      Agree 100%.

      We are at the start and the progress is incredibly fast and accelerating.

      Even the way image generation alone has improved within the last year is something I wouldn’t have believed.

      • @[email protected]
        2 · 1 year ago

        Yeah, indeed. Back then I was actually working with image generation and GANs, and the field was just starting to take off. A year or so later, StyleGAN would absolutely blow my mind: generating realistic 1024x1024 images while I was still bumbling about with a measly 64x64 pixels. But even then I didn’t foresee where this was going.

        • newIdentity
          1 · 1 year ago (edited)

          Or remember when everyone was impressed with GauGAN in 2019/2020? We would never have guessed that just 2 or 3 years later we’d have multiple competing models available.

          Or when DALL-E Mini was all the hype last year and everyone was impressed with it?

          Now there are even the first experiments that turn a prompt into video, and they look much like the earlier iterations of image diffusion models.

    • @[email protected]
      English · 1 · 1 year ago (edited)

      While some underrepresented domains have made massive strides, this GPT thing has done relatively little for data science.

      The important thing is that “what AI/ML can and cannot do” is not changing that much. Its successful application is what’s changed. Making AI libraries more accessible is huge, and leads to stuff like this. But under the hood, OpenAI doesn’t do much differently than other AI tools; it’s just easier to use yourself. You can do more, faster, as computers get faster, but that seems limited by the endgame of Moore’s Law anyway.

      OpenAI runs on supercomputers now, and it’ll continue to run on supercomputers in the future. Instead of getting better, it has started to get worse at many things. Experts have always had a fairly good grasp of where it’ll end: there are things AI was always expected to do better than humans, and things it never will.

      I mean, I expected AI image generation and better text quality. But I also expected the limits it currently has. And I’ve only done a little directly in the field.

      • @[email protected]
        1 · 1 year ago (edited)

        But the fact that it can do so much is an awesome (and maybe scary) result in and of itself. These LLMs can write working code examples, write convincing stories, give advice, and solve simple problems quite reliably, all from just learning to predict the next word. I feel like people are moving the goalposts way too quickly, focusing so much on the mistakes it makes instead of the impressive feats that have been achieved. Having AI do all this was simply unthinkable a few years ago. And yes, OpenAI currently uses a lot of hardware, and ChatGPT might indeed have gotten worse. But none of that changes what has been achieved and how impressive it is.
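        The “learning to predict the next word” bit can be made concrete with a toy sketch. This is emphatically not how GPT works internally (real models use transformer networks over subword tokens); it’s just a minimal bigram counter to illustrate what next-word prediction means:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then emit the most frequent follower. Purely illustrative --
# real LLMs learn these statistics with neural networks, not raw counts.
corpus = "the cat sat on the mat and the cat slept".split()

follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return follow[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice, "mat" only once
```

        Scaling this idea up, with long contexts and learned representations instead of raw counts, is loosely what produces the surprising LLM behaviour described above.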

        Maybe it’s because of all these overhyping clickbait articles that reality seems disappointing. As someone in the field who’s always been cynical about what would be possible, I can’t be anything other than impressed with the rate of progress. I was wrong with my predictions 5 years ago, and who knows where we’ll go next.