• @gaylord_fartmaster
    -9 · 10 months ago

    Machine learning could find those strengths and weaknesses and learn to work around them likely better than a human could. It’s just trial and error. There’s nothing about the human brain that makes it better suited to understanding the inner logic of an LLM.

    • Blóðbók
      11 · 10 months ago

      For that you need a program to judge the quality of output given some input. If we had that, LLMs could just improve themselves directly, bypassing any need for prompt engineering in the first place.
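
      If such a judge existed, the self-improvement loop would be almost trivial; a minimal sketch, assuming hypothetical generate and judge functions (not any real API):

      ```python
      # Minimal sketch, assuming a program that can judge output quality exists.
      # `generate(prompt)` and `judge(prompt, output)` are hypothetical stand-ins.
      from typing import Callable

      def best_of_n(prompt: str,
                    generate: Callable[[str], str],
                    judge: Callable[[str, str], float],
                    n: int = 8) -> str:
          """Sample n candidate outputs and keep the one the judge scores highest."""
          candidates = [generate(prompt) for _ in range(n)]
          return max(candidates, key=lambda output: judge(prompt, output))
      ```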

      The reason prompt engineering is a thing is that people know what the expected and desired output is and what isn't, and can adapt their interactions with the tool accordingly, a trait uniquely associated with adaptive complex systems.

      • @[email protected]
        link
        fedilink
        English
        210 months ago

        > can adapt their interactions with the tool accordingly

        If we could have programmed around this beforehand, then people who can and can't Google wouldn't be a thing: Google would just know what results to bring up without the search-curse-refine-repeat cycle. Prompt engineering seems like an extension of Google search-fu.

      • @gaylord_fartmaster
        -1 · 10 months ago

        > If we had that, LLMs could just improve themselves directly, bypassing any need for prompt engineering in the first place.

        Yep, exactly, and it's been studied and put into practice effectively already.

        Prompt tuning is not the only way to fine-tune the output of an LLM, and since the goal for most is going to be to make them usable by anyone, that's going to be the least desirable route.

        • Blóðbók
          3 · 10 months ago

          I know LLMs are used to grade LLMs. That isn't solving the problem; it's just better than nothing because there are no alternatives. There aren't enough humans willing to endlessly sit and grade LLM responses.
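
          That grading setup is roughly this kind of thing; a sketch, assuming a hypothetical call_llm function for whichever model acts as the grader:

          ```python
          # Sketch of "LLM grades LLM". `call_llm(prompt)` is a hypothetical stand-in.
          RUBRIC = (
              "Rate the following answer from 1 to 10 for correctness and helpfulness. "
              "Reply with a single integer.\n\nQuestion: {question}\nAnswer: {answer}"
          )

          def llm_judge_score(question: str, answer: str, call_llm) -> int:
              """Ask a grader model to score another model's answer."""
              reply = call_llm(RUBRIC.format(question=question, answer=answer))
              try:
                  return int(reply.strip())
              except ValueError:
                  return 0  # treat an unparseable grade as the lowest score
          ```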

          • @[email protected]
            link
            fedilink
            English
            -1
            edit-2
            10 months ago

            Yes there are. In addition to the thumbs-up/down buttons that most people don't use, you can also score based on metrics like "did the person try to rephrase the same question again?" (an indication of a bad response), etc., from data gathered during actual use (which ChatGPT does use for training).
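
            As a sketch, one such implicit signal could be computed from usage logs like this (the log format and the similarity threshold are assumptions):

            ```python
            # Consecutive user turns that are near-duplicates suggest the previous
            # answer missed the mark. Uses only the standard library.
            from difflib import SequenceMatcher

            def rephrase_penalty(user_turns, threshold=0.6):
                """Count user messages that closely resemble the previous one."""
                penalties = 0
                for prev, curr in zip(user_turns, user_turns[1:]):
                    similarity = SequenceMatcher(None, prev.lower(), curr.lower()).ratio()
                    if similarity >= threshold:
                        penalties += 1
                return penalties

            # The second turn is a near-duplicate of the first, so this prints 1.
            print(rephrase_penalty([
                "how do I reverse a list in python",
                "how can I reverse a python list",
                "thanks, that worked",
            ]))
            ```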

            • Blóðbók
              1 · 10 months ago

              Firstly, I’m willing to bet only a minority of users regularly use those buttons. Secondly, you’re talking about the most popular LLM(s) out there. What about all the other LLMs almost nobody is using but are still being developed/researched? Where do they find humans willing to sit and rate all the garbage their LLM puts out?

    • @WhatAmLemmy
      7 · 10 months ago

      Congrats. You don’t understand the difference between a statistical model and a human.

      I expected more from a gaylord fartmaster. 2/10.

      • @gaylord_fartmaster
        2 · 10 months ago

        In what way?

        Why couldn't even a basic reinforcement learning model be used to brute-force "figure out what input gives the desired output X"?
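
        Even a dumb search over prompt variations gets at the idea, given some scoring function; a sketch with hypothetical generate and score stand-ins (a real setup could swap the random search for an actual RL policy):

        ```python
        # Brute-force a prompt prefix that maximizes an assumed scoring function.
        # `generate(prompt)` and `score(output)` are hypothetical stand-ins.
        import random

        HINTS = ["Think step by step.", "Answer concisely.", "You are an expert.",
                 "Show your reasoning.", "Double-check your answer."]

        def search_prompt(task, generate, score, iterations=100):
            """Keep the best-scoring combination of hint prefixes for a task."""
            best_prompt, best_score = task, score(generate(task))
            for _ in range(iterations):
                prefix = " ".join(random.sample(HINTS, k=random.randint(1, len(HINTS))))
                candidate = f"{prefix} {task}"
                candidate_score = score(generate(candidate))
                if candidate_score > best_score:
                    best_prompt, best_score = candidate, candidate_score
            return best_prompt
        ```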

        • @WhatAmLemmy
          1 · 10 months ago

          Because the training data is man-made, so it will never be 100% accurate, and critical thought is required to set the desired output and to understand whether the output makes sense?

          Statistical models find patterns in ones and zeros. They don't apply critical thought.

    • @jacksilver
      2 · 10 months ago

      Actually, most (I think all, but I'm not 99% positive) machine learning models are incapable of doing straight arithmetic. Due to the way they are built, ML models, including deep learning models, can only learn relationships within a limited input space.

      This is most apparent when you test LLMs on different arithmetic operations:

      • For addition, it does okay up until you get to millions or billions
      • Multiplication, I think, breaks at the 100/1000 level
      • Exponents break almost immediately
      • Give it decimal values and it also breaks relatively quickly for any operation.

      This has to do with the fact that LLMs are effectively multiple layers of linear functions, so higher-order operations break down faster.
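
      A quick way to see this is to sweep operand sizes per operation and measure exact-match accuracy, roughly like the sketch below (ask_llm is a stand-in for whatever model is being tested):

      ```python
      # Probe arithmetic accuracy as operand size grows.
      # `ask_llm(prompt)` is a hypothetical stand-in returning the model's text reply.
      import random

      def arithmetic_accuracy(ask_llm, op, digits, trials=50):
          """Exact-match accuracy for one operation on `digits`-digit operands."""
          ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b, "^": lambda a, b: a ** b}
          correct = 0
          for _ in range(trials):
              a = random.randint(10 ** (digits - 1), 10 ** digits - 1)
              b = random.randint(10 ** (digits - 1), 10 ** digits - 1)
              if op == "^":
                  b = random.randint(2, 5)  # keep exponents small; they fail early anyway
              reply = ask_llm(f"What is {a} {op} {b}? Reply with only the number.")
              correct += reply.strip().replace(",", "") == str(ops[op](a, b))
          return correct / trials

      # e.g. compare arithmetic_accuracy(ask_llm, "+", 3) with
      # arithmetic_accuracy(ask_llm, "+", 9) to see where addition starts to fail.
      ```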