Some AI models get more accurate at maths if you ask them to respond as if they are a Star Trek character, ML engineers say

Researchers asking a chatbot to optimize its own prompts found it was best at solving grade-school math when acting like it was on Star Trek.

    • TrainsAreCool@lemmy.one · 1 year ago

      If you ask them to respond like a politician, they answer all your questions with something completely different.

  • Merlin404 · 1 year ago

    It's because it doesn't try to answer correctly, but rather produces what it thinks you want to see/read 🤷

  • Rednax · 1 year ago (edited)

    It is only logical that an algorithm trained on the ways of a Vulcan is precise and accurate in its operation and communication. Vastly more fascinating are the results when you ask it to behave like a human.

  • BigMikeInAustin · 1 year ago

    Doh. This says to have the AI write the prompt for you, but it doesn’t give any examples of doing that.

    I don’t want to get into a rabbit hole looking up examples from the wide internet.

  • AutoTL;DR@lemmings.worldB · 1 year ago

    This is the best summary I could come up with:


    A study attempting to fine-tune prompts fed into a chatbot model found that, in one instance, asking it to speak as if it were on Star Trek dramatically improved its ability to solve grade-school-level math problems.

    “It’s both surprising and irritating that trivial modifications to the prompt can exhibit such dramatic swings in performance,” the study authors Rick Battle and Teja Gollapudi at software firm VMware in California said in their paper.

    “Among the myriad factors influencing the performance of language models, the concept of ‘positive thinking’ has emerged as a fascinating and surprisingly influential dimension,” Battle and Gollapudi said in their paper.

    Their study found that in almost every instance, automatically optimized prompts surpassed hand-written attempts to nudge the AI with positive thinking, suggesting machine learning models are still better at writing prompts for themselves than humans are.

    The prompt then asked the AI to include these words in its answer: “Captain’s Log, Stardate [insert date here]: We have successfully plotted a course through the turbulence and are now approaching the source of the anomaly.”
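    The "Captain's Log" line above is the only piece of the optimized prompt the summary reproduces. As a rough illustration of how such a persona prefix might be attached to a chat-style request, here is a minimal sketch; the system prefix wording and the message layout are assumptions for illustration, not the study's actual prompt or code:

    ```python
    # Hypothetical persona prefix -- the study's optimizer discovered its own
    # wording, which the summary above does not reproduce.
    SYSTEM_PREFIX = "You are a Starfleet officer aboard a starship."

    # Quoted from the article: the optimized prompt asked the model to open
    # its answer with this line.
    ANSWER_OPENER = (
        "Captain's Log, Stardate [insert date here]: We have successfully "
        "plotted a course through the turbulence and are now approaching "
        "the source of the anomaly."
    )

    def build_messages(question: str) -> list[dict]:
        """Wrap a grade-school math question in the persona prompt."""
        return [
            {"role": "system", "content": SYSTEM_PREFIX},
            {
                "role": "user",
                "content": f"{question}\nBegin your answer with: {ANSWER_OPENER}",
            },
        ]

    messages = build_messages("What is 12 * 7?")
    ```

    The resulting message list could then be sent to whatever chat model is being benchmarked; the study's point is that swapping this kind of prefix in and out measurably moved the math scores.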

    Axel Springer, Business Insider’s parent company, has a global deal to allow OpenAI to train its models on its media brands’ reporting.


    The original article contains 820 words, the summary contains 198 words. Saved 76%. I’m a bot and I’m open source!

    • meeeeetch · 1 year ago

      We’ve finally figured out how to trick the computer that’s bad at math into being less bad at math.