• Otter
    link
    fedilink
    English
    1271 day ago

    I went to grab Bobby Tables and found this new variation

    • Flax
      link
      fedilink
      English
      32 hours ago

      Isn’t it actually illegal to call your child “null” or “nil” in some places

    • @[email protected]
      link
      fedilink
      2612 hours ago

      Why hope they sanitize their inputs?

      Why are they trusting an AI that cant even do math to give notes to tests?

      • @[email protected]
        link
        fedilink
        42 hours ago

        The problem with LLM AIs Ous that you can’t sanitize the inputs safely. There is no difference between the program (initial prompt from the developer) and the data (your form input)

          • @[email protected]
            link
            fedilink
            22 hours ago

            You can try, but you can’t make it correct. My ideal is to write code once that is bug-free. That’s very difficult, but not fundamentally impossible. Especially in small well-scrutinized areas that are critical for security it is possible with enough care and effort to write code with no security bugs. With LLM AI tools that’s not even theoretically possible, let alone practical. You will just need to be forever updating your prompt to mitigate the free latest most fashionable prompt injections.

      • @[email protected]
        link
        fedilink
        English
        1011 hours ago

        Because it’s the most efficient. With students handing in AI theses, it’s only sensible to have teachers use AI to grade them. No we only need to have teachers use AI to create exam questions and education becomes a fully automated process. Then everyone can go home early.