• @LixWindoz
    link
    73 hours ago

    Here’s a list of reasons this is not advisable:

    [object Object]

  • @[email protected]
    link
    fedilink
    139 hours ago

    I did this to myself last week on a new project. Spent an hour trying to track down at what point in my code the data from the database got converted to [Object object]. Finally decided to check the db itself and realized that the [Object object] was coming from inside the house the whole time and the error in my code was when the entry was being written smh

    • @[email protected]
      link
      fedilink
      41 hour ago

      Sanity checks

      Always, always check if your assumptions are true

      • am i even running the function?
      • is this value what i think it is?
      • what is responsible for loading this data, and does it work as expected?
      • am i pointed at the right database?
      • is my configuration set and loaded in correctly?
  • Mr. Satan
    link
    fedilink
    lietuvių kalba
    1515 hours ago

    Me, the dev: “Nobody reported this as a problem… Ok, don’t care, moving on.” Also, if I can’t reproduce it, I can’t fix it, no point in wasting time more than that.

  • Otter
    link
    fedilink
    English
    1281 day ago

    I went to grab Bobby Tables and found this new variation

    • Flax
      link
      fedilink
      English
      32 hours ago

      Isn’t it actually illegal to call your child “null” or “nil” in some places

    • @[email protected]
      link
      fedilink
      2613 hours ago

      Why hope they sanitize their inputs?

      Why are they trusting an AI that cant even do math to give notes to tests?

      • @[email protected]
        link
        fedilink
        43 hours ago

        The problem with LLM AIs Ous that you can’t sanitize the inputs safely. There is no difference between the program (initial prompt from the developer) and the data (your form input)

          • @[email protected]
            link
            fedilink
            22 hours ago

            You can try, but you can’t make it correct. My ideal is to write code once that is bug-free. That’s very difficult, but not fundamentally impossible. Especially in small well-scrutinized areas that are critical for security it is possible with enough care and effort to write code with no security bugs. With LLM AI tools that’s not even theoretically possible, let alone practical. You will just need to be forever updating your prompt to mitigate the free latest most fashionable prompt injections.

      • @[email protected]
        link
        fedilink
        English
        1011 hours ago

        Because it’s the most efficient. With students handing in AI theses, it’s only sensible to have teachers use AI to grade them. No we only need to have teachers use AI to create exam questions and education becomes a fully automated process. Then everyone can go home early.

      • @marcos
        link
        91 day ago

        That “most likely no one is bothered” part is correct, though.

      • @[email protected]
        link
        fedilink
        English
        -151 day ago

        The web is a total shit show, but if you can provide a single example of an established website which will cause a dev any real grief because you enter [object Object] into a form I’ll send you $100.00.

        • @Agrivar
          link
          251 day ago

          If you can send me proof of your sense of humor, I’ll send you a big fat nothing.

        • @buddascrayon
          link
          1524 hours ago

          Well, now I have something fun to do over the weekend.