• @_stranger_ · 9 points · 3 months ago

    I watched a coworker copy-paste a class, maybe 20 lines long, into ChatGPT to reformat it. We’re doomed.

    • @[email protected]OP
      link
      fedilink
      English
      5
      edit-2
      3 months ago

      Good luck checking for hallucinations with this approach.

      I used to use an LLM to fill out forms with personal data: the LLM always tries to imagine new people, amalgamations of correct names from the DB, new forms, imaginary places of birth, nonexistent false data. Weeding out these errors is hard and usually happens far too late, already in production. To catch the errors, I have to build all sorts of pipelines and checks, which is insane complexity and a maintenance burden for such a simple job as “fill out a form”.
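
      A minimal sketch of that kind of check, in Python, with purely hypothetical field and function names (CHECKED_FIELDS, find_hallucinations are illustrative, not from the original setup): every value the LLM put on the form is compared against the source record it was supposed to copy, and mismatches such as amalgamated names or imaginary birthplaces are flagged.

      CHECKED_FIELDS = ["full_name", "date_of_birth", "place_of_birth"]  # hypothetical schema

      def find_hallucinations(llm_form: dict, db_record: dict) -> list[str]:
          """Return the fields where the LLM-filled form diverges from the database record."""
          errors = []
          for field in CHECKED_FIELDS:
              expected = (db_record.get(field) or "").strip().lower()
              actual = (llm_form.get(field) or "").strip().lower()
              if expected != actual:  # exact-match comparison: anything invented shows up here
                  errors.append(f"{field}: expected {expected!r}, got {actual!r}")
          return errors

      # Example: an amalgamated name and an imaginary place of birth are both caught.
      db_record = {"full_name": "Jane Doe", "date_of_birth": "1990-04-01", "place_of_birth": "Lisbon"}
      llm_form  = {"full_name": "Jane Dole", "date_of_birth": "1990-04-01", "place_of_birth": "Atlantis"}
      print(find_hallucinations(llm_form, db_record))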


      An AI-hyped coworker, in response to this problem, said: oh, so it’s just a quality problem – you can have the AI check the result 10 times, and if it’s flaky, hand it to a human to check.

      He built a system where LLMs wrote the code, checked the resulting code, and had it verified against the written requirements by a non-technical human. I mean, it’s impressive, but I can’t imagine the system being “hired” for high-stakes projects.
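
      Roughly what that scheme boils down to, sketched with hypothetical check() and escalate_to_human() hooks (neither is from the coworker’s actual system): run the automated check several times, accept only a unanimous pass, and hand anything flaky to a person.

      from typing import Callable

      def review(artifact: str,
                 check: Callable[[str], bool],
                 escalate_to_human: Callable[[str], bool],
                 runs: int = 10) -> bool:
          """Accept the artifact only if every automated check passes; otherwise escalate."""
          verdicts = [check(artifact) for _ in range(runs)]  # e.g. an LLM grading the LLM's own output
          if all(verdicts):
              return True                      # unanimously accepted by the automated checker
          return escalate_to_human(artifact)   # flaky or failing: a human decides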

      • @_stranger_ · 7 points · 3 months ago

        Bold of you to assume any checking whatsoever

  • @[email protected]
    link
    fedilink
    English
    2
    edit-2
    3 months ago

    Needs like three more clouds of AI fucking with confused sad engineers in unnecessary ways.

    Otherwise, spot on.

  • @HootinNHollerin · -4 points · 3 months ago

    Except they’re not engineers but software developers

    • @kurwa · 2 points · 3 months ago

      Software engineers?

    • @[email protected]
      link
      fedilink
      English
      1
      edit-2
      3 months ago

      Coders

      Hackers

      Programmers

      Keyboard Ninjas

      Rockstar Problem Solverz

      Algorithm Architects