• AFK BRB Chocolate
    233 days ago

    I manage a software engineering group for an aerospace company, so early on I had to have a discussion with the team about acceptable and unacceptable uses of an LLM. A lot of what we do is human rated (human lives depend on it), so we have to be careful. Also, it’s a hard no on putting anything controlled or proprietary into a public LLM (the company now has one in-house).

    You can’t put trust in LLMs because they get things wrong. Anything that comes out of one has to be fully reviewed and understood. They can be useful for suggesting test cases or coming up with wording for things. I’ve had employees use one to come up with an algorithm or find an error, but I think it’s risky to have one generate large pieces of code.

    • @[email protected]
      32 days ago

      Great points. Not only can the output not be trusted, but reviewing code is notoriously a much more boring activity than writing it, which means our attention is going to be more challenged. There’s also the risk of underestimating the importance of the review over time (e.g., it got it right the last 99 times, so I’ll skim this one).

    • Electric
      63 days ago

      Very wise. Terrifying to think an aerospace company would use AI.

      • AFK BRB Chocolate
        52 days ago

        It seems like all companies are susceptible to the same thing: top-level executives who don’t understand the technology want to know how they’re capitalizing on it, which drives lower-level management to start pushing it.