Meta “programmed it to simply not answer questions,” but it answered anyway.

  • @doodledup
    link
    English
    42 months ago

    Well, but this kind of correctness concern applies to everything. By that logic, you can’t believe anything. I’m talking about an entirely different notion of correctness, like resistance against certain adversarial attacks. Of course, proving that the model is always correct is as complicated as modelling all of reality. That’s infeasible. But it’s also infeasible for every other kind of software.