• @CheeseNoodle
    20 · 1 month ago

    IIRC it recently turned out that the whole black-box thing was actually a bullshit excuse to evade liability, at least for certain kinds of model.

      • @CheeseNoodle
        13 · 1 month ago

        This one's from 2019: Link
        I was a bit off the mark: it's not that the models they use aren't black boxes, it's that they could have made them interpretable from the beginning and chose not to, likely due to liability.

    • Johanno
      6 · 1 month ago

      Well, in theory you can explain how a model comes to its conclusion. However, I'd guess that only 0.1% of "AI engineers" are actually capable of that, and those probably cost 100k per month.
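To make the "in theory explainable" point concrete: for a simple linear model, the conclusion decomposes exactly into per-feature contributions you can print out. A minimal sketch, with made-up weights and feature names purely for illustration:

```python
# Hypothetical linear scoring model: the decision is just a weighted sum,
# so every conclusion can be attributed exactly to its inputs.
weights = {"income": 0.8, "debt": -1.2, "age": 0.1}
bias = 0.5
applicant = {"income": 2.0, "debt": 1.5, "age": 0.3}

# Each feature's contribution to the final score, computed explicitly.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = bias + sum(contributions.values())

# Report contributions sorted by magnitude, then the decision.
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature:>6}: {c:+.2f}")
print(f" score: {score:+.2f} -> {'approve' if score > 0 else 'reject'}")
```

This kind of exact attribution is what deep neural networks give up; explaining their conclusions takes approximate (and expensive) post-hoc methods instead.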

    • @[email protected]
      4 · 1 month ago

      It depends on the algorithms used. The lazy approach is to just throw neural networks at everything and waste immense computational resources; of course you then get results that are difficult to interpret. There are much more efficient algorithms that work well for many problems and give you interpretable decisions.
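One classic example of such an interpretable algorithm is a decision tree: every prediction comes with the exact rule that produced it. A hand-rolled sketch, with hypothetical features and thresholds chosen only for illustration:

```python
# Tiny hand-written decision tree: the model IS its explanation.
# In practice you'd learn the thresholds from data; here they're made up.
def predict(sample):
    """Return (label, explanation) so every decision is auditable."""
    if sample["debt_ratio"] > 0.6:
        return "reject", "debt_ratio > 0.6"
    if sample["income"] >= 30000:
        return "approve", "debt_ratio <= 0.6 and income >= 30000"
    return "reject", "debt_ratio <= 0.6 and income < 30000"

label, why = predict({"debt_ratio": 0.4, "income": 45000})
print(label, "because", why)  # the rule fired is the full explanation
```

Contrast that with a neural network, where the "explanation" is millions of weights and no single rule you can point to.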