• @[email protected]
    11
    11 months ago

    This is an astonishingly bad take.

    Almost every AI system is a black box. Even if you open source the code and the training data, it’s still nearly impossible to explain why a trained machine learning model produces any particular output.

    So the entire premise here is that a completely unaccountable system - whose decisions are basically impossible to understand or scrutinize - gets to decide what data is or isn’t relevant.

    When an AI says “No crime spotted here,” who even gets to know that it made that call? And if a human reviews all of the footage anyway, why have the AI? You’re doing the same amount of human work regardless. So as soon as you introduce this system, you remove a huge amount of human oversight and replace it with an unaccountable system making decisions that dramatically affect human lives, decisions that could be life or death if they’re the difference between a bad cop being taken off the street or not.

    Who’s to say whether the training data fed into this system results in it, say, becoming effectively blind to police violence against black people?

    And if that doesn’t scare you, it absolutely should.

    • @Misconduct
      0
      11 months ago

      It’s not impossible to understand or scrutinize. They give it specific things to look for, and it does what it’s told. You can argue that ANY tool used by the police will be misused in their favor; AI isn’t special in that regard. It’s not like we bother to hold anyone accountable for anything else now anyway. Maybe the AI will be less biased.

      It’s definitely not doing the same work as a human if humans are spared sifting through hours upon hours of less useful footage. I’m sure they’re testing it, etc. Nobody goes all in on this stuff. Really, you guys can be so very dramatic lol