SafeRent is a machine-learning black box for landlords: it gives them a numerical rating of potential tenants and a yes/no recommendation on whether to rent to them.

In May 2022, Massachusetts housing voucher recipients and the Community Action Agency of Somerville sued the company, claiming SafeRent gave Black and Hispanic rental applicants with housing vouchers disproportionately lower scores.

The tenants had no visibility into how the algorithm scored them. Appeals were rejected on the grounds that the computer's output was final.

  • @[email protected]

    SafeRent was a giant piece of shit before “AI”. I tried to rent a place 15 years ago that used them. The report returned several serious felonies committed over years by another person with an only vaguely similar name who lived in a state I had never even visited.

    The leasing office people admitted that the report was transparently bogus, but they still had orders to deny all housing to negative reports.

    My only recourse at the time was to lock my record so they wouldn't issue reports in my name at all. I now ask right up front which screening service a landlord uses, and they get a vigorous 'nope' if they use SafeRent.

    Fsck SafeRent!

  • @[email protected]

    The landlords who used the service should also be held liable. You mean to tell me you get a report with a binary answer and you just trust it with no due diligence? If there is no penalty for blindly trusting an algorithm, they will just move on to the next tool they can use to be bigots.

    • @LANIK2000

      Just another cost of running business.

    • @[email protected]

      At least they were banned from using AI screening for 5 years.

      I’d hope breaking a court order would result in the kind of punishments they would actually fear.

  • @[email protected]

    Just do your job, you lazy cunts. Stop trying to get AI to do everything. Real estate agents should be checking this stuff; it's part of the role.

    • @nandeEbisu

      But AI is so useful for laundering racism, sexism, and IP theft with plausible deniability.

    • @AngryCommieKender

      Residential renters generally don’t use a real estate agent.

        • @AngryCommieKender

          I’m not certain if we have letting agents in the US. I certainly have never used one, and I’ve rented in at least 30+ states. I’ve lived in 49/50, but that’s counting while living with my parents, so I’m not going through the effort to figure out the exact number.

          I would agree with your humorous throwaway, but that is actually the reality of the US unfortunately.

        • @AngryCommieKender

          Well that isn’t the US, clearly. The article is about the US.

          • @[email protected]

            And my comment is clearly about whichever job is managing the task being replaced by AI. What does it matter if you think the job title is wrong?

  • @cheese_greater

    If there are suicides linked to wronged applicants, they should be charged with at least “involuntary” manslaughter

        • @otacon239

          The fact that I’ve never heard of the corporate death penalty until now, but they’re bringing back the actual death penalty says everything.

          • @mriguy

            “I’ll believe corporations are people when Texas executes one.”

        • @ChickenLadyLovesLife

          The penalty is usually a fine, which impacts stockholders by making the stock less valuable.

          Of course they can always compensate for this by firing a bunch of people.

      • @Squizzy

        In order to be a director of a business you have to assume legal responsibility for the organisation. You need more than one director, and ignorance is not an excuse; there are expectations of awareness and involvement for anyone legally in a director role.

      • @[email protected]

        Well, stockholders don't have executive capabilities. The CEO is responsible. You could hold the board responsible too if they knew.

      • SeaJ

        You could revoke their corporate charter.

  • @[email protected]

    Crappy-ass fine and simply a “cost of doing business” for them I bet. Damages have been done for which there is no undoing. Deplorable.

  • sunzu2

    OK, some people got paid… The problem didn't get solved.

    Classic America.

  • @11111one11111

    -Edit:

    Adding this edit to the beginning to stop the replies from people who read the scenario for context and can't fight their compulsion to nitpick my completely made up list of "unbiased" metrics. To these peeps I say, "Fucking no. Bad dog. No!" I don't fucking care about your commentary on a quickly made up scenario. Whatever qualms you have, just fucking change the imaginary scenario so it fits the purpose the story is serving.

    -Preface of actual comment:

    Completely made up scenario to give context to my question. This is not me defending anything referenced in the article.

    -Actual scenario with read, write, edit permissions to all users:

    What if the court ordered the release of the AI code and training methods for this tenant-analysis AI bot and found that the metrics used were simply credit score, salary, employer, and former rental references. No supplied data for race, name, background check, or anything else that would tip the bot toward or away from any biased results. So this pure-as-it-could-be bot still produces the same results as seen in the article. Again, an imaginary scenario that likely has no foundation in truth.

    -My questions for the provided context:

    1. Are there studies comparing LLM training methods whose results range from little or no racial bias to significant racial bias?

    2. Are there ways of training LLMs to perform without bias, or is the problem in the LLM's design, such that no matter how you train them there will always be some bias present?

    3. In this exact imaginary scenario, would the pure, unbiased angel version of the AI bot, producing equally racist results as biased-trained AI bots, see different court rulings than an AI whose flawed design caused the biased results?

    -I'm using "bias" over "racist" to reach a broader area beyond race-related issues. My driving purposes are:

    1. To better understand how courts are handling AI-related cases, and whether they give a fuck about the framework and design of the AI or whether none of that matters and the courts are just looking at the results;

    2. Wondering if there are ways to make (or already-made) LLMs that aren't biased, and what about their design makes them biased: is it the doing of the makers of the LLM, or is the training and application of the LLM by the end user/training party to blame?

    • @General_Effort

      The article is fake news. I suggest looking elsewhere for proper information.

      As for your questions: LLMs were certainly not involved here. I can’t guess what techniques were used.

      Racial discrimination is often hard to nail down. Race is implicit in any number of facts. Place of birth, current address, school, … You could infer race from such data. If you do not look at race at all but the end result still discriminates, then it’s probably still racial discrimination. I say probably because you are free to do what you like and discriminate based on any number of factors, as long as it isn’t race, sex, and the like. You certainly may discriminate based on education or wealth. Things being as they are, that will discriminate against minorities. They have systematically lower credit ratings, for example.
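      The point that a "race-blind" scoring rule can still discriminate through correlated factors can be sketched with a toy simulation. Everything here is an invented assumption (the groups, the credit-score distributions, the cutoff); it is not SafeRent's actual method, just an illustration of how a proxy like credit score carries group disparities through to the outcome:

      ```python
      # Hypothetical sketch: a screener that only sees credit score, never
      # group membership, still approves the two groups at very different
      # rates when credit scores differ systematically between groups.
      import random

      random.seed(0)

      def make_applicant(group):
          # Invented assumption: group "B" has systematically lower credit
          # scores for historical reasons unrelated to individual merit.
          base = 680 if group == "A" else 620
          return {"group": group, "credit": random.gauss(base, 40)}

      def screen(applicant, cutoff=650):
          # The rule looks only at credit -- group is never an input.
          return applicant["credit"] >= cutoff

      n = 10_000
      applicants = [make_applicant("A") for _ in range(n)] + \
                   [make_applicant("B") for _ in range(n)]

      approved = {"A": 0, "B": 0}
      for a in applicants:
          if screen(a):
              approved[a["group"]] += 1

      rate_a = approved["A"] / n
      rate_b = approved["B"] / n
      print(f"approval rate A: {rate_a:.1%}, B: {rate_b:.1%}")
      ```

      The "blind" rule ends up approving group A far more often than group B, which is the shape of the disparate-impact argument: removing the protected attribute from the inputs does not remove it from the outcome.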

      In the case of generative AI, bias is often not clearly defined. For example, you type “US President” into an image generator. All US presidents so far were male, and all but one white. But half of all people who are eligible for the presidency are female and (I think) a little less than half non-white. So what’s the non-biased output?