• @[email protected]
    13
    edit-2
    7 months ago

    I’ve disabled personalised ads on YouTube and I see this sort of shit all the time. I’ve given up reporting them because 90% of the time the report is rejected. I don’t even understand the rationale for rejecting it, because it’s as obvious a scam as a scam can be: AI impersonation, fake endorsement, illegal advertising category. It’s a scam, YouTube.

    I don’t get why these ads even appear. YouTube has transcription and voice/music recognition capabilities. How hard would it be to flag a suspicious ad and require a human to review it? Or search for duplicates under other burner accounts and zap them all at the same time? Or have some kind of randomized audit based on trust, where new accounts get reviewed more frequently by experienced reviewers?
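The trust-based randomized audit idea above could be sketched roughly like this. This is a hypothetical illustration, not anything YouTube actually runs; the function names, thresholds, and weights are all made up for the example:

```python
import random

# Illustrative sketch: audit probability weighted by advertiser trust.
# New accounts are almost always routed to a human reviewer; long-standing
# clean accounts are only spot-checked. All numbers are arbitrary.

def review_probability(account_age_days: int, past_violations: int) -> float:
    """Probability that this advertiser's next ad gets a human review."""
    base = 0.9 if account_age_days < 30 else 0.05
    penalty = min(0.5, 0.25 * past_violations)  # prior offences raise the rate
    return min(1.0, base + penalty)

def should_audit(account_age_days: int, past_violations: int,
                 rng=random.random) -> bool:
    """Randomized decision: compare a uniform draw against the audit rate."""
    return rng() < review_probability(account_age_days, past_violations)

print(review_probability(5, 0))    # 0.9  -> brand-new account, nearly always audited
print(review_probability(400, 0))  # 0.05 -> established clean account, rarely audited
```

The point of randomizing rather than auditing every Nth ad is that a scammer can't predict which submission gets a human looking at it.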

    • r00ty
      12
      7 months ago

      No no. This kind of automated “protection” is only used against their users, who are their product. Not the advertisers, who are their customers!

      • @[email protected]
        1
        7 months ago

        There are other considerations here though. Google suffers reputational harm if users become victims through its platform. It becomes news, it creates distrust in users, it generates friction with regulators and law enforcement. Users may be trained to be ad-averse or to install ad blockers. In addition, these ads generate reports which cost time to process even if the complaints are rejected.

        At the end of the day these scammers are not high-profile advertisers and they’re not valuable. They’re burner accounts that pay cents to deliver their ads. They’re ephemeral: they get zapped, reappear, and constantly waste time and resources. Given that YouTube can easily transcribe and watermark content, it makes no sense to me that they wouldn’t put some triggers in, e.g. when a new advertiser places an ad whose transcript mentions “Elon Musk”, “Quantum AI”, or other such markers, flag it for review.
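The trigger described above amounts to a simple keyword match over the ad transcript, gated on account trust. A minimal hypothetical sketch (the marker list and function names are invented for illustration; a real system would use classifiers, not a hand-written list):

```python
# Illustrative only: hold a new advertiser's ad for human review if its
# transcript contains known scam markers. Marker list is a toy example.

SCAM_MARKERS = {"elon musk", "quantum ai", "guaranteed returns"}

def flag_for_review(transcript: str, advertiser_is_new: bool) -> bool:
    """True if the ad should be escalated to a human reviewer."""
    text = transcript.lower()
    has_marker = any(marker in text for marker in SCAM_MARKERS)
    # Established advertisers skip the keyword gate; new accounts don't.
    return advertiser_is_new and has_marker

print(flag_for_review("Elon Musk endorses Quantum AI trading!", True))  # True
print(flag_for_review("Try our new running shoes today.", True))        # False
```

Even a crude gate like this would catch the exact ads the thread is complaining about, since they reuse the same names and phrases verbatim.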

    • @[email protected]
      4
      7 months ago

      > How hard would it be to flag a suspicious ad and require a human to review it?

      Hard? No. But then humans would have to be paid, which would slow down the growth of the dragon hoard.

      Better to have a computer analyze the ad that another computer thinks looks real.

      • @[email protected]
        1
        7 months ago

        They have to have a human respond to each and every complaint about that ad. Seems more sensible to automate and flag suspicious ads before the complaints happen.

    • @LeroyJenkins
      3
      7 months ago

      they ain’t gonna stop their customers from paying them more money