Hi, I’m building a personal website and I don’t want it to be used to train AI. In my robots.txt file I blocked:

  • ChatGPT-User
  • GPTBot
  • Google-Extended
  • FacebookBot

What other bots should I add? Are there any other ways to block AI bots?

IMPORTANT: I don’t want to block search engine crawlers, only bots that are used to train AI.
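
For reference, my robots.txt currently looks roughly like this (user-agent tokens as published by each company):

```
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: FacebookBot
Disallow: /
```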

  • Luke

    FYI, bots and crawlers can simply ignore your robots.txt entirely. This is probably common knowledge around these parts, but I’ve run into clients at work who thought it was a law or something.

    I do like the idea of intentionally polluting the data robots will see, as suggested by this comment. There’s no reliable way to block them without also blocking humans, so making the crawled data as useless as possible is a good option.

    Just be careful not to also confuse screen readers with that tactic, so that accessibility is maintained for humans. It should be easy enough if you keep your aria attributes filled out appropriately, I imagine.
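
    A sketch of what that can look like; the /trap/ URL and link text are just placeholders:

    ```html
    <!-- Decoy link for crawlers: pushed off-screen for sighted users,
         skipped by screen readers (aria-hidden) and by keyboard navigation
         (tabindex="-1"), but still present in the raw HTML. -->
    <a href="/trap/wall-of-text"
       style="position: absolute; left: -9999px;"
       aria-hidden="true"
       tabindex="-1">archive</a>
    ```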

  • @hperrin

    Pollute your site with nonsense that’s invisible to users: pages reachable only through invisible links, containing nothing but walls and walls of random text.

    • @[email protected] (OP)

      Good idea. I will make an invisible link to some “traps for bots”. One trap will show random text, one will be a redirect loop, and one will be a random link generator that links back to itself. I will also make every response randomly slow, for example 0.5 to 1.5 seconds.

      The good thing is that I can also block search engine crawlers from accessing just the traps.
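
      Roughly what I’m planning, as a sketch (Flask just for illustration; the paths, sizes and wording are placeholders):

      ```python
      import random
      import string
      import time

      from flask import Flask, redirect

      app = Flask(__name__)


      @app.before_request
      def random_delay():
          # Make every response randomly slow: 0.5 to 1.5 seconds.
          time.sleep(random.uniform(0.5, 1.5))


      @app.route("/trap/wall-of-text")
      def wall_of_text():
          # Trap 1: a wall of meaningless random "words".
          words = (
              "".join(random.choices(string.ascii_lowercase, k=random.randint(3, 10)))
              for _ in range(2000)
          )
          return "<p>" + " ".join(words) + "</p>"


      @app.route("/trap/loop")
      def redirect_loop():
          # Trap 2: an endless redirect back to itself.
          return redirect("/trap/loop")


      @app.route("/trap/maze")
      @app.route("/trap/maze/<path:rest>")
      def link_maze(rest=None):
          # Trap 3: a page of random links that all lead back into the maze.
          return "".join(
              f'<a href="/trap/maze/{random.randint(0, 10**9)}">more</a> '
              for _ in range(50)
          )
      ```

      A “Disallow: /trap/” rule for all user agents then keeps well-behaved search crawlers out of the traps.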

      • @c24w

        If you’re interested in traps, you can add a honeypot to your robots.txt. It comes with some risk of blocking legitimate users, though.
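
        The usual form of that trick: disallow a path in robots.txt that nothing on your site links to, then ban any client that requests it anyway, since it can only have found the path by reading robots.txt and ignoring it, or by blindly probing. The risk is that a curious human who reads your robots.txt and visits the path gets banned too. A rough sketch (path name and storage are placeholders):

        ```python
        # robots.txt advertises a path that nothing on the site links to:
        #
        #   User-agent: *
        #   Disallow: /secret-honeypot/
        #
        from flask import Flask, abort, request

        app = Flask(__name__)
        banned_ips = set()  # in practice you'd persist this somewhere


        @app.before_request
        def reject_banned():
            if request.remote_addr in banned_ips:
                abort(403)


        @app.route("/secret-honeypot/")
        @app.route("/secret-honeypot/<path:rest>")
        def honeypot(rest=None):
            # Anything requesting this path read robots.txt and ignored it,
            # or is blindly probing for hidden paths: ban the address.
            banned_ips.add(request.remote_addr)
            abort(403)
        ```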

        • @[email protected]

          I don’t think that’s really a big problem. Just make every keyword useless and automate the process somehow.

          There really should be a tool for this; there is at least one Unicode character that doesn’t even display as a blank in a terminal.

          Modern web pages often don’t even load without JavaScript or animations anyway, so a bit more HTML won’t hurt.
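
          Something like this is the kind of automation I mean, using the zero-width space (U+200B), which renders as nothing at all:

          ```python
          ZWSP = "\u200b"  # zero-width space: invisible and takes up no room

          def poison(text: str) -> str:
              # Put a zero-width space between every character, so scraped text no
              # longer matches the original keywords but looks identical to a reader.
              return ZWSP.join(text)

          print(poison("personal website"))  # looks the same, is a different string
          ```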

    • folkrav

      OP still wants search indexing, in which case it’s a big no-no: it can be perceived as spam by search engines, and it links your pages to tons of unrelated keywords.

    • @stewsters

      As long as you do not rely on SEO to get traffic. This has a good chance of affecting how Google sees your site as well.

  • @inspxtr

    I’m curious about how to verify that these bots respect the rules. I don’t doubt that they do, since it might be a PR nightmare for these big tech companies if they don’t, but I don’t know how to verify them. Asking because I’m also doing this for my website.

    By the way, LLMs are usually also trained on Common Crawl (not sure to what extent), but I’m not sure whether you’d want to block Common Crawl too.

    Another thing to consider is whether your website is indexed and crawled by the Internet Archive, and whether it has a policy on AI crawlers and scrapers.
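
    The simplest check I know of is to look through the server’s access log for the user agents you’ve disallowed (plus CCBot for Common Crawl); with a blanket “Disallow: /”, any request from them at all means the rule was ignored. A rough sketch, assuming the common nginx/Apache log location and format, and keeping in mind that user-agent strings can be spoofed:

    ```python
    # Assumes the default nginx/Apache "combined" access log format and path.
    LOG_PATH = "/var/log/nginx/access.log"
    BLOCKED_AGENTS = ["GPTBot", "ChatGPT-User", "Google-Extended", "FacebookBot", "CCBot"]

    with open(LOG_PATH) as log:
        for line in log:
            if any(agent in line for agent in BLOCKED_AGENTS):
                # With a blanket "Disallow: /", any hit from these agents
                # means robots.txt was ignored.
                print(line.rstrip())
    ```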

  • Mubelotix

    Block everyone but the crawlers you like. Blacklists are less reliable than whitelists.
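
    In robots.txt that looks roughly like this (Googlebot and Bingbot are just examples of crawlers you might want to keep):

    ```
    User-agent: Googlebot
    Allow: /

    User-agent: Bingbot
    Allow: /

    User-agent: *
    Disallow: /
    ```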

  • Oliver Lowe

    Maybe there are some IP address ranges you could try blocking?

    It’s difficult because, for example, blocking the addresses OpenAI’s crawlers use may inadvertently block addresses from Azure used by Bing or whatever.
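
    If you do go that route, one option is to check the client address against published crawler ranges at the application level. The ranges below are just documentation placeholders; the real ones would come from each vendor’s own lists:

    ```python
    from ipaddress import ip_address, ip_network

    from flask import Flask, abort, request

    app = Flask(__name__)

    # Placeholder CIDR ranges (RFC 5737 documentation addresses): the real ones
    # would come from each vendor's published crawler lists, kept as narrow as
    # possible to avoid catching shared cloud addresses you actually want.
    BLOCKED_RANGES = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/24")]


    @app.before_request
    def block_ranges():
        client = ip_address(request.remote_addr)
        if any(client in net for net in BLOCKED_RANGES):
            abort(403)
    ```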

  • @[email protected]

    Perhaps the user (or in this case the bot) shouldn’t go directly to your website, but first through some kind of CAPTCHA verification, like those pages (SteamDB, for example) that don’t open directly but first show a blank page that verifies your network and browser with a CAPTCHA.
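
    A bare-bones version of that flow: everything except the challenge page redirects to the challenge page until a cookie is set. The cookie here is set unconditionally just to show the idea; a real version would put an actual CAPTCHA, proof-of-work, or a hosted challenge service in its place, otherwise any bot that handles cookies walks straight through:

    ```python
    from flask import Flask, make_response, redirect, request

    app = Flask(__name__)


    @app.before_request
    def require_challenge():
        # Visitors without the "challenge passed" cookie get the interstitial first.
        if request.path != "/challenge" and request.cookies.get("challenge_ok") != "1":
            return redirect("/challenge")


    @app.route("/challenge")
    def challenge():
        # Placeholder: a real version verifies a CAPTCHA or proof-of-work here
        # before setting the cookie and sending the visitor on to the site.
        resp = make_response(redirect("/"))
        resp.set_cookie("challenge_ok", "1")
        return resp
    ```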