• elgordino
    58 points · 5 months ago

    TikTok’s spider has been a real offender for me. For one site I host, it burned through 3TB of data over 2 months requesting the same 500 images over and over. It was ignoring robots.txt too; I ended up having to block their user agent.
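    For anyone hitting the same thing: a minimal sketch of the user-agent block in Nginx, assuming the crawler announces itself with a UA containing “Bytespider” (check your access logs for the exact string):

        # Inside the server block: refuse requests from the offending crawler.
        # The UA pattern is an assumption; match whatever actually appears in your logs.
        if ($http_user_agent ~* "bytespider") {
            return 403;
        }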

    • @[email protected]
      24 points · 5 months ago

      Are you sure the caching headers your server is sending for those images are correct? If your server is telling the client to not cache the images, it’ll hit the URL again every time.

      If the image at a particular URL will never change (for example, if your build system inserts a hash into the file name), you can use a far-future expires header to tell clients to cache it indefinitely (e.g. expires max in Nginx).
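      For example, a minimal Nginx sketch (assuming the images are served from a location like /images/; adjust the path to your setup):

          location /images/ {
              # "expires max" sends a far-future Expires header plus a
              # Cache-Control max-age of ten years, so clients cache the
              # files more or less indefinitely.
              expires max;
          }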

      • elgordino
        7 points · 5 months ago

        Thanks for the suggestion; it turns out there are no cache headers on these images. They indeed never change, so I’ll try that update. Thanks again

  • 100
    28 points · 5 months ago

    Time to fill site code with randomly generated garbage text that humans will not see but crawlers will gobble up?

    • SerotoninSwells
      13 points · 5 months ago

      I don’t think it’s a bad idea, but it’s largely dependent on the crawler. I can’t speak for AI-based crawlers, but typical scraping either targets specific elements on a page or grabs the whole page and parses it for what you’re looking for. In both instances, your content is already scraped and added to the pile. Overall, I have to wonder how long “poisoning the well” is going to work. You can take me with a grain of salt, though; I work on detecting bots for a living.

      • ✺roguetrick✺
        21 points · 5 months ago

        “I work on detecting bots for a living.”

        You should just tell people you’re a blade runner.

          • @Tangent5280
            2 points · 5 months ago

            You see a turtle, upended on the hot asphalt. As you pass it, you do not stop to help. Why is that?

            also that job title is cool as fuck

            • SerotoninSwells
              1 point · 5 months ago

              I agree and I wish I was actually that cool. I just look at data all day and write rules. 🫠

    • @thirteene
      3 points · edited · 5 months ago

      Until you realize that you’re the one paying the access fees/network costs to serve all that garbage.

  • ekZepp
    18 points · 5 months ago

    All this robots.txt stuff “perplexes” me.

  • @Crackhappy
    5 points · 5 months ago

    She should get a library card.

    • @son_named_bort
      2 points · 5 months ago

      She tried but she couldn’t read the application.

  • @marcos
    4 points · 5 months ago

    TBF, pushing a site to the public while adding a “no scraping” rule is a bit of a shitty practice; and pushing it while adding a “no scraping, unless you are Google” rule is a giant shitty practice.

    Rules for politely scraping the site are fine. But then, there will always be people who disobey those, so you must actively enforce those rules too. So I’m not sure robots.txt is really useful at all.
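    To be concrete, the “unless you are Google” pattern I’m complaining about looks like this (illustrative example):

        # Only Google's crawler may fetch anything; every other bot is told to keep out.
        User-agent: Googlebot
        Allow: /

        User-agent: *
        Disallow: /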

    • @breakingcups
      68 points · 5 months ago

      No it’s not, what a weird take. If I publish my art online for enthusiasts to see, it’s not automatically licensed for everyone to distribute. If I specifically want to forbid entities I have huge ethical issues with (such as Google, OpenAI et al.) from scraping and transforming my work, I should be able to.

      • @marcos
        -28 points · 5 months ago

        Nothing in my post (or in robots.txt) has any relation to distributing your content.

        • @BlackPenguins
          23 points · edited · 5 months ago

          What else would they scrape your data for? Sure, some of it could be for personal use, but most of the time it will be to redistribute it in a new medium, like a recipe app importing recipes.

          • @[email protected]
            5 points · 5 months ago

            Indexing is what “scrapers” mostly do.

            That’s how search engines work. If you don’t allow any scraping, don’t be surprised if you get no visitors.

            • Album
              13 points · 5 months ago

              Search engine scrapers index. But that’s a subset of scrapers.

              There are data scrapers and content scrapers, and these are becoming more prolific as AI takes off and people need to feed it data.

              This post is specifically about AI scrapers.

    • @[email protected]
      21 points · edited · 5 months ago

      How would a site make itself accessible to the internet in general while also preventing itself from being scraped, using technology alone?

      robots.txt does rely on being respected, just like no-trespassing signs. The lack of enforcement is the problem; keeping robots.txt as the record of permissions, and actually enforcing it, would make it effective again.

      I am agreeing, just with a slightly different take.

      • Album
        1 point · 5 months ago

        User agent matching is rather effective: you can serve different responses based on the UA.

        So generally people will use robots.txt to handle the bots that play nice, and then use user agents to manage abusers.
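        As a rough sketch (Nginx here; the UA patterns are just examples of commonly blocked crawlers):

            # In the http context: flag user agents you consider abusive.
            map $http_user_agent $blocked_ua {
                default        0;
                ~*bytespider   1;
                ~*gptbot       1;
                ~*ccbot        1;
            }

            server {
                listen 80;
                server_name example.com;
                root /var/www/html;

                # Well-behaved bots read robots.txt and obey it;
                # flagged user agents get a flat 403 instead of content.
                if ($blocked_ua) {
                    return 403;
                }
            }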

    • unalivejoy
      5 points · 5 months ago

      People really should be providing a sitemap.xml file
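      Even a bare-bones one helps the crawlers you actually want. A minimal example (the URLs and dates are placeholders):

          <?xml version="1.0" encoding="UTF-8"?>
          <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
            <url>
              <loc>https://example.com/</loc>
              <lastmod>2024-06-01</lastmod>
            </url>
            <url>
              <loc>https://example.com/posts/hello-world</loc>
              <lastmod>2024-05-20</lastmod>
            </url>
          </urlset>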