• ɐɥO · 58 points · 3 months ago

    I disallow a page in my robots.txt and IP-ban everyone who goes there. That’s pretty effective.
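
    For anyone wondering how that could be wired up, here is a minimal sketch (the trap path, log location, and the iptables call are assumptions for illustration, not details from the comment): the page is listed under Disallow in robots.txt, never linked anywhere, and any address requesting it gets dropped at the firewall.

    ```python
    #!/usr/bin/env python3
    # Sketch: ban any IP that requests a robots.txt-disallowed trap page.
    # Assumes a robots.txt entry like:
    #   User-agent: *
    #   Disallow: /secret-trap/
    # and an nginx/Apache-style access log; all paths are illustrative.
    import re
    import subprocess
    import time

    LOG = "/var/log/nginx/access.log"   # hypothetical log location
    TRAP = "/secret-trap/"              # hypothetical disallowed path
    banned = set()

    def ban(ip: str) -> None:
        if ip not in banned:
            # Drop all further traffic from this address.
            subprocess.run(["iptables", "-I", "INPUT", "-s", ip, "-j", "DROP"], check=False)
            banned.add(ip)

    with open(LOG) as f:
        f.seek(0, 2)                    # start at the end of the log, like `tail -f`
        while True:
            line = f.readline()
            if not line:
                time.sleep(1)
                continue
            m = re.match(r'(\S+) .*?"(?:GET|POST|HEAD) (\S+)', line)
            if m and m.group(2).startswith(TRAP):
                ban(m.group(1))
    ```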

      • bountygiver [any] · 17 points · 3 months ago

        humans typically don’t visit [website]/fdfjsidfjsidojfi43j435345 when there’s no button that links to it

        • @Avatar_of_Self · 17 points · 3 months ago

          I used to do this on one of my sites that was moderately popular in the ’00s. I had a link hidden via JavaScript, so a user couldn’t click it (unless they disabled JavaScript and clicked it), though it was hidden pretty well even then.

          IP hits would be logged, and my script would add that IP’s /24 subnet to my firewall. I allowed specific IP ranges for some search engines.
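
          A rough sketch of that /24-plus-allowlist logic, assuming iptables; the allowlisted ranges below are placeholders, not real search-engine blocks:

          ```python
          import ipaddress
          import subprocess

          # Placeholder ranges standing in for the search-engine allowlist;
          # real crawler ranges would need to be looked up separately.
          ALLOWED = [ipaddress.ip_network(n) for n in ("192.0.2.0/24", "198.51.100.0/24")]

          def ban_subnet(ip: str) -> None:
              addr = ipaddress.ip_address(ip)
              if any(addr in net for net in ALLOWED):
                  return  # never ban an allowlisted crawler range
              # Ban the offender's whole /24, as described above.
              subnet = ipaddress.ip_network(f"{ip}/24", strict=False)
              subprocess.run(["iptables", "-I", "INPUT", "-s", str(subnet), "-j", "DROP"], check=False)

          ban_subnet("203.0.113.55")  # would ban 203.0.113.0/24
          ```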

          Anyway, it caught a lot of bots. I really just wanted to stop automated attacks and spambots on the web front.

          I also had a honeypot port that basically did the same thing. If you sent packets to it, your /24 was added to the firewall for a week or so. I think I just used netcat to append to yet another log and wrote a script to add those /24s to iptables.
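
          In place of the netcat-and-script setup described here, a self-contained sketch of the honeypot port might look roughly like this (the port number and the iptables call are assumptions):

          ```python
          import ipaddress
          import socket
          import subprocess

          HONEYPOT_PORT = 2222  # arbitrary otherwise-unused port

          srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          srv.bind(("0.0.0.0", HONEYPOT_PORT))
          srv.listen()

          while True:
              conn, (ip, _port) = srv.accept()   # any connection at all is suspicious
              conn.close()
              subnet = ipaddress.ip_network(f"{ip}/24", strict=False)
              print(f"honeypot hit from {ip}, banning {subnet}")
              subprocess.run(["iptables", "-I", "INPUT", "-s", str(subnet), "-j", "DROP"], check=False)
              # A separate cron job could later delete the rule to get the
              # "for a week or so" expiry mentioned above.
          ```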

          I did it because I had so much noise from spambots in my logs; it was pretty crazy.

          • Mikelius · 10 points · 3 months ago

            This thread has provided genius ideas I somehow never thought of, and I’m totally stealing them for my sites lol.

        • JackbyDev · 14 points · 3 months ago

          I LOVE VISITING FDFJSIDFJSIDOJFI435345 ON HUMAN WEBSITES, IT IS ONE OF MY FAVORITE HUMAN HOBBIES. 🤖👨

    • LazaroFilm · 9 points · 3 months ago

      Can you explain this more?

    • Onno (VK6FLAB) · 7 points · 3 months ago

      Is the page linked anywhere on the site, or just mentioned in the robots.txt file?

    • asudox · 5 points · 3 months ago

      Not sure that’s effective at all. Why would a crawler check robots.txt if it’s programmed to ignore it anyway?

      • ɐɥO · 16 points · 3 months ago

        Because many crawlers seem to explicitly crawl “forbidden” pages.

      • @Crashumbc · 3 points · 3 months ago

        Google and script kiddies copying code…

    • Dizzy Devil Ducky · 4 points · 3 months ago

      I doubt it’d be possible in any straightforward way due to the lack of server control, but I’m definitely going to look into whether anything similar could be done on a neocities site.