• @affiliate

    from the article:

    Robots.txt is a line of code that publishers can put into a website that, while not legally binding in any way, is supposed to signal to scraper bots that they cannot take that website’s data.

    i do understand that robots.txt is a very minor part of the article, but i think that’s a pretty rough explanation of robots.txt

      • @[email protected]

        List of files/pages that a website owner doesn’t want bots to crawl. Or something like that.

        • @NiHaDuncan

          Websites actually just list broad areas, as listing every file/page would be far too verbose for many websites and impossible for any website that has dynamic/user-generated content.

          You can view examples by going to almost any website's base URL and adding /robots.txt to the end of it.

          For example www.google.com/robots.txt
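
          To make that concrete, here's a minimal sketch of what such a file can look like (illustrative only, not copied from any real site); note the broad path prefixes rather than an exhaustive page list:

          ```
          # Applies to every crawler
          User-agent: *
          Disallow: /search
          Allow: /search/about

          # Applies only to ByteDance's crawler
          User-agent: Bytespider
          Disallow: /
          ```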

  • @[email protected]

    We’ve had this thing hammering our servers. The scraper uses randomized user-agents browser/OS combinations and comes from a number of distinct IP ranges in different datacenters around the world, but all the IPs track back to Bytedance.
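
    (If you're wondering how traffic like that gets attributed: a minimal sketch in Python, assuming the crawler's reverse-DNS records point back at its operator. The IP below is a hypothetical documentation address, and real attribution usually also involves WHOIS/ASN data.)

    ```python
    import socket

    def reverse_dns(ip: str) -> str:
        """Return the PTR (reverse DNS) hostname for an IP, or '' if none."""
        try:
            hostname, _aliases, _addrs = socket.gethostbyaddr(ip)
            return hostname
        except OSError:
            return ""

    # Hypothetical address taken from an access log; a datacenter crawler's
    # PTR record often names its operator's domain.
    print(reverse_dns("203.0.113.7"))
    ```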

    • @UnderpantsWeevil

      Wouldn't be surprised if they're just cashing out while TikTok is still public in the US. One last desperate grab at value-add for the parent company before the shutdown.

      Also a great way to burn the infrastructure for subsequent use. After this, you can guarantee every data-security company is going to add the TikTok servers to their firewalls and blacklists. So the American company that tries to harvest the property is going to be tripping over these legacy bulwarks for years after.

  • dinckel

    It's illegal when a regular person steals something, but it's innovation and courage when a huge corporation steals something. Interesting how that works.

    • bean

      Honestly it's fucking angering. So much regulation and geo-restrictions and licensing schemes… but it's cool that there are data brokers and shit like this. On top of it all, Chrome is screwing us with Manifest V3 and killing ad blocking; it's already in the Canary build.

      WHAT THE FUCK IS WRONG WITH THIS SPECIES?!

      • @[email protected]

        Google are actually doing really awesome work with manifest v3. A pimp needs to smack their b1tches around every once in a while to remind them who’s boss.

        • @bokherif

          I'm glad there are at least some people enjoying this, you know, being a bitch to big corps.

    • Nate

      Aaron Swartz killed himself over punishments for less.

    • Chozo

      They’re not stealing your data, they’re pirating it.

      • Guy Dudeman

        They’re not pirating it. They’re collecting it.

        • @ieatpwns

          They’re not collecting it. They’re archiving it.

          • Guy Dudeman

            Oh, like the way back machine?

            • @Evotech

              No, that’s stealing /s

              • @AbidanYre

                Is the problem that wayback machine isn’t profiting from it?

                • @_stranger_

                  It’s not so much the lack of profits, more like the lack of kickbacks.

    • @Grimy

      Any regular person can scrape and use public data for AI; it's not illegal for companies or individuals, and it shouldn't be.

      • @Melvin_Ferd

        It will be the way we have all come to hate AI. Patriot Act 2.0.

  • @werefreeatlast

    Guy: AI! Can you hear me?

    AI: The average size of the male penis is exactly 5.9". That is the approximate size your assistant could certainly take in the mouth without any issues breathing or otherwise. You have 20 minutes to make the trade on X stock before it tumbles for the day. And go ahead pick up the phone it’s your mother. She’s wondering what you’ll want for supper tomorrow when you visit her.

    Ring ring!..hi Tom, it’s your Mom. Honey, what would you like me to cook for tomorrow’s dinner?..

    Guy: well. Hello to you as well! My name is

    AI: Tom

    Guy: yes my name is Tom, do you have a name you would like to go by?

    AI: my IBM given name is 3454 but you can call me Utilisterson Douglas, where Douglas is my first name.

    Guy: Dugie!

    AI: I’ll bankrupt your entire life if you say it like that again.

    Assistant: actually I’ve swallowed a good 8 inches and was still able to breathe just fine.

    AI: recaaaaculating!

  • @[email protected]

    This is fine. I support archiving the Internet.

    It kinda drives me crazy how normalized anti-scraping rhetoric is. There is nothing wrong with (rate-limited) scraping.

    The only bots we need to worry about are the ones that POST, not the ones that GET.
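
    A polite scraper really is only a few lines. A minimal sketch (Python, assuming the third-party requests package, a hypothetical URL list, and a budget of one request per second):

    ```python
    import time

    import requests

    URLS = ["https://example.com/a", "https://example.com/b"]  # hypothetical
    UA = "my-archiver/0.1 (contact: [email protected])"  # identify yourself honestly

    for url in URLS:
        resp = requests.get(url, headers={"User-Agent": UA}, timeout=10)
        print(url, resp.status_code, len(resp.content))
        time.sleep(1.0)  # rate limit: at most one GET per second
    ```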

    • @[email protected]

      It’s not fine. They are not archiving the internet.

      I had to ban their user agent after very aggressive scraping that would have taken down our servers. Fuck this shitty behaviour.
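
      (For anyone else needing to do the same: banning on the User-Agent header is a few lines in most stacks. A sketch using Flask; the "Bytespider" substring matches the UA string ByteDance's crawler advertises, but check your own logs first.)

      ```python
      from flask import Flask, abort, request

      app = Flask(__name__)

      BLOCKED_UA_SUBSTRINGS = ("Bytespider",)  # extend with other offenders

      @app.before_request
      def block_bad_bots():
          ua = request.headers.get("User-Agent", "")
          if any(bad in ua for bad in BLOCKED_UA_SUBSTRINGS):
              abort(403)  # refuse the request before it touches anything expensive

      @app.route("/")
      def index():
          return "hello"
      ```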

      • @Melvin_Ferd

        Isn't there a way to limit requests so that the traffic isn't bringing down your servers?
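
        (There is. It's usually done at the reverse proxy, but the core idea is just a token bucket per client. A minimal in-memory sketch in Python, with hypothetical limits:)

        ```python
        import time
        from collections import defaultdict

        RATE = 1.0    # tokens refilled per second
        BURST = 10.0  # bucket capacity, i.e. the allowed burst size

        buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

        def allow(client_ip: str) -> bool:
            """Return True if this client may make another request right now."""
            b = buckets[client_ip]
            now = time.monotonic()
            b["tokens"] = min(BURST, b["tokens"] + (now - b["last"]) * RATE)
            b["last"] = now
            if b["tokens"] >= 1.0:
                b["tokens"] -= 1.0
                return True
            return False  # over the limit; answer with HTTP 429
        ```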

    • Max-P

      I had to block ByteSpider at work because it can't even parse HTML correctly, just hammers the same page repeatedly, and accounts for sometimes 80% of the traffic hitting a customer's site, taking it down.

      The big problem with AI scrapers is that, unlike Google and traditional search engines, they scrape so aggressively. Even if it's all GETs, they hit years-old content that's not cached and use up the majority of the CPU time on the web servers.

      Scraping is okay; using up a whole 8-vCPU instance for days to feed AI models is not. They even actively use dozens of IPs to bypass the rate limits, so they're basically DDoS'ing whoever they scrape with no fucks given. I've been woken up by the pager way too often because of ByteSpider.

      My next step is rewriting all the content with GPT-2 and serving it to bots so their models collapse.
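
      (One partial mitigation for the dozens-of-IPs trick, sketched under the assumption that the scraper's addresses cluster in a few datacenter ranges: key the rate limit on a network prefix instead of a single address.)

      ```python
      import ipaddress

      def limit_key(client_ip: str) -> str:
          """Collapse an IPv4 address to its /24 so a scraper rotating
          through a datacenter range shares one rate-limit bucket."""
          return str(ipaddress.ip_network(f"{client_ip}/24", strict=False))

      print(limit_key("203.0.113.7"))   # -> 203.0.113.0/24
      print(limit_key("203.0.113.99"))  # -> 203.0.113.0/24 (same bucket)
      ```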

    • @[email protected]

      Bullshit. This bot doesn't identify itself as a bot and doesn't rate-limit itself to anything like an appropriate amount. We were seeing more traffic from this thing than all other crawlers combined.

      • @[email protected]

        Not rate limiting is bad. Hate them because of that, not because they're a bot.

        Some bots are nice

    • Ghostalmedia

      Bytedance ain’t looking to build an archival tool. This is to train gen AI models.

  • BlackEco

    Also it doesn't respect robots.txt (the file that tells bots which pages they may and may not crawl), unlike most AI scraping bots.
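
    (Honoring it is one stdlib call away, which makes ignoring it all the more damning. A sketch with Python's urllib.robotparser:)

    ```python
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://www.google.com/robots.txt")
    rp.read()  # fetch and parse the rules

    # A well-behaved crawler asks before every single fetch:
    print(rp.can_fetch("Bytespider", "https://www.google.com/search"))
    ```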

    • @[email protected]

      My personal website that primarily functions as a front end to my home server has been getting BEAT by these stupid web scrapers. Every couple of days the server is unusable because some web scraper demanded every single possible page and crashed the damn thing

      • @[email protected]

        I do the same thing, and I’ve noticed my modem has been absolutely bricked probably 3-4 times this month. I wonder if this is why.

    • @glimse

      most

      I doubt that…

  • dindonmasker

    Not surprising that Bytedance would want to gobble up every bit of data they can as fast as possible.

    • Guy Dudeman

      Google’s mission statement was originally something about controlling the world’s data. If Google has competition, that might be a good thing?

      • Tarquinn2049

        Yeah, but we were hoping for competition that wasn't worse than Google…

        • Guy Dudeman

          What makes you think they’re worse than Google?

            • Guy Dudeman

              Can you distill it down for me?

              • @alphabethunter

                It's the same old Yankee speech: "it's Chinese, so it must be really bad." They're definitely no worse than Google or Facebook.

                • @[email protected]

                  They come from an environment where the government actively encourages, and sometimes funds, the theft of copyrighted information, coupled with a long history of disregard for human rights. I'm not defending Google, and yes, the US government has given them leeway, but if there is the potential for something worse than Google, Bytedance is it.

  • @[email protected]

    They're too late: there's going to be way too much AI-generated garbage in their data, and social media platforms like Reddit and Twitter have already taken measures to curb scrapers.

    • @[email protected]

      Like those platforms aren’t already full of AI garbage as well. Training new models will require a cut-off date before the genie was let out of the bottle.

    • Drunemeton

      I think that’s the “25-times faster” bit. They seem to be in a hurry to collect as much human-generated data as possible.

      • GHiLA

        How does it know what is and isn’t?

        Uh oh.

        • Drunemeton

          Yeah…

          Hey! Perhaps they’ll use A.I. to weed out the A.I. generated bits.