Meta has quietly unleashed a new web crawler to scour the internet and collect data en masse to feed its AI model.

The crawler, named the Meta External Agent, was launched last month, according to three firms that track web scrapers and bots across the web. The automated bot essentially copies, or “scrapes,” all the data that is publicly displayed on websites, for example the text in news articles or the conversations in online discussion groups.

A representative of Dark Visitors, which offers a tool for website owners to automatically block all known scraper bots, said Meta External Agent is analogous to OpenAI’s GPTBot, which scrapes the web for AI training data. Two other entities involved in tracking web scrapers confirmed the bot’s existence and its use for gathering AI training data.

While close to 25% of the world’s most popular websites now block GPTBot, only 2% are blocking Meta’s new bot, data from Dark Visitors shows.

Earlier this year, Mark Zuckerberg, Meta’s cofounder and longtime CEO, boasted on an earnings call that his company’s social platforms had amassed a data set for AI training that was even “greater than the Common Crawl,” an entity that has scraped roughly 3 billion web pages each month since 2011.

  • @GarrulousBrevity · 13 · 3 months ago

    Does that mean this new bot is ignoring sites’ robots.txt files? The Internet works because of web crawlers, and I’m not sure how this one is different

    Edited to add: Apparently you’d need to add Meta-ExternalAgent to your robots.txt file unless you already had a wildcard rule, so it isn’t widely blocked yet simply because it’s new. Letting it run for a few months before letting anyone know it exists is kinda shady.
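
    For reference, a minimal robots.txt sketch. I’m assuming the user-agent token is literally Meta-ExternalAgent (check Meta’s documentation for the exact string); you either name the bot explicitly or rely on a wildcard group that covers any compliant crawler without a more specific match:

    # Block Meta's new crawler by name
    User-agent: Meta-ExternalAgent
    Disallow: /

    # Or: a catch-all group that any compliant bot without its own group falls back to
    User-agent: *
    Disallow: /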

    • Aniki 🌱🌿 · 7 · 3 months ago

      Crawling the web has fuck all to do with the function of the internet. Most crawlers range from useless at best to downright disrespectful.

      • @GarrulousBrevity · 1 · 3 months ago

        Have you used a search engine? Crawlers are not generative AI.

        • Aniki 🌱🌿 · 7 · 3 months ago

          The internet is not a search engine, and no, search engines are not generative AI. That’s new.

          Do you have any idea how many content bot crawlers there are? Most of the corporate sites I host at work are serving content to bots more than half the time.

          Do you know AltaVista still has bots??

          When was the last time you used that search engine?

          • @GarrulousBrevity · -1 · 3 months ago

            I guess I don’t really see the problem with that, though. There are configuration levers you could be pulling, but the sites you’re hosting aren’t pulling them. There are lots of shady questions about how these models are getting training data, but crawlers have a well-defined opt-out mechanism.

            The web would not be what we know it as without crawlers, because they’re how you find sites. Why shouldn’t AltaVista have one? I don’t object to what AltaVista does with the data.

            • Aniki 🌱🌿 · 6 · 3 months ago

              Mate, we have an absurdly restrictive robots.txt, including a custom WordPress plugin that automatically generates the file, and the bots don’t give a fuck.

              • @GarrulousBrevity · 0 · 3 months ago

                But Meta’s will, and so will AltaVista’s. I’m not angry at them when a script kiddie makes a bad crawler.

              • @GarrulousBrevity · 1 · 3 months ago

                I know what you’re trying to say, but that phrasing though. Being able to opt out is an important part of consent. No means no, man.

                • Even more important than being able to opt out, however, is not having to opt out in the first place.

                  Otherwise you get this script:

                  Wanna fuck?

                  No.

                  How about now?

                  No.

                  It’s five minutes later. Have you changed your mind?

                  No.

                  . . .

                  Which is exactly what techbrodudes have been doing to us by making us “opt out” of things a dozen times a day.

                  • @GarrulousBrevity · 1 · 3 months ago

                    I think of this as a problem with opt-in-only systems. Think of how sites ask you to opt in to allow tracking cookies every goddamn time a page loads. A rule-based system like robots.txt, which lets you opt in and opt out, would be great applied to cookie requests: set a rule once and tell every site to fuck off.

                    @[email protected] is complaining about malicious crawlers that ignore those rules (assuming they’re right and the rules are set up correctly), and lumping that malware in with software made by established corporations. However, Meta and the other big tech companies haven’t historically had a problem with ignoring configurations like robots.txt. They have had issues with using scraped data in ways different from what they claimed, or with scraping a site that disallows scraping by reaching it through a URL on a page they legitimately scraped, but that’s not the kind of shenanigans this article is about, as Meta is being pretty upfront about what they’re doing with the data. At least after they announced it existed.

                    An opt-in-only solution would just lead to a world where every host is constantly bombarded with requests to opt in. My major takeaway from how Meta handled this is that you should configure any site you own to disallow any action from bots you don’t recognize. As much as Reddit can fuck off, I don’t disagree with their move to change their configuration to:

                    User-agent: *
                    Disallow: /
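
                    A variant of that, if you still want a couple of crawlers you actually trust to index the site while everything unrecognized is shut out, could look like the sketch below. The agent names are just illustrative; under the robots.txt rules, a crawler follows the most specific group matching its user-agent, and an empty Disallow means “allow everything”:

                    # Crawlers you explicitly allow (illustrative names)
                    User-agent: Googlebot
                    Disallow:

                    User-agent: Bingbot
                    Disallow:

                    # Everything else is shut out
                    User-agent: *
                    Disallow: /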