• @rockSlayer · 75 points · 5 months ago

    Too bad the website is still openly accessible and still capable of being scraped

    • @SzethFriendOfNimi · 22 points · 5 months ago

      Probably what they’re targeting: sites like safereddit, etc.

      • @rockSlayer · 38 points · 5 months ago

        Well, that’s part of the thing: web scraping isn’t covered by site policies. They could ban your IP or any accounts you have, but web scraping itself will always be acceptable. It’s why projects like NewPipe and Invidious don’t care about YouTube cease-and-desist letters.
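
        For a sense of how low the bar is, here’s a minimal sketch (assuming Python with the third-party requests and BeautifulSoup libraries; the URL and CSS selector are illustrative guesses at old.reddit’s markup, not guarantees):

        ```python
        # Any page served to anonymous visitors can be fetched and parsed.
        # A policy can ban the account or IP, not the technique itself.
        import requests
        from bs4 import BeautifulSoup

        resp = requests.get(
            "https://old.reddit.com/r/python/",            # illustrative URL
            headers={"User-Agent": "example-scraper/0.1"},
            timeout=10,
        )
        resp.raise_for_status()

        soup = BeautifulSoup(resp.text, "html.parser")
        # On old.reddit, post titles are links with the "title" CSS class.
        for link in soup.select("a.title"):
            print(link.get_text(strip=True))
        ```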

        • @werefreeatlast · 3 points · 5 months ago

          “Oops, looks like this community hasn’t been reviewed. Login if you still want to see the content.”

          • @rockSlayer · 3 points · 5 months ago

            Yea, I’ve seen those pop-ups when trying to find something out. It sucks, but it isn’t a significant barrier to web scraping.

            • @werefreeatlast · 1 point · 5 months ago

              I tried that on my desktop. As long as you’re not actually logged in, you can’t see communities that are too small to have been reviewed, or that are too adult after a review.

          • @[email protected] · 1 point · 5 months ago

            In what way?

            HTML definitely carries more overhead than JSON if you only care about the data.
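
            For illustration, Reddit listings have historically been retrievable as JSON by appending .json to the URL; a rough sketch of the comparison (assuming Python with requests, and treating the URL and exact field layout as assumptions about the live API):

            ```python
            import requests

            headers = {"User-Agent": "example-comparison/0.1"}
            url = "https://old.reddit.com/r/python/"  # illustrative URL

            # The same listing in two representations.
            html = requests.get(url, headers=headers, timeout=10).text
            data = requests.get(url + ".json", headers=headers, timeout=10).json()

            # The HTML carries markup, CSS classes, inline JS, and so on;
            # the JSON is essentially just the data.
            print("HTML bytes:", len(html))
            titles = [c["data"]["title"] for c in data["data"]["children"]]
            print("First JSON titles:", titles[:5])
            ```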

          • folkrav · 1 point · 5 months ago

            Parsing absolutely comes with a lot more overhead. Especially since many websites build in a lot of JS interactivity nowadays, you often don’t get the full contents you’re looking for straight out of the HTML your HTTP request returns, depending on the site.
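
            A sketch of the usual workaround, assuming the Playwright library: render the page in a headless browser so the JS actually runs, then read the final DOM (the URL and selector below are placeholders):

            ```python
            from playwright.sync_api import sync_playwright

            with sync_playwright() as p:
                browser = p.chromium.launch(headless=True)
                page = browser.new_page()
                page.goto("https://example.com/spa-page")  # placeholder URL
                # Wait until client-side rendering has produced the content.
                page.wait_for_selector("div.content")      # placeholder selector
                rendered_html = page.content()             # full post-JS DOM
                browser.close()

            print(len(rendered_html))
            ```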

      • @[email protected] · 9 points · 5 months ago

        Still waiting for the news that they took down old.reddit. Without the third-party apps, that was the only way it could still be usable.

    • @IphtashuFitz · 2 points · 5 months ago

      We use Akamai where I work for security, CDN, etc. Their services make it largely trivial to identify traffic from bots. They can classify requests in real time as coming from known bots like Googlebot, from programming frameworks like Python and Java, from bots that impersonate Googlebot, or from virtually any other automated traffic from unknown bots.
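
      That last case isn’t Akamai-specific, by the way. Catching Googlebot impersonators comes down to the reverse-then-forward DNS check Google documents for verifying its crawlers; a sketch in Python (the sample IP is just a known Googlebot address, for illustration):

      ```python
      import socket

      def is_real_googlebot(ip: str) -> bool:
          """Verify a claimed Googlebot IP via reverse + forward DNS."""
          try:
              # Genuine Googlebot IPs reverse-resolve to *.googlebot.com
              # or *.google.com hostnames.
              host, _, _ = socket.gethostbyaddr(ip)
              if not host.endswith((".googlebot.com", ".google.com")):
                  return False
              # The hostname must resolve back to the same IP; otherwise
              # the PTR record could simply be spoofed.
              return ip in socket.gethostbyname_ex(host)[2]
          except (socket.herror, socket.gaierror):
              return False

      # A request whose User-Agent claims Googlebot but fails this check
      # is an impersonator, and can be blocked or fed bogus data.
      print(is_real_googlebot("66.249.66.1"))
      ```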

      If Reddit were smart, they’d leverage something like that to allow Google, Bing, etc. to crawl their data and block all others, or poison the others with bogus data. But we’re talking about Reddit here…