• @[email protected]
    link
    fedilink
    English
    3810 hours ago

I definitely miss the cached pages. I found myself using the feature very frequently. Maybe it’s just the relative obscurity of some of my hobbies and interests, but a lot of the information that shows up in search engines seems to come from old forums. Oftentimes those old forums are no longer around or have migrated to new software (obliterating the old URLs and old posts as well).

  • @bassomitron · 110 points · 14 hours ago

    I was super annoyed when they first took away the links. “Pages are more dependably available now” is such a lazy excuse. Storing the cached content probably wasn’t even that expensive for them, as it didn’t retain anything beyond basic HTML and text. Their shitty AI-centric web search was likely the real reason for getting rid of it.

    • @[email protected]
      link
      fedilink
      English
      228 hours ago

      They used to have a “cache” link on search results. It occasionally came in handy when the original site was down or changed their link or something.

    • @WindyRebel · 5 points · 8 hours ago

      It was a tool to see what Google had cached, useful for checking a web page for changes since Google’s last crawl.

      It also had a nice habit of bypassing those pop-ups that would prevent scrolling. 😂
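      Besides the link on the results page, the same cached copy could be pulled up directly from the search box with Google’s (now retired) `cache:` operator; `example.com/thread/123` below is just a placeholder URL:

      ```
      cache:example.com/thread/123
      ```

      Typing that as a query jumped straight to Google’s stored snapshot of the page, scroll-blocking pop-ups and all.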

    • @General_Effort · 2 points · 14 hours ago

      Shocked? You’d think all the people outraged at having their websites scraped would be delighted. That’s probably the real reason for this.

      • subignition · 2 points · 10 hours ago

        It’s not the scraping itself, but the purpose of the scraping, that can be problematic. There are good reasons for public sites to allow scraping.

        • @General_Effort · 1 point · 3 hours ago

          I have the distinct impression that a number of people would object to the purpose of re-hosting their content as part of a commercial service, especially one run by Google.

          Anyway, now no one has to worry about Google helping people bypass their robots.txt, IP blocks, or whatever other countermeasures they take. And Google doesn’t have to worry about being sued. Next stop: the Wayback Machine.
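          For context, the robots.txt countermeasures mentioned above are just a plain-text file at the site root; this is a minimal hypothetical sketch (the `/forum/` path is a placeholder) asking Google’s crawler to skip part of a site and all other bots to stay out entirely:

          ```
          # Hypothetical robots.txt at https://example.com/robots.txt
          # Googlebot: please don't crawl the forum section
          User-agent: Googlebot
          Disallow: /forum/

          # All other crawlers: please don't crawl anything
          User-agent: *
          Disallow: /
          ```

          The point of the comment stands: robots.txt is purely advisory, so a cached copy served by a third party sidestepped it entirely.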

    • @WoahWoah · 8 points · 7 hours ago

      It’s a two-trillion-dollar company; I think news of its coming demise has been exaggerated.

  • @RustyNova · 18 points · 14 hours ago

    At least they’re linking to the Internet Archive now, which is neat.

  • @OutrageousUmpire · 13 points · 2 minutes ago

    What a disgrace. This clown show of a company kills things people love and pushes advertising no one wants.