• @bassomitron
    126 points · 3 months ago

    I was super annoyed when they first took away the links. “Pages are more dependably available now” is such a lazy excuse. Storing the cached content probably wasn’t even that expensive for them, as it didn’t retain anything beyond basic HTML and text. Their shitty AI-centric web search was likely the main reason for getting rid of it.

  • @[email protected]
    51 points · 3 months ago

    I definitely miss the cached pages. I found that I was using the feature very frequently. Maybe it’s just the relative obscurity of some of my hobbies and interests, but a lot of the information online that shows up in search engines seems to come from old forums. Oftentimes those old forums are no longer around or have migrated to new software (obliterating the old URLs and old posts as well).

      • @[email protected]
        8 points · 3 months ago

        That’s not the same at all. ArchiveBox would do the trick if it were pre-populated with every page Google Search has in its index.

        • Encrypt-Keeper
          0 points · 3 months ago

          Well, it is. It just doesn’t go out and do all the caching for you ahead of time; instead it’s on demand. You’re right that, as far as pre-populated alternatives go, it’s just archive.org now.

    • @General_Effort
      2 points · 3 months ago

      Shocked? You’d think all the people outraged at having their websites scraped would be delighted. That’s probably the real reason for this.

      • subignition
        3 points · 3 months ago

        It’s not the scraping itself, but the purpose of the scraping, that can be problematic. There are good reasons for public sites to allow scraping.

        • @General_Effort
          2 points · 3 months ago

          I have the distinct impression that a number of people would object to the purpose of re-hosting their content as part of a commercial service, especially one run by Google.

          Anyway, now no one has to worry about Google helping people bypass their robots.txt or IP-blocks or whatever counter-measures they take. And Google doesn’t have to worry about being sued. Next stop: The Wayback Machine.

  • @RustyNova
    20 points · 3 months ago

    At least they are using the Internet Archive, which is neat

    • @WoahWoah
      10 points · 3 months ago

      It’s a 2-trillion-dollar company, I think news of their coming demise has been exaggerated.

  • @OutrageousUmpire
    13 points · 3 months ago

    What a disgrace. This clown show of a company kills things people love and pushes advertising no one wants.

  • @[email protected]
    10 points · 3 months ago

    Another piece of internet history now gone. Perhaps not deleted, but hidden away in Google’s own archives until they degrade away.

    • @[email protected]
      28 points · 3 months ago

      They used to have a “cache” link on search results. It occasionally came in handy when the original site was down or changed their link or something.

    • @WindyRebel
      6 points · 3 months ago

      It was a tool to see what Google had cached, letting you check a web page for changes since Google’s last crawl of it.

      It also had a nice habit of bypassing those pop-ups that would prevent scrolling. 😂