• @Gradually_Adjusting
    14 · 10 months ago

    I’m no expert, but dreaming here - is there a FOSS search engine that can be run in a distributed way by a community? I would happily switch to that if there was.

    • @Plopp
      7 · 10 months ago

      That’d be awesome. I’m just curious how you’d go about constructing such a thing so that it’s resilient against the millions, potentially billions, of dollars that would be invested in trying to break it and make it serve results it otherwise wouldn’t. Because those investments will happen if such a search engine gains traction. I really like the idea though.

    • @elrik
      6 · 10 months ago (edited)

      It’s a really interesting question and I imagine scaling a distributed solution like that with commodity hardware and relatively high latency network connections would be problematic in several ways.

      The population of people who would consume such a service is several orders of magnitude larger than the population who would participate in providing it.

      Those populations aren’t local to each other. In other words, your search is likely global across such a network, especially given the size of the indexed data.

      To put some rough numbers together for perspective, for search nearing Google’s scale:

      • A single copy of a 100PB index would require 10,000 network participants each contributing 10TB of reliable and fast storage.

      • 100K searches/sec, if evenly distributed and resolvable by a single node, works out to at least 10 req/sec/node. Realistically it’s much higher than that, depending on how many copies of the index exist, how requests are routed, and how many nodes participate in a single query (probably on the order of hundreds). Of that 10TB of storage per node, a substantial amount would need to be kept in memory to sustain the hundreds of req/sec a node would likely see on average.

      • The index needs to be updated. Let’s suppose the index is 1/10th the size of the crawled data and the oldest data is 30 days old (which is pretty stale for popular sites). That’s at least 33PB of data to crawl per day, or roughly 3,000Gbps of minimum sustained ingestion across the network. Spread across those 10,000 nodes, that’s roughly 0.3Gbps of sustained bandwidth each just to ingest fresh data (call it 1Gbps per node once you add overhead and headroom).
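      The back-of-envelope arithmetic above can be sanity-checked with a short script. All of the inputs are the same assumed round numbers from the bullets (100PB index, 10,000 nodes, 100K searches/sec, 10x crawl-to-index ratio, 30-day refresh), not measurements:

```python
# Sanity check of the rough capacity numbers above.
# All inputs are assumed round figures, not measurements.
INDEX_BYTES = 100e15        # 100 PB index, single copy
NODES = 10_000              # participating nodes
SEARCHES_PER_SEC = 100_000  # global query load
CRAWL_MULTIPLIER = 10       # crawled data ~10x the index size
REFRESH_DAYS = 30           # oldest acceptable crawl age

storage_per_node_tb = INDEX_BYTES / NODES / 1e12
reqs_per_node = SEARCHES_PER_SEC / NODES
crawl_per_day_pb = INDEX_BYTES * CRAWL_MULTIPLIER / REFRESH_DAYS / 1e15
ingest_gbps = INDEX_BYTES * CRAWL_MULTIPLIER * 8 / (REFRESH_DAYS * 86_400) / 1e9

print(f"{storage_per_node_tb:.0f} TB storage per node")    # 10 TB
print(f"{reqs_per_node:.0f} req/sec/node minimum")         # 10 req/sec
print(f"{crawl_per_day_pb:.1f} PB crawled per day")        # 33.3 PB
print(f"{ingest_gbps:.0f} Gbps sustained network ingest")  # ~3,086 Gbps
```

      Note the ingest figure lands slightly above 3,000Gbps because 33PB/day is itself a rounded value.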

      These are all rough numbers but this is not something the vast majority of people would have the hardware and connection to support.

      You’d also need many copies of this setup around the world for redundancy and lower latency. You’d also want to protect the network against DDoS, abuse and malicious network participants. You’ll need some form of organizational oversight to support removal of certain data.

      Probably the best way to support such a distributed system in an open manner would be to have universities and other public organizations run the hardware and support the network (at a non-trivial expense).

      • @[email protected]
        4 · 10 months ago

        So this is starting to sound more like something that needs to explicitly be paid for in some way (as opposed to just crowd sourcing personal hardware), at least if we want to maintain the same level of service.

        • @Gradually_Adjusting
          2 · 10 months ago

          It seems like there are others in the thread with good options.

        • @elrik
          1 · 10 months ago

          Yes, at least currently. There may be better options as multi-gigabit internet access becomes more commonplace and commodity hardware gets faster.

          The other options mentioned in this thread are basically toys in comparison (either obtaining results from existing search engines or operating at a scale less than a few terabytes).

    • magic_lobster_party
      4 · 10 months ago

      FOSS won’t change this unless it can somehow filter out all the low-quality AI-generated articles better than Google’s filters do.

        • magic_lobster_party
          3 · 10 months ago

          Is this allowlist supposed to work on an article-by-article basis? Because that’s what would have to be done for publishing platforms like Medium.

          • @[email protected]
            1 · 10 months ago

            By default I was thinking per site. But for sites with huge variance in page quality you’d need it per page/article, as you say.

      • @Gradually_Adjusting
        1 · 10 months ago

        Comes down to editorial quality maybe; what sites do you trust?

        Jimmy Wales has a social media project with “whom to trust” built into the algorithm. I’m not sure it’s an idea with legs, but I like where his head is at.

      • @wikibotB
        4 · 10 months ago

        Here’s the summary for the wikipedia article you mentioned in your comment:

        The Yahoo! Directory was a web directory which at one time rivaled DMOZ in size. The directory was Yahoo! 's first offering and started in 1994 under the name Jerry and David’s Guide to the World Wide Web. When Yahoo!

        to opt out, pm me ‘optout’. article | about

        • @Stovetop
          7 · 10 months ago

          Bot thinks the exclamation point at the end of Yahoo! is the end of the sentence. Cute bot.

      • ares35
        1 · 10 months ago (edited)

        even netscape/mozilla was in on this game, with the open directory project (dmoz). aol eventually shut it down, but it apparently lives on independently as curlie.org - but i have no idea how current it is or anything.

  • Halvdan
    12 · 10 months ago

    “might”…? I thought that was pretty much a proven fact. Not that any of the others are any better, with the possible exception of Kagi, but fuck using that anymore. I’m pretty disappointed by that, since I had finally found a search engine that was pretty good and have no problem paying for it, but I simply won’t give that asshat any money. Why aren’t there any nice capitalists? 🤔

      • @[email protected]
        12 · 10 months ago

        Kagi now has the ability to include Brave search results, and people are upset because someone characterized it as a “partnership” with Brave, when they’re just using the search API.

        • TheEntity
          7 · 10 months ago

          Sounds… at worst just as bad as using Microsoft’s or Google’s search results. I get that the Brave CEO is a POS but Google and Microsoft are Google and Microsoft.

          So basically a drama over nothing?

          • @demonsword
            1 · 10 months ago (edited)

            “Should we not be buying VW, BMW, Siemens and Bayer technology and products today because they participated in holocaust and directly collaborated with Hitler?” – CEO of Kagi when given feedback re: Brave partnership

            doesn’t sound like it’s “drama over nothing” to me

            • @phar
              7 · 10 months ago

              lol “Does Hitler still work there?”

        • @[email protected]
          2 · 10 months ago

          So it’s not actually funding anything reprehensible?

          I ask because I’m really liking it…

  • @[email protected]
    10 · 10 months ago (edited)

    SEO is of course a problem, but it’s been a problem for a long time, and there are ways around it for those who know how to seek information: proper use of keywords, blacklisting sites known to host spam, searching specific sites, mandating that specific words and phrases appear in the results, etc. It’s true, however, that information has become less discoverable over the last decade – at least reliable information has.
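    For example, most engines still honor query operators along these lines (exact syntax varies by engine, and the site names here are only illustrative):

```
"exact error message" site:stackoverflow.com    # phrase match, restricted to one site
python csv +encoding -site:pinterest.com        # required term, excluded site
intitle:changelog "version 2"                   # constrain matches to the page title
```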

    While AI-written spam articles and such have sometimes been a pain, gatekeeping content is in my opinion as big a threat to the proper use of search engines for finding information. As more and more sites require you to log in to view the discussion (social media is the worst offender here), many of the search results are unusable. Nowadays the results lead to a paywall or a login wall almost more often than to a proper result, and that makes them almost completely useless. I understand this kind of thing for platforms which pay for creating the content, e.g. news sites, but user-generated content shouldn’t be locked behind a login requirement.

    I fear the day StackOverflow and Reddit decide the users’ discussions should be visible only to logged-in users. Reddit has already taken the first steps by limiting “NSFW” content to logged-in users (on new Reddit). Medium articles going behind a paywall also caused some headaches a while back.

    • @[email protected]
      5 · 10 months ago (edited)

      SEO is of course a problem, but it’s been a problem for a long time

      Search engines have denied this for the longest time, claiming search is always getting better – which is why we’re now seeing a lot of articles pointing at the problem.

      I think the prime of search engines as we know them has long passed, right before SEO became mainstream. Which is why we need new tools to search the web, which will be powered by AI.

  • Dr. Moose
    6 · 10 months ago

    it’s kinda ruined already and Google’s incompetence/greed didn’t need any AI assistance.

  • @alienanimals
    6 · 10 months ago

    Weak reporting and a clickbait title. Wow, Christian Guyton must have his masters in Journalism!

  • Wolfeh
    5 · 10 months ago

    Might be?

  • @OutrageousUmpire
    1 · 10 months ago

    Google has sucked for a while. I see little difference between it, Bing, and Brave. In fact, I often get better results with Bing and Brave.

      • @[email protected]
        -2 · 10 months ago (edited)

        Sometimes casualties are an unfortunate result of doing what’s necessary. Think of it like antibiotics: sure, they kill the bad bacteria that cause infections, but they also kill good bacteria.

        Scorched earth is not always bad for the whole of a thing.

  • 𝐘Ⓞz҉
    -6 · 10 months ago

    Omg this is the 5th time I’ve seen this post… Literally nobody gives a shit apart from a few nerds.