cross-posted from: https://programming.dev/post/2656516

What are your real-world applications of this versatile data structure?

They are useful for optimization in databases like SQLite and query engines like Apache Spark. Application developers can use them as concise representations of user data, for example to filter out previously seen items.

The linked site gives a short introduction to bloom filters along with some links to further reading:

A Bloom filter is a data structure designed to tell you, rapidly and memory-efficiently, whether an element is present in a set. The price paid for this efficiency is that a Bloom filter is a probabilistic data structure: it tells us that the element either definitely is not in the set or may be in the set.
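To make that concrete, here is a minimal sketch of the structure in Python. It is a toy (fixed bit-array size, k salted SHA-256 hashes), not a tuned implementation; the class and method names are illustrative.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: m bits, k hash functions derived from SHA-256."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)  # bit array, all zeros

    def _positions(self, item):
        # Derive k bit positions by salting the item with the hash index.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item):
        # False -> definitely not in the set; True -> possibly in the set.
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))
```

Note the asymmetry in `might_contain`: a false answer is definitive, a true answer is only probabilistic, which is exactly the trade described above.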

  • @[email protected]
    1 year ago

    The classic example of scanning a k:v space in a memory-efficient manner is the best application I’ve seen, but it still doesn’t fully work if you need to know that a key is definitely in that space: you still have to perform the lookup.

    In the Google Bigtable paper they use it like this. The reason it makes sense in that context is that Bloom filters are tiny and can easily be held in memory for a large dataset, whereas performing the actual lookup requires a slow disk access or network request. This way you can massively reduce the number of expensive lookups at the cost of a small amount of memory.
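    That read path can be sketched like this: consult an in-memory filter before paying for the slow lookup. This is a sketch of the pattern, not Bigtable's actual API; `make_bloom` is a stand-in for any Bloom filter with no false negatives, and all names are illustrative.

    ```python
    def make_bloom(items, m=1 << 16, k=4):
        """Build a toy membership test over `items` (no false negatives)."""
        bits = set()  # set of bit positions, standing in for a bit array
        for item in items:
            for i in range(k):
                bits.add(hash((i, item)) % m)  # hash() is fine within one run
        def might_contain(key):
            return all(hash((i, key)) % m in bits for i in range(k))
        return might_contain

    def read_row(key, might_contain, slow_lookup, stats):
        """Skip the expensive disk/network read on a definite miss."""
        if not might_contain(key):
            stats["skipped"] += 1
            return None              # definite miss: no I/O performed at all
        return slow_lookup(key)      # possible hit: do the real read
    ```

    Only keys the filter cannot rule out ever reach `slow_lookup`, so under a workload with many misses, most of the expensive reads disappear.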

    Also, since the underlying data format is immutable, the cost of recalculating the filter upon removal doesn’t matter, because it never needs to happen.

    it turned out that even with absurdly high lookup rates, adding an initial check for negative presence in a record set, we didn’t benefit at all.

    Compared to which alternative? I also don’t understand exactly what the filter’s purpose was in this case, did you have all DNS records in a bloom filter to quickly check for misses? Or was it some kind of allowlist/blocklist of clients?

    Finally, what metrics were you using to decide it did not benefit at all?

    • Dr. Dabbles
      1 year ago

      All records created by customers, yes. The alternative was our default structure, which is also in RAM. The problem we faced is that a resolver can be targeted in several ways, one of which is querying for zone data that doesn’t exist. When latency is the primary performance concern and you need a low-cost way to determine whether a request can (maybe) be served or (absolutely) cannot, Bloom filters look like the right tool to use.

      Performance-wise (latency), there was no improvement in lookup performance, nor a reduction in CPU consumed. So for us, in that application, it didn’t make sense.

      Bigtable was the example I was thinking of: a massive k:v store that can start at 4, 8, or 32 TB, far more than you can hold in RAM. The problem is, you can only answer negative membership in the key space, not positive. So if you’re looking up a key, you either get an absolute no or an opportunity to scan the key space anyway.

      What I’ve tended to find is that indexes make more sense in most scenarios. Not all, because there are always exceptions. But indexes tend to be faster, still consume vastly less space than the data being indexed, and tend to offer more powerful features than just membership in a set.