I know storage is fairly cheap, but e.g. there are millions of new videos uploaded to YouTube every day, each probably a few hundred MBs to a few GBs. It all has to take an enormous amount of space. Not to mention backups.
Google just has a lot of storage space. They have dozens of data centers, each of which is an entire building dedicated to nothing but housing servers, and they’re constantly adding more servers to existing data centers and building new ones once the current ones are full.
IIRC, estimates tend to put Google’s current storage capacity somewhere around 10-15 exabytes. Each exabyte is a million terabytes. Each terabyte is a thousand gigabytes. That’s 10-15 billion gigabytes. And they can add storage faster than storage is used up, because they turn massive profits that they can use to pay employees to do nothing but add servers to their data centers.
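The unit conversions above can be sanity-checked with a couple of lines (using decimal units, where 1 EB = 1,000,000 TB and 1 TB = 1,000 GB):

```python
# Sanity-checking the exabyte arithmetic (decimal units).
EB_IN_TB = 1_000_000  # terabytes per exabyte
TB_IN_GB = 1_000      # gigabytes per terabyte

low_eb, high_eb = 10, 15  # the estimated range, in exabytes
print(low_eb * EB_IN_TB * TB_IN_GB)   # 10000000000 -> 10 billion GB
print(high_eb * EB_IN_TB * TB_IN_GB)  # 15000000000 -> 15 billion GB
```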
Google is just a massive force in terms of storage. They probably have more storage than any other organization on the planet. And so, they can share a lot of it for free, because they’re still always turning a profit.
There are also techniques where data centers do offline storage by writing out to a high-volume storage medium (I heard Blu-ray as an example, especially because it’s cheap) and storing it in racks. All automated, of course. This lets them store huge quantities of infrequently accessed data (most of it) in a more efficient way. Not everything has to be online and ready to go, as long as it’s capable of being made available on demand.
You can feel it on YouTube when you try to access an old video that no one has watched in a long time.
every time it lags, it’s because youtube has to send someone down to the basement to retrieve the correct blu-ray disc from a storage room
Actual footage of data being manually retrieved from Google’s datacentre
God bless those interns. Earning those college credits.
And that guy is out today…
That’s the difference between getting a video served off a disk off in some random DC in some random state vs. the videos being served off a cache that lives at your ISP.
It’s not offline storage vs. disk, it’s a special edge-of-network cache vs. a video that doesn’t live in that cache, but is still on a hard drive.
It’s far more likely that Google, AWS, and Microsoft are using tape for high-volume, long-term storage.
According to diskprices.com, these are the approximate costs of a few different storage media (assuming one is attempting to optimize for cost):
Tape archives are neat too, a little robot rearranging little tape cartridges in his cute little corridor
Tape drives are still in use in a lot of places too. Enormous storage density for stuff that’s in “cold storage”
I don’t think the storage density of a Blu-ray is anywhere near good enough for that use
Doesn’t BR only have like 100 gigs capacity? That would take a shitton of space.
They use tapes for backups, but indeed there ought to be something in between.
https://engineering.fb.com/2015/05/04/core-data/under-the-hood-facebook-s-cold-storage-system/
This is an article from 2015 where Facebook/Meta was exploring Blu-ray for their DCs. You’re definitely right though. Tape is key as the longest term storage.
2015 was quite a while ago tho.
Shh, don’t say that. It feels like just a few years at most.
They’re really using optical storage as a backup that can then be near-instantaneously accessed? That’s awesome.
Super cool, blew my mind! I would love to see it in operation. The logistics on the machine side, plus the storage heuristics for deciding when to commit data to a write-once disc, sounds like a really cool problem.
Where did you get that from?
I think that was just an example. Tiered storage is fairly common, though. NVMe SSDs are faster than SATA SSDs, which are way faster than hard drives. Amazon has a “glacier” tier of cloud storage which is pretty cheap, but it can take time (hours) or money to download your data. Great for backups though.
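The tiering idea above can be sketched as a toy placement rule (the tier names and thresholds here are hypothetical, not a real cloud API):

```python
# Toy sketch of tiered-storage placement, assuming made-up access thresholds.
# Hotter data goes on faster, pricier media; rarely touched data goes to an
# archive tier where retrieval is slow (hours) but storage is cheapest.
def choose_tier(accesses_per_month: int) -> str:
    if accesses_per_month >= 100:
        return "nvme-ssd"      # hot: lowest latency, highest cost
    if accesses_per_month >= 1:
        return "hdd"           # warm: cheap, still online
    return "glacier-like"      # cold: cheapest, retrieval takes hours

print(choose_tier(500))  # nvme-ssd
print(choose_tier(3))    # hdd
print(choose_tier(0))    # glacier-like
```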
The 10-15 EB estimate from XKCD was 10 years ago.
Let’s be honest, it isn’t “free”. The user is giving their own data to Google in order to use their services, and data is a commodity.
Kinda starting to seem like “data” is becoming less and less valuable, or am I wrong?
well there’s more and more of it, so the value per byte is decreasing as everything tracks you and there’s only so much info you can get
And that’s just Google. Amazon and Microsoft also have massive data capacity that runs large chunks of the internet. And then you get into the small and medium-sized hosting companies, which can be pretty significant on their own.
15 exabytes sounds low. Rough math: one 20 TB hard drive per physical machine, times 50,000 physical machines, is one exabyte of raw storage. I bet 50,000 physical machines is a small datacenter for Google.
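That rough math checks out in a few lines (one hypothetical drive per machine, ignoring redundancy):

```python
# The back-of-envelope math above: raw capacity of one hypothetical datacenter.
drive_tb = 20        # TB per drive, one drive per machine (assumption)
machines = 50_000    # machines in a "small" datacenter (assumption)

raw_tb = drive_tb * machines
print(raw_tb)                   # 1000000 TB
print(raw_tb / 1_000_000, "EB") # 1.0 EB raw, before any replication overhead
```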
It’s still wild to imagine. That’s millions of hard drives, multiplied a couple of times over for redundancy across regions and for failures. Then the backups.
Remember when Google started by networking old office computers?
For the really small IT company I worked for, I think we had something like 500TB per rack, and they weren’t even fully used racks.