A moment ago I unmounted my 1TB HDD with 400GB of content on it and partitioned it into two different partitions, obviously keeping the space that was already occupied. I did it because I didn't care if the content got corrupted, but after I did it everything still works perfectly, when I thought everything would be corrupted.
I may be completely ignorant on this subject, but given the nature of an HDD and how it writes and reads data, I expected it to corrupt everything. Why didn't that happen? On an SSD, on the other hand, I wouldn't have expected that, because the information isn't even stored on a mechanical part.
When you resize a partition with data in it, gparted will move the data inside the partition to fit the new size.
Imagine you have a 100GB partition with 50GB occupied and you want to shrink it to 80GB. If there is anything in the last 20GB, gparted will move it into free space within the first 80GB, and then shrink the partition.
Still, I definitely wouldn't trust it blindly, and I keep backups.
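The filesystem-shrink step gparted performs can be tried by hand on a throwaway image file, with no real disk and no root needed. This is just a sketch using the e2fsprogs tools (mkfs.ext4, e2fsck, resize2fs), which are assumed to be installed:

```shell
# Make a 100 MiB file and format it as ext4 -- a stand-in for a real partition
truncate -s 100M fs.img
mkfs.ext4 -q -F fs.img

# gparted's shrink boils down to: check the filesystem, then shrink it.
# resize2fs relocates any blocks living past the new end before shrinking.
e2fsck -f -p fs.img
resize2fs fs.img 80M
```

Only after the filesystem has been shrunk does gparted move the partition's end boundary to match; doing those two steps in the opposite order is what destroys data.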
Didn’t know gparted did that, good to know.
That's the main advantage of parted over fdisk + mkfs, really.
It is incredible that it does all that in a matter of seconds; it moves so much data without a problem in just seconds. In fact, that is something curious I have noticed in Linux: if I move something within the same storage device, even to a different partition, it's instantaneous.
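A small caveat on that last observation: a move is only instantaneous within a single filesystem, where mv is just a rename and no data is copied at all. Moving between two partitions means two filesystems, and there mv falls back to copy-and-delete. A quick sketch of the rename case, using the inode number as the file's on-disk identity:

```shell
# Within one filesystem, mv is just a rename: the data never moves.
# The inode number (the file's on-disk identity) stays the same.
echo "hello" > a.txt
before=$(stat -c %i a.txt)
mv a.txt b.txt
after=$(stat -c %i b.txt)
[ "$before" = "$after" ] && echo "same inode: rename, not a copy"
```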
Essentially it only moves the borders of the partitions and “repairs” the filesystem inside each affected partition.
If there is data in an area of the partition you are manipulating, gparted has to move it to an unaffected area inside the partition, or to the new parts of the partition. This can take a long time, even though modern drives easily move 100MB/s.
Also, even though gparted is mature software and the devs have probably implemented a lot of safety measures, you should always back up your data before manipulating partitions. Especially when you're playing around with non-native filesystems like NTFS, or more complicated ones like ZFS. I know people often nag about this, but trust me: lose 2TB of your data and you will really, really regret not spending 10 minutes backing up the essentials.
I've been using gparted for as long as I can remember, and only once or twice has it caused data loss. Since I'm very old school (I started playing with PCs when a 386DX at 16MHz was fairly hot and RLL disks were a thing) and nerdy, I was able to use data-rescue software that scanned the whole disk for filesystems and guessed where the partition borders should be.
Avoid this type of anxiety by backing up all data or at least backing up the data you can’t live without.
Also, if you have a spare disk, it’s faster and much safer to partition the spare one and just clone each partition. Sometimes it’s even faster to clone the disk this way and then clone it back.
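Cloning a partition is just a block-for-block copy, so the same dd invocation you would aim at real partitions can be sketched on image files (the filenames here are made up for the demonstration):

```shell
# Two image files stand in for the source partition and the spare
truncate -s 10M src.img
mkfs.ext4 -q -F src.img

# Clone it block for block, exactly as you would with real partitions:
#   dd if=/dev/sda1 of=/dev/sdb1 bs=4M status=progress
dd if=src.img of=dst.img bs=1M status=none

# The clone is bit-identical to the original
cmp -s src.img dst.img && echo "clones are identical"
```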
Yes, Linux file systems are superior in every single aspect except compatibility
Kinda. It is only supported on specific filesystems; IIRC Btrfs is not happy with it.
Awesome question.
The operating system, or OS, really does not care whether it is a hard drive or a solid-state drive when moving partitions around.
Say your hard drives are pools. One is filled with molasses, and the other has water. The partitions are like the ropes in the pool. Perhaps you have no ropes. Maybe you have three, but two are so close to the wall, and each other, that only a small amount of stuff could occupy those lanes.
That leaves you with one really large lane. That’s your data partition.
Water or molasses, the ropes are the same.
I kinda really want a description of the mental model that led to “HDD can’t do this but SSD can”.
As I said, I am not an expert on the subject. My mental model of partitioning was that it overwrote things, like making a scratch on a DVD so that the content underneath can no longer be read.
I think the idea comes from “HDD slow,” as he was impressed with the speed it was happening at, especially if you think of it as requiring data to be moved around on the disk. It’s not really intuitive to think of it as just a table on the disk somewhere that says which regions belong to which partition, and having those regions be anywhere on the disk.
Is that actually how it works? I thought that was only within the partition itself, while partitions are physically separate on the platter.
For SSDs, it's 100% a logical table: the drive's flash translation layer scatters data all over the physical flash for wear leveling, so it already keeps a map of where every logical block actually lives at any given point in time.
For HDDs, logical block addresses map much more directly to physical locations on the platters, so partitions historically corresponded to physically contiguous regions, and they mostly still do. But the partition table itself is still just a table: each entry says "this partition starts at sector X and is Y sectors long," and nothing about that depends on the medium underneath.
Well, it's hard to say that nothing is corrupted unless you verify every bit. What filesystem are you using?
Partitioning, I believe (if you don't format anything), just rewrites the partition table and creates a new entry pointing at some new point on the disk for the next partition.
Obviously we aren't in the days of sequential files, so files are spread physically over the disk with space in between. I'm not up on the exact specifics of whatever the latest Windows FS is and how it works, or how ext works at that level, but it would seem you partitioned it at a point after the data ended.
That, or you haven't corrupted the inodes/pointers, so it appears that all the files are there until you try to actually access that place on the disk. If files existed in the space the new partition now occupies, you're going to get errors. I suspect this is more likely, because the inodes live in the first sequential bytes of the disk, while the actual file contents could be anywhere.
Correct me if I’m wrong on any of the details here.
I'm using ext4, and the contents of the HDD are basically virtual machines. I was able to access most of them (they use dynamically allocated virtual hard drives) and they seem to work without problems, so I assumed nothing got corrupted.
Actually I was surprised, because a while ago I tried to do the same with Btrfs and all my data got corrupted that time.
And it’s interesting, thanks for the info. I didn’t know that it worked like that.
Storage media doesn’t make a difference here. You can partition a spinning drive, an SSD, NVRAM, phase change storage, hell even magnetic core if you have enough of it.
It also depends on how you did the partitioning. A full partitioning program like gparted will intelligently move and resize partitions. But even if you blindly rewrote the partition table (say you took a 100GB partition, shrank it to 50GB, and added a 50GB partition after it), as long as the filesystem had only used that first 50GB, nothing bad would happen. A partition table just says "partition starts here, ends here."
Just look at the output of fdisk:
Disk /dev/sda: 100 GiB, 107374182400 bytes, 209715200 sectors
Disk model: Virtual disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xaf179753

Device     Boot     Start       End   Sectors Size Id Type
/dev/sda1  *         2048 192944127 192942080  92G 83 Linux
/dev/sda2       192944128 209715199  16771072   8G  5 Extended
/dev/sda5       192946176 209715199  16769024   8G 82 Linux swap / Solaris