Linux - Server
This forum is for the discussion of Linux software used in a server-related context.
An "inode mismatch" is trivial to fix. That is what fsck is for.
#3 is the only borderline case: if you have a corrupted filesystem, you usually have physical damage. Otherwise the usual fsck will repair it.
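For anyone searching later, the basic repair is along these lines (just a sketch; /dev/sdXn is a placeholder for the affected partition, which must be unmounted first):
Code:
umount /dev/sdXn        # never run fsck on a mounted filesystem
e2fsck -f /dev/sdXn     # -f forces a full check even if the filesystem looks clean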
Now the rate of corruption depends on what filesystem you have on there. Ext3/4 are very good - I haven't seen a filesystem corruption in about 10 years of using them. XFS is also very good. Btrfs I have no experience with.
I have used many disks without partitioning, usually in a NAS, and then partitioned the NAS-created volume. But 16 TB in a single filesystem is not unworkable; even 100 TB and PB sizes are doable. None of those use partitioning, as they are focused on providing the maximum amount of storage.
#1, 2, and 4 are separate issues and don't affect the risk of losing data. Putting them on separate disks would.
I know an inode mismatch is trivial; that was just an example. The point here is that if one partition gets corrupted, I can isolate it from the others and only one set of data is affected. If a whole disk gets corrupted, I am risking all the data residing on it.
Another example I can give is superblock corruption. We tried mounting the filesystem with a backup superblock, but that failed. We then ran fsck to fix it, but that didn't work either. Red Hat then performed an fsck (I am not sure which switches they used), but that left the partition with little to no data in it. Think of this happening in a single-disk scenario.
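For reference, the sequence we attempted was roughly this (a sketch with placeholder names; 32768 is the usual first backup superblock on a 4k-block ext filesystem, and mount's sb= option counts in 1k units, so 32768 * 4 = 131072):
Code:
mount -t ext4 -o sb=131072 /dev/sdXn /mnt   # try mounting from a backup superblock
e2fsck -b 32768 /dev/sdXn                   # or have e2fsck repair using the backup copy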
The partition table is one thing, and the superblocks of each partition are another. Superblocks are created when you format a partition. If the partition table is corrupted, all the partitions on that particular disk are affected. However, if a partition's superblock is corrupted, only that partition is affected.
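Since a corrupt partition table takes out every partition at once, it is worth keeping a dump of the table somewhere safe. A minimal sketch for an MBR disk (placeholder device name):
Code:
sfdisk -d /dev/sdX > sdX-table.txt    # dump the partition table as text
sfdisk /dev/sdX < sdX-table.txt       # rewrite it from the dump if it is ever corrupted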
The point is that the block on disk is the same.
Without a partition table, the superblock sits in the region that would otherwise have been used for the partition table.
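You can see this for yourself by dumping the first few sectors of a disk (read-only, but the device name is a placeholder): on a partitioned MBR disk, sector 0 holds the partition table, while on an unpartitioned ext disk the primary superblock begins 1024 bytes into that same starting region.
Code:
dd if=/dev/sdX bs=512 count=4 2>/dev/null | hexdump -C | less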
Yes, that is correct; that is the scenario where the partition table itself is corrupt. I think you are mixing two things. What I am trying to say is that on an unpartitioned disk, a superblock corruption will bring down the whole disk, whereas on a partitioned disk, a superblock corruption will bring down only the partition it belongs to.
In simple words: partition table corruption affects the whole disk; superblock corruption does not. Those are two different scenarios.
Partitioning is only for administrative use. It does not reduce the risk of a disk failure.
Superblock corruption can affect the entire disk if you are using the whole disk as a single filesystem (without partitioning). If you are using a disk with partitions, each partition has its own superblock, and in case of superblock corruption only that partition is affected.
In case of hardware damage or partition table corruption, the entire disk is affected, irrespective of whether you use the disk with partitions or as a single filesystem.
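When the primary superblock is already too damaged for dumpe2fs to read, a dry run of mke2fs will print where the backups should be without writing anything. A sketch (placeholder device; it reports the right locations only if you pass the same options and block size that were used when the filesystem was created):
Code:
mke2fs -n /dev/sdXn   # -n = do not actually create a filesystem, just show what would be done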
Here is the output from my test machine, which might help:
Code:
[root@rhel6-test ~]# dumpe2fs /dev/vda1 | grep superblock
dumpe2fs 1.41.12 (17-May-2010)
Primary superblock at 1, Group descriptors at 2-3
Backup superblock at 8193, Group descriptors at 8194-8195
Backup superblock at 24577, Group descriptors at 24578-24579
Backup superblock at 40961, Group descriptors at 40962-40963
Backup superblock at 57345, Group descriptors at 57346-57347
Backup superblock at 73729, Group descriptors at 73730-73731
Backup superblock at 204801, Group descriptors at 204802-204803
Backup superblock at 221185, Group descriptors at 221186-221187
Backup superblock at 401409, Group descriptors at 401410-401411
[root@rhel6-test ~]# dumpe2fs /dev/mapper/vg_rhel6test-lv_root | grep superblock
dumpe2fs 1.41.12 (17-May-2010)
Primary superblock at 0, Group descriptors at 1-1
Backup superblock at 32768, Group descriptors at 32769-32769
Backup superblock at 98304, Group descriptors at 98305-98305
Backup superblock at 163840, Group descriptors at 163841-163841
Backup superblock at 229376, Group descriptors at 229377-229377
Backup superblock at 294912, Group descriptors at 294913-294913
Backup superblock at 819200, Group descriptors at 819201-819201
Backup superblock at 884736, Group descriptors at 884737-884737
Backup superblock at 1605632, Group descriptors at 1605633-1605633
Backup superblock at 2654208, Group descriptors at 2654209-2654209
[root@rhel6-test ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_rhel6test-lv_root
12G 5.7G 5.2G 53% /
tmpfs 751M 72K 751M 1% /dev/shm
/dev/vda1 485M 33M 427M 8% /boot
[root@rhel6-test ~]# mount /dev/vdb /data
[root@rhel6-test ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_rhel6test-lv_root
12G 5.7G 5.2G 53% /
tmpfs 751M 72K 751M 1% /dev/shm
/dev/vda1 485M 33M 427M 8% /boot
/dev/vdb 5.0G 138M 4.6G 3% /data
[root@rhel6-test ~]# dumpe2fs /dev/vdb | grep superblock
dumpe2fs 1.41.12 (17-May-2010)
Primary superblock at 0, Group descriptors at 1-1
Backup superblock at 32768, Group descriptors at 32769-32769
Backup superblock at 98304, Group descriptors at 98305-98305
Backup superblock at 163840, Group descriptors at 163841-163841
Backup superblock at 229376, Group descriptors at 229377-229377
Backup superblock at 294912, Group descriptors at 294913-294913
Backup superblock at 819200, Group descriptors at 819201-819201
Backup superblock at 884736, Group descriptors at 884737-884737
[root@rhel6-test ~]#
Here I have two disks on the system: one is /dev/vda and the other is /dev/vdb. The disk /dev/vda has two partitions, /dev/vda1 and /dev/vda2; /dev/vda1 is /boot and /dev/vda2 is a Linux LVM partition. The second disk I am using as a single filesystem: I formatted it directly without creating any partitions. Here is the output from fdisk -l:
Code:
[root@rhel6-test ~]# fdisk -l
Disk /dev/vda: 16.1 GB, 16106127360 bytes
16 heads, 63 sectors/track, 31207 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00088b50
Device Boot Start End Blocks Id System
/dev/vda1 * 3 1018 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/vda2 1018 31208 15215616 8e Linux LVM
Partition 2 does not end on cylinder boundary.
Disk /dev/vdb: 5368 MB, 5368709120 bytes
16 heads, 63 sectors/track, 10402 cylinders
Units = cylinders of 1008 * 512 = 516096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg_rhel6test-lv_root: 12.4 GB, 12423528448 bytes
255 heads, 63 sectors/track, 1510 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg_rhel6test-lv_swap: 3154 MB, 3154116608 bytes
255 heads, 63 sectors/track, 383 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
[root@rhel6-test ~]#
If you look at the output from dumpe2fs, it clearly shows a separate set of superblocks for each partition or LV. If the filesystem inside /dev/vda2 (my root LV) gets superblock corruption, it will affect my root filesystem but not my /boot. However, that is not the case with /dev/vdb: as I am using /dev/vdb as a single filesystem, it has a single set of superblocks, not one set per partition as on a partitioned disk. I hope this helps.
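If anyone wants to reproduce this safely, here is a rough sketch using a loop device instead of a real disk (assumes root and that /dev/loop0 is free; the block numbers match a 4k-block ext4 filesystem like the ones above):
Code:
truncate -s 1G /tmp/test.img                           # sparse 1 GB scratch file
losetup /dev/loop0 /tmp/test.img
mkfs.ext4 -b 4096 /dev/loop0                           # force 4k blocks so backups sit at 32768, 98304, ...
dd if=/dev/zero of=/dev/loop0 bs=1024 seek=1 count=1   # clobber the primary superblock at offset 1024
mount /dev/loop0 /mnt                                  # this should now fail with a bad-superblock error
e2fsck -B 4096 -b 32768 -y /dev/loop0                  # repair from the first backup superblock
losetup -d /dev/loop0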