Linux - Hardware: This forum is for Hardware issues.
Having trouble installing a piece of hardware? Want to know if that peripheral is compatible with Linux?
I have a Highpoint RAID controller (HPT370) in my RH 9.0 system. I just installed the controller (BIOS 2.34) and two Samsung 160 GB drives in a RAID-1 config. Everything seems to be fine: the BIOS of the controller reports a 160.04 GB RAID-1 array to be fully working.
Ok, I partitioned the array into 1 primary EXT3 partition. Both cfdisk and fdisk show a disk size of 156 GB. Sounds normal to me for a 160 GB disk...
Anyhow, after making the EXT3 fs (I use mke2fs -j), when I do a df -h, it shows that my RAID array is only 147 GB big (where are those 9 GB?) and only 140 GB free, while there's only 33 MB used (by the system, that is).
"Anyhow, after making the EXT3 fs (I use mke2fs -j), when I do a df -h, it shows that my RAID array is only 147Gb big (where are those 9Gb?) and only 140GB free, while there's only 33MB used (by the system that is)."
Hard drive manufacturers lie about the size of their drives by defining:
1K = 1000 bytes
1M = 1000K
1G = 1000M
Also, when you create a Linux filesystem, roughly 5% of your partition space is used for the filesystem superblocks and inodes. This filesystem overhead is not reported when counting either used space or free space. It is a high overhead, but Linux filesystems contain some redundancy, which makes recovering broken filesystems much easier.
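Those decimal units alone explain most of the gap. As a rough sketch in shell arithmetic (the 160000000000 figure is the advertised decimal size, not the exact sector count of any particular drive):

```shell
# A "160 GB" drive as sold (decimal) vs. as df/fdisk report it (binary)
bytes=160000000000
gib=$(( bytes / 1024 / 1024 / 1024 ))
echo "160 GB decimal = ${gib} GiB"   # prints: 160 GB decimal = 149 GiB
```

So about 11 GB vanishes to units before any filesystem overhead is counted.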
Yes, I know the story of the manufacturers and the commercial kilo/mega/gigabytes (1000 vs. 1024), and that's what I took into account, but since I'm missing 20 GB I thought that a little too much.
I just noticed that my 2x 80 GB RAID-1 array (there are two arrays in the system) only has 74 GB available... but OK, if this is true, I'm still missing 4 GB... can that be explained by the system's overhead?
When you format an ext3 partition, by default 5% is reserved for root, which would account for the missing 4 GB. Reserved space is supposed to reduce fragmentation and allow root to log in to perform maintenance if the filesystem's free space reaches zero. You can reduce the reserve using the tune2fs command.
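A rough sketch of that 5% reserve against the ~147 GB filesystem reported above (the device name in the comment is only an example, adjust for your array):

```shell
# 5% root reserve on a ~147 GB filesystem, using the rounded figures
# reported above; exact numbers depend on the real block count
fs_gb=147
reserve_gb=$(( fs_gb * 5 / 100 ))
echo "reserved for root: ~${reserve_gb} GB"   # close to the 147 vs. 140 gap
# To shrink the reserve to 1% on a real filesystem (example device name):
#   tune2fs -m 1 /dev/hde1
```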
HD manufacturers use the metric system, so they use kilobytes (thousand), megabytes (million), and gigabytes (billion). This is respectively 1000 bytes, 1000 KB, and 1000 MB.
Linux and other OSes report these as 1024 bytes, 1024 KB, or 1024 MB instead, i.e. 2 to the tenth power for each step. Strictly speaking, those binary units should be referred to as kibibytes (KiB), mebibytes (MiB), and gibibytes (GiB). A difference of 24 per step doesn't seem like a lot, but it compounds, and that can greatly contribute to Linux "seeing less" of your hard drive.
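To see how that difference compounds per prefix level, a quick awk sketch:

```shell
# Ratio of binary to decimal units at each prefix level
awk 'BEGIN {
    r = 1024 / 1000
    printf "KiB/KB = %.6f\n", r      # ~2.4% at the kilo level
    printf "MiB/MB = %.6f\n", r^2    # ~4.9% at the mega level
    printf "GiB/GB = %.6f\n", r^3    # ~7.4% at the giga level
}'
```

At the giga level the discrepancy is already over 7%, which is why a "160 GB" drive shows up as roughly 149 GiB.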
It's not the operating system's fault when partitioning and formatting storage media. In machine language everything is base 2; we humans state our number system as base 10. One kilobyte to us is 1000 bytes, but in machine language it's 1024 bytes.
I have three 120 gigabyte hard drives from different manufacturers. One hard drive is 115 gigabytes after formatting. The second is 112 gigabytes after formatting. The other is about 112 gigabytes after formatting (I think). You have to calculate the cylinders, heads, and sectors, and this will give you the true capacity.
Did you read the output of mke2fs to see exactly what it did? It could be that it made a huge journal allocation to compensate for your large hard drive.
You can try other filesystems.
young1024, I do not know where you came from but the computer metric system is bits, bytes, kilobytes, megabytes, gigabytes, terabytes.
Might want to take a look at ext3 in general; it's by far the hog of the journaling filesystems and is really just ext2 with a journal. Don't get me wrong, ext2 is and always will be one of the best filesystems ever for repairability, stability, and a whole lot of other good words that end in -bility, but when it gets down to it, you start losing space to filesystem structure overhead as partition sizes increase. Might want to take a look at Reiser or XFS...
This is probably not due to the filesystem reserving this space or to marketing vs. real gigabytes, but more likely due to a kernel limit.
Until recent kernels, disks larger than 128 GB could not be accessed. The 128 GB limit exists because the drive geometry is described by 8 bits for sector, 16 bits for cylinder, and 4 bits for head, a total of 28 bits; 2^28 is 268,435,456 blocks, a block being 512 bytes.
Newer kernels, at least newer than 2.4.18, support the newer ATA-6 addressing scheme, which allows disks up to 2^48 sectors in size: 128 PB (a petabyte is 1024 terabytes; a terabyte is 1024 gigabytes).
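The 28-bit limit works out like this (a shell sketch using the 512-byte block size stated above):

```shell
# 28-bit addressing ceiling: 2^28 sectors of 512 bytes each
sectors=$(( 1 << 28 ))
bytes=$(( sectors * 512 ))
echo "LBA28 limit: ${bytes} bytes = $(( bytes / 1024 / 1024 / 1024 )) GiB"
# prints: LBA28 limit: 137438953472 bytes = 128 GiB
```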
C = Cylinders
H = Heads (usually 255 heads for multi-gigabyte hard drives)
S = Sectors per track (usually fixed at 63, each 512 bytes)
This information should be noted on the hard drive's label. Not all manufacturers state this information.
On my Hitachi 120 GB model 180GXP there are 16383 cylinders, 255 heads, and 63 sectors. After creating a partition and formatting, it comes to 115 GB. It does not matter what filesystem I use; I lose about 5 GB from the stated 120 GB. I also have Seagate and Western Digital drives which are 120 GB. My Western Digital drives lose about 8 GB and the Seagate loses 10 GB.
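For what it's worth, multiplying out those quoted CHS figures is just C x H x S x 512 (a sketch; note that 16383 is a capped value that large ATA drives report in their geometry, so this only approximates the real capacity and actually overshoots the advertised 120 GB):

```shell
# Capacity from the Hitachi 180GXP label values quoted above
# (16383 cylinders is a nominal, capped figure on large drives)
c=16383; h=255; s=63
bytes=$(( c * h * s * 512 ))
echo "${bytes} bytes = $(( bytes / 1000000000 )) GB decimal, $(( bytes / 1024 / 1024 / 1024 )) GiB"
```

The true capacity of a large drive comes from its LBA sector count, not the label geometry, which is why the CHS math and the formatted size rarely agree exactly.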