Linux - Newbie
This Linux forum is for members that are new to Linux. Just starting out and have a question? If it is not in the man pages or the how-to's, this is the place!
Previously, I had created a partition table on the HDD, partitioned it, and encrypted it. Then I used the dd command to erase everything. But the drive doesn't seem to be back to square one in terms of available size - it's hard to tell. I don't see why I am missing "0.3TB" from the HDD, according to the terminal output.
Appreciate the help.
Last edited by blooperx3; 11-22-2020 at 10:31 PM.
Reason: correction for terminal 'size'
Gparted and df show the sizes of different things, I would guess. There is also the possibility that one tool shows TiB, and the other TB.
However, you don't provide enough information for anyone to help any further. Can you share the full output of the df command, and the output of fdisk -l (you will have to be superuser for the fdisk command) or lsblk?
Please use code tags to make the output readable.
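For example, something like this should gather everything (sudo will prompt for your password; the commands themselves are standard on just about any distribution):
Code:
# filesystem sizes as seen by the kernel
$ df -h
# partition table and device size as reported by the disk itself
$ sudo fdisk -l
# block device tree (disk, partitions, mapper devices)
$ lsblk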
Last edited by berndbausch; 11-22-2020 at 10:35 PM.
$ df -h
Filesystem Size Used Avail Use% Mounted on
...
/dev/mapper/6black 5.5T 89M 5.2T 1% /mnt
The device on which the filesystem resides has a size of 5.5 TiB. Roughly 0.3 TiB (about 300GB, or GiB) is overhead for filesystem data structures, 89MB is used for filesystem objects, and 5.2 TiB is free. I have to admit that I don't know where gparted gets its data from.
If you want more details, replace the -h option with -BM for expressing sizes in megabytes, or -BG for gigabytes.
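For example (the mount point is taken from your output above; note that with GNU df, -BG counts in GiB while --block-size=GB counts in decimal gigabytes, which is one common source of apparently "missing" space):
Code:
# sizes in GiB (binary, 1024^3 bytes)
$ df -BG /mnt
# sizes in GB (decimal, 1000^3 bytes)
$ df --block-size=GB /mnt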
Last edited by berndbausch; 11-23-2020 at 03:55 AM.
Reason: grammar in the last paragraph
This cannot tell you anything about the drive itself. That is a logical volume (partition).
Please rerun that command as "sudo fdisk -l" so we can see the physical device information.
Disks lose some space to partitioning, and partitions lose some space to formatting. The df command gives filesystem details, while fdisk gives device details, and then you have to know which tool reports GiB and which GB. All of this makes it a bit confusing, since manufacturers almost always use GB for marketing.
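As a back-of-the-envelope illustration (assuming the drive is marketed as 6TB - the OP hasn't said which model it is), just converting decimal TB to binary TiB already accounts for a good chunk of the "missing" space:
Code:
# convert 6 decimal terabytes (6 * 10^12 bytes) to binary tebibytes (2^40 bytes)
$ echo "scale=2; 6*10^12 / 2^40" | bc
5.45
So a drive sold as 6TB shows up as roughly 5.5TiB before any partitioning or filesystem overhead is subtracted.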
Previously, I had created a partition table on the HDD, partitioned it, and encrypted it. Then I used the dd command to erase everything. But the drive doesn't seem to be back to square one in terms of available size - it's hard to tell. I don't see why I am missing "0.3TB" from the HDD, according to the terminal output.
Please define what you mean by "back to square one". Your posted output indicates that your 5.5T drive is mounted on /mnt and is encrypted. Is this what you want? Also, please post the output of:
Code:
$ lsblk -f
That will show the filesystem on the drive as well as the underlying block device that device-mapper is presenting as /dev/mapper/6black.
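Purely as an illustration of the kind of layout to expect (the disk name, partition, and blanked-out UUIDs here are made up - your output will differ), a LUKS-encrypted partition opened as 6black might look something like:
Code:
$ lsblk -f
NAME        FSTYPE      LABEL UUID MOUNTPOINT
sdb
└─sdb1      crypto_LUKS       ...
  └─6black  ext4              ...  /mnt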
I'm guessing your 5.5T drive is formatted with ext4, which by default reserves 5% of the filesystem for root processes and possible rescue actions. In addition, historically the ext* filesystems suffered from fragmentation and performance problems when the disk was nearly full, so by reserving 5% the disk never got full enough for those issues to arise. For a discussion of these issues, see:
Doing the simple math, 5% of 5.5T is 0.275T; subtracting that leaves approximately 5.2T available for use.
Reformatting your drive to xfs would eliminate the 5% reserve and reclaim the space. Alternatively, you can reduce the 5% default reserve to 1% or even 0 using the ext4 utility tune2fs, as more fully described here:
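In the meantime, you can check what is currently reserved with tune2fs itself (this assumes the filesystem really is ext4; /dev/mapper/6black is the device from your df output above):
Code:
# report total and reserved block counts; with the default 5% reserve,
# "Reserved block count" is about 0.05 of "Block count"
$ sudo tune2fs -l /dev/mapper/6black | grep -i 'block count'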
Is it dangerous to set the reserved space to 1%? I see the person in the article uses 2%, but who's to say that is good either, as they are talking about an SSD - mine is an HDD.
I set mine to 1%, and have for a while, since drives were 250GB. I haven't had any problems from that, as far as I know. There is no reason to tie up 100GB on a 1TB drive for the kernel. I think the 10% figure dates back to when drives were 2-3GB in size; it made sense at that size.
You'll have to decide for yourself, and maybe more members can give info. Mine are at 1%.
Since drives have gotten so VERY big, and I seldom need the entire thing, I reserve 7% to get better performance. I now find that SSD gives excellent performance with BTRFS (or LVM and EXT4 if I need RAID-6 on a server) and allows for more flexible use of storage.
The days of managing every single block because 10 Meg was a BIG disk are long gone. If you really need more storage, just get a bigger drive. They are cheap these days.
Is it dangerous to set the reserved space to 1%? I see the person in the article uses 2%, but who's to say that is good either, as they are talking about an SSD - mine is an HDD.
The main reason for the 5% reserve is fragmentation problems and the resulting performance issues when the hard drive is nearly full. On ext2 and ext3 that was a significant issue; ext4 introduced improvements that made it much more resistant to fragmentation. The creator and maintainer of ext4, Theodore Ts'o, has this to say on the issue:
Given that this is an external drive, probably used for backup/archival purposes, I don't think setting it to 1% should be much of an issue. It won't even come into play until you hit about 95% full, and even then the ext4 improvements, along with your use case (files not changing very often), should mean no problems for you.
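If you decide to go ahead, lowering the reserve is a single, reversible command (a sketch only - the device name is the mapper device from earlier in this thread, so verify it matches your system first):
Code:
# set the reserved-blocks percentage to 1% of the filesystem
$ sudo tune2fs -m 1 /dev/mapper/6black
# to restore the default later:
$ sudo tune2fs -m 5 /dev/mapper/6black
As far as I know the change takes effect immediately and can be applied while the filesystem is mounted.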