LinuxQuestions.org
Linux - Newbie This Linux forum is for members that are new to Linux.
Old 11-22-2020, 10:25 PM   #1
blooperx3
Member
 
Registered: Nov 2020
Posts: 67

Rep: Reputation: Disabled
How can I get back some of my external HDD disk space?


terminal: df -h

size: 5.5 TB
used: 58 M
avail: 5.2 TB
use: 1%

GParted:
size: 5.46 TB
used: 45.09 GB
unused: 5.41 TB

I really don't know whether 5.2 TB or 5.41 TB is available.

Previously, I had created a partition table, partitioned the drive, and encrypted it. Then I used the dd command to erase everything. But the HDD doesn't seem to be back to square one in terms of available size - it's hard to tell. I don't see why I am missing "0.3 TB" from the HDD, according to the terminal output.

I appreciate the help.

Last edited by blooperx3; 11-22-2020 at 10:31 PM. Reason: correction for terminal 'size'
 
Old 11-22-2020, 10:30 PM   #2
berndbausch
LQ Addict
 
Registered: Nov 2013
Location: Tokyo
Distribution: Mostly Ubuntu and Centos
Posts: 6,316

Rep: Reputation: 2002
Gparted and df show the sizes of different things, I would guess. There is also the possibility that one tool shows TiB, and the other TB.

However, you haven't provided sufficient information for us to help any further. Can you share the full output of the df command, and the output of fdisk -l (you will have to be superuser for the fdisk command) or lsblk?

Please use code tags to make the output readable.

Last edited by berndbausch; 11-22-2020 at 10:35 PM.
 
Old 11-22-2020, 11:48 PM   #3
blooperx3
Member
 
Registered: Nov 2020
Posts: 67

Original Poster
Rep: Reputation: Disabled
wrong post...

Last edited by blooperx3; 11-23-2020 at 12:16 AM.
 
Old 11-23-2020, 12:12 AM   #4
blooperx3
Member
 
Registered: Nov 2020
Posts: 67

Original Poster
Rep: Reputation: Disabled
UPDATE:

Code:
DF: 
$ df -h
Filesystem          Size  Used Avail Use% Mounted on
udev                937M     0  937M   0% /dev
tmpfs               202M  3.3M  198M   2% /run
/dev/mmcblk0p1       15G  6.5G  7.8G  46% /
tmpfs              1007M     0 1007M   0% /dev/shm
tmpfs               5.0M  4.0K  5.0M   1% /run/lock
tmpfs              1007M     0 1007M   0% /sys/fs/cgroup
tmpfs              1007M  8.0K 1007M   1% /tmp
/dev/zram0           49M  2.1M   43M   5% /var/log
tmpfs               202M  8.0K  202M   1% /run/user/1000
/dev/mapper/6black  5.5T   89M  5.2T   1% /mnt

------------------------------------ 
$ sudo fdisk -l /dev/mapper/6black
Disk /dev/mapper/6black: 5.5 TiB, 6001156685824 bytes, 11721009152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes

----------------------------- 

$ lsblk
NAME         MAJ:MIN RM    SIZE RO TYPE  MOUNTPOINT
sdb            8:16   0    5.5T  0 disk  
└─sdb1         8:17   0    5.5T  0 part  
  └─6black   253:0    0    5.5T  0 crypt /mnt
mmcblk0      179:0    0   14.9G  0 disk  
└─mmcblk0p1  179:1    0   14.7G  0 part  /
mmcblk1      179:8    0   14.6G  0 disk  
├─mmcblk1p1  179:9    0   12.2G  0 part  
├─mmcblk1p2  179:10   0     16M  0 part  
├─mmcblk1p3  179:11   0      1K  0 part  
├─mmcblk1p5  179:13   0     16M  0 part  
├─mmcblk1p6  179:14   0     16M  0 part  
├─mmcblk1p7  179:15   0    768M  0 part  
├─mmcblk1p8  259:0    0     16M  0 part  
├─mmcblk1p9  259:1    0     32M  0 part  
├─mmcblk1p10 259:2    0    768M  0 part  
├─mmcblk1p11 259:3    0     16M  0 part  
├─mmcblk1p12 259:4    0     16M  0 part  
├─mmcblk1p13 259:5    0     16M  0 part  
├─mmcblk1p14 259:6    0     32M  0 part  
├─mmcblk1p15 259:7    0     16M  0 part  
└─mmcblk1p16 259:8    0    640M  0 part  
mmcblk1boot0 179:16   0      4M  1 disk  
mmcblk1boot1 179:24   0      4M  1 disk  
zram0        254:0    0     50M  0 disk  /var/log
zram1        254:1    0 1006.2M  0 disk  [SWAP]
 
Old 11-23-2020, 03:52 AM   #5
berndbausch
LQ Addict
 
Registered: Nov 2013
Location: Tokyo
Distribution: Mostly Ubuntu and Centos
Posts: 6,316

Rep: Reputation: 2002
Quote:
Originally Posted by blooperx3 View Post
UPDATE:

Code:
$ df -h
Filesystem          Size  Used Avail Use% Mounted on
...
/dev/mapper/6black  5.5T   89M  5.2T   1% /mnt
The device on which the filesystem resides has a size of 5.5 TiB. About 200 GB (or GiB) is overhead for filesystem data structures, 89 MB is used for filesystem objects, and 5.2 TiB is free. I have to admit that I don't know where gparted gets its data.

If you want more details, replace the -h option with -BM for expressing sizes in megabytes, or -BG for gigabytes.
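For what it's worth, the TB-vs-TiB point alone can account for most of the gap between the tools. A quick sanity check - this is just awk arithmetic on the byte count fdisk reported for the device above, not a feature of either tool:

```shell
# The same byte count reads very differently in decimal TB vs binary TiB.
# 6001156685824 is the size fdisk reported for /dev/mapper/6black.
bytes=6001156685824
awk -v b="$bytes" 'BEGIN { printf "%.2f TB   %.2f TiB\n", b / 1e12, b / 1024^4 }'
```

That prints 6.00 TB and 5.46 TiB: the "6 TB" marketing size and gparted's 5.46 figure describe the same disk.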

Last edited by berndbausch; 11-23-2020 at 03:55 AM. Reason: grammar in the last paragraph
 
Old 11-23-2020, 09:33 AM   #6
computersavvy
Senior Member
 
Registered: Aug 2016
Posts: 3,345

Rep: Reputation: 1484
Quote:
Originally Posted by blooperx3 View Post
------------------------------------
$ sudo fdisk -l /dev/mapper/6black
Disk /dev/mapper/6black: 5.5 TiB, 6001156685824 bytes, 11721009152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 4096 bytes / 33553920 bytes

-----------------------------
This cannot tell you anything about the drive itself. That is a logical volume (partition).
Please rerun that command as "sudo fdisk -l" so we can see the physical device information.

Disk sizes lose some space to partitioning, and partitions lose some space to formatting. The df command gives filesystem details, while fdisk gives device details; then you have to know which is GiB and which is GB. All of this makes it a bit confusing, since manufacturers almost always use GB for marketing.
 
Old 11-23-2020, 11:54 AM   #7
kilgoretrout
Senior Member
 
Registered: Oct 2003
Posts: 2,987

Rep: Reputation: 388
Quote:
Previously, i had partition tabled and partition it, and encrypted it. Then I used the dd command to erase everything. But the hdd doesn't seem to be back to square one with available size - hard to tell. I don't see why I am missing "0.3tb" from the hdd - according to terminal output.
Please define what you mean by "back to square one". Your posted output indicates that your 5.5T drive is mounted on /mnt and is encrypted. Is this what you want? Also, please post the output of:
Code:
$ lsblk -f
That will give the filesystem on the drive, as well as the underlying block device that device-mapper is mapping to /dev/mapper/6black.

I'm guessing your 5.5T drive is formatted with ext4, which reserves 5% of the drive by default for root processes and possible rescue actions. In addition, ext* filesystems historically suffered from fragmentation and performance issues when the disk was nearly full, so by reserving 5% the disk never became full enough for those issues to arise. For a discussion of these issues see:

https://unix.stackexchange.com/quest...filesystem-why

Doing the simple math: 5% of 5.5T is 0.275T, and subtracting that leaves approximately 5.2T available for use.
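To sketch that arithmetic against the exact byte count fdisk reported earlier in the thread (just awk, assuming the default 5% reserve):

```shell
# 5% ext4 reserve checked against the fdisk byte count for /dev/mapper/6black.
bytes=6001156685824
awk -v b="$bytes" 'BEGIN {
    tib = b / 1024^4                                   # ~5.46 TiB total
    printf "reserved: %.2f TiB  usable: %.2f TiB\n", tib * 0.05, tib * 0.95
}'
```

which lines up with the ~5.2T Avail that df shows.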

Reformatting your drive to xfs will eliminate the 5% reserve and reclaim the space. Alternatively, you can reset the 5% default reserve to 1%, or even 0, with the ext4 utility tune2fs, as described more fully here:

https://unix.stackexchange.com/quest...esystem#335243
 
1 members found this post helpful.
Old 11-23-2020, 01:08 PM   #8
teckk
LQ Guru
 
Registered: Oct 2004
Distribution: Arch
Posts: 5,137
Blog Entries: 6

Rep: Reputation: 1826
Look at:
Code:
tune2fs -l /dev/sda2

tune2fs -m <reserved-blocks-percentage> /dev/sda2
Read the man page first.
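If you want to see what the reserve actually costs, tune2fs -l reports a "Reserved block count" field you can convert by hand. A sketch - the block count below is hypothetical, chosen as roughly 5% of this thread's 5.46 TiB filesystem at the usual 4096-byte block size; on the real device you would read the actual figure from sudo tune2fs -l /dev/mapper/6black:

```shell
# Convert a "Reserved block count" (as reported by `tune2fs -l`) into GiB.
# 73256307 blocks is a hypothetical figure: ~5% of a 5.46 TiB filesystem
# with 4096-byte blocks.
awk 'BEGIN { printf "reserved: %.2f GiB\n", 73256307 * 4096 / 1024^3 }'
```

That works out to roughly 279 GiB - about the "0.3 TB" the first post couldn't account for.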
 
Old 11-26-2020, 08:43 PM   #9
blooperx3
Member
 
Registered: Nov 2020
Posts: 67

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by kilgoretrout View Post
Please define what you mean by "back to square one". Your posted output indicates that your 5.5T drive is mounted on /mnt and is encrypted. Is this what you want? Also, please post the output of:
Code:
$ lsblk -f
That will give the filesystem on the drive as well as the underlying block device that /dev/mapper is mapping to /dev/mapper/6black.

I'm guessing your 5.5T drive is formatted with ext4 which reserves 5% of the drive by default for root processes and possible rescue actions. In addition historically, ext* filesystems suffered from fragmentation and performance issues when the disk was near full so by reserving 5%, the disk never became so full that these issues arose. For a discussion on these issues see:

https://unix.stackexchange.com/quest...filesystem-why

Doing the simple math, 5% of 5.5T is .275T and subtracting, that leaves approximately 5.2T available for use.

Reformatting your drive to xfs will eliminate the 5% reserve and reclaim the space. Alternatively, you can reset the 5% default reserve to 1% or even 0 by using the ext4 utility, tune2fs, as more fully described here:

https://unix.stackexchange.com/quest...esystem#335243
Is it dangerous to set the reserved space to 1%? I see the person in the article uses 2%, but who's to say that is good either - and they are talking about an SSD, while mine is an HDD.

Last edited by blooperx3; 11-26-2020 at 08:48 PM.
 
Old 11-26-2020, 08:43 PM   #10
blooperx3
Member
 
Registered: Nov 2020
Posts: 67

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by teckk View Post
Look at:
Code:
tune2fs -l /dev/sda2

tune2fs -m <reserved-blocks-percentage> /dev/sda2
Read the man page first.
Is it dangerous to set the reserve space to 1%?
 
Old 11-26-2020, 09:17 PM   #11
rkelsen
Senior Member
 
Registered: Sep 2004
Distribution: slackware
Posts: 4,448
Blog Entries: 7

Rep: Reputation: 2553
Quote:
Originally Posted by blooperx3 View Post
Is it dangerous to set the reserve space to 1%?
Do you need to?

Personally, I'd not touch it unless absolutely necessary.
 
1 members found this post helpful.
Old 11-26-2020, 09:41 PM   #12
blooperx3
Member
 
Registered: Nov 2020
Posts: 67

Original Poster
Rep: Reputation: Disabled
Quote:
Originally Posted by rkelsen View Post
Do you need to?

Personally, I'd not touch it unless absolutely necessary.
I'm leaning the same way as you.
 
Old 11-27-2020, 06:19 AM   #13
teckk
LQ Guru
 
Registered: Oct 2004
Distribution: Arch
Posts: 5,137
Blog Entries: 6

Rep: Reputation: 1826
I set mine to 1%, and have for a while - since drives were 250 GB. I haven't had any problems from it that I know of. There is no reason to tie up 100 GB on a 1 TB drive for the kernel. I think that 10% figure dates back to when drives were 2-3 GB in size; it makes sense at that size.

You'll have to decide for yourself. And maybe more members can give info. Mine are at 1%.
 
Old 11-27-2020, 08:20 AM   #14
wpeckham
LQ Guru
 
Registered: Apr 2010
Location: Continental USA
Distribution: Debian, Ubuntu, RedHat, DSL, Puppy, CentOS, Knoppix, Mint-DE, Sparky, VSIDO, tinycore, Q4OS,Manjaro
Posts: 5,623

Rep: Reputation: 2695
Since drives have gotten so VERY big, and I seldom need the entire thing, I reserve 7% to get better performance. I now find that SSD gives excellent performance with BTRFS (or LVM and EXT4 if I need RAID-6 on a server) and allows for more flexible use of storage.

The days of managing every single block because 10 Meg was a BIG disk are long gone. If you really need more storage, just get a bigger drive. They are cheap these days.
 
Old 11-27-2020, 10:33 AM   #15
kilgoretrout
Senior Member
 
Registered: Oct 2003
Posts: 2,987

Rep: Reputation: 388
Quote:
Is it dangerous to set the reserved space to 1%? I see the person in the article uses 2%, but who's to say that is good either - and they are talking about an SSD, while mine is an HDD.
The main reason for the 5% reserve is fragmentation problems, and the resulting performance issues, when the hard drive is nearly full. On ext2 and ext3 that was a significant issue. On ext4, there were improvements which made it much more fragmentation resistant. The creator and maintainer of ext4, Theodore Ts'o, has this to say on the issue:

https://www.redhat.com/archives/ext3.../msg00026.html

Given that this is an external drive, probably used for backup/archival purposes, I don't think setting it to 1% should be much of an issue. It won't even come into play until you hit about 95% full, and even then the ext4 improvements, along with your use case (files not changing that often), should mean no problems for you.
 
  

