LinuxQuestions.org
Linux - Server: This forum is for the discussion of Linux software used in a server-related context.

Old 04-14-2014, 06:25 AM   #1
imadsani
Member
 
Registered: Aug 2013
Distribution: CentOS 6.5
Posts: 64

Rep: Reputation: Disabled
reclaim hdd wasted space?


Hey,

I've been plagued by this for quite some time: I'm running 2 x 2TB drives in RAID 0, with "/" receiving all the disk space, but I can only see 3.5TB available to it. Is there any way to reclaim the ~300-350GB?
Here's a screenshot

http://i1196.photobucket.com/albums/...ps2c51bc9b.png

I apologize for the extremely noob question; I'm grasping at straws here.

Last edited by imadsani; 04-14-2014 at 06:43 AM.
 
Old 04-14-2014, 07:42 AM   #2
lleb
Senior Member
 
Registered: Dec 2005
Location: Florida
Distribution: CentOS/Fedora/Pop!_OS
Posts: 2,983

Rep: Reputation: 551
#1: Thanks to the manufacturers: over a decade ago they dropped the true binary count of the drives and rounded things up. A 2TB drive is never going to be a true 2TB any longer, so maybe you have 1.8TB or so, and it could be as little as 1.67TB.

#2: When you created your RAID you lost space to the pairing of the drives.

In all, you have not lost as much as you think you have, if any at all, outside of the creation of the RAID.
 
Old 04-14-2014, 08:22 AM   #3
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
Quote:
Originally Posted by lleb View Post
#1: Thanks to the manufacturers: over a decade ago they dropped the true binary count of the drives and rounded things up. A 2TB drive is never going to be a true 2TB any longer, so maybe you have 1.8TB or so, and it could be as little as 1.67TB.
It is not really rounding; it is just a different scale. While a kilobyte usually meant 1024 bytes and a megabyte 1024 kilobytes, the drive manufacturers use a factor of 1000 instead of 1024.
So a 2TB disk has 2000GB, or 2,000,000MB, ..., coming down to 2,000,000,000,000 bytes. If you use the 1024 factor to convert that to "real" terabytes, you get about 1863GB per drive. In a RAID 0 with two of those disks you get about 3.725TB total capacity. When you format such a partition you will, depending on the filesystem in use, also need some space for the journal. And do not forget that most Linux filesystems reserve some space (usually 5%, in this case about 180GB) for the root user, so in the end you have about 3.5TB of usable space. You can change the amount of space reserved for root with the tune2fs command if you use an ext filesystem.
 
Old 04-14-2014, 08:52 AM   #4
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
And don't forget that the inodes, free-list bitmaps, backup superblocks, and the journal also chew into the space.
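A rough sense of just the inode-table cost, assuming mke2fs defaults (one 256-byte inode per 16KiB of space; the other figures are my estimates, not from this thread):

```shell
awk 'BEGIN {
  fs_gib     = 2 * 2 * 10^12 / 1024^3;   # the ~3725 GiB array in question
  inode_frac = 256 / 16384;              # inode tables take 1/64 of the space
  printf "%.0f GiB of inode tables\n", fs_gib * inode_frac
}'
# plus block/inode bitmaps, backup superblocks, and a journal (mke2fs picks
# the journal size from the filesystem size, typically tens to hundreds of MiB)
```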
 
Old 04-15-2014, 05:19 AM   #5
imadsani
Member
 
Registered: Aug 2013
Distribution: CentOS 6.5
Posts: 64

Original Poster
Rep: Reputation: Disabled
Thank you all for your replies. I feel stupid now for asking such a question, but thank you for clearing things up.
 
Old 04-15-2014, 06:53 AM   #6
jpollard
Senior Member
 
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912

Rep: Reputation: 1513
Not a problem. It happens to nearly everybody who deals with disks.

Originally, disk manufacturers measured disk capacity the same way the computer engineers did, using units of 1024. Once disks reached 500MB (or the next generation), marketing got in on the action and immediately realized that using a smaller unit (1000) would make the disks appear larger... In their favor, 1000 is the standard meaning of the kilo prefix. After the battle over units (k = 1000 vs. K = 1024), drive labels kept the decimal units while printing them as TB, leading to a lot of confusion.

The filesystem overhead is USUALLY documented with the filesystem. Some don't document it because they can dynamically expand some tables (XFS, for instance, can't run out of inodes; it just allocates another allocation group. The downside is that inode numbers are based not on a count but on where the inodes are located, so inode numbers used in directories have 64-bit values). Again, that variation makes computing the default overhead tricky.
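One quick way to see both kinds of accounting on a live system is to compare block usage with inode usage; df works on any mounted filesystem, and on XFS the inode figures reflect that dynamic allocation:

```shell
# block-level view: size, used, available space
df -h /
# inode-level view: how many inodes exist and how many are in use
df -i /
```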
 
  

