[SOLVED] Ran out of space on a partition but df -h shows me 33% used.
Hi everyone,
I've been using CentOS for a while now, but I've never really administered servers before.
I've been tasked with creating a system that stores a lot of log files for people to search through.
Anyway, CentOS is running as a VM, so I asked the VM admins for more space. At first I was given 15GB (we didn't know at the time that the project would require more).
So with the 15GB I created a new partition, /dev/sda9, formatted it as ext4 by following various how-tos I found, and mounted it as /work.
Soon I found out I needed more space since I had reached 95% usage, so another 15GB was given.
At that point I basically moved everything off to other partitions; luckily I had enough free space combined...
I deleted the partition and created a new one, still as /dev/sda9 but with the full 30GB, and mounted it as /work again.
I moved the logs back onto the /work partition, realized there were some I could do without, and managed to get down to 25% usage. I knew I'd still need a good portion of that 30GB...
As you can see, /dev/sda9 shows 33% usage, 19G free... But if I try to copy a 230-byte file into /work I get: cp: cannot create regular file `./liste.txt': No space left on device
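(Side note for anyone hitting the same thing: the quickest way to tell whether the problem is inodes rather than data blocks, assuming the filesystem is mounted at /work, is to compare df -h with df -i:)
Code:
# block usage looks fine...
df -h /work
# ...but if IUse% here is 100%, the inode table is full and no new files can be created
df -i /work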
I've read that it isn't possible to increase the number of inodes on an existing FS...
As I understand it, 1 file uses 1 inode and 1 directory uses 1 inode.
So... if I were to gzip files together I would free up inodes. I would just need to add an extra step to temporarily un-gzip the files I need to access...
If you are able to back up the existing files somewhere, recreate the filesystem and copy the files back, then you should do so and increase the number of inodes when creating the filesystem. Check the "mkfs.ext4" man page, especially the "-i bytes-per-inode" parameter.
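A rough sketch of what that could look like (the device name and ratio are just examples, and this destroys everything on the partition, so only after the data is safely copied elsewhere):
Code:
# WARNING: wipes /dev/sda9 -- only run after the data has been copied somewhere safe
umount /work
# one inode per 4096 bytes instead of the usual default of 16384,
# i.e. roughly four times as many inodes on the same 30GB
mkfs.ext4 -i 4096 /dev/sda9
mount /dev/sda9 /work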
Quote:
I've read that it isn't possible to increase the number of inodes on an existing FS...
As I understand it, 1 file uses 1 inode and 1 directory uses 1 inode.
So... if I were to gzip files together I would free up inodes. I would just need to add an extra step to temporarily un-gzip the files I need to access...
Am I pretty much in the ballpark here?
gzip compresses files on a single-file basis; it's not like the DOS "pkzip/pkunzip", which creates archives. You could use tar to create .tar archives or zip to create .zip files.
However, you'd probably find that if you can allocate sufficient space (even on a temporary basis) you could create a filesystem with more inodes, move your data to that, reformat your existing partition, and then move the files back.
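To illustrate the tar idea (the directory layout and names here are made up for the example), bundling one day's logs into a single compressed archive replaces hundreds of inodes with one:
Code:
# hypothetical layout: /work/store0042/2016-10-24/ holds one day's transaction files
cd /work/store0042
tar -czf 2016-10-24.tar.gz 2016-10-24/   # one compressed archive = one inode
rm -rf 2016-10-24/                       # frees the inodes held by the individual files
# later, to look at those logs again:
tar -xzf 2016-10-24.tar.gz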
Quote:
gzip compresses files on a single-file basis; it's not like the DOS "pkzip/pkunzip", which creates archives. You could use tar to create .tar archives or zip to create .zip files.
However, you'd probably find that if you can allocate sufficient space (even on a temporary basis) you could create a filesystem with more inodes, move your data to that, reformat your existing partition, and then move the files back.
Yes, sorry, I meant to say tar... I mixed it up with gzip since I already gzip the bulk of the files to take up less space (another reason my 95% of 15GB turned into 22% of 30GB)...
If I were to go the "recreate the filesystem" route the last posters are suggesting, how many bytes-per-inode would you suggest I need?
Also, is sector size (which is set at 512B) synonymous with block size?
If so, should I maybe attempt to make those smaller as well? I do have gzipped files smaller than 512 bytes...
Just so you know, the logs I'm keeping are transactions from our 700 stores and the goal is to keep 92 days' worth (the maximum number of days a 3-month span can hold). Each store has a varying number of transactions every day, anywhere between 500 and 1000 (if not more), and the files range anywhere from 200 bytes to 2.5KB. That's a lot of files lol...
Thanks for the quick help so far... I've been pulling my hair out not understanding inodes (actually I didn't even know such a thing existed).
Quote:
Also, is sector size (which is set at 512B) synonymous with block size?
No, not necessarily. With filesystem sizes in the GB range, the block size is usually 4096 bytes by default.
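For what it's worth, you can check what the existing ext4 filesystem was created with (device name assumed to be /dev/sda9 here):
Code:
# show block size, inode size and inode counts of the existing filesystem
tune2fs -l /dev/sda9 | grep -Ei 'block size|inode size|inode count|free inodes'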
Quote:
I do have gzipped files smaller than 512 bytes...
Even a 1-byte file will take up one full block of space.
Quote:
Just so you know, the logs I'm keeping are transactions from our 700 stores and the goal is to keep 92 days' worth (the maximum number of days a 3-month span can hold). Each store has a varying number of transactions every day, anywhere between 500 and 1000 (if not more), and the files range anywhere from 200 bytes to 2.5KB.
700 stores * 92 days * 1000 files = 64,400,000 files. Even if you tinker with the block size and set it to 512 bytes, they consume > 30GB of space. A large inode table needs a lot of space, too. I think your current 30GB is simply not enough.
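Back-of-the-envelope, assuming the typical 256-byte ext4 inode (just shell arithmetic, to show where the estimate comes from):
Code:
# files needed for the full retention window
echo $(( 700 * 92 * 1000 ))                        # 64400000
# data blocks alone at 512 bytes per file, in GB
echo $(( 64400000 * 512 / 1024 / 1024 / 1024 ))    # ~30
# inode table for 64.4M inodes at 256 bytes per inode, in GB
echo $(( 64400000 * 256 / 1024 / 1024 / 1024 ))    # ~15, on top of the data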
And performance-wise, reiserfs might be a better option for many small files.
Quote:
And performance-wise, reiserfs might be a better option for many small files.
I've read up on reiserfs and this looks promising.
I already have a directory structure for each store, and in each store I have a directory for each day.
So from there I see two choices:
1- Use my first idea to tar the logs for each day, thus bringing the file count in these directories from 500-1000 down to 1. The tar will of course be gzipped as well. I'll free up a bunch of inodes that way, won't I?
2- I can request more space when needed, so I might ask for another 30GB, create a new partition with it and format it as reiserfs (from what I read about reiserfs I don't need to worry about inodes, right?). Then I'd mount it under a temporary name, move over the files, delete the old partition and re-mount the new one as /work (roughly like the sketch below).
Does that make any sense?
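(A rough outline of option 2, with a made-up device name, /dev/sdb1, for the new space, and assuming the reiserfs tools are installed; not something I've tested:)
Code:
# format the new partition as reiserfs (destroys anything on /dev/sdb1)
mkfs.reiserfs /dev/sdb1
# mount it under a temporary name and copy the logs over
mkdir -p /mnt/newwork
mount /dev/sdb1 /mnt/newwork
cp -a /work/. /mnt/newwork/
# swap the mounts once the copy is verified
umount /work
umount /mnt/newwork
mount /dev/sdb1 /work
# remember to update /etc/fstab so the new partition mounts at /work on boot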
Last edited by MichelCote; 10-25-2016 at 11:18 AM.
Reason: read more about reiserfs so answered my own question about inodes.
Quote:
1- Use my first idea to tar the logs for each day, thus bringing the file count in these directories from 500-1000 down to 1. The tar will of course be gzipped as well. I'll free up a bunch of inodes that way, won't I?
Yes.
Quote:
2- I can request more space when needed
That is, when software fails because of "no space left on device". You already have missing data at that point, and possibly subsequent failures.
Quote:
so I might ask for another 30GB
Which will not be enough unless you lower the retention time and manually delete older files, or manually tar-zip something, or have more than one 30GB partition at hand.
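(A retention cleanup along those lines could be as simple as a daily cron job; the path and age here are just placeholders:)
Code:
# delete transaction files older than 92 days under /work
find /work -type f -mtime +92 -delete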
Quote:
create a new partition with it and format it as reiserfs (do I need to do anything with inodes with reiserfs?)
Unfortunately, I don't know that.
However, additional thoughts:
With that amount of data, you should use a database, not a filesystem. An organization with 700 branches must use a database with proper resources, security, isolation, monitoring and professional support ready to tackle problems. If you have to juggle 30GB partitions for >64M files and ask for help on scalability in this forum, this looks to me like management saving money on the wrong end. I have a feeling this might end with us reading about a catastrophic outage in the IT news. Really, contact management and make absolutely clear that more resources are needed.
I did try a MySQL database, but it started bogging down under the sheer amount of information I had to record.
I've only ever used MySQL with PHP (the search feature is on an intranet web page), so I have no knowledge of other, possibly better-performing, database systems.
I wouldn't mind giving this another go, but I'd need to move the database files onto the /work/ partition because my /var/ just wouldn't stand a chance... (lol)
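(If I go back to MySQL, moving its data directory onto /work looks like it's mostly a matter of stopping the server, copying the files and pointing datadir at the new path; the paths below are assumptions, and on CentOS the SELinux context may need adjusting too:)
Code:
# stop MySQL and copy the existing data directory onto the big partition
service mysqld stop
cp -a /var/lib/mysql /work/mysql
# point the server at the new location in /etc/my.cnf:
#   [mysqld]
#   datadir=/work/mysql
service mysqld start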