[SOLVED] Ran out of space on a partition but df -h shows me 33% used.
No - it is more aimed at large files, not your scenario. All filesystems are trying to accommodate this.
Given you are prepared to zip the data, (read) performance is not an issue?
I haven't used reiser in a long while - I prefer filesystems with current support.
As to your query re bytes-per-inode (ext4), let mkfs worry about that - use the "-T small" parameter and it will adjust the inode size and ratio for you. Have a look in /etc/mke2fs.conf for the options.
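To see what "-T small" actually changes, here is a minimal sketch that formats two scratch images and compares inode counts. The /tmp paths are placeholders; mkfs.ext4 can format a plain file, so no root access or spare partition is needed to try this.

```shell
# Create two sparse 1 GiB scratch images (paths are placeholders).
truncate -s 1G /tmp/default.img
truncate -s 1G /tmp/small.img

mkfs.ext4 -q -F /tmp/default.img           # default bytes-per-inode ratio
mkfs.ext4 -q -F -T small /tmp/small.img    # "small" profile from /etc/mke2fs.conf

# The "small" profile uses a lower bytes-per-inode ratio, so the
# same-sized filesystem gets several times as many inodes:
tune2fs -l /tmp/default.img | grep 'Inode count'
tune2fs -l /tmp/small.img   | grep 'Inode count'
```

The -F flag just suppresses the "not a block special device, proceed anyway?" prompt when the target is a regular file.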
Thanks for the great answer.
And yes, read performance is not a problem - well, I haven't seen any issues yet.
Well, I've decided after a night's sleep: I'm backing the files up and I'll reformat the partition as ext4 using the -T small parameter. As for tar-ing the files each day, I've thought of something else. Since the files in each day's directory are JSON files constructed from much bigger XML files (from which I strip anything I don't actually need), I'm thinking of testing the creation of one JSON file containing the whole day's worth of transactions, and letting the web page (HTML/AngularJS) deal with the information. I know I can create the single JSON file, but I'm not sure the JavaScript on the client side will be able to handle that amount of information. If that fails, I still have the tar-ing idea: probably un-tar the day's file, when needed by the client application, into a temp folder that gets cleared out by a cron job.
So I'm currently in the process of backing up. Will let you all know how it goes.
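The tar-and-cron fallback described above could be sketched roughly like this. All paths, the $DAY value, and the sample file are hypothetical placeholders - substitute your real layout.

```shell
# Stand-in for an existing day's directory of JSON files:
DATA=/tmp/demo-data
DAY=2016-10-25
mkdir -p "$DATA/days/$DAY" "$DATA/archive"
echo '{"txn": 1}' > "$DATA/days/$DAY/0001.json"

# End of day: archive the day's directory, then remove the originals.
tar -czf "$DATA/archive/$DAY.tar.gz" -C "$DATA/days" "$DAY" && rm -rf "$DATA/days/$DAY"

# On demand from the client application: extract into a temp cache.
mkdir -p /tmp/json-cache
tar -xzf "$DATA/archive/$DAY.tar.gz" -C /tmp/json-cache

# Cron entry to clear cached extractions older than a day:
# 0 3 * * * find /tmp/json-cache -mindepth 1 -mtime +1 -delete
```

A side benefit: one .tar.gz per day also consumes a single inode instead of thousands, which directly attacks the original problem.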
6.9. Migrating from ext4 to XFS
A major change in Red Hat Enterprise Linux 7 is the switch from ext4 to XFS as the default file system. This section highlights the differences that may be encountered when using or administering an XFS file system.
Note
The ext4 file system is still fully supported in Red Hat Enterprise Linux 7 and may be selected at installation if desired. While it is possible to migrate from ext4 to XFS, it is not required.
XFS is preferable when you can't predict how many inodes you will need: XFS adds more inodes when you run out, without any intervention. It does have a small tendency to scatter the inodes around, which can impact performance, but the effect is relatively minor.
A bigger performance impact depends on how the files are organized: directory searches (and deletes) can get slow if all the files are in one directory.
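For anyone hitting the symptom in the thread title - "out of space" while df -h shows plenty free - this is how inode exhaustion shows up on a fixed-inode filesystem like ext4. The mount point here is just an example; substitute the affected partition.

```shell
# Block usage can look fine (e.g. 33% used) ...
df -h /

# ... while inode usage is the real problem: IUse% at 100% means
# every new file fails with "No space left on device".
df -i /
```

With XFS, df -i rarely reaches 100% because the inode table grows on demand, which is exactly the advantage being described above.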
Merci Leslie and jpollard,
I'll certainly keep this in mind, as I'm sure I'll have to add more to the server down the line.
Also, you should (IMHO) always install the system with "LVM = Logical Volume Management" installed. (With Ubuntu, it is a standard option available during installation.) This would have allowed you to increase the available disk space simply by adding another "physical volume" and adding it to the appropriate storage pool. Linux applications would perceive a single mount-point that had suddenly grown bigger. They would not perceive that the storage was actually being provided by multiple devices, nor how the data was being distributed among those devices.
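The "add a disk, grow the pool" workflow described above looks roughly like the following sketch. The device, volume group, and logical volume names are hypothetical placeholders, and these commands need root and a real spare disk - this is an illustration of the sequence, not something to paste blindly.

```shell
# Grow an LVM-backed filesystem after adding a new disk (names are placeholders).
pvcreate /dev/sdc                       # initialize the new disk as a physical volume
vgextend vg0 /dev/sdc                   # add it to the existing volume group
lvextend -l +100%FREE /dev/vg0/lv_data  # grow the logical volume into the new space
resize2fs /dev/vg0/lv_data              # grow an ext4 filesystem online
# (for XFS, run xfs_growfs on the mount point instead of resize2fs)
```

Applications only ever see the mount point, which simply gets bigger - they never know a second device is behind it.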
Thanks sundial,
Will keep this in mind as well. Feeling less and less noobish... lol
I have been a Fedora user for the past 10 years. I recently installed SUSE's Tumbleweed, and I have also run CentOS, Ubuntu, etc.
Red Hat, SUSE, and their derivatives have selected XFS for /home; I in fact use XFS for all partitions. Perhaps it is because I read that XFS is more resilient than ext4 (good recovery after power failure, so some statements claim). I am using XFS with an SSD and with spinning hard disks.
Can I, as a workstation user, tell the difference between ext4 and XFS for performance or recovery? I think not. Have I lost data after a power-failure crash? No.
I've never lost data with ext2, 3, or 4, even after a crash. Files that were partially written at the time of the crash, sure - but XFS has the same problem.
The only real advantages XFS has are automatically extending the inode list and a maximum file size of 8 exbibytes. Both are extent-based filesystems, with ext4 providing upward compatibility from ext2 and 3, and both ext4 and XFS use journals. The main limitations of ext4 are a fixed number of inodes (set at creation time) and a maximum file size of 16 TiB (which, without the 64bit feature, is the volume size limit as well).
On Linux, XFS has been quite reliable (the older Irix-based XFS tended to lose free blocks on a crash).
Bonjour guys,
Thanks for the input. I've pretty much made up my mind to try xfs next time I have a partition added on my servers.