10-25-2016, 08:32 AM | #1
LQ Newbie | Registered: Oct 2009 | Location: Laval, Québec | Distribution: CentOS 6 | Posts: 20
Ran out of space on a partition but df -h shows me 33% used.
Hi everyone,
I've been using CentOS for a while now, but I've never really administered servers before.
I've been tasked with building a system that will store a large number of logfiles for people to search through.
CentOS is running as a VM, so I asked the VM admins for more space. At first I was given 15GB (we didn't know at the time that the project would require more).
With the 15GB I created a new partition, /dev/sda9, formatted it as ext4 by following various how-tos I found, and mounted it as /work.
Soon I found I needed more space, since I had reached 95% usage, so another 15GB was given.
At that point I moved everything off to different partitions (luckily I had enough free space combined), deleted the partition, and created a new one, still /dev/sda9, with the full 30GB, and mounted it as /work again.
I moved the logs back onto /work, realized there were some I could do without, and managed to get down to 25% usage. I knew I'd still need a good portion of that 30GB...
Anyway, here's my current df -h:
Code:
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 9.5G 2.1G 7.0G 23% /
tmpfs 1.9G 80K 1.9G 1% /dev/shm
/dev/sda1 239M 156M 71M 69% /boot
/dev/sda8 14G 5.5G 7.2G 44% /home
/dev/sda7 969M 1.4M 917M 1% /tmp
/dev/sda5 9.5G 4.2G 4.9G 47% /usr
/dev/sda2 15G 4.6G 9.1G 34% /var
tmpfs 250M 0 250M 0% /var/nagiosramdisk
/dev/sda9 30G 9.0G 19G 33% /work
tmpfs 1.9G 44K 1.9G 1% /opt/omd/sites/prod/tmp
tmpfs 1.9G 4.1M 1.9G 1% /opt/omd/sites/sitename/tmp
As you can see, /dev/sda9 shows 33% usage and 19G available... But if I try to copy a 230-byte file into /work I get: cp: cannot create regular file `./liste.txt': No space left on device
How could that be?
Here's the output of parted print for /dev/sda:
Code:
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 85.9GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 263MB 262MB primary ext4 boot
2 263MB 16.0GB 15.7GB primary ext4
3 16.0GB 26.5GB 10.5GB primary ext4
4 26.5GB 85.9GB 59.4GB extended
5 26.5GB 37.0GB 10.5GB logical ext4
6 37.0GB 38.0GB 1074MB logical linux-swap(v1)
7 38.0GB 39.1GB 1049MB logical ext4
8 39.1GB 53.7GB 14.6GB logical ext4
9 53.7GB 85.9GB 32.2GB logical ext4 lvm
Does anyone have pointers on how to fix this issue?
Many thanks
Last edited by MichelCote; 10-25-2016 at 08:33 AM.
10-25-2016, 08:42 AM | #2
Member | Registered: Nov 2010 | Location: Germany | Distribution: Gentoo | Posts: 286
Most probably you have run out of inodes. Check the inode usage percentage with df -i:
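For example (a quick sketch, using the /work mount point from the df output above):
Code:
# -i reports inode totals, used, free and IUse% instead of block usage
df -i /work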
1 member found this post helpful.
10-25-2016, 08:43 AM | #3
Senior Member | Registered: Dec 2003 | Location: The Key Stone State | Distribution: CentOS, Sabayon and now Gentoo | Posts: 1,249
What do you get when you run df -i on that filesystem?
I have a sneaking suspicion that you are out of inodes.
1 member found this post helpful.
10-25-2016, 08:44 AM | #4
LQ Newbie | Registered: Oct 2009 | Location: Laval, Québec | Distribution: CentOS 6 | Posts: 20 | Original Poster
Right on the nose...
Code:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda9 1966080 1966080 0 100% /work
What can I do to increase the number of inodes?
10-25-2016, 09:00 AM | #5
LQ Newbie | Registered: Oct 2009 | Location: Laval, Québec | Distribution: CentOS 6 | Posts: 20 | Original Poster
Ok...
I read that it is not possible to increase the number of inodes on an existing filesystem...
So as I understand it, one file uses one inode and one directory uses one inode.
So... if I were to gzip files together I would free up inodes. I would just need an extra step to temporarily un-gzip the files I need to access...
Am I pretty much in the ballpark here?
10-25-2016, 09:09 AM | #6
Member | Registered: Nov 2010 | Location: Germany | Distribution: Gentoo | Posts: 286
If you are able to back up the existing files somewhere, recreate the filesystem, and copy the files back, then you should do so and increase the number of inodes when creating the filesystem. Check the "mkfs.ext4" man page, especially the "-i bytes-per-inode" parameter.
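Roughly along these lines (a sketch only; the -i value is just an example, and the right ratio depends on how many files you expect - your current filesystem works out to about 16K bytes per inode, judging by the 1,966,080 inodes on 30GB above):
Code:
# WARNING: mkfs destroys everything on the partition - back up /work somewhere else first
umount /work
# one inode per 4096 bytes instead of ~16384, i.e. roughly four times as many inodes
mkfs.ext4 -i 4096 /dev/sda9
mount /work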
10-25-2016, 09:11 AM | #7
LQ Veteran | Registered: Jan 2011 | Location: Abingdon, VA | Distribution: Catalina | Posts: 9,374
Can you get another 15G allocated to you?
10-25-2016, 09:11 AM | #8
Senior Member | Registered: Aug 2011 | Location: Dublin | Distribution: CentOS 5 / 6 / 7 / 8 | Posts: 3,557
Quote:
Originally Posted by MichelCote
Ok...
I read that it is not possible to increase the number of inodes on an existing filesystem...
So as I understand it, one file uses one inode and one directory uses one inode.
So... if I were to gzip files together I would free up inodes. I would just need an extra step to temporarily un-gzip the files I need to access...
Am I pretty much in the ballpark here?
gzip compresses files on a per-file basis; it's not like DOS pkzip/pkunzip, which creates archives. You could use tar to create .tar archives or zip to create zip files.
However, you'd probably find that if you can allocate sufficient space (even on a temporary basis), you could create a filesystem with more inodes, move your data onto it, reformat your existing partition, and then move the files back.
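In rough outline, something like this (a sketch; the temporary device and mount point are made up, and the -i value is only illustrative):
Code:
# 1. build a temporary filesystem with a higher inode density and copy /work onto it
mkfs.ext4 -i 4096 /dev/sdb1     # hypothetical temporary device
mkdir -p /mnt/worktmp
mount /dev/sdb1 /mnt/worktmp
cp -a /work/. /mnt/worktmp/     # -a preserves ownership, permissions and timestamps
# 2. recreate /work with the same inode density and copy everything back
umount /work
mkfs.ext4 -i 4096 /dev/sda9
mount /work
cp -a /mnt/worktmp/. /work/
umount /mnt/worktmp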
10-25-2016, 10:17 AM | #9
LQ Newbie | Registered: Oct 2009 | Location: Laval, Québec | Distribution: CentOS 6 | Posts: 20 | Original Poster
Quote:
Originally Posted by TenTenths
gzip compresses files on a per-file basis; it's not like DOS pkzip/pkunzip, which creates archives. You could use tar to create .tar archives or zip to create zip files.
However, you'd probably find that if you can allocate sufficient space (even on a temporary basis), you could create a filesystem with more inodes, move your data onto it, reformat your existing partition, and then move the files back.
Yes, sorry, I meant to say tar... I mixed it up with gz since I already gzip the bulk of the files to save space (another reason why my 95% of 15GB turned into 22% of 30GB)...
If I were to go the "recreate the filesystem" route the last posters are suggesting, how many bytes-per-inode do you suggest I'd need?
Also, is the sector size (which is set at 512B) synonymous with the blocksize? If so, should I maybe try to make those smaller as well? I do have gzipped files smaller than 512 bytes...
Just so you know, the logs I'm keeping are transactions from our 700 stores, and the goal is to keep 92 days (the maximum number of days three consecutive months can span). Each store has a varying number of transactions every day, anywhere between 500 and 1000 (if not more), and the files range anywhere from 200 bytes to 2.5KB. That's a lot of files lol...
Thanks for the quick help so far... I've been pulling my hair out not understanding inodes (actually not knowing such a thing existed).
10-25-2016, 10:34 AM | #10
Member | Registered: Nov 2010 | Location: Germany | Distribution: Gentoo | Posts: 286
Quote:
Originally Posted by MichelCote
Also, is the sector size (which is set at 512B) synonymous with the blocksize?
No, not necessarily. With filesystems in the GB range, the blocksize is usually 4096 bytes by default.
Quote:
I do have gzipped files smaller than 512 bytes...
Even a 1-byte file takes up a full block of space.
Quote:
Just so you know, the logs I'm keeping are transactions from our 700 stores, and the goal is to keep 92 days (the maximum number of days three consecutive months can span). Each store has a varying number of transactions every day, anywhere between 500 and 1000 (if not more), and the files range anywhere from 200 bytes to 2.5KB.
700 stores * 92 days * 1000 files = 64,400,000 files. Even if you tinker with the blocksize and set it to 512 bytes, they consume more than 30 GB of space. A large inode table needs a lot of space, too. I think your current 30 GB is simply not enough.
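To put rough numbers on it (a back-of-the-envelope sketch in shell arithmetic; the counts are the OP's own estimates and 256 bytes is ext4's usual default inode size):
Code:
# worst-case file count: 700 stores x 92 days x 1000 files/day
echo $((700 * 92 * 1000))                      # 64400000 files
# bytes-per-inode needed for 30 GiB to hold that many inodes: roughly 500,
# far denser than the ~16K bytes-per-inode /work was created with
echo $((30 * 1024 * 1024 * 1024 / 64400000))   # ~500
# and the inode table alone for 64.4M inodes, at 256 bytes each, is ~16.5 GB
echo $((64400000 * 256))                       # 16486400000 bytes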
And performance-wise, reiserfs might be a better option for many small files.
1 member found this post helpful.
10-25-2016, 11:11 AM | #11
LQ Newbie | Registered: Oct 2009 | Location: Laval, Québec | Distribution: CentOS 6 | Posts: 20 | Original Poster
Quote:
Originally Posted by cepheus11
And performance-wise, reiserfs might be a better option for many small files.
I've read up on reiserfs and this looks promising.
I already have a directory structure for each store, and in each store I have a directory for each day.
From there I see two choices:
1- Use my first idea and tar the logs for each day, bringing the file count in those directories from 500-1000 down to 1. The tar will of course be gzipped as well. I'd free up a bunch of inodes that way, wouldn't I? (See the sketch at the end of this post.)
2- I can request more space when needed, so I might ask for another 30GB, create a new partition with it, and format it as reiserfs (from what I read about reiserfs, I don't need to worry about inodes, right?). Then I'd mount it under a temporary name, move the files over, delete the old partition, and re-mount the new one as /work.
Does that make any sense?
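For option 1, something along these lines per store/day directory would collapse each day's files into a single inode (a sketch; the store and date names are made up to match the layout described above):
Code:
# assumed layout: /work/<store>/<YYYY-MM-DD>/ holding that day's 500-1000 small logs
cd /work/store0001
# pack one day's directory into a single gzipped tar, then remove the originals
tar czf 2016-10-24.tar.gz 2016-10-24/ && rm -rf 2016-10-24/
# later, list or extract on demand when someone needs to search that day
tar tzf 2016-10-24.tar.gz
tar xzf 2016-10-24.tar.gz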
Last edited by MichelCote; 10-25-2016 at 11:18 AM.
Reason: read more about reiserfs so answered my own question about inodes.
10-25-2016, 11:38 AM | #12
Member | Registered: Nov 2010 | Location: Germany | Distribution: Gentoo | Posts: 286
Quote:
Originally Posted by MichelCote
1- Use my first idea and tar the logs for each day, bringing the file count in those directories from 500-1000 down to 1. The tar will of course be gzipped as well. I'd free up a bunch of inodes that way, wouldn't I?
Yes.
Quote:
2- I can request more space when needed
That is, after software has already failed with "no space left on device". At that point you already have missing data, and possibly subsequent failures.
And it will not be enough unless you lower the retention time and manually delete older files, or manually tar and compress something, or have more than one 30GB partition at hand.
Quote:
create a new partition with it and format it as reiserfs (do I need to do anything with inodes with reiserfs?)
Unfortunately, I don't know that.
However, additional thoughts:
With that amount of data you should use a database, not a filesystem. An organization with 700 branches should use a database with proper resources, security, isolation, monitoring, and professional support ready to tackle problems. If you have to juggle 30GB partitions for more than 64 million files and ask for scalability help on this forum, it looks to me like management saving money on the wrong end. I have a feeling this might end with us reading about a catastrophic outage in the IT news. Really, contact management and make absolutely clear that more resources are needed.
10-25-2016, 11:51 AM | #13
LQ Newbie | Registered: Oct 2009 | Location: Laval, Québec | Distribution: CentOS 6 | Posts: 20 | Original Poster
Thanks for the reply, cepheus11.
I did try a MySQL database, but it started bogging down under the sheer amount of information I had to record.
I've only ever used MySQL with PHP (the search feature is on an intranet web page), so I have no knowledge of other, possibly better-performing, database systems.
I wouldn't mind giving this another go, but I'd need to move the database files onto the /work partition because /var just wouldn't stand a chance... (lol)
10-25-2016, 11:55 AM | #14
Member | Registered: Jul 2005 | Location: Montreal, Canada | Distribution: Fedora 31 and Tumbleweed (Gnome versions) | Posts: 311
I presume you can back up /work.
Do so and convert to xfs.
Don't forget to also change /etc/fstab to note that /work is now xfs.
1) xfs is a different architecture, and somewhat more efficient than ext4.
2) xfs's negative aspect: it can't be shrunk, but it can be enlarged.
Benchmarks show xfs performing better than ext3 or ext4.
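A rough sketch of the steps (assuming xfsprogs is installed; the fstab line is only illustrative, and xfs allocates inodes dynamically, so there is no fixed inode table to size up front):
Code:
# back up /work somewhere else first - mkfs.xfs destroys the existing data
umount /work
mkfs.xfs -f /dev/sda9
# change the filesystem type for /work in /etc/fstab, e.g.:
# /dev/sda9   /work   xfs   defaults   0 2
mount /work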
10-25-2016, 11:58 AM | #15
LQ Newbie | Registered: Oct 2009 | Location: Laval, Québec | Distribution: CentOS 6 | Posts: 20 | Original Poster
Quote:
Originally Posted by Lsatenstein
I presume you can back up /work.
Do so and convert to xfs.
Don't forget to also change /etc/fstab to note that /work is now xfs.
1) xfs is a different architecture, and somewhat more efficient than ext4.
2) xfs's negative aspect: it can't be shrunk, but it can be enlarged.
Benchmarks show xfs performing better than ext3 or ext4.
Thanks for the reply. Does xfs solve the inode issue?