Old 10-25-2016, 08:32 AM   #1
MichelCote
LQ Newbie
 
Registered: Oct 2009
Location: Laval Québec
Distribution: CentOS 6
Posts: 20

Rep: Reputation: 0
Ran out of space on a partition but df -h shows me 33% used.


Hi everyone,

I've been using CentOS for a while now, but I've never really administered servers before.

I've been tasked with creating a system that will store a large number of log files for people to search through.

Anyway, the CentOS system runs as a VM, so I asked the VM admins for more space. At first I was given 15GB (we didn't know at the time that the project would require more).

So with the 15GB I created a new partition, /dev/sda9, formatted it as ext4 by following various how-tos I found, and mounted it as /work.

Soon I found out I needed more space, since I had reached 95% usage, so I was given another 15GB.

At that point I moved everything off to different partitions; luckily, I had enough free space combined...

I deleted the partition and created a new one, still as /dev/sda9 but with the full 30GB, and mounted it as /work again.

I moved the logs back onto the /work partition, realized there were some I could do without, and managed to get down to 25% usage. I knew I'd still need a good portion of that 30GB...

Anyway, here's my current df -h:
Code:
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3       9.5G  2.1G  7.0G  23% /
tmpfs           1.9G   80K  1.9G   1% /dev/shm
/dev/sda1       239M  156M   71M  69% /boot
/dev/sda8        14G  5.5G  7.2G  44% /home
/dev/sda7       969M  1.4M  917M   1% /tmp
/dev/sda5       9.5G  4.2G  4.9G  47% /usr
/dev/sda2        15G  4.6G  9.1G  34% /var
tmpfs           250M     0  250M   0% /var/nagiosramdisk
/dev/sda9        30G  9.0G   19G  33% /work
tmpfs           1.9G   44K  1.9G   1% /opt/omd/sites/prod/tmp
tmpfs           1.9G  4.1M  1.9G   1% /opt/omd/sites/sitename/tmp
As you can see, /dev/sda9 shows 33% usage, 19G free... But if I try to copy a 230-byte file into /work I get: cp: cannot create regular file `./liste.txt': No space left on device

How could that be?

Here's the result of parted /dev/sda print:
Code:
Model: VMware Virtual disk (scsi)
Disk /dev/sda: 85.9GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type      File system     Flags
 1      1049kB  263MB   262MB   primary   ext4            boot
 2      263MB   16.0GB  15.7GB  primary   ext4
 3      16.0GB  26.5GB  10.5GB  primary   ext4
 4      26.5GB  85.9GB  59.4GB  extended
 5      26.5GB  37.0GB  10.5GB  logical   ext4
 6      37.0GB  38.0GB  1074MB  logical   linux-swap(v1)
 7      38.0GB  39.1GB  1049MB  logical   ext4
 8      39.1GB  53.7GB  14.6GB  logical   ext4
 9      53.7GB  85.9GB  32.2GB  logical   ext4            lvm
Does anyone have pointers on how to fix this issue?

Many thanks

Last edited by MichelCote; 10-25-2016 at 08:33 AM.
 
Old 10-25-2016, 08:42 AM   #2
cepheus11
Member
 
Registered: Nov 2010
Location: Germany
Distribution: Gentoo
Posts: 286

Rep: Reputation: 91
Most probably you've run out of inodes. Check the inode usage percentage with df:

Code:
df -i /work
 
1 member found this post helpful.
Old 10-25-2016, 08:43 AM   #3
lazydog
Senior Member
 
Registered: Dec 2003
Location: The Key Stone State
Distribution: CentOS, Sabayon and now Gentoo
Posts: 1,249
Blog Entries: 3

Rep: Reputation: 194
What do you get when you run the following:
Code:
df -i
I have a sneaking suspicion that you are out of inodes.
 
1 member found this post helpful.
Old 10-25-2016, 08:44 AM   #4
MichelCote
LQ Newbie
 
Registered: Oct 2009
Location: Laval Québec
Distribution: CentOS 6
Posts: 20

Original Poster
Rep: Reputation: 0
Right on the nose...

Code:
Filesystem      Inodes   IUsed IFree IUse% Mounted on
/dev/sda9      1966080 1966080     0  100% /work
What can I do to increase the number of inodes?
 
Old 10-25-2016, 09:00 AM   #5
MichelCote
LQ Newbie
 
Registered: Oct 2009
Location: Laval Québec
Distribution: CentOS 6
Posts: 20

Original Poster
Rep: Reputation: 0
Ok...

I read that it is not possible to increase the number of inodes on an existing filesystem...

So as I understand it, 1 file uses 1 inode and 1 directory uses 1 inode.

So... if I were to gzip files together, I would free up inodes. I would just need to add an extra feature to temporarily un-gzip the files I need to access...

Am I pretty much in the ballpark here?
 
Old 10-25-2016, 09:09 AM   #6
cepheus11
Member
 
Registered: Nov 2010
Location: Germany
Distribution: Gentoo
Posts: 286

Rep: Reputation: 91
If you are able to back up the existing files somewhere, recreate the filesystem, and copy the files back, then you should do so, increasing the number of inodes when creating the filesystem. Check the mkfs.ext4 man page, especially the "-i bytes-per-inode" parameter.
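
For reference, a minimal sketch of that procedure (the backup destination and the bytes-per-inode value here are only examples; size bytes-per-inode from your expected file count):

Code:
# back up, recreate the filesystem with more inodes, restore
cp -a /work /home/work-backup   # any location with enough free space
umount /work
# one inode per 4 KiB instead of the ext4 default of 16 KiB;
# note that -i cannot be smaller than the block size
mkfs.ext4 -i 4096 /dev/sda9
mount /dev/sda9 /work
cp -a /home/work-backup/. /work/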
 
Old 10-25-2016, 09:11 AM   #7
Habitual
LQ Veteran
 
Registered: Jan 2011
Location: Abingdon, VA
Distribution: Catalina
Posts: 9,374
Blog Entries: 37

Rep: Reputation: Disabled
Can you get another 15G allocated to you?
 
Old 10-25-2016, 09:11 AM   #8
TenTenths
Senior Member
 
Registered: Aug 2011
Location: Dublin
Distribution: Centos 5 / 6 / 7 / 8
Posts: 3,557

Rep: Reputation: 1600
Quote:
Originally Posted by MichelCote View Post
Ok...

I read that it is not possible to increase the number of inodes on an existing filesystem...

So as I understand it, 1 file uses 1 inode and 1 directory uses 1 inode.

So... if I were to gzip files together, I would free up inodes. I would just need to add an extra feature to temporarily un-gzip the files I need to access...

Am I pretty much in the ballpark here?
gzip compresses files on a single-file basis; it's not like the DOS pkzip/pkunzip, which creates archives. You could use tar to create .tar archives or zip to create .zip files.

However, if you can allocate sufficient space (even on a temporary basis), you could create a filesystem with more inodes, move your data to it, reformat your existing partition, and then move the files back.
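
For example, per store and per day, something like this (the paths are just an illustration):

Code:
# bundle one day's logs into a single compressed archive,
# then remove the originals to reclaim their inodes
tar -czf /work/store0001/2016-10-24.tar.gz -C /work/store0001 2016-10-24 \
  && rm -rf /work/store0001/2016-10-24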
 
Old 10-25-2016, 10:17 AM   #9
MichelCote
LQ Newbie
 
Registered: Oct 2009
Location: Laval Québec
Distribution: CentOS 6
Posts: 20

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by TenTenths View Post
gzip compresses files on a single-file basis; it's not like the DOS pkzip/pkunzip, which creates archives. You could use tar to create .tar archives or zip to create .zip files.

However, if you can allocate sufficient space (even on a temporary basis), you could create a filesystem with more inodes, move your data to it, reformat your existing partition, and then move the files back.
Yes, sorry, I meant to say tar... I mixed it up with gzip since I already gzip the bulk of the files to take up less space (another reason why my 95% of 15GB turned into 22% of 30GB)...

If I were to go the "recreate the filesystem" route the last posters are suggesting, how many bytes-per-inode do you suggest I'd need?

Also, is sector size (which is set at 512B) synonymous with block size?
If so, should I maybe attempt to make those smaller as well? I do have gzipped files smaller than 512 bytes...

Just so you know, the logs I'm keeping are transactions from our 700 stores, and the goal is to keep 92 days (the maximum number of days that three months can span). Each store has a varying number of transactions every day, anywhere between 500 and 1000 (if not more), and these files range anywhere from 200 bytes to 2.5KB. That's a lot of files, lol...

Thanks for the quick help so far... I've been pulling my hair out not understanding inodes (actually, not knowing such a thing existed).
 
Old 10-25-2016, 10:34 AM   #10
cepheus11
Member
 
Registered: Nov 2010
Location: Germany
Distribution: Gentoo
Posts: 286

Rep: Reputation: 91
Quote:
Originally Posted by MichelCote View Post
Also, is sector size (which is set at 512B) synonymous with block size?
No, not necessarily. For filesystems in the GB range, the block size is usually 4096 bytes by default.

Quote:
I do have gzipped files smaller than 512 bytes...
Even a 1-byte file will take up a full block of space.

Quote:
Just so you know, the logs I'm keeping are transactions from our 700 stores, and the goal is to keep 92 days (the maximum number of days that three months can span). Each store has a varying number of transactions every day, anywhere between 500 and 1000 (if not more), and these files range anywhere from 200 bytes to 2.5KB.
700 stores * 92 days * 1000 files = 64,400,000 files. Even if you tinker with the block size and set it to 512 bytes, they will consume more than 30GB of space. A large inode table needs a lot of space, too. I think your current 30GB is simply not enough.
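
Rough numbers, assuming the ext4 default inode size of 256 bytes:

Code:
64,400,000 files  x 512 B minimum each  ≈ 33.0 GB in data blocks alone
64,400,000 inodes x 256 B per inode     ≈ 16.5 GB of inode table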

And performance-wise, reiserfs might be a better option for many small files.
 
1 member found this post helpful.
Old 10-25-2016, 11:11 AM   #11
MichelCote
LQ Newbie
 
Registered: Oct 2009
Location: Laval Québec
Distribution: CentOS 6
Posts: 20

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by cepheus11 View Post
And performance-wise, reiserfs might be a better option for many small files.
I've read up on reiserfs and this looks promising.

I already have a directory structure for each store, and in each store I have a directory for each day.

So from there I see two choices:

1. Use my first idea and tar the logs for each day, thus bringing the file count in these directories from 500-1000 down to 1. The tar will of course be gzipped as well. I'll free up a bunch of inodes that way, won't I?

2. I can request more space when needed, so I might ask for another 30GB, create a new partition with it, and format it as reiserfs (from what I read about reiserfs, I don't need to worry about inodes, right?). Then I'd mount it under a temporary name, move the files over, delete the old partition, and re-mount the new one as /work, something like the sketch below.
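
Roughly this, I imagine (the new device name /dev/sdb1 is hypothetical, and I'd have to check that the stock CentOS 6 kernel supports reiserfs at all):

Code:
# format the new space and mount it under a temporary name
mkfs.reiserfs /dev/sdb1
mkdir /work.new
mount /dev/sdb1 /work.new
# copy the logs over, then swap the mounts
cp -a /work/. /work.new/
umount /work /work.new
mount /dev/sdb1 /work
# (and update /etc/fstab to match)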

Does that make any sense?

Last edited by MichelCote; 10-25-2016 at 11:18 AM. Reason: read more about reiserfs so answered my own question about inodes.
 
Old 10-25-2016, 11:38 AM   #12
cepheus11
Member
 
Registered: Nov 2010
Location: Germany
Distribution: Gentoo
Posts: 286

Rep: Reputation: 91
Quote:
Originally Posted by MichelCote View Post
1. Use my first idea and tar the logs for each day, thus bringing the file count in these directories from 500-1000 down to 1. The tar will of course be gzipped as well. I'll free up a bunch of inodes that way, won't I?
Yes.

Quote:
2. I can request more space when needed
That is, when software fails with "no space left on device". You already have missing data at that point, and possibly subsequent failures.

Quote:
so I might ask for another 30GB
Which will not be enough unless you lower the retention time and manually delete older files, manually tar-zip things, or have more than one 30GB partition at hand.

Quote:
create a new partition with it and format it as reiserfs (do I need to do anything with inodes with reiserfs?)
Unfortunately, I don't know that.


However, additional thoughts:

With that amount of data, you should use a database, not a filesystem. An organization with 700 branches must use a database with proper resources, security, isolation, monitoring, and professional support ready to tackle problems. If you have to juggle 30GB partitions for more than 64 million files and ask for help on scalability in this forum, it looks to me like management saving money on the wrong end. I have a feeling this might end with us reading about a catastrophic outage in the IT news. Really, contact management and make it absolutely clear that more resources are needed.
 
Old 10-25-2016, 11:51 AM   #13
MichelCote
LQ Newbie
 
Registered: Oct 2009
Location: Laval Québec
Distribution: CentOS 6
Posts: 20

Original Poster
Rep: Reputation: 0
Thanks for the reply cepheus11,

I did try a MySQL database, but it started bogging down under the sheer amount of information I had to record.

I've only ever used MySQL with PHP (the search feature is on an intranet web page), so I have no knowledge of other, possibly better-performing, database systems.

I wouldn't mind giving this another go, but I'd need to move the database files onto the /work partition, because my /var just wouldn't stand a chance... (lol)
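
If I try that, I guess it would be something like this (a sketch; the paths assume the default CentOS 6 MySQL layout):

Code:
# stop MySQL, move the data directory to the big partition,
# and leave a symlink behind so the default paths keep working
service mysqld stop
mv /var/lib/mysql /work/mysql
ln -s /work/mysql /var/lib/mysql
# note: with SELinux enforcing, the new location may also need the
# right context (semanage fcontext + restorecon)
service mysqld start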
 
Old 10-25-2016, 11:55 AM   #14
Lsatenstein
Member
 
Registered: Jul 2005
Location: Montreal Canada
Distribution: Fedora 31 and Tumbleweed (Gnome versions)
Posts: 311
Blog Entries: 1

Rep: Reputation: 59
I presume you can back up /work.

Do so, and convert to xfs.
Don't forget to also change /etc/fstab to note that /work is now xfs.

1) xfs is a different architecture and somewhat more efficient than ext4.
2) xfs's downside: it can't be shrunk, but it can be enlarged.

Benchmarks show xfs performing better than ext3 or ext4.
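
A minimal sketch of the conversion, assuming the same device and mount point as above (back up first; mkfs destroys the existing data):

Code:
umount /work
# xfs allocates inodes dynamically, so there is no fixed
# inode table to run out of
mkfs.xfs -f /dev/sda9
mount /dev/sda9 /work
# and the /etc/fstab entry becomes something like:
# /dev/sda9  /work  xfs  defaults  0 0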
 
Old 10-25-2016, 11:58 AM   #15
MichelCote
LQ Newbie
 
Registered: Oct 2009
Location: Laval Québec
Distribution: CentOS 6
Posts: 20

Original Poster
Rep: Reputation: 0
Quote:
Originally Posted by Lsatenstein View Post
I presume you can back up /work.

Do so, and convert to xfs.
Don't forget to also change /etc/fstab to note that /work is now xfs.

1) xfs is a different architecture and somewhat more efficient than ext4.
2) xfs's downside: it can't be shrunk, but it can be enlarged.

Benchmarks show xfs performing better than ext3 or ext4.
Thanks for the reply. Does xfs solve the inode issue?
 