I'm with Wim on this one, isn't it astounding that SEVEN years later, Linux still throws a "Disk Full" error when the failure condition is "Too Many Files In Some Unspecified Directory"?
Oh, well, at least Google can still find this thread, and the fix still works.
It is pretty much true of any filesystem that a disk is considered (or reported as) "full" when any critical data structure runs out of space ... and yes, that can include an individual sub-directory. There will always be some amount of disk space which is set aside for administrative purposes.
And, yes, this kind of message is always confusing. Even seven years later.
Use the command "df -iP". It will show whether the filesystem has run out of inodes.
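For example, using the path from the original post (assuming GNU coreutils, as on a CentOS box):

df -iP /home/Movies/jpgs         # IUse% near 100% means the filesystem is out of inodes
df -hP /home/Movies/jpgs         # compare with ordinary block usage
ls -f /home/Movies/jpgs | wc -l  # rough count of entries in the directory (-f skips sorting, so it stays fast)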
Also, there is a limit on how many arguments can be passed to rm, which is why the wildcard expansion gave the "too many" error.
As andrewthomas mentioned, the best approach is to remove the directory itself and then recreate it.
If the entire directory can't be removed, you can narrow down what gets deleted:
rm *.jpg .. etc
If even that expands to too many files, I usually fall back on a for loop (see the sketch below), but there might be better options.
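Roughly what that looks like, plus a couple of alternatives that never build the huge argument list in the first place (paths are just examples; assumes GNU find and xargs):

cd /home/Movies && rm -rf jpgs && mkdir jpgs               # simplest, when nothing in the directory needs to be kept
for f in /home/Movies/jpgs/*.jpg; do rm -- "$f"; done      # one file per rm call, so the argument limit never bites
find /home/Movies/jpgs -maxdepth 1 -name '*.jpg' -delete   # GNU find deletes entries as it finds them
find /home/Movies/jpgs -maxdepth 1 -name '*.jpg' -print0 | xargs -0 rm --   # xargs batches arguments safely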
I'm getting errors while running my Perl scripts, like:
Exception 450: Output file write error --- out of disk space? `/home/Movies/jpgs/P11246.jpg' @ error/jpeg.c/EmitMessage/235 at pool01.pl line 247.
The 'offending' line in pool01.pl is where I use ImageMagick to write another output file:
$status = $Image->Write($OFN);   # write the image out to the target filename
warn "$status" if "$status";     # PerlMagick returns a non-empty error string when the write fails
CentOS 5.5, Linux 2.6.18-194.3.1.el5PAE
/dev/md0 is the source of the images, and they are being written to /dev/sda1 at /home/Movies/jpgs if that helps.
Any thoughts as to what might be going wrong, and how I could fix it? I've checked /dev/sda1 a couple of times on boot with tune2fs, but that didn't help...
I don't think I'm leaving temporary files somewhere else that might be clogging up a different directory, but could there be something else going on? Is there another error message I could look for (or a logging level I could increase) so that I could get a better idea of what I'm running out of?
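A few commands that usually narrow this down (paths taken from the post; run as the owner of the files, or as root):

df -hP /home/Movies/jpgs                 # is the filesystem holding the output path out of blocks?
df -iP /home/Movies/jpgs                 # ...or out of inodes?
du -shx /tmp /var/tmp /home 2>/dev/null  # stray temporary files can fill a different filesystem entirely
dmesg | tail                             # worth a glance in case the kernel logged filesystem errors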
For some reason I thought that /dev/mapper/VolGroup00-LogVol00 was some virtual thing that was always full, as I seem to remember always seeing it full.
I'm clearing out over 100G of space on it now, and will probably end up moving the movies to the RAID array.
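For anyone else puzzled by that device name: on a stock CentOS install, /dev/mapper/VolGroup00-LogVol00 is simply the default LVM logical volume that holds the root filesystem, not anything virtual. Assuming the LVM2 tools are installed, this makes it visible:

df -hP /                              # the Filesystem column shows the logical volume backing /
lvdisplay /dev/VolGroup00/LogVol00    # details of the LV behind the /dev/mapper name
mount | grep VolGroup00               # which mount points live on that volume group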