Linux - Software: This forum is for Software issues.
Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
I just encountered a nasty bug in the Acronis Cloud software. The bug seems to produce an indefinite number of log files in /var/lib/Acronis/msp/zmq/logs. They've since come out with a patch for this, but I'm still stuck with a directory full of millions of logs that I cannot delete. I'm estimating there are about 15 million files in that directory, but I'm not even sure about that. Honestly, for all I know, there could be 150 million.
I can't even touch this directory. Any command I try (ls, rm, find) hangs where this directory is concerned. I've tried all of the suggestions here with no luck: http://www.slashroot.in/which-is-the...files-in-linux
I let each of the methods mentioned in that article run for at least several hours. When I would come back and run `df -h` in a separate shell, there was no change in the disk usage. On the rsync method, I tried adding `-v` to see if it was actually doing anything, but the only line I got was "sending incremental file list". This was last night, and the operation continued to hang like that when I came back and checked on it this morning.
If it matters, this machine is running CentOS 5 and the file system type is ext3. Everything except /boot is installed on one root (/) LVM volume.
Code:
[root@devlinux ~]# /usr/sbin/lvdisplay
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol00
VG Name VolGroup00
LV UUID O24ktK-T0oK-qKGZ-xPJG-QQeS-CFhA-cbHK21
LV Write Access read/write
LV Status available
# open 1
LV Size 147.00 GB
Current LE 4704
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Name /dev/VolGroup00/LogVol01
VG Name VolGroup00
LV UUID 36ntD4-car3-3F78-UKSZ-9RPs-bxUl-FChL9y
LV Write Access read/write
LV Status available
# open 1
LV Size 1.94 GB
Current LE 62
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
[root@devlinux ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
Is there any way to resolve this or am I better off backing up the important data and reinstalling?
First thing I'd do would be to stop/kill the Acronis process. If the files are still open when they're 'deleted', the disk space will remain in use, so stopping the process may help. It may also let you do the "rm -fR" on that directory.
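You can see that effect with a quick sketch: delete a file while something still has it open, and the name disappears but the space isn't freed until the descriptor is closed:

```shell
#!/bin/sh
# Quick demonstration that unlinking an open file doesn't free its
# space: the name goes away, but the inode (and its blocks) survive
# until the last open descriptor is closed.
f=$(mktemp)
exec 3<"$f"      # hold the file open on descriptor 3
rm "$f"          # remove the name; the inode is still referenced
ls "$f" 2>/dev/null || echo "name gone, space still held"
exec 3<&-        # close the descriptor; now the blocks are freed
```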
I've had that too-many-files problem before, and sadly, have had to hack at it sometimes, deleting 'chunks' of files that match a smaller pattern, to get things down a bit. Something like "rm *2345.log", etc. (you see where I'm going), then do *2346.log...lather-rinse-repeat. Sometimes that's the only way.
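That chunking idea, sketched on a throwaway directory (the real files live in /var/lib/Acronis/msp/zmq/logs, and the names here are just samples): it's slow, since every pattern still has to scan the directory, but it keeps each argument list manageable.

```shell
#!/bin/sh
# Sketch of chunk-by-chunk deletion: loop over a short suffix so each
# rm expands a much smaller pattern than one giant "rm *".
logdir=$(mktemp -d)
for n in 2345 2346 2347; do
    touch "$logdir/client_session-$n.log"   # sample files to delete
done
for i in 45 46 47; do
    rm -f "$logdir"/*"$i.log"
done
ls -A "$logdir" | wc -l    # prints 0
```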
Quote:
Originally Posted by TB0ne
First thing I'd do, would be to stop/kill the Acronis process. If the files are open when they're 'deleted', the disk space will still remain in use. Stopping the process may help.
Yep! You and I think alike!
Quote:
Originally Posted by TB0ne
It may also let you do the "rm -fR" on that directory.
That was one of the first things I tried, with no luck.
Quote:
Originally Posted by TB0ne
I've had that too-many-files problem before, and sadly, have had to hack at it sometimes, deleting 'chunks' of files that match a smaller pattern, to get things down a bit. Something like "rm *2345.log", etc. (you see where I'm going), then do *2346.log...lather-rinse-repeat. Sometimes that's the only way.
I think this will have to be my next attempt. Unfortunately, since I can't read the directory, all I have to go on are the example file names the end user gave me when they reported the issue.
Thanks for your response, I'll keep this thread updated on what happens.
Quote:
That was one of the first things I tried, with no luck.
I figured you did, since that was on the list from that link, but I didn't know if you had tried it after stopping the Acronis service. And out of curiosity...did you try renaming the directory???
Quote:
I think this will have to be my next attempt. Unfortunately, since I can't read the directory, all I have to go on are the example file names the end user gave me when they reported the issue.
Yeah, I've hated having to do that in the past, but sometimes you're left with no choice.
Yeah, there were three Acronis services running. We stopped all three of them and disk space didn't change.
Quote:
Originally Posted by TB0ne
I figured you did, since that was on the list from that link, but I didn't know if you had tried it after stopping the Acronis service. And out of curiosity...did you try renaming the directory???
I didn't try renaming the directory. Why would I want to try that?
Quote:
Originally Posted by TB0ne
Yeah, I've hated having to do that in the past, but sometimes you're left with no choice.
Yeah, I'm still trying to confirm the file name pattern I would use. The example the user gave me when they were able to read the directory is "client_session-libzmq_infra-2016-01-01-20-01-28-199.log". I tried deleting that one file in particular using rm, but rm hung for about 30 seconds before telling me there was no such file or directory, which probably means the user already deleted that particular file. Now I'm trying `find . -name client_session-libzmq_infra-2016*` to try to confirm the pattern, but it's just hanging.
Quote:
`find . -name client_session-libzmq_infra-2016*` to try and confirm the pattern, but it's just hanging.
If you don't have that "*" quoted, your shell is trying to expand that wildcard, which requires reading through the entire directory, finding all the names that match the pattern, and then sorting the resulting list. If you put that pattern in quotes, then it is the find process that reads through the directory and processes each matching name as it finds it.
Code:
find . -name 'client_session-libzmq_infra-2016*'
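If the pattern does match, the same single pass can also delete the files as it goes, using GNU find's -delete (sketched here on a scratch directory, with the example file name from earlier in the thread):

```shell
#!/bin/sh
# Sketch: find can delete matches during the same directory scan,
# without ever building the complete file list in memory.
dir=$(mktemp -d)
touch "$dir/client_session-libzmq_infra-2016-01-01-20-01-28-199.log" \
      "$dir/unrelated.txt"
find "$dir" -maxdepth 1 -type f \
     -name 'client_session-libzmq_infra-2016*' -delete
ls "$dir"    # prints: unrelated.txt
```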
Thanks for the tip. I've killed my existing operation and used your example. Still not showing any results, but here's hoping.
Quote:
Originally Posted by keefaz
Code:
perl -e 'map{unlink or warn "$!\n"}</var/lib/Acronis/msp/zmq/logs/*>'
Can you explain what this does? I'm not familiar with Perl at all...
Quote:
Can you explain what this does? I'm not familiar with Perl at all...
It deletes all files in /var/lib/Acronis/msp/zmq/logs/
map is a function that applies an expression to each element of a list (it's like a loop)
So for each element matching /var/lib/Acronis/msp/zmq/logs/*, it unlinks it (deletes it), or warns if there is an error trying to delete that file
I've reached the point where I'm giving up. I've spent hours attempting to delete these files, and the server is due for an upgrade/migration anyway, so we're just going to move the data to a newer server and decommission this one.
I appreciate all the suggestions. I didn't get a chance to try that Perl command, but I'll definitely keep it in mind if something like this happens again.
I'm not sure whether or not to mark this thread as SOLVED, since I didn't really solve the problem. I'll leave that up to a moderator.
Heck, I'd give the perl one-liner a shot just to see what it does. At this point, you've got nothing to lose.
Quote:
Originally Posted by TB0ne
Heck, I'd give the perl one-liner a shot just to see what it does. At this point, you've got nothing to lose.
I would have liked to, but this isn't a machine owned or managed by me. The decision was that of the user who owned it and I no longer have access to it.