LinuxQuestions.org
Old 05-04-2016, 11:11 AM   #1
resuni
Member
 
Registered: Oct 2009
Location: Colorado, USA
Distribution: I use Arch btw
Posts: 140

Rep: Reputation: 12
Delete millions of files from a directory


I just encountered a nasty bug in the Acronis Cloud software. The bug seems to produce an indefinite number of log files in /var/lib/Acronis/msp/zmq/logs. They've since come out with a patch for this, but I'm still stuck with a directory full of millions of logs that I cannot delete. I'm estimating there are about 15 million files in that directory, but I'm not even sure about that. Honestly, for all I know, there could be 150 million.

I can't even touch this directory. Any command I try (ls, rm, find) hangs where this directory is concerned. I've tried all of the suggestions here with no luck: http://www.slashroot.in/which-is-the...files-in-linux

I let each of the methods mentioned in that article run for at least several hours. When I came back and ran `df -h` in a separate shell, there was no change in disk usage. With the rsync method, I tried adding `-v` to see if it was actually doing anything, but the only line I got was "sending incremental file list". That was last night, and the operation was still hanging like that when I checked on it this morning.

If it matters, this machine is running CentOS 5 and the file system type is ext3. Everything except /boot is installed on one root (/) LVM volume.

Code:
[root@devlinux ~]# /usr/sbin/lvdisplay
  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol00
  VG Name                VolGroup00
  LV UUID                O24ktK-T0oK-qKGZ-xPJG-QQeS-CFhA-cbHK21
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                147.00 GB
  Current LE             4704
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Name                /dev/VolGroup00/LogVol01
  VG Name                VolGroup00
  LV UUID                36ntD4-car3-3F78-UKSZ-9RPs-bxUl-FChL9y
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                1.94 GB
  Current LE             62
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

[root@devlinux ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
Is there any way to resolve this or am I better off backing up the important data and reinstalling?
 
Old 05-04-2016, 11:25 AM   #2
TB0ne
LQ Guru
 
Registered: Jul 2003
Location: Birmingham, Alabama
Distribution: SuSE, RedHat, Slack,CentOS
Posts: 24,477

Rep: Reputation: 7246
Quote:
Originally Posted by Bradj47 View Post
I just encountered a nasty bug in the Acronis Cloud software. The bug seems to produce an indefinite number of log files in /var/lib/Acronis/msp/zmq/logs. They've since come out with a patch for this, but I'm still stuck with a directory full of millions of logs that I cannot delete. I'm estimating there are about 15 million files in that directory, but I'm not even sure about that. Honestly, for all I know, there could be 150 million.

I can't even touch this directory. Any command I try (ls, rm, find) hangs where this directory is concerned. I've tried all of the suggestions here with no luck: http://www.slashroot.in/which-is-the...files-in-linux

I let each of the methods mentioned in that article run for at least several hours. When I came back and ran `df -h` in a separate shell, there was no change in disk usage. With the rsync method, I tried adding `-v` to see if it was actually doing anything, but the only line I got was "sending incremental file list". That was last night, and the operation was still hanging like that when I checked on it this morning.

If it matters, this machine is running CentOS 5 and the file system type is ext3. Everything except /boot is installed on one root (/) LVM volume.
Is there any way to resolve this or am I better off backing up the important data and reinstalling?
First thing I'd do would be to stop/kill the Acronis process. If the files are open when they're 'deleted', the disk space will remain in use. Stopping the process may help. It may also let you do the "rm -fR" on that directory.

I've had that too-many-files problem before, and sadly, have sometimes had to hack at it, deleting 'chunks' of files that match a smaller pattern to get things down a bit. Something like "rm *2345.log", etc. (you see where I'm going), then "rm *2346.log"...lather-rinse-repeat. Sometimes that's the only way.
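A sketch of that chunked approach, assuming the log names end in a numeric suffix (the pattern and path are guesses based on this thread, and each glob still makes the shell scan the directory, so it can stay slow):

```shell
# Hypothetical chunked delete: loop over a narrow numeric suffix so
# each glob expands to a smaller batch instead of millions of names.
LOGDIR="${LOGDIR:-/var/lib/Acronis/msp/zmq/logs}"   # assumed path
if cd "$LOGDIR" 2>/dev/null; then
    for n in $(seq 0 999); do
        rm -f -- *"-${n}.log"
    done
fi
```

`rm -f` keeps the loop quiet when a pattern matches nothing and the unexpanded glob is passed through literally.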
 
Old 05-04-2016, 11:30 AM   #3
resuni
Member
 
Registered: Oct 2009
Location: Colorado, USA
Distribution: I use Arch btw
Posts: 140

Original Poster
Rep: Reputation: 12
Quote:
Originally Posted by TB0ne View Post
First thing I'd do would be to stop/kill the Acronis process. If the files are open when they're 'deleted', the disk space will remain in use. Stopping the process may help.
Yep! You and I think alike!

Quote:
Originally Posted by TB0ne View Post
It may also let you do the "rm -fR" on that directory.
That was one of the first things I tried, with no luck.

Quote:
Originally Posted by TB0ne View Post
I've had that too-many-files problem before, and sadly, have had to hack at it sometimes, deleting 'chunks' of files that match a smaller pattern, to get things down a bit. Something like "rm *2345.log", etc. (you see where I'm going), then do *2346.log...lather-rinse-repeat. Sometimes that's the only way.
I think this will have to be my next attempt. Unfortunately, since I can't read the directory, all I have to go on are the example file names the end user gave me when they reported the issue.

Thanks for your response, I'll keep this thread updated on what happens.
 
Old 05-04-2016, 11:48 AM   #4
TB0ne
LQ Guru
 
Registered: Jul 2003
Location: Birmingham, Alabama
Distribution: SuSE, RedHat, Slack,CentOS
Posts: 24,477

Rep: Reputation: 7246
Quote:
Originally Posted by Bradj47 View Post
Yep! You and I think alike!
So you did stop it, and disk space stayed up?
Quote:
That was one of the first things I tried, with no luck.
I figured you did, since that was on the list from that link, but I didn't know if you had tried it after stopping the Acronis service. And out of curiosity...did you try renaming the directory???
Quote:
I think this will have to be my next attempt. Unfortunately, since I can't read the directory, all I have to go on are the example file names the end user gave me when they reported the issue.
Yeah, I've hated having to do that in the past, but sometimes you're left with no choice.
 
Old 05-04-2016, 11:59 AM   #5
resuni
Member
 
Registered: Oct 2009
Location: Colorado, USA
Distribution: I use Arch btw
Posts: 140

Original Poster
Rep: Reputation: 12
Quote:
Originally Posted by TB0ne View Post
So you did stop it, and disk space stayed up?
Yeah, there were three Acronis services running. We stopped all three of them and disk space didn't change.

Quote:
Originally Posted by TB0ne View Post
I figured you did, since that was on the list from that link, but I didn't know if you had tried it after stopping the Acronis service. And out of curiosity...did you try renaming the directory???
I didn't try renaming the directory. Why would I want to try that?

Quote:
Originally Posted by TB0ne View Post
Yeah, I've hated having to do that in the past, but sometimes you're left with no choice.
Yeah, I'm still trying to confirm the pattern of file name I would use. The example the user gave me when they were able to read the directory is "client_session-libzmq_infra-2016-01-01-20-01-28-199.log". I tried deleting that one file in particular using rm, but rm just hung for probably about 30 seconds before telling me there was no such file or directory. This probably means the user already deleted that particular file. Now I'm trying `find . -name client_session-libzmq_infra-2016*` to try and confirm the pattern, but it's just hanging.
 
Old 05-04-2016, 12:14 PM   #6
rknichols
Senior Member
 
Registered: Aug 2009
Distribution: CentOS
Posts: 4,592

Rep: Reputation: 2114
Quote:
Originally Posted by Bradj47 View Post
`find . -name client_session-libzmq_infra-2016*` to try and confirm the pattern, but it's just hanging.
If you don't have that "*" quoted, your shell is trying to expand that wildcard, which requires reading through the entire directory, finding all the names that match the pattern, and then sorting the resulting list. If you put that pattern in quotes, then it is the find process that reads through the directory and processes each matching name as it finds it.
Code:
find . -name 'client_session-libzmq_infra-2016*'
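And since the goal here is deletion rather than listing, GNU find can remove each entry as it encounters it instead of just printing it (`-delete` should be available in the findutils that ships with CentOS 5, though I'd verify with `find --version` first):

```shell
# Delete matching entries as find walks the directory, without the
# shell ever expanding the glob; pattern quoted for the same reason.
LOGDIR="${LOGDIR:-/var/lib/Acronis/msp/zmq/logs}"   # assumed path
if [ -d "$LOGDIR" ]; then
    find "$LOGDIR" -maxdepth 1 -type f \
        -name 'client_session-libzmq_infra-2016*' -delete
fi
```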
 
1 member found this post helpful.
Old 05-04-2016, 12:28 PM   #7
keefaz
LQ Guru
 
Registered: Mar 2004
Distribution: Slackware
Posts: 6,325

Rep: Reputation: 760
Code:
perl -e 'map{unlink or warn "$!\n"}</var/lib/Acronis/msp/zmq/logs/*>'
 
Old 05-04-2016, 12:54 PM   #8
resuni
Member
 
Registered: Oct 2009
Location: Colorado, USA
Distribution: I use Arch btw
Posts: 140

Original Poster
Rep: Reputation: 12
Quote:
Originally Posted by rknichols View Post
If you don't have that "*" quoted, your shell is trying to expand that wildcard, which requires reading through the entire directory, finding all the names that match the pattern, and then sorting the resulting list. If you put that pattern in quotes, then it is the find process that reads through the directory and processes each matching name as it finds it.
Code:
find . -name 'client_session-libzmq_infra-2016*'
Thanks for the tip. I've killed my existing operation and used your example. Still not showing any results, but here's hoping.

Quote:
Originally Posted by keefaz View Post
Code:
perl -e 'map{unlink or warn "$!\n"}</var/lib/Acronis/msp/zmq/logs/*>'
Can you explain what this does? I'm not familiar with Perl at all...
 
Old 05-04-2016, 01:10 PM   #9
keefaz
LQ Guru
 
Registered: Mar 2004
Distribution: Slackware
Posts: 6,325

Rep: Reputation: 760
Quote:
Originally Posted by Bradj47 View Post
Can you explain what this does? I'm not familiar with Perl at all...
It deletes all files in /var/lib/Acronis/msp/zmq/logs/

map is a function that applies an expression to each element of a list (it's like a loop).
So for each element in /var/lib/Acronis/msp/zmq/logs/*, it unlinks it (deletes it), or warns if there is an error trying to delete that file.
 
2 members found this post helpful.
Old 05-04-2016, 02:00 PM   #10
TB0ne
LQ Guru
 
Registered: Jul 2003
Location: Birmingham, Alabama
Distribution: SuSE, RedHat, Slack,CentOS
Posts: 24,477

Rep: Reputation: 7246
Quote:
Originally Posted by keefaz View Post
It deletes all files in /var/lib/Acronis/msp/zmq/logs/

map is a function that applies an expression to each element of a list (it's like a loop).
So for each element in /var/lib/Acronis/msp/zmq/logs/*, it unlinks it (deletes it), or warns if there is an error trying to delete that file.
+1 for the perl one-liner. Always love them.
 
Old 05-04-2016, 03:07 PM   #11
resuni
Member
 
Registered: Oct 2009
Location: Colorado, USA
Distribution: I use Arch btw
Posts: 140

Original Poster
Rep: Reputation: 12
I've reached the point where I'm giving up. I've spent hours attempting to delete these files, and the server is due for an upgrade/migration anyway, so we're just going to move the data to a newer server and decommission it.

I appreciate all the suggestions. I didn't get a chance to try that Perl command, but I'll definitely keep it in mind if something like this happens again.

I'm not sure whether or not to mark this thread as SOLVED, since I didn't really solve the problem. I'll leave that up to a moderator.
 
Old 05-04-2016, 03:22 PM   #12
TB0ne
LQ Guru
 
Registered: Jul 2003
Location: Birmingham, Alabama
Distribution: SuSE, RedHat, Slack,CentOS
Posts: 24,477

Rep: Reputation: 7246
Quote:
Originally Posted by Bradj47 View Post
I've reached the point where I'm giving up. I've spent hours attempting to delete these files, and the server is due for an upgrade/migration anyway, so we're just going to move the data to a newer server and decommission it.

I appreciate all the suggestions. I didn't get a chance to try that Perl command, but I'll definitely keep it in mind if something like this happens again.

I'm not sure whether or not to mark this thread as SOLVED, since I didn't really solve the problem. I'll leave that up to a moderator.
Heck, I'd give the perl one-liner a shot just to see what it does. At this point, you've got nothing to lose.
 
Old 05-04-2016, 03:31 PM   #13
resuni
Member
 
Registered: Oct 2009
Location: Colorado, USA
Distribution: I use Arch btw
Posts: 140

Original Poster
Rep: Reputation: 12
Quote:
Originally Posted by TB0ne View Post
Heck, I'd give the perl one-liner a shot just to see what it does. At this point, you've got nothing to lose.
I would have liked to, but this isn't a machine owned or managed by me. The decision was that of the user who owned it and I no longer have access to it.
 
  

