Quicker way to delete folders than rm -r folder_name
If rm could be faster, it would be faster. It has had many decades to improve.
Most of the work is done by the filesystem driver, so the only way to improve performance is to look at the filesystem itself (change filesystem, tune it, change options when formatting, and so on).
What filesystem are you using? If it's ext3 then I can see why it takes hours; I bet the same job would take only minutes with JFS (fastest delete speed) or XFS.
That's not my experience with ext3 at all.
$ time for dir1 in $(seq 1 1000); do mkdir $dir1; for dir2 in $(seq 1 100); do mkdir $dir1/$dir2; done; done
$ time rm -rf *
Under ten seconds for 100,000 directories. This is a Sempron 3000+ (a relatively old machine) with a SATA disk (not SATA 2). The filesystem is ext3; it's formatted with -O dir_index, though. Creating the directories is not that fast, but that's to be expected.
Out of curiosity, I also tested a loopback fs formatted and mounted with the standard options (no dir_index), just to be fair. The results are very similar.
$ time rm -rf *
A really small difference.
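For anyone who wants to repeat the timing test above at a smaller scale, here is a sketch of the same approach (the directory counts and the scratch location are arbitrary choices for illustration, not from the original posts):

```shell
#!/bin/bash
# Scaled-down version of the benchmark above: build a directory
# tree, then time how long rm -rf takes to remove it.
scratch=$(mktemp -d)           # work in a throwaway directory
cd "$scratch" || exit 1

# 100 top-level directories with 10 subdirectories each
# (1,000 directories total; scale the numbers up to stress the fs).
for dir1 in $(seq 1 100); do
    for dir2 in $(seq 1 10); do
        mkdir -p "$dir1/$dir2"
    done
done

time rm -rf -- */              # remove every created directory
cd / && rmdir "$scratch"       # clean up the now-empty scratch dir
```

Increasing the two loop bounds to 1000 and 100 reproduces the 100,000-directory test from the post.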
From my experience, I know that ext3 is a very stable and fast filesystem overall, even if people are often reluctant to admit it, for whatever reason. Sure, some other filesystems do a given thing better, but they also do other things *much* worse. I find that ext3 does everything adequately.
If the OP really finds that deleting 100,000 files takes that long, there are a number of probable causes.
A defective or experimental fs (not ext3), like reiser4 or ext4. I don't know if reiserfs (3.x) can have problems with this, but I know first-hand that it does have serious problems with fragmentation.
Defective hardware; look at the dmesg output for I/O errors when doing fs operations.
Your CPU is being hogged by something else. Check top or htop.
There might be other possible problems. But rm is not one of them.
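The hardware and CPU checks mentioned above can be run in a couple of commands; this is only a sketch (dmesg may require root on some systems, and the grep pattern is a rough example, not an exhaustive list of error strings):

```shell
# 1. Hardware: scan the kernel log for I/O errors. The grep prints
#    matching lines, or the fallback message if the log looks clean.
dmesg 2>/dev/null | grep -iE 'i/o error|ata.*error|end_request' \
    || echo "no I/O errors logged"

# 2. CPU: one batch-mode snapshot of the top CPU consumers.
top -b -n 1 | head -n 12
```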
I hadn't seen the post by i92guboj - I did some tests too. I just created 100,000 copies of a small (few hundred bytes) file. It took just under 10 and a half minutes.
Rebooted and ran "rm -rf ..." - less than 10 seconds.
Hardware RAID5 on an old idle quad (P-III based) Xeon server. EXT3 mounted noatime, nodiratime - because I always have them that way.
It's a cache of a website that's hosted on a shared server.
I think the files are stored on a storage cluster; I don't have any control over the filesystem or anything else. The problem is that I exceed the 500,000-file limit every few days and have to delete manually until I optimize the caching.
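One way to keep the count under the quota is to prune by age with find. This is a sketch: the cache/ path and the two-day cutoff are made-up examples to adapt to the real site:

```shell
# Delete cache files older than two days (path and cutoff are
# illustrative; adjust both for the actual cache directory).
find cache/ -type f -mtime +2 -delete

# Then check how close you are to the 500,000-file quota:
find cache/ -type f | wc -l
```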
So if you do that daily, wouldn't it make sense to set up a cron job or something that takes care of it for you?
That's the way to go. Just create a cron job. He might consider using a higher niceness so it doesn't hit the CPU so badly, though honestly, in a cluster I don't think CPU is the problem. I am rather inclined to think it's something to do with the fs or the hardware.
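A cron job along those lines might look like this; the schedule, path, and age cutoff are all hypothetical examples, and ionice only has an effect with an I/O scheduler that supports priorities:

```shell
# Hypothetical crontab entry (add via crontab -e): every night at
# 03:00, delete cache files older than two days at the lowest CPU
# priority (nice 19) and idle I/O priority (ionice class 3).
0 3 * * * nice -n 19 ionice -c 3 find /var/www/site/cache -type f -mtime +2 -delete
```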