Deleting a Directory with millions of files and sub directories
In addition, sudo doesn't work well with redirection and pipes, because your shell performs the redirection before sudo elevates privileges. You need to run that command via su or sudo bash.
Code:
su -c "nohup /usr/bin/rm -rf <directorypath> >/tmp/nohup.out 2>&1 </dev/null"
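The redirection pitfall can be sketched without privileges (the sudo lines are shown as comments; paths are illustrative):

```shell
# With "sudo cmd > file", YOUR shell opens "file" before sudo ever runs,
# so writing to a root-only location fails even though cmd is elevated:
#
#   sudo rm -rf /some/dir > /root/rm.log        # fails: Permission denied
#
# Running a whole shell as root lets THAT shell do the redirection:
#
#   sudo sh -c 'rm -rf /some/dir > /root/rm.log 2>&1'
#
# The same mechanism, demonstrated safely with an unprivileged shell:
tmpdir=$(mktemp -d)
sh -c "echo removal-log > '$tmpdir/rm.log' 2>&1"   # inner shell does the redirect
cat "$tmpdir/rm.log"
rm -rf "$tmpdir"
```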
Thanks.
If I run it on the entire directory it may take several hours.
I tried as follows:
.. snip... Then I executed the command as per your suggestion, but the directory is not being removed.
I copied and pasted a previous poster's response without checking. The command as posted silently fails. D'oh! Try it again, removing /usr (rm is in /bin on my system). Or, better still,
Code:
su -c "nohup $(which rm) -rf <directorypath> >/tmp/nohup.out 2>&1 </dev/null"
$(which rm) will substitute the correct path to rm on your system.
There are a few. XFS can have "billions" of inodes (it dynamically allocates them as needed).
In this case, I think it is referring to "more than I can count".
There is a limit to how many links to a file can exist (I don't remember whether it is a 16-bit field or a 32-bit field).
For ext4 there is a 65000 link limit, but I don't know whether that's due to the filesystem or a kernel limit for all filesystems.
The top level directory size is about 1 gigabyte, so there can't be "billions" of entries there, but there's been no indication of how deep the tree goes.
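Since each immediate subdirectory adds one hard link to its parent (through its ".." entry), a directory's link count is 2 plus the number of subdirectories, so it is a cheap way to see how close a tree's top level is to the ~65000 cap without walking it. A small sketch using GNU stat (the temp directory here is illustrative):

```shell
# A directory's link count = 1 (its name in the parent) + 1 (its own ".")
# + 1 per immediate subdirectory (each child's "..").
d=$(mktemp -d)
mkdir "$d/a" "$d/b" "$d/c"
stat -c %h "$d"    # prints 5: 2 + 3 subdirectories
rm -rf "$d"
```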
Thanks. I tried it as per your suggestion. This too silently failed.
FWIW, I loaded a directory with ~5,600,000 entries (@ 10,000 links per inode). It took much of the day to fill it, and "rm -r" took 50 minutes to clean it out. This was on an ext4 filesystem on a not terribly fast 320GB disk and with a 1.6GHz AMD CPU.
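For what it's worth, a couple of commonly suggested alternatives to a plain "rm -r" for huge trees are find's -delete (which unlinks entries as it walks, without exec'ing rm) and rsync'ing an empty directory over the target. A small self-contained sketch of the find approach, using a sample temp tree:

```shell
# Build a small sample tree, then remove it with find's -delete:
bigdir=$(mktemp -d)
mkdir -p "$bigdir/sub1" "$bigdir/sub2"
touch "$bigdir/sub1/file1" "$bigdir/sub2/file2"
find "$bigdir" -delete     # removes files, subdirs, and bigdir itself
# Another often-cited trick is to overwrite the target with an empty dir:
#   mkdir /tmp/empty && rsync -a --delete /tmp/empty/ /path/to/bigdir/
# Some report it faster on very large trees, though results vary.
```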
Are there any filesystems that use a B+ tree? For a while I thought that btrfs was based on the B+ tree, but I guess I am wrong.
In my old IBM days we used VSAM (Virtual Storage Access Method). We joked that it stood for "very special access method".
VSAM was essentially a B+ tree organization, used with spinning disks that provided rotational position sensing (RPS). One dedicated track told the disk controller the real-time angle of the disk. The idea was that the driver would choose the buffer for the sector just ahead of the disk head, so the software did not have to wait for the disk's index marker to be detected before beginning to write a buffer.
VSAM brought back memories of control areas, control intervals, free space within each, and small data blocks stored along with the index.