Linux - Software: This forum is for software issues. Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum. |
08-24-2015, 06:33 PM
|
#16
|
Senior Member
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912
|
A directory can't use a binary search: the entries are stored in creation order, not sorted. Note that "echo *" won't show you that order, because the shell sorts glob expansions; "ls -f" (or "ls -U") lists entries unsorted, as readdir() returns them. I believe the filesystems that use a btree for directory storage also keep a list pointer threaded through the tree to preserve the "correct" order for seekdir/readdir/...
This is so a user scanning through a directory won't miss an entry when the tree gets rebalanced.
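To see the raw directory order yourself, here is a quick sketch on a throwaway directory (a plain "echo *" is sorted by the shell; "ls -f" is the POSIX way to disable sorting):

```shell
#!/bin/sh
# Sketch: compare the shell-sorted view of a directory with the raw on-disk order.
set -e
d=$(mktemp -d)
cd "$d"
touch zebra apple mango
ls         # default ls sorts its output: apple mango zebra
ls -f      # -f disables sorting: entries appear in raw readdir() order
```

On ext2/3/4 without dir_index the raw order is roughly creation order; with a hashed or btree directory it can look arbitrary.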
|
|
|
08-24-2015, 08:28 PM
|
#17
|
Senior Member
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,792
|
Quote:
Originally Posted by Lsatenstein
If the midpoint of the directory is not valid, I would guess that the system search algorithm would creep up to the next recognized entry and continue with some form of binary search (My BS).
|
Since ext2/3/4 directory entries are not kept in collating sequence, without dir_index only a linear search is possible.
(Hmmm, echo in here. Sorry, jpollard, didn't mean to step on your foot. Page break came just ahead of your posting.)
Last edited by rknichols; 08-24-2015 at 08:34 PM.
Reason: add "without dir_index"
|
|
|
08-24-2015, 09:21 PM
|
#18
|
Senior Member
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912
|
no problem.
|
|
|
08-25-2015, 03:49 AM
|
#19
|
Member
Registered: Jun 2007
Posts: 66
Original Poster
Rep:
|
Thank you everyone for the help.
I am unable to delete the Directory. It contains billions of directories.
See the screenshot below for the directory structure. The owner of the directory is root.
http://i.imgur.com/tGS7ajj.png
I have tried the following:
1) The first attempt searched for an hour or more, then started deleting folders from T6o1L9lGg-/. It ran for over 10 hours. The laptop's screensaver activated with password protection on; the hard disk was still active, but the laptop was not responding, so I had to force a shutdown by holding the power button for more than 10 seconds. Upon rebooting, the directory and its size remain the same; no changes.
2) I have tried
Quote:
sudo mkdir empty
sudo rsync -av --delete ./empty/ ./T6o1L9IGg-/
|
The command executed and got a prompt
Quote:
sending incremental file list
|
After about 3 hours the folder names started displaying. It ran for about 6 hours, then a power trip shut the laptop down.
When I rebooted, the folder remained the same size.
When I execute the g++ program mentioned in the first post, nothing happens; it looks like the code is not working.
Any other suggestions? If there is no other option, I may need to reformat my laptop.
Thank you
Last edited by ramsforums; 08-25-2015 at 03:52 AM.
|
|
|
08-25-2015, 04:06 AM
|
#20
|
Member
Registered: Jun 2007
Posts: 66
Original Poster
Rep:
|
Quote:
Originally Posted by jpollard
The problem is the directory with a huge number of files.
The easiest way to deal with that is to let an "/usr/bin/rm -rf <directorypath>" run as long as it takes. (I suggest using a virtual terminal for this; you could do a "nohup /usr/bin/rm -rf <directorypath> >/tmp/nohup.out 2>&1 </dev/null &".)
The problem is the way rm works and the interaction with certain filesystems. Those filesystems using a btree structure for the directory work the fastest, but most just use a linear list for the directory.
When a file gets deleted, the kernel has to copy the rest of the entries up one place in directory file. Then repeat for the next file... rm starts with the very first file - thus, the worst case delete.
You CAN make it run faster... but it depends on reading the directory list into memory, then reverse the order...
and delete each file. This is much faster because you avoid the kernel having to copy the remaining list of files.
Note: doing this with a million files takes a fairly large amount of memory. When I did it , I used perl. As I recall it went something like:
Code:
$some_dir = "directory with lots of files"; # this can be "." if you first do a cd to the directory...
opendir(my $dh, $some_dir) || die;
@list = reverse readdir($dh)
closedir($dh);
while (defined ($f = shift(@list)) ) {
next if (-d $f);
unlink $f || die "can't delete $f - $!\n";
}
Note - I have not tested this, it is from memory. How well it works depends on the filesystem. I'm not sure that it will help on btrfs (it uses a btree for the directory, so it should be fast anyway).
This particular bit of code will not delete directories (so . and .. will be left alone).
|
Thanks
I am unable to execute your first suggestion.
See the link below for the error message.
http://i.imgur.com/nw66evE.png
As for the Perl script, how do I execute it? Should I create a script with a .pl extension and run it as follows?
sudo perl myscript.pl
Quote:
#!/usr/bin/perl
use strict;
use warnings;
$some_dir = "."; # this can be "." if you first do a cd to the directory...
opendir(my $dh, $some_dir) || die;
@list = reverse readdir($dh)
closedir($dh);
while (defined ($f = shift(@list)) ) {
next if (-d $f);
unlink $f || die "can't delete $f - $!\n";
}
|
I tried to execute it and got the following errors. Sorry, I am unfamiliar with Perl.
Quote:
cd T6o1L9lGg-/
rama@develop ~/T6o1L9lGg-$ sudo perl ~/apps/rmdirtree.pl
Global symbol "$some_dir" requires explicit package name at /home/rama/apps/rmdirtree.pl line 5.
Global symbol "$some_dir" requires explicit package name at /home/rama/apps/rmdirtree.pl line 6.
Global symbol "@list" requires explicit package name at /home/rama/apps/rmdirtree.pl line 7.
syntax error at /home/rama/apps/rmdirtree.pl line 8, near ")
closedir"
Global symbol "$f" requires explicit package name at /home/rama/apps/rmdirtree.pl line 9.
Global symbol "@list" requires explicit package name at /home/rama/apps/rmdirtree.pl line 9.
Global symbol "$f" requires explicit package name at /home/rama/apps/rmdirtree.pl line 10.
Global symbol "$f" requires explicit package name at /home/rama/apps/rmdirtree.pl line 11.
Global symbol "$f" requires explicit package name at /home/rama/apps/rmdirtree.pl line 11.
Execution of /home/rama/apps/rmdirtree.pl aborted due to compilation errors.
|
|
|
|
08-25-2015, 04:11 AM
|
#21
|
Member
Registered: Jun 2007
Posts: 66
Original Poster
Rep:
|
Quote:
Originally Posted by syg00
The rsync trick has worked for me - not multi-millions tho'.
Reading the entire list into memory is the problem with ls and rm taking so long. This seems the best approach.
|
Thanks. How should I execute the command?
I have created a script in a file as follows (rmdirs.pl)
Quote:
#!/usr/bin/perl
chdir "T6o1L9lGg-" or die; opendir D, "."; while ($n = readdir D) { unlink $n }
|
When I execute the following command
rama@develop ~ $ sudo perl ~/apps/rmdirs.pl
The command executes but nothing happens; it returns immediately.
|
|
|
08-25-2015, 04:43 AM
|
#22
|
Senior Member
Registered: Dec 2009
Location: New Jersey, USA
Distribution: Fedora, OpenSUSE, FreeBSD, OpenBSD, macOS (hack). Past: Debian, Arch, RedHat (pre-RHEL).
Posts: 1,335
|
Quote:
Originally Posted by ramsforums
|
In addition, sudo doesn't work too well with redirecting and pipes. You need to run that command by using su or sudo bash.
Code:
su -c "nohup /usr/bin/rm -rf <directorypath> >/tmp/nohup.out 2>&1 </dev/null"
The last attempt of yours failed because you included the quotes.
Last edited by goumba; 08-25-2015 at 04:53 AM.
|
|
|
08-25-2015, 05:09 AM
|
#23
|
LQ Veteran
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,250
|
(re post #21)
That should work - if all the files are in the high-level directory (T6o1L9lGg-). However ...
unlink only works on files, not (sub-)directories. And it won't recurse down into sub-directories. Depending on how things are structured, it may not do anything. Then you might have to look at something like File::Remove from CPAN.
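Not mentioned in the thread, but worth noting: find's -delete action removes a tree depth-first and handles subdirectories, without the shell ever holding the whole list. A minimal sketch on a throwaway tree (the directory name here just mirrors the one in the screenshots):

```shell
#!/bin/sh
# Sketch: depth-first deletion with find -delete, demonstrated on a temp tree.
set -e
top=$(mktemp -d)
mkdir -p "$top/T6o1L9lGg-/sub1/sub2"
touch "$top/T6o1L9lGg-/a" "$top/T6o1L9lGg-/sub1/b" "$top/T6o1L9lGg-/sub1/sub2/c"

# -delete implies -depth: children are removed before their parents.
find "$top/T6o1L9lGg-" -delete

ls "$top"      # should print nothing: the directory is gone
```

Like rm -rf, this still walks every entry, so on a pathological directory it takes time; it just avoids building a giant argument list or in-memory array.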
|
|
|
08-25-2015, 05:51 AM
|
#24
|
Senior Member
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912
|
You do have to put the directory you want cleaned in place of the <directorypath>
|
|
|
08-25-2015, 06:52 AM
|
#25
|
LQ Addict
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 22,733
|
You may try to go into that dir and run "rm -rf a*", then "rm -rf b*", and so on...
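That batch-by-prefix idea can be scripted as a loop. A sketch on a throwaway directory (the prefix list here is illustrative and would need to cover whatever characters the real file names start with):

```shell
#!/bin/sh
# Sketch: remove a huge directory's contents in prefix-sized batches.
set -e
dir=$(mktemp -d)
touch "$dir/apple" "$dir/banana" "$dir/cherry" "$dir/1file"

cd "$dir"
for p in a b c 1; do
    # -f keeps rm quiet if a prefix matches nothing; -- ends option parsing.
    rm -rf -- "$p"*
done
ls -A "$dir"   # should print nothing
```

Each rm invocation only has to expand and process one slice of the directory, which keeps the argument list and the kernel's per-delete work bounded.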
|
|
|
08-25-2015, 08:12 AM
|
#26
|
Member
Registered: May 2015
Location: US
Distribution: Fedora
Posts: 364
Rep:
|
I agree completely with pan64. I was reading through the thread thinking that, and then I saw pan64's post. If that still hangs, try something like "rm -rf [a-m]*", then cut the "[a-m]" range down to something like "[a-g]" if it still doesn't work, and so on.
It might take a long time, but so would large backups.
Edit: I forgot about files with numeric names too, but those can be handled with the same method.
Last edited by oldtechaa; 08-25-2015 at 08:14 AM.
Reason: Number files.
|
|
|
08-25-2015, 09:57 AM
|
#27
|
Member
Registered: Jun 2007
Posts: 66
Original Poster
Rep:
|
Quote:
Originally Posted by goumba
In addition, sudo doesn't work too well with redirecting and pipes. You need to run that command by using su or sudo bash.
Code:
su -c "nohup /usr/bin/rm -rf <directorypath> >/tmp/nohup.out 2>&1 </dev/null"
|
Thanks.
If I run it on the entire directory, it may take several hours, so I tried it on a small test tree first:
sudo mkdir test1
cd test1
sudo mkdir topdir{00..99}
sudo mkdir topdir{00..99}/subdir{00..19}
sudo touch topdir{00..99}/file{00..19}
Please see the attached screen
http://i.imgur.com/0QNb3zq.png
Then I executed the command as per your suggestion, but the directory is not getting removed.
Last edited by ramsforums; 08-25-2015 at 09:59 AM.
|
|
|
08-25-2015, 09:57 AM
|
#28
|
Senior Member
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,792
|
Note that the directory size will not decrease as files are deleted. For ext2/3/4 at least, the space allocated to a directory can only grow; it never shrinks except by running fsck.ext2 with the "-D" (optimize directories) option. I don't recommend doing that here, though: it would take even longer than "rm -r".
|
|
|
08-25-2015, 10:22 AM
|
#29
|
Senior Member
Registered: Aug 2009
Distribution: Rocky Linux
Posts: 4,792
|
Quote:
Originally Posted by ramsforums
I am unable to delete the Directory. It contains billions of directories.
|
Wait! What kind of filesystem is that with "billions" of inodes? Or, are those directory entries all hard links to the same file? If the latter, then you could just use debugfs to clri the directory inode, then let fsck return the directory blocks to the free pool and move that one (1) real file to lost+found.
|
|
|
08-25-2015, 11:26 AM
|
#30
|
Senior Member
Registered: Dec 2012
Location: Washington DC area
Distribution: Fedora, CentOS, Slackware
Posts: 4,912
|
There are a few. XFS can have "billions"... (it dynamically allocates inodes as needed).
In this case, I think "billions" just means "more than I can count".
There is also a limit on how many hard links a single file can have (I don't remember whether the link count is a 16-bit or a 32-bit field).
|
|
|