Solaris / OpenSolaris: This forum is for the discussion of Solaris, OpenSolaris, OpenIndiana, and illumos.
General Sun, SunOS and SPARC related questions also go here. Any Solaris fork or distribution is welcome.
09-15-2005, 11:25 AM | #1
Member
Registered: Sep 2003
Location: Stow, OH USA
Distribution: Fedora Core 4, Knoppix, openSUSE 10
Posts: 44
finding empty directories with bash
At work, we hired a consultant to help us with our shopping cart software (because the tmp files were never getting cleaned out). His solution was to hand us a shell script that would do it for us. The only problem is that it's not working on Solaris 9. I think Solaris 9's find doesn't support the -empty option, but I'm not sure. Could anyone help me fix this script?
Code:
#!/usr/bin/bash
/usr/local/somedirectory/bin/expireall -r
rm -rf /usr/serverdirectory/tmp/addr_ctr/*
find /usr/serverdirectory/tmp -type f -mtime +2 |xargs rm -f
find /usr/serverdirectory/tmp -type d -empty -depth -mindepth 1 |xargs rmdir
09-15-2005, 12:05 PM | #2
LQ Veteran
Registered: Sep 2003
Posts: 10,532
Hi,
You are correct, the -empty option is not supported.
I do have a solution, although it's kinda 'dirty'...
Replace this line:
find /usr/serverdirectory/tmp -type d -empty -depth -mindepth 1 |xargs rmdir
With:
find /usr/serverdirectory/tmp -depth -type d -exec rmdir {} \; 2>/dev/null
It searches for all directories and tries to remove them with rmdir; rmdir will refuse to remove directories that are not empty. Check that there isn't an alias defined that overrides this behaviour (man rmdir for details).
The 2>/dev/null is there to get rid of the error messages rmdir produces on non-empty directories.
If at all possible, try this on a test box first.
Like I said, not elegant, but it works.
09-15-2005, 12:52 PM | #3
Moderator
Registered: Feb 2004
Location: Outside Paris
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Posts: 9,789
A cleaner approach would be to use the find implementation the original script was written for, i.e. GNU find (also known as gfind), which is certainly available for Solaris, at least from Blastwave.
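For example, a minimal sketch with GNU find (the throw-away tree below is illustrative; on Solaris the command would typically be gfind, e.g. /opt/csw/bin/gfind from Blastwave, rather than find):

```shell
#!/bin/sh
# Build a small throw-away tree: a/b and then a become empty, c does not.
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b" "$tmp/c"
touch "$tmp/c/file"
# GNU find supports -empty; -depth visits children before parents, and
# -empty is evaluated at visit time, so directories that become empty
# during the run are caught too. On Solaris, substitute gfind here.
find "$tmp" -mindepth 1 -depth -type d -empty -exec rmdir {} \;
ls "$tmp"    # only c is left
```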
02-02-2016, 07:31 PM | #4
Member
Registered: May 2006
Location: Brisbane, Australia
Distribution: linux
Posts: 158
This is what I use; it works on both Solaris and Linux...
Code:
dir=${1:-.}    # tree to scan; defaults to the current directory
find "$dir" -depth -type d |
while read sub; do
    # case "$sub" in */*) ;; *) continue ;; esac   # sub-directories only
    [ "`cd "$sub"; echo .* * ?`" = ". .. * ?" ] || continue
    echo rmdir "$sub"
    #rmdir "$sub"
done
Adjust to suit by commenting and uncommenting lines.
If rmdir is uncommented it will NOT produce an error, because the directory has already been tested as empty.
Note the weird-looking 'continue' line tests whether the directory is empty, and only uses shell built-ins, making it faster than other methods of testing.
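To see why that test works: in an empty directory none of the patterns match anything, so the shell leaves * and ? as literal text, while .* expands to just the . and .. entries. A quick sketch (the temporary directory and file name are illustrative; note that bash 5.2's globskipdots option, on by default, stops .* matching . and .., so the exact ". .. * ?" comparison assumes classic Bourne-shell globbing):

```shell
#!/bin/sh
# In an empty directory, * and ? match nothing and stay literal,
# while .* matches only the . and .. entries (classic sh behaviour).
tmp=$(mktemp -d)
(cd "$tmp"; echo .* * ?)    # ". .. * ?" in a classic Bourne shell
# Any entry changes the expansion, so the string comparison fails.
touch "$tmp/file"
(cd "$tmp"; echo .* * ?)    # now includes "file"
```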
02-03-2016, 01:25 AM | #5
Moderator
Registered: Feb 2004
Location: Outside Paris
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Posts: 9,789
That's an interesting approach, but it should be adjusted if "cd" is not silent, either by removing the "cd" output from the left part of the test or by redirecting its stdout. Also, if a directory contains a large number of files, "echo *" might be quite slow and even produce "arg list too long" errors.
02-03-2016, 08:20 PM | #6
Member
Registered: May 2006
Location: Brisbane, Australia
Distribution: linux
Posts: 158
Quote:
Originally Posted by jlliagre
That's an interesting approach, but it should be adjusted if "cd" is not silent, either by removing the "cd" output from the left part of the test or by redirecting its stdout. Also, if a directory contains a large number of files, "echo *" might be quite slow and even produce "arg list too long" errors.
All quite true.
But other than "find -empty", which is not available on older UNIXes, there are few ways to test for an empty directory that do not read the whole directory, especially an "extremely large" directory.
However, find still has to read the whole directory anyway as part of its effort to recurse into sub-directories. As such the ideal solution is to make it part of find (e.g. -empty and -delete), which leaves you with the same problem of those not being available on older UNIX machines.
Or implement a small Perl command (or another modern language with looped directory reads) that lets you abort the read as soon as it reads a single entry (directory not empty). But that means Perl (or the other language) needs to be available. Then again, these days Perl at least is generally available and part of a standard install even on most older UNIXes, even if GNU find isn't.
A Perl (File::Find) empty-directory removal script, anyone? Without using non-standard library modules that may not be installed!
02-03-2016, 11:00 PM | #7
Moderator
Registered: Feb 2004
Location: Outside Paris
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Posts: 9,789
There is no fundamental need to check whether the directory is empty, as rmdir will check it too and won't remove non-empty directories anyway. There is no doubt the kernel's rmdir implementation checks a directory's emptiness much faster than any userland code.
02-04-2016, 12:57 AM | #8
LQ Addict
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 22,723
I would suggest forgetting find, using ls -lR instead and parsing its output with perl/awk/whatever. That will definitely run faster.
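A minimal sketch of that idea in awk (the demo tree is illustrative; the parse assumes ls -lR's usual block layout of a "dir:" header, a "total N" line, one line per entry, and a blank line between blocks, so filenames containing newlines or trailing colons would confuse it):

```shell
#!/bin/sh
# Demo tree: in a single pass only a/b is reported empty (a still holds b).
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b" "$tmp/c"
touch "$tmp/c/file"
ls -lR "$tmp" | awk '
    /:$/     { dir = substr($0, 1, length($0) - 1); n = 0; next }  # "dir:" header
    /^total/ { next }                                              # "total N" line
    NF       { n++; next }                                         # one entry
    !NF      { if (dir != "" && n == 0) print dir; dir = "" }      # end of a block
    END      { if (dir != "" && n == 0) print dir }                # last block
'
```

The output could be fed to xargs rmdir; since a directory holding only empty sub-directories is not itself reported, the pass would have to be repeated until nothing is printed.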
02-04-2016, 06:05 PM | #9
Member
Registered: May 2006
Location: Brisbane, Australia
Distribution: linux
Posts: 158
Quote:
Originally Posted by jlliagre
There is no fundamental need to check whether the directory is empty, as rmdir will check it too and won't remove non-empty directories anyway. There is no doubt the kernel's rmdir implementation checks a directory's emptiness much faster than any userland code.
That is true, as long as you don't mind suppressing all the error messages.
But there can be more to it, depending on your definition of what constitutes an 'empty directory'.
Case in point: I had a directory structure containing more than a million files, and tens of thousands of directories. An 'empty' directory, however, was classed as a directory that was really empty, or that only contained a single Index/ReadMe type file. Any other file and the directory was not empty.
In that case I could not rely on rmdir or even find to test for emptiness. Mind you, I did not need to worry about recursion either, as all such directories were at the same level.
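That looser definition can still be handled with shell-only machinery. A minimal sketch, assuming the index file is the only entry in the directory and matches an Index*/ReadMe* pattern (the helper name, the patterns, and the demo tree are all illustrative, not the poster's actual code):

```shell
#!/bin/sh
# Succeeds if the directory is empty, or holds exactly one
# Index*/ReadMe* style file; any other content means "not empty".
is_removable() {
    count=0; only=
    for f in "$1"/* "$1"/.[!.]* "$1"/..?*; do
        [ -e "$f" ] || [ -L "$f" ] || continue   # skip unmatched patterns
        count=$((count + 1)); only=$f
    done
    [ "$count" -eq 0 ] && return 0               # truly empty
    [ "$count" -gt 1 ] && return 1               # more than one entry
    case "${only##*/}" in                        # single entry: index file?
        [Ii]ndex*|[Rr]ead[Mm]e*) return 0 ;;
        *) return 1 ;;
    esac
}
# Demo:
tmp=$(mktemp -d)
mkdir "$tmp/empty" "$tmp/idx" "$tmp/full"
touch "$tmp/idx/ReadMe.txt" "$tmp/full/ReadMe.txt" "$tmp/full/data"
for d in empty idx full; do
    is_removable "$tmp/$d" && echo "$d: removable" || echo "$d: keep"
done
```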
02-04-2016, 07:28 PM | #10
Moderator
Registered: Feb 2004
Location: Outside Paris
Distribution: Solaris 11.4, Oracle Linux, Mint, Debian/WSL
Posts: 9,789
Removing not only empty directories but also non-empty ones based on some characteristic rules out generic commands, and does then require some scripting/programming.
02-18-2016, 03:59 AM | #11
Senior Member
Registered: Dec 2011
Location: Simplicity
Distribution: Mint/MATE
Posts: 2,927
Here is a variant of my Unix cleantmp script, tailored for your needs:
Code:
#!/bin/sh
dirs_to_clean="/usr/serverdirectory/tmp"
max_days_mtime=2
max_days_atime=0
#owner_to_keep="root nobody"
owner_to_keep=""
omit=""
for i in $owner_to_keep
do
    omit="$omit ( ! -user $i )"
done
for dir in $dirs_to_clean
do
    [ -d "$dir" ] &&
    cd "$dir" &&
    find . -depth \! -type d \( -mtime +$max_days_mtime -o -mtime -0 \) \
        \( -atime +$max_days_atime -ctime +$max_days_atime -o -type l \) \
        $omit -exec rm -f {} \; -o \
        -type d -links 2 $omit -mtime +$max_days_mtime -exec rmdir {} \; 2>/dev/null
    sleep 1
done
# Notes:
# A just-emptied directory is not deleted immediately - only after another max_days_mtime days.
# -mtime -0 detects files with a future time stamp (e.g. from an obscure archive).
# -links 2 should be replaced by -empty - if find supported it.
# -exec rm -f {} \; and -exec rmdir {} \; should be replaced by -delete - if find supported it.
More Notes:
The atime lines protect recently accessed files; delete these lines if you want to consider only the mtime.
-links 2 is true for directories that have no sub-directories, which reduces the number of rmdir attempts.