What does "does not work properly" mean? Any error message? Does it not give the expected results? A piece of advice: first try the find command without the -exec rm part, otherwise you might accidentally delete the wrong files; once you get the desired results, add the -exec rm (or -delete) and run it again.
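For example (using the /var/sitelogs/httpd path that comes up later in this thread), a safe two-step approach could look like this:
Code:
# dry run: just list what would be matched, without deleting anything
find /var/sitelogs/httpd -type f -mtime +0

# once the listed files are the right ones, actually delete them
find /var/sitelogs/httpd -type f -mtime +0 -exec rm {} \;
# or, with GNU find, the shorter equivalent:
find /var/sitelogs/httpd -type f -mtime +0 -delete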
I saw that other post of yours (please don't duplicate posts or hijack existing threads), and looking at the timestamps of the files it's now clear that it fails because of the -atime/-mtime rule of the find command:
Code:
-atime n
File was last accessed n*24 hours ago. When find figures out how many 24-hour periods ago the
file was last accessed, any fractional part is ignored, so to match -atime +1, a file has to
have been accessed at least two days ago.
To find files modified more than 24 hours ago, you have to use -mtime +0. If the files were modified 0.24 or 0.78 days ago (that is, less than one day), the fractional part is ignored and the files don't match the +0 condition. On the other hand, files modified 1.01 or 1.65 days ago will be treated as if they were modified 1 day ago, that is, more than 0. Hope this helps.
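You can check this yourself with a couple of throwaway test files (this sketch assumes GNU touch, which accepts relative date strings):
Code:
# create two empty test files, 30 and 20 hours old
touch -d "30 hours ago" /tmp/old.log
touch -d "20 hours ago" /tmp/new.log

# 30h = 1.25 days, truncated to 1, which is more than 0 -> old.log matches
# 20h = 0.83 days, truncated to 0, which is not more than 0 -> new.log does not
find /tmp -maxdepth 1 -name '*.log' -mtime +0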
This is how the target location looks, and I want to retain only the July 6th (current date) logs and delete the rest.
If you want to delete all the files created yesterday (even if they were last modified less than 24 hours ago, e.g. Jul 5 at 23:59), you have to add the -daystart option:
Code:
-daystart
Measure times (for -amin, -atime, -cmin, -ctime, -mmin, and -mtime) from the beginning of
today rather than from 24 hours ago. This option only affects tests which appear later on the
command line.
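To see the effect, you can repeat the test-file trick from above (again assuming GNU touch):
Code:
# a file last modified yesterday at 23:59
touch -d "yesterday 23:59" /tmp/late.log

# measured from "now": the file is less than 24 hours old, so no match
find /tmp -maxdepth 1 -name 'late.log' -mtime +0

# measured from the start of today: it was modified before today, so it matches
find /tmp -maxdepth 1 -name 'late.log' -daystart -mtime +0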
It actually removed the following (older) files, right?
Code:
-rwxr-x--- 1 root root 800 Jan 26 09:29 log_http_clear
-rw-r--r-- 1 root root 0 Jan 17 17:33 ssl_access_log
-rw-r--r-- 1 root root 0 May 19 18:10 ssl_error_log
-rw-r--r-- 1 root root 0 Jan 17 17:33 ssl_request_log
But you used -mtime +1, so all the files modified 48 hours ago or more were found (and deleted). If you want to find all the files modified 24 hours ago or more, you have to use
Code:
-mtime +0
because -mtime treats all the decimal values 0.1, 0.2, 0.3 ... 0.9999 as 0! The condition +0 means more than 0, so it matches only files modified 1.0000 days ago or more. If you use +1, it matches files modified 2.0000 days ago or more, not yesterday!
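To sum up how the conditions behave without -daystart (all ages measured from the moment find runs):
Code:
# -mtime 0    modified less than 24 hours ago
# -mtime 1    modified between 24 and 48 hours ago
# -mtime +0   modified 24 hours ago or more
# -mtime +1   modified 48 hours ago or more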
Again, if you want to delete all the files created on Jul 5th (even if that was less than 24 hours ago, for example last night at 23:59), you have to use the -daystart option.
Now try the following:
Code:
find /var/sitelogs/httpd -mtime +0 -exec ls -l {} \;
and compare its output with that of:
Code:
find /var/sitelogs/httpd -daystart -mtime +0 -exec ls -l {} \;
After that it should hopefully be a little clearer.
The reason I delete those files is that the file system gets full and generates a ticket, so I keep deleting them manually every day. According to post 3, I have to delete most of the 5th's files and retain the 6th's, so it looks fine if I use "find /var/sitelogs/httpd -daystart -mtime +0 -exec ls -l {} \;".
But in the worst case, if we end up needing the 5th's files (say the 6th's never arrive), that would be a problem. So I thought: why can't we move the matched files to a new location (say my home dir)? That way I would always have a backup, right?
I know how the move (mv) command works, but could you help me implement it in my scenario (post 3)?
You can first check that today's log files are available, then remove the older ones. For example:
Code:
# only remove old logs if at least one log modified today exists
if [[ $(find /var/sitelogs/httpd -daystart -type f -mtime 0 | wc -l) -ne 0 ]]
then
    # echo makes this a dry run; drop it to actually delete
    find /var/sitelogs/httpd -daystart -type f -mtime +0 -exec echo rm {} \;
fi
Note the difference between -mtime 0 and -mtime +0.
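If you would rather keep a backup, as you suggested, the same pattern works with mv instead of rm. A rough sketch (~/logbackup is just an example name; create it first with mkdir -p ~/logbackup):
Code:
# only move old logs away if at least one log modified today exists
if [[ $(find /var/sitelogs/httpd -daystart -type f -mtime 0 | wc -l) -ne 0 ]]
then
    # move (instead of remove) everything modified before today
    find /var/sitelogs/httpd -daystart -type f -mtime +0 -exec mv {} ~/logbackup/ \;
fi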
Anyway, given your requirement you might try logrotate, which is more suitable for this kind of task. For example, you can set up a rule to rotate all the logs inside /var/sitelogs/httpd on a daily basis and keep only those from the last two days, removing the rest.
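A minimal sketch of such a rule, e.g. in a file like /etc/logrotate.d/sitelogs (the glob pattern and options are only an example, adjust to your setup):
Code:
/var/sitelogs/httpd/*log {
    # rotate once per day, keep the two most recent rotations
    daily
    rotate 2
    # don't complain about missing logs, skip empty ones
    missingok
    notifempty
}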