I would like to free up space by deleting unwanted files. Instead of either trawling through my working folders by hand when they get too big, or blindly deleting everything older than a fixed age (or not accessed for, say, a year), I want to specify how long each individual file should be kept.
My scripting is basic at best, so at present I have:
Code:
#!/bin/bash
clear
echo "deleting files older than specified expiry times"
find /{location specified}/3month -type f -mtime +91 -exec rm -f {} \;
find /{location specified}/6month -type f -mtime +182 -exec rm -f {} \;
find /{location specified}/1year -type f -mtime +365 -exec rm -f {} \;
I.e. I have a collection of trash directories.
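Since the three find lines only differ in directory and age, they could be collapsed into one table-driven loop. This is just a sketch: /tmp/expire-demo and the demo files are made up for illustration (the real script would point BASE at the actual trash location), and the setup lines exist only so it runs standalone. -type f skips the trash directories themselves, and GNU find's -delete replaces the -exec rm:

```shell
#!/bin/bash
# Sketch: table-driven version of the three find lines.
# BASE is a throwaway demo directory -- substitute the real location.
BASE="/tmp/expire-demo"

# Demo setup only: one stale file and one fresh file.
mkdir -p "$BASE/3month" "$BASE/6month" "$BASE/1year"
touch -d '100 days ago' "$BASE/3month/stale.log"
touch "$BASE/3month/fresh.log"

# One "directory max-age-in-days" pair per line.
while read -r dir days; do
    # -type f: only regular files, never the trash dirs themselves.
    # -delete: GNU find's built-in removal (use -print first as a dry run).
    find "$BASE/$dir" -type f -mtime "+$days" -delete
done <<'EOF'
3month 91
6month 182
1year 365
EOF
```

Adding a new retention class is then just one more line in the here-document rather than another find command.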
Other methods that occur to me:
- Put the name of each file I want to expire into a configuration file along with an expiry date, and design a script to load this information and delete the files whose atime is older than the expiry date. This means reading variables from a file; I might be stretching bash beyond what it's designed for? If anyone can point me to a link showing a way forward, that would be most helpful.
- Something like debugfs to manipulate the ctime? At present I feel that using debugfs for this would be dangerous.
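The config-file idea is well within bash's reach: a while-read loop splits each line into fields, and GNU date can turn an expiry date into epoch seconds for comparison. A minimal sketch, where the config path, file names, and dates are all invented for the demo (and paths containing spaces are not handled):

```shell
#!/bin/bash
# Sketch of the config-file approach: each config line holds a path and
# an expiry date; the file is removed once that date has passed.
CONF="/tmp/expire.conf"
now=$(date +%s)

# Demo setup only: two files and a config naming them.
mkdir -p /tmp/expire-demo
touch /tmp/expire-demo/a.txt /tmp/expire-demo/b.txt
cat > "$CONF" <<'EOF'
/tmp/expire-demo/a.txt 2020-01-01
/tmp/expire-demo/b.txt 2099-01-01
EOF

# read splits each line on whitespace into path and expiry.
while read -r path expiry; do
    [ -z "$path" ] && continue            # skip blank lines
    case "$path" in \#*) continue ;; esac # skip comment lines
    # GNU date -d parses the expiry date into epoch seconds.
    if [ "$(date -d "$expiry" +%s)" -lt "$now" ]; then
        rm -f -- "$path"
    fi
done < "$CONF"
```

Checking atime instead would mean comparing the expiry against `stat -c %X "$path"` rather than against the current time, but the loop structure stays the same.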
It's OK for now, but I'd be interested to see how other Linux users are doing this. Most of us keep far too much data, bunging up our HDs and our backups, and pinging ceaselessly to and from the cloud. It would be great to deal with this more efficiently.
Cheers