Linux - Software
This forum is for Software issues.
Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
My question is, why are any of you logging in as root directly? Set up sudo, restrict access to dangerous commands like rm -rf /, and ding ding, you never have to use root's password to log in; you can get root access to whatever you need by properly configuring sudoers in /etc.
/me steps after installing Linux:
1. Install Linux
2. Log in as root.
3. Create a regular user account.
4. Set up and update sudoers, giving the regular user the access it needs via sudo.
5. Change root's password to something long, hard, and random; something unrelated to me that I'll forget after a couple of minutes.
6. Log out and log in as the regular user; never use the root password unless there's some type of emergency and I have to reset it the good ol' fashioned way, booting with a rescue disk or into single-user mode.
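For step 4, a minimal sudoers sketch (always edit with visudo; the username rzaleski and the wheel group are just examples, adjust for your own box):

```
# /etc/sudoers -- edit with 'visudo', never directly
# Let one user run any command as root, prompting for *their* password:
rzaleski  ALL=(ALL) ALL

# Or grant the same to everyone in the wheel group:
%wheel    ALL=(ALL) ALL
```

After that, `sudo some-command` gets you root access without ever touching root's password.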
# Backup my home dir every day at midnight
00 00 * * * /bin/tar czvf /backup/`/bin/date '+%Y%m%d'`.tgz /home/rzaleski
# Update my urpmi update source at 02:00
00 02 * * * urpmi.update --update
For some reason my cron jobs didn't run. Is there a way I can find out why?
Also, how can I backup everything in my home directory except for the hidden (.*) and tmp folders?
It would probably be a good idea to implement this feature in the gnu rm command.
Unfortunately, I doubt it will be put in. I mean, it's 2006 now and these GNU tools existed even before Linux (1991). That leads me to think that if they were going to add it, they would have done so by now.
# Backup my home dir every day at midnight
00 00 * * * /bin/tar czvf /backup/`/bin/date '+%Y%m%d'`.tgz /home/rzaleski
# Update my urpmi update source at 02:00
00 02 * * * urpmi.update --update
For some reason my cron jobs didn't run. Is there a way I can find out why?
Also, how can I backup everything in my home directory except for the hidden (.*) and tmp folders?
Ryan
What user is running these jobs? I assume they are being run as root, or the urpmi job will certainly fail. Have you created /backup? Is your date command in /bin or /usr/bin?
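On the exclude question: GNU tar's --exclude option should do it. A rough sketch (the directory layout here is made up just for the demo):

```shell
# Demo: build a fake home dir, then tar it up while skipping
# hidden files (.*) and the tmp folder.
home=$(mktemp -d)
arch=$(mktemp -u).tgz
mkdir -p "$home/docs" "$home/tmp"
echo keep   > "$home/docs/notes.txt"
echo skip   > "$home/tmp/scratch"
echo hidden > "$home/.bashrc"

# By default, --exclude patterns match path components anywhere in
# the tree, and tar does not descend into an excluded directory.
tar czf "$arch" \
    --exclude='.*' \
    --exclude='tmp' \
    -C "$(dirname "$home")" "$(basename "$home")"

tar tzf "$arch"
```

As for finding out why the jobs didn't run, cron usually logs every attempt to /var/log/cron or the syslog, depending on the distro, so that's the first place to look.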
I think the only way to delete safely is to use a script which moves the deleted files somewhere (e.g. ~/.Trash) and automatically deletes them WITH A CERTAIN DELAY. If you use 'rm -i' regularly, you start pressing Enter automatically and only think afterwards. The problem is not only related to root access. My lesson was doing 'rm * ~' instead of 'rm *~'.
Personally, I prefer a long delay, like a week, before the files really will be deleted.
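Something along these lines, as a rough sketch (the function name and the ~/.Trash location are just my choices):

```shell
# Sketch of a safer "rm": move targets into ~/.Trash with a
# timestamp suffix instead of deleting them outright.
trash() {
    local dir="$HOME/.Trash"
    mkdir -p "$dir"
    local stamp f
    stamp=$(date '+%Y%m%d%H%M%S')
    for f in "$@"; do
        mv -- "$f" "$dir/$(basename "$f").$stamp"
    done
}

# The delayed purge can then be a daily cron job, e.g. a one-week delay:
# 00 03 * * * find "$HOME/.Trash" -mindepth 1 -mtime +7 -delete
```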
Well, there are ways of recovering the data, but it might already be too late. I am not a guru, but I have had my own experiences...
If you didn't take the hard disk offline immediately after the delete, a lot of the data is probably already gone.
You can use grep to look for important text files, like this:
********
Another method of file recovery is to use grep to search for text contained in the file. This approach is unlikely to work on anything but text files, and even then it may return a partial file or a file surrounded by text or binary junk. To use this approach, you type a command such as the following:
# grep -a -B5 -A100 "Dear Senator Jones" /dev/sda4 > recover.txt
This command searches for the text Dear Senator Jones on /dev/sda4 and returns the five lines before (-B5) and the 100 lines after (-A100) that string. The redirection operator stores the results in the file recover.txt. Because this operation involves a scan of the entire raw disk device, it's likely to take a while. (You can speed matters up slightly by omitting the redirection operator and instead cutting and pasting the returned lines from an xterm into a text editor; this enables you to hit Ctrl+C to cancel the operation once it's located the file. Another option is to use script to start a new shell that copies its output to a file, so you don't need to copy text into an editor.) This approach also works with any filesystem. If the file is fragmented, though, it will only return part of the file. If you misjudge the size of the file in lines, you'll either get just part of the file or too much -- possibly including binary data before, after, or even within the target file.
******
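You can try those flags on an ordinary file first to see what they do (the file and the text here are made up; on a real disk you'd point grep at the raw device as root):

```shell
# Toy run of the same grep flags against a regular file instead of /dev/sda4
f=$(mktemp)
printf 'junk line 1\njunk line 2\nDear Senator Jones,\nletter body\nmore junk\n' > "$f"
grep -a -B1 -A1 "Dear Senator Jones" "$f"
# prints:
#   junk line 2
#   Dear Senator Jones,
#   letter body
```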
Binary files are a lot harder to recover. The easiest method is analyzing the headers of files you still have and looking for similar file headers with grep, but in practice this is very hard.
things you can try to avoid this situation, if not already mentioned somewhere:
libtrash: http://freshmeat.net/projects/libtrash/
I think this is a lot better than using aliases. The only drawback is that the trashcan can grow very fast, but it's very configurable.
# Backup my home dir every day at midnight
00 00 * * * /bin/tar czvf /backup/`/bin/date '+%Y%m%d'`.tgz /home/rzaleski
# Update my urpmi update source at 02:00
00 02 * * * urpmi.update --update
For some reason my cron jobs didn't run. Is there a way I can find out why?
Also, how can I backup everything in my home directory except for the hidden (.*) and tmp folders?
Just for future reference... because it hasn't been mentioned yet...
KDE and GNOME (I think) both have a trash container they use when you delete something within their file managers. However, there is no "recycle bin" or "Trash" implemented at the filesystem level. If you remove something using the standard rm or unlink program, it is gone.
I will tell you a simple thing.
Do you know the chattr command?
After you have done your work, do the following:
chattr +i * (in /)
Nothing can be deleted again.
And if you do want to delete something:
chattr -i file-or-folder-name
The crontab should be:
# m h dom mon dow user command
So you seem to be missing the "user" thingy.
To make things clean, I would suggest redirecting the output to somewhere where you have write access. So that you can see if any error occurs.
Maybe:
command > /backup/lastlog 2>&1
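To capture stderr too, the `2>&1` has to come after the stdout redirect. A small sketch:

```shell
# Both stdout and stderr end up in the log only with this ordering:
log=$(mktemp)
{ echo "normal output"; echo "an error" >&2; } > "$log" 2>&1
cat "$log"
# prints:
#   normal output
#   an error
```

In the crontab, that line would then look something like: 00 02 * * * urpmi.update --update > /backup/lastlog 2>&1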
Libtrash is more than a simple utility. It's a shared library that overrides the default actions Linux uses to delete files. Once libtrash is installed, deleted files will be moved into a subdirectory of the user's home directory named Trash. Libtrash allows users to use the normal Linux commands for deleting files, and libtrash will work with any files on the system.
It is far more flexible than traditional trashcan applications or frameworks, such as those used by GNOME or KDE, because it works automatically with any program that links dynamically against glibc - which, on the typical desktop system, is just about everything.