[SOLVED] Pass root permissions to user from a root script
I'm trying to set up a udev script to automate backup when I plug in a particular hard drive.
udev calls a script, "bu_preflight", which checks the UUID of the HD to see if it should proceed. This runs as root. It then calls another script, "bu". The pertinent line in the first script is:
I run the second script as $CURRENTUSER, so that output will be directed to the terminal (guake). This script also requires sudo, for root permissions.
The problem is that when the line in the first script runs, guake opens a new window, and demands the sudo password for $CURRENTUSER. As "bu_preflight" is run as root, I would hope that there would be a way to pass the root sudo to the script. I presume the problem is that the command runs the guake command as $CURRENTUSER (losing sudo), then attempts to regain sudo. Is there a way to preserve sudo, so that I do not need to type in my password?
Thanks in advance.
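One possible way to avoid typing the password at all (a hedged sketch, not something from the thread): since the prompt comes from the second script calling sudo as $CURRENTUSER, a narrow sudoers rule could allow that one command without a password. The username and script path below are placeholders, not the poster's actual values:

```
# /etc/sudoers.d/backup -- create with: visudo -f /etc/sudoers.d/backup
# "youruser" and the script path are placeholders for your own setup.
youruser ALL=(root) NOPASSWD: /usr/local/sbin/bu
```

Keeping the rule restricted to a single root-owned script limits what it can be abused for.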
IMHO you should reconsider rewriting the backup process script. Ameliorating the process with Desktop Environment-based terminal output really is a "nice to have". Backing up files itself is a relatively simple task that can be handled automagically and from the CLI only. You could for instance redirect backup process stdout and stderr to a file the unprivileged user can tail. This can be a separate, disconnected process that doesn't require root rights. BTW have you looked at existing backup scripts and applications? That way you don't have to reinvent the wheel...
* More importantly scripts that run as root must not ever reside in user-writable directories as it would offer the unprivileged user the opportunity to replace script contents and perform any task as root. That is a fundamental, fatal flaw.
Sorry, I probably should have given more information. I am using backintime as the main backup engine. My scripts do a few additional things, such as checking the UUID of the attached HD, popping up a zenity dialogue box to ask if I really want to backup, setting the computer to not suspend while the backup is running, and, after the backup, ejecting the disk (although the final part has proved a bit problematic). Hence, I do like seeing the stdout of backintime, as it provides information about its progress.
That's a great idea about sending stdout and stderr to another file. So I guess I could send these to a file in /tmp/, and then use tail -f for my user?
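A minimal sketch of that log-and-tail idea, with a stand-in function in place of the real backintime call (the log path is just an example):

```shell
#!/bin/sh
# Root-run side: send the backup's stdout and stderr to a log file that the
# unprivileged user can watch. "run_backup" is a stand-in for the real call.
LOG=/tmp/bu.log

run_backup() {
    echo "backup started"
    echo "backup finished"
}

# 2>&1 folds stderr into stdout so both end up in the same log
run_backup > "$LOG" 2>&1

# The user then follows progress from any terminal with:
#   tail -f /tmp/bu.log
```

One caveat: a fixed, predictable filename in /tmp can be hijacked by another local user, so a root-owned location such as /var/log may be a safer home for the log.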
As far as keeping the scripts in user-writable directories, you make a very good point. I've come into the habit of keeping all my scripts in a sub-directory of my home folder, mainly so I can keep track of them (e.g. when I upgrade the operating system). I guess a secure alternative would be just to chmod a-w for the whole directory?
Quote:
Originally Posted by Ahaaa
I do like seeing the stdout of backintime, as it provides information about its progress. That's a great idea about sending stdout and stderr to another file. So I guess I could send these to a file in /tmp/, and then use tail -f for my user?
Code:
~]$ Do you really want to reinvent the wheel? [y/N]:
...just asking because Back In Time supports user callback so you actually don't need to tail any logs. (One of the flaws that plague making backups is user interaction. Making a backup may seem like a nuisance or inconvenience but once you start missing files or whole directories you kick yourself for not letting it get on with business automagically. To put it more clearly: I vote no wrt the whole Zenity thing.) Also BIT can run ionice when tackling manual snapshots. Since it's Python-based you might want to check the source and hack your no-suspend in. Or else ask the developer as it's a useful option to have for any device running on battery.
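For reference, a user-callback is just an executable script that BIT invokes with the profile id, profile name and a reason code. The sketch below is an assumption-heavy illustration; in particular, the reason codes used (1 = backup begins, 2 = backup ends) are my reading of the docs and should be verified against the Back In Time documentation for your version:

```shell
#!/bin/sh
# Hypothetical ~/.config/backintime/user-callback sketch.
# Arguments per the BIT docs: $1 = profile id, $2 = profile name, $3 = reason.
# Reason codes below are assumptions -- check them for your BIT version.

handle_event() {
    profile_name="$2"
    reason="$3"
    case "$reason" in
        1) echo "backup of profile $profile_name starting" ;;  # e.g. inhibit suspend here
        2) echo "backup of profile $profile_name finished" ;;  # e.g. eject the disk here
        *) : ;;                                                # ignore other events
    esac
}

handle_event "$@"
```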
Quote:
Originally Posted by Ahaaa
I guess a secure alternative would be just to chmod a-w for the whole directory?
No, there is no "alternative". Root-owned files should reside in root-owned directories. /usr/local/bin or /usr/local/sbin would be the traditional (FSSTND, LFS and such) choice but these days things seem to accumulate in just /usr/bin and /usr/sbin.
*BTW my take on the whole "safely remove" thing is that it is Windows-speak for "free up drive letter", the equivalent of 'umount'. None of the external Firewire, SATA or USB casings I've come across (that have a power switch and external AC power like your MyBook does) would power down automagically after "safely removing" or umounting. I would not mark it as "expected behaviour" if it did, nor would I want such behaviour.
Thank you so much for your help. I feel that I am learning so much more than I had bargained for, which is terrific!
Quote:
Originally Posted by unSpawn
...just asking because Back In Time supports user callback so you actually don't need to tail any logs.
Oh that's great! That should be very useful for at least some of what I'm trying to do (e.g. eject the disk), although my scripts would still require sudo (for example for "cryptsetup luksClose").
Quote:
Originally Posted by unSpawn
I vote no wrt the whole Zenity thing.
So I guess I should elucidate further. The reason why I'm using Zenity (and the pre-backup script, i.e. before the callback) is as follows. AFAIK, udev can only be so specific with its triggers. It can run a script when it recognises a particular model of hard drive, but not by the UUID of my specific HD. So, I call the pre-backup script to test for the UUID. After this, 99% of the time I'd want to run the backup, but sometimes I might want to recover something instead. This is where Zenity comes in. After the HD UUID is recognised, Zenity pops up a dialogue box, asking if backup should commence. It times out after 5 seconds, and the backup automatically commences.
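That timeout behaviour maps directly onto zenity's exit codes: 0 for "Yes", 1 for "No", and 5 when --timeout expires, so a timeout falls through to the backup by default. A sketch (the dialog text is made up, and a fallback stub is included so the sketch can run without a display):

```shell
#!/bin/sh
# Ask whether to back up; a timeout counts as "yes, proceed".
if ! command -v zenity >/dev/null 2>&1; then
    zenity() { return 5; }   # headless fallback: pretend the dialog timed out
fi

decide() {
    zenity --question --timeout=5 --text "Backup disk detected. Start backup?"
    case $? in
        1) echo "backup skipped"  ;;  # explicit No -> maybe recover instead
        *) echo "starting backup" ;;  # Yes (0) or timeout (5)
    esac
}

decide
```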
Quote:
Originally Posted by unSpawn
Also BIT can run ionice when tackling manual snapshots.
I wasn't quite sure what you meant by this sentence? I didn't quite understand the connection.
Quote:
Originally Posted by unSpawn
No, there is no "alternative". Root-owned files should reside in root-owned directories. /usr/local/bin or /usr/local/sbin would be the traditional (FSSTND, LFS and such) choice but these days things seem to accumulate in just /usr/bin and /usr/sbin.
But can I not make ~/Documents/Computer/Scripts/Bash/ a root-owned directory? I just figure it's so much easier to keep track of the files I create this way. I have /home/ on a separate partition on my system, and I only replace the / partition when I upgrade the OS. How would you do it? Do you keep your own scripts in one of these locations, then copy the whole folder across when you upgrade? I guess I also like the idea of separating those scripts that I write, and those that I download, etc.
Quote:
Originally Posted by unSpawn
*BTW my take on the whole "safely remove" thing is that it is Windows-speak for "free up drive letter", the equivalent of 'umount'. None of the external Firewire, SATA or USB casings I've come across (that have a power switch and external AC power like your MyBook does) would power down automagically after "safely removing" or umounting. I would not mark it as "expected behaviour" if it did, nor would I want such behaviour.
So I've worked out that there are three steps when ejecting an encrypted drive. These three steps can be accessed from the Disk Utility GUI, or via the CLI. They are "Unmount Volume" (udisks --unmount), "Lock Volume" (sudo cryptsetup luksClose), and "Safe Removal" (udisks --detach). Clicking "Safely remove drive" from nautilus, or "Safely remove" from the Unity launcher seems to do all three. With my previous enclosure, the final step would power down the device, which would stay off (which is also implied by the text in Disk Utility). (My previous enclosure had a power button too, although it was an on/off switch, rather than a "single-state" push-button.) My WD MB actually does power down, but then starts up again, unlike the old enclosure.
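Those three steps can be collected into one helper. The device paths and mapper name below are placeholders, not the poster's actual values:

```shell
#!/bin/sh
# Three-step eject for an encrypted external drive, per the steps above.
eject_encrypted() {
    part="$1"    # e.g. /dev/sdc1, the encrypted partition
    mapper="$2"  # e.g. backup_crypt, the name given at luksOpen time
    dev="$3"     # e.g. /dev/sdc, the whole device

    udisks --unmount "$part"       || return 1  # 1. Unmount Volume
    cryptsetup luksClose "$mapper" || return 1  # 2. Lock Volume (needs root)
    udisks --detach "$dev"                      # 3. Safe Removal
}

# usage (as root): eject_encrypted /dev/sdc1 backup_crypt /dev/sdc
```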
Quote:
Originally Posted by Ahaaa
AFAIK, udev can only be so specific with its triggers. It can run a script when it recognises a particular model of hard drive, but not by the UUID of my specific HD. So, I call the pre-backup script to test for the UUID. After this, 99% of the time I'd want to run the backup, but sometimes I might want to recover something instead. This is where Zenity comes in. After the HD UUID is recognised, Zenity pops up a dialogue box, asking if backup should commence. It times out after 5 seconds, and the backup automatically commences.
Personally I would leave recovery out of the whole scheme as it is (or should be) something you need sporadically. Perfect for a manual approach but OK, it's your script and at least you've thought it over well.
Quote:
Originally Posted by Ahaaa
I wasn't quite sure what you meant by this sentence? I didn't quite understand the connection.
By pointing to the example of BIT running an external command I tried to convey that you could probably hack in your no-suspend command yourself.
Quote:
Originally Posted by Ahaaa
But can I not make ~/Documents/Computer/Scripts/Bash/ a root-owned directory? I just figure it's so much easier to keep track of the files I create this way. I have /home/ on a separate partition on my system, and I only replace the / partition when I upgrade the OS. How would you do it? Do you keep your own scripts in one of these locations, then copy the whole folder across when you upgrade? I guess I also like the idea of separating those scripts that I write, and those that I download, etc.
While I understand the rationale for, say, Fedora 17 dumping everything under /usr, and for other distributions promoting the "/ + /home" setup for various reasons, I always use the traditional partitioning scheme, and for backup purposes I use rsync over the network. Your setup shouldn't be a problem for you because BIT supports profiles, so if you choose to place root-owned scripts in /usr/local/sbin you can probably add that location to your (or another) profile.
Thanks very much for your replies. It's a lot to digest, but I feel I've learnt a lot.
I just wondered if you could clarify one last point. In my current setup, is it possible to make ~/Documents/Computer/Scripts/Bash/ a root-owned directory, which should prevent any malicious activity?
Quote:
Originally Posted by Ahaaa
is it possible to make ~/Documents/Computer/Scripts/Bash/ a root-owned directory, which should prevent any malicious activity?
Ah ok. I tested it myself, and it appears to be okay. I guess I haven't tried all permutations though, so maybe I just need to find a root directory to put these few scripts in.
So if the directory is empty, a user without wx can still remove the directory. However, if the directory contains files, then they cannot remove or modify it.
However, renaming the directory works fine, so the user could just rename the directory and create a new one. So after all that, I should have just taken your advice without question and moved the root scripts to a "proper" location!
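The finding generalises: rename and unlink permissions live in the parent directory, not in the directory itself, so even chmod a-w on the scripts directory doesn't help while its parent is user-writable. A throwaway demonstration:

```shell
#!/bin/sh
# Show that renaming an entry needs write permission on the PARENT only.
base=$(mktemp -d)                 # throwaway, user-writable parent
mkdir "$base/scripts"
chmod a-w "$base/scripts"         # the directory itself is read-only...

mv "$base/scripts" "$base/scripts.old"   # ...yet the rename succeeds, because
                                         # $base (the parent) is writable
mkdir "$base/scripts"                    # a lookalike replacement drops in

[ -d "$base/scripts.old" ] && echo "rename succeeded despite a-w"
```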
Oh well, at least I learnt something more about Linux! Thanks again for all your help.
It is too common for Linux users (new or seasoned, and regardless of their own or the general perception of the distribution they favor) to regard the first reply they receive as "right", or to take just any advice in good faith. What is less common is users showing evidence of the kind of inquisitiveness that should be the default for all Linux users. So in return I say thanks for testing things yourself and posting your findings.
Quote:
Originally Posted by Ahaaa
So if the directory is empty, a user without wx can still remove the directory. However, if the directory contains files, then they cannot remove or modify it.

Quote:
Originally Posted by Ahaaa
AFAIK, udev can only be so specific with its triggers. It can run a script when it recognises a particular model of hard drive, but not by the UUID of my specific HD.
The properties that udev is passed by the kernel when an HDD is plugged in can be displayed with, for example, udevadm info -a -p /sys/block/sdc/sdc1, where sdc1 is determined by running blkid after plugging the drive in.
I have not seen the UUID of a partition but the serial number of the device represented by /dev/sdc is commonly available.
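For illustration, a udev rule keyed on one of those properties might look like the fragment below. The serial value and the script path are placeholders; read the real values for your drive from udevadm info:

```
# /etc/udev/rules.d/99-backup.rules (hypothetical)
# The serial and the script path below are examples only.
ACTION=="add", SUBSYSTEM=="block", ENV{ID_SERIAL_SHORT}=="WD1234567890", RUN+="/usr/local/sbin/bu_preflight"
```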
Quote:
Originally Posted by unSpawn
What is less common is users showing evidence of the kind of inquisitiveness that should be the default for all Linux users.
Thank you for your kind words. Also, it might not surprise you that I am a scientist by profession, so I think empiricism comes with the territory.
Quote:
Originally Posted by catkin
The properties that udev is passed by the kernel on plugging in an HDD can be displayed by using, for example udevadm info -a -p /sys/block/sdc/sdc1 where the sdc1 is determined by running blkid after plugging it in.
The method that I use in my script looks at /dev/disk/by-uuid/ for the presence of the specific HD. The problem I had was that I had to wait 15–25 seconds before calling this subroutine, otherwise the device wouldn't be attached yet. Your methods sound good though, as presumably I would not have to code the wait time in.
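Rather than a fixed 15–25 second sleep, the script could poll for the by-uuid symlink until it appears or a limit is hit. A sketch with a made-up UUID and a deliberately short limit:

```shell
#!/bin/sh
# Poll for the device's /dev/disk/by-uuid symlink instead of sleeping blindly.
UUID="0000-0000-example"              # placeholder, not a real drive's UUID
LINK="/dev/disk/by-uuid/$UUID"

tries=0
while [ ! -e "$LINK" ] && [ "$tries" -lt 3 ]; do
    sleep 1                           # re-check once per second
    tries=$((tries + 1))
done

if [ -e "$LINK" ]; then
    echo "disk attached after $tries seconds"
else
    echo "timed out waiting for $LINK"   # the placeholder never appears
fi
```

In practice the limit would be more like 30; 3 just keeps the sketch quick. Triggering the script directly from a udev rule would remove the need to wait at all.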
Quote:
Originally Posted by catkin
Oops! I overlooked Ahaaa's post after the one quoted