LinuxQuestions.org
Old 04-13-2012, 10:36 PM   #1
Ahaaa
Member
 
Registered: Jan 2012
Location: Melbourne
Distribution: Ubuntu
Posts: 45

Rep: Reputation: 1
Pass root permissions to user from a root script


Hi,

I'm trying to set up a udev script to automate backup when I plug in a particular hard drive.

udev calls a script, "bu_preflight", which checks the UUID of the HD to see if it should proceed. This will run as root. It then calls another script, "bu". The pertinent line in the first script is
Code:
sudo -u $CURRENTUSER guake -n NEW_TAB -e "sudo /home/$CURRENTUSER/Documents/Computer/Scripts/Bash/bu"
I run the second script as $CURRENTUSER, so that output will be directed to the terminal (guake). This script also requires sudo, for root permissions.

The problem is that when the line in the first script runs, guake opens a new window, and demands the sudo password for $CURRENTUSER. As "bu_preflight" is run as root, I would hope that there would be a way to pass the root sudo to the script. I presume the problem is that the command runs the guake command as $CURRENTUSER (losing sudo), then attempts to regain sudo. Is there a way to preserve sudo, so that I do not need to type in my password?

Thanks in advance.
 
Old 04-14-2012, 04:32 AM   #2
unSpawn
Moderator
 
Registered: May 2001
Posts: 29,415
Blog Entries: 55

Rep: Reputation: 3600
IMHO you should reconsider rewriting the backup process script. Ameliorating the process with Desktop Environment-based terminal output really is a "nice to have". Backing up files itself is a relatively simple task that can be handled automagically and from the CLI only. You could for instance redirect backup process stdout and stderr to a file the unprivileged user can tail. This can be a separate, disconnected process that doesn't require root rights. BTW have you looked at existing backup scripts and applications? That way you don't have to reinvent the wheel...
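The redirect-and-tail idea can be sketched as follows. The log path and the stand-in "backup" command are illustrative; in bu_preflight (run as root) you would redirect the real bu script's output to the log instead:

```shell
# The root-run backup writes its output to a log file.
LOG=$(mktemp)   # a real script might use a fixed path such as /var/log/bu.log

# Stand-in for the real backup, e.g.:  /usr/local/sbin/bu >"$LOG" 2>&1
{ echo "backup started"; echo "backup done"; } >"$LOG" 2>&1

# The unprivileged user follows progress without needing root, e.g.:
#   tail -f /var/log/bu.log
tail -n 2 "$LOG"
```

This keeps the privileged work and the display entirely disconnected: root never has to spawn anything in the user's session.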

* More importantly scripts that run as root must not ever reside in user-writable directories as it would offer the unprivileged user the opportunity to replace script contents and perform any task as root. That is a fundamental, fatal flaw.
 
2 members found this post helpful.
Old 04-14-2012, 07:48 AM   #3
Ahaaa
Original Poster
Thank you for the reply.

Sorry, I probably should have given more information. I am using backintime as the main backup engine. My scripts do a few additional things, such as checking the UID of the attached HD, popping up a zenity dialogue box to ask if I really want to backup, setting the computer to not suspend while the backup is running, and after the backup, ejecting the disk (although the final part has proved a bit problematic). Hence, I do like seeing the stdout of backintime, as it provides information about its progress.

That's a great idea about sending stdout and stderr to another file. So I guess I could send these to a file in /tmp/, and then use tail -f for my user?

As far as keeping the scripts in user-writable directories goes, you make a very good point. I've gotten into the habit of keeping all my scripts in a sub-directory of my home folder, mainly so I can keep track of them (e.g. when I upgrade the operating system). I guess a secure alternative would be just to chmod a-w the whole directory?

Thanks again.
 
Old 04-14-2012, 09:03 AM   #4
unSpawn
Moderator
Quote:
Originally Posted by Ahaaa View Post
I do like seeing the stdout of backintime, as it provides information about its progress. That's a great idea about sending stdout and stderr to another file. So I guess I could send these to a file in /tmp/, and then use tail -f for my user?
Code:
~]$ Do you really want to reinvent the wheel? [y/N]:
...just asking because Back In Time supports user callback so you actually don't need to tail any logs. (One of the flaws that plague making backups is user interaction. Making a backup may seem like a nuisance or inconvenience but once you start missing files or whole directories you kick yourself for not letting it get on with business automagically. To put it more clearly: I vote no wrt the whole Zenity thing.) Also BIT can run ionice when tackling manual snapshots. Since it's Python-based you might want to check the source and hack your no-suspend in. Or else ask the developer as it's a useful option to have for any device running on battery.
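For illustration, a user-callback is just an executable script that Back In Time invokes with a profile id, a profile name and a numeric reason code. This is a hypothetical skeleton; check the BIT documentation for the exact argument meanings in your version:

```shell
# Hypothetical Back In Time user-callback skeleton. BIT passes a profile id,
# a profile name and a numeric "reason"; the exact reason codes should be
# checked against the installed version's documentation.
callback() {
  profile_id=$1
  profile_name=$2
  reason=$3
  echo "callback for profile '$profile_name' ($profile_id), reason $reason"
}

# Example invocation as BIT might perform it (arguments illustrative):
callback 1 "Main profile" 2
```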


Quote:
Originally Posted by Ahaaa View Post
I guess a secure alternative would be just to chmod a-w for the whole directory?
No, there is no "alternative". Root-owned files should reside in root-owned directories. /usr/local/bin or /usr/local/sbin would be the traditional (FSSTND, LFS and such) choice but these days things seem to accumulate in just /usr/bin and /usr/sbin.
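A minimal install recipe along those lines might look like this. It is a sketch, demonstrated into a temporary directory so it runs unprivileged; on a real system the destination would be /usr/local/sbin and the command run via sudo:

```shell
# Install a script with root-friendly ownership and mode bits.
# On a real system this would be:
#   sudo install -o root -g root -m 0755 bu /usr/local/sbin/bu
# Demonstrated here into a temporary directory so it runs without root:
DEST=$(mktemp -d)
printf '#!/bin/sh\necho "backup placeholder"\n' > "$DEST/bu.src"
install -m 0755 "$DEST/bu.src" "$DEST/bu"
"$DEST/bu"
```

`install` copies and sets the mode in one step, which avoids the window where a freshly copied script briefly has the wrong permissions.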


*BTW my take on the whole "safely remove" thing is that it is Windows-speak for "free up drive letter", the equivalent of 'umount'. None of the external FireWire, SATA or USB casings I've come across (that have a power switch and external AC power like your MyBook does) would power down automagically after "safely removing" or umounting. I would not mark it as "expected behaviour" if it did, nor would I want such behaviour.
 
1 member found this post helpful.
Old 04-14-2012, 10:55 PM   #5
Ahaaa
Original Poster
Thank you so much for your help. I feel that I am learning so much more than I had bargained for, which is terrific!

Quote:
Originally Posted by unSpawn View Post
...just asking because Back In Time supports user callback so you actually don't need to tail any logs.
Oh that's great! That should be very useful for at least some of what I'm trying to do (e.g. eject the disk), although my scripts would still require sudo (for example for "cryptsetup luksClose").

Quote:
Originally Posted by unSpawn View Post
I vote no wrt the whole Zenity thing.)
So I guess I should elucidate further. The reason why I'm using Zenity (and the pre-backup script, i.e. before the callback) is as follows. AFAIK, udev can only be so specific with its triggers. It can run a script when it recognises a particular model of hard drive, but not by the UUID of my specific HD. So, I call the pre-backup script to test for the UUID. After this, 99% of the time I'd want to run the backup, but sometimes I might want to recover something instead. This is where Zenity comes in. After the HD UUID is recognised, Zenity pops up a dialogue box asking if the backup should commence. It times out after 5 seconds, and the backup automatically commences.
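That timeout-then-default flow can be sketched like this. zenity --question exits 0 (Yes), 1 (No), or 5 when --timeout expires, so both 0 and 5 fall through to the backup; the headless fallback branch exists only so the sketch can run without a GUI session:

```shell
# Ask whether to back up, defaulting to "yes" after 5 seconds.
ask_backup() {
  if command -v zenity >/dev/null 2>&1 && [ -n "$DISPLAY" ]; then
    zenity --question --timeout=5 --text="Backup disk detected. Start backup?"
  else
    return 5  # no GUI available: behave like a timeout, i.e. default to backup
  fi
}

ask_backup
case $? in
  0|5) echo "starting backup" ;;  # Yes clicked, or dialogue timed out
  *)   echo "backup skipped" ;;   # No clicked (or zenity error)
esac
```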

Quote:
Originally Posted by unSpawn View Post
Also BIT can run ionice when tackling manual snapshots.
I wasn't quite sure what you meant by this sentence; I didn't quite understand the connection.

Quote:
Originally Posted by unSpawn View Post
No, there is no "alternative". Root-owned files should reside in root-owned directories. /usr/local/bin or /usr/local/sbin would be the traditional (FSSTND, LFS and such) choice but these days things seem to accumulate in just /usr/bin and /usr/sbin.
But can I not make ~/Documents/Computer/Scripts/Bash/ a root-owned directory? I just figure it's so much easier to keep track of the files I create this way. I have /home/ on a separate partition on my system, and I only replace the / partition when I upgrade the OS. How would you do it? Do you keep your own scripts in one of these locations, then copy the whole folder across when you upgrade? I guess I also like the idea of separating those scripts that I write, and those that I download, etc.

Quote:
Originally Posted by unSpawn View Post
*BTW my take on the whole "safely remove" thing is that it is Windows-speek for "free up drive letter", the equivalent of 'umount'. None of the external Firewire, SATA or USB casings I've come across (that have with a power switch and external AC power like your MyBook does) would power down automagically after "safely removing" or umounting. I would not mark it as "expected behaviour" if it did nor would I want such behaviour.
So I've worked out that there are three steps when ejecting an encrypted drive. These three steps can be accessed from the Disk Utility GUI, or via the CLI. They are "Unmount Volume" (udisks --unmount), "Lock Volume" (sudo cryptsetup luksClose), and "Safe Removal" (udisks --detach). Clicking "Safely remove drive" from nautilus, or "Safely remove" from the Unity launcher seems to do all three. With my previous enclosure, the final step would power down the device, which would stay off (which is also implied by the text in Disk Utility). (My previous enclosure had a power button too, although it was an on/off switch, rather than a "single-state" push-button.) My WD MB actually does power down, but then starts up again, unlike the old enclosure.

Thank you so much for your help again!
 
Old 04-15-2012, 06:14 AM   #6
unSpawn
Moderator
Quote:
Originally Posted by Ahaaa View Post
AFAIK, udev can only be so specific with its triggers. It can run a script when it recognises a particular model of hard drive, but not by the UUID of my specific HD. So, I call the pre-backup script to test for the UUID. After this, 99% of the time I'd want to run the backup, but sometimes I might want to recover something instead. This is where Zenity comes in. After the HD UUID is recognised, Zenity pops up a dialogue box asking if the backup should commence. It times out after 5 seconds, and the backup automatically commences.
Personally I would leave recovery out of the whole scheme as it is (or should be) something you need sporadically. Perfect for a manual approach but OK, it's your script and at least you've thought it over well.


Quote:
Originally Posted by Ahaaa View Post
I wasn't quite sure what you meant by this sentence; I didn't quite understand the connection.
By pointing to the example of BIT running an external command I tried to convey that you could probably hack in your no-suspend command yourself.


Quote:
Originally Posted by Ahaaa View Post
But can I not make ~/Documents/Computer/Scripts/Bash/ a root-owned directory? I just figure it's so much easier to keep track of the files I create this way. I have /home/ on a separate partition on my system, and I only replace the / partition when I upgrade the OS. How would you do it? Do you keep your own scripts in one of these locations, then copy the whole folder across when you upgrade? I guess I also like the idea of separating those scripts that I write, and those that I download, etc.
While I understand the rationale for, say, Fedora 17 dumping everything under /usr, and for other distributions promoting the "/ + /home" setup for various reasons, I always use the traditional partitioning scheme, and for backup purposes I use rsync over the network. Your setup shouldn't be a problem for you, because BIT supports profiles: if you choose to place root-owned scripts in /usr/local/sbin you can probably add that location to your or another profile.
 
Old 04-17-2012, 10:47 PM   #7
Ahaaa
Original Poster
Thanks very much for your replies. It's a lot to digest, but I feel I've learnt a lot.

I just wondered if you could clarify one last point. In my current setup, is it possible to make ~/Documents/Computer/Scripts/Bash/ a root-owned directory, which should prevent any malicious activity?
 
Old 04-20-2012, 09:14 PM   #8
Ahaaa
Original Poster
Quote:
Originally Posted by Ahaaa View Post
is it possible to make ~/Documents/Computer/Scripts/Bash/ a root-owned directory, which should prevent any malicious activity?
Ah ok. I tested it myself, and it appears to be okay. I guess I haven't tried all permutations though, so maybe I just need to find a root directory to put these few scripts in.

Code:
$ mkdir rootdir rootdirempty
$ sudo chown root rootdir rootdirempty
$ sudo chgrp root rootdir rootdirempty
$ sudo chmod go-wx rootdir/ rootdirempty/
$ ls -l
total 4
drwxr--r-- 2 root root 4096 2012-04-21 12:07 rootdir
drwxr--r-- 2 root root 4096 2012-04-21 12:07 rootdirempty
$ touch rootdir/test
touch: cannot touch `rootdir/test': Permission denied
$ sudo touch rootdir/test
$ ls rootdir/
ls: cannot access rootdir/test: Permission denied
test
$ rmdir rootdirempty/
$ rmdir rootdir/
rmdir: failed to remove `rootdir/': Directory not empty
$ rm -rf rootdir/
rm: cannot remove `rootdir': Permission denied
So if the directory is empty, a user without write or execute permission on it can still remove it. However, if the directory contains files, then they cannot remove or modify it.
 
Old 04-21-2012, 01:30 AM   #9
Ahaaa
Original Poster
Right... so I tried one more command.
Code:
$ mv rootdir/ rootdir2
works fine. So the user could just rename the directory and create a new one. So after all that, I should have just taken your advice without question, and moved root scripts to a "proper" location!
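The rename succeeds because removing or renaming a directory entry is governed by the write bit on the parent directory, not by the permissions on the entry itself. A sketch (here the subdirectory is merely read-only rather than root-owned, since the demo runs unprivileged, but the same rule applies):

```shell
# Whether you can remove or rename an entry depends on the PARENT
# directory's write bit, not on the entry's own permissions.
parent=$(mktemp -d)
mkdir "$parent/scripts"
chmod 0555 "$parent/scripts"            # the entry itself: no write for anyone
mv "$parent/scripts" "$parent/renamed" && echo "renamed ok"
```

This is also why /tmp carries the sticky bit: in a sticky, world-writable directory, only the owner of an entry (or root) may remove or rename it.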

Oh well, at least I learnt something more about Linux! Thanks again for all your help.
 
Old 04-21-2012, 02:57 AM   #10
unSpawn
Moderator
It is all too common for Linux users (new or seasoned, and regardless of the distribution they favor) to regard the first reply they receive as "right", or to take just any advice in good faith. What is less common is users showing the kind of inquisitiveness that should be the default for all Linux users. So in return I say thanks for testing things yourself and posting your findings.
 
Old 04-21-2012, 03:19 AM   #11
catkin
LQ 5k Club
 
Registered: Dec 2008
Location: Tamil Nadu, India
Distribution: Debian
Posts: 8,578
Blog Entries: 31

Rep: Reputation: 1208
Quote:
Originally Posted by Ahaaa View Post
So if the directory is empty, a user without wx can still remove the directory. However, if the directory contains files, then they cannot remove or modify it.
But ...
Code:
c@CW8:~$ rm -fr ~/Documents
rm: cannot remove `/home/c/Documents/Computer/Scripts/Bash/foo': Permission denied
c@CW8:~$ mv ~/Documents ~/Documents.aside
c@CW8:~$ mkdir -p ~/Documents/Computer/Scripts/Bash/ 
c@CW8:~$ touch ~/Documents/Computer/Scripts/Bash/foo
And now the non-root user can put whatever they want in foo and have it run by root.

/usr/local/bin is a more appropriate choice for locally-developed scripts to be run by root.
 
Old 04-21-2012, 03:27 AM   #12
catkin
Quote:
Originally Posted by Ahaaa View Post
AFAIK, udev can only be so specific with its triggers. It can run a script when it recognises a particular model of hard drive, but not by UID of my specific HD.
The properties that udev is passed by the kernel when an HDD is plugged in can be displayed with, for example, udevadm info -a -p /sys/block/sdc/sdc1, where sdc1 is determined by running blkid after plugging it in.

I have not seen the UUID of a partition among those properties, but the serial number of the device represented by /dev/sdc is commonly available.
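For reference, a udev rule keyed on the device serial number might look something like this. The rule file name and serial value are illustrative; udevadm info shows the exact property names available on a given system, and many systems also export a partition's filesystem UUID as ID_FS_UUID:

```
# /etc/udev/rules.d/99-backup-disk.rules  (serial value illustrative)
ACTION=="add", SUBSYSTEM=="block", ENV{ID_SERIAL_SHORT}=="WD1234567890", RUN+="/usr/local/sbin/bu_preflight"
```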
 
Old 04-21-2012, 04:07 AM   #13
unSpawn
Moderator
Quote:
Originally Posted by catkin View Post
And now the non-root user can put whatever they want in foo and have it run by root
I already pointed that out and the OP already drew the right conclusion.
 
Old 04-21-2012, 04:18 AM   #14
catkin
Quote:
Originally Posted by unSpawn View Post
I already pointed that out and the OP already drew the right conclusion.
Oops! I overlooked Ahaaa's post after the one quoted
 
Old 04-21-2012, 05:50 AM   #15
Ahaaa
Original Poster
Quote:
Originally Posted by unSpawn View Post
What is less common is users showing evidence of the kind of inquisitiveness that should be the default for all Linux users.
Thank you for your kind words. Also, it might not surprise you that I am a scientist by profession, so I think empiricism comes with the territory.

Quote:
Originally Posted by catkin View Post
The properties that udev is passed by the kernel on plugging in an HDD can be displayed by using, for example udevadm info -a -p /sys/block/sdc/sdc1 where the sdc1 is determined by running blkid after plugging it in.
The method that I use in my script looks at /dev/disk/by-uuid/ for the presence of the specific HD. The problem I had was that I have to wait 15–25 seconds before calling this subroutine, otherwise the disk hasn't appeared there yet. Your method sounds good though, as presumably I would not have to code the wait time in.
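A polling loop is one alternative to a fixed wait: check for the by-uuid symlink once a second up to a deadline. The UUID in the comment is illustrative, and the demo polls a file that already exists so the sketch returns immediately:

```shell
# Poll for a path instead of sleeping a fixed 15-25 seconds.
wait_for_path() {
  path=$1
  tries=${2:-30}   # seconds to wait before giving up
  while [ "$tries" -gt 0 ]; do
    [ -e "$path" ] && return 0
    sleep 1
    tries=$((tries - 1))
  done
  return 1
}

# Real usage would look like (UUID illustrative):
#   wait_for_path /dev/disk/by-uuid/1234-ABCD 30 || exit 1
demo=$(mktemp)   # demonstrate against a path that certainly exists
wait_for_path "$demo" 5 && echo "device present"
```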

Quote:
Originally Posted by catkin View Post
Oops! I overlooked Ahaaa's post after the one quoted
No worries!
 
  

