Failed miserably to execute bash script via PATH variable after FS migration.
Hi,
I just wiped my old big NTFS partition, where I stored all kinds of user-important stuff such as /home/. I had started getting strange errors, like the system freezing to a halt every now and then, and input/output errors when writing to my ~/.mozilla directory. Hoping it was not my Intel 80GB SSD already starting to fail, I backed everything up to an external magnetic drive, deleted the NTFS filesystem and formatted the partition with XFS. (I heard it might be good for big files, but whatever.)

After copying everything back and setting things in order (changing permissions, ownership, and so on), I can't execute bash scripts in my bash script folder ~/bin/, even though ~/bin/ is in the PATH variable. What's weird (and what I suppose justifies posting on LQ) is that I can execute a script ~/bin/script fine, either by typing
Code:
$ bash ~/bin/script
or
Code:
$ bash script
All files in bin have 'executable' permissions for everyone, but most are run using sudo anyway. And when I run a script called wifil using sudo, I get
Code:
[user@computer /]$ sudo wifil
Could this be a problem with sudo?
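EDIT: For reference, here are a few commands that should show whether the shell even finds the script through PATH (using my wifil script as the example; I've left the outputs out here):
Code:
$ echo $PATH          # is ~/bin (expanded) actually listed?
$ type wifil          # where does bash resolve 'wifil' to?
$ ls -l ~/bin/wifil   # permissions and ownership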
What happens, and what errors do you get, when you try to run a script without sudo?
Check the output of "which sudo" and "ldd sudo"; maybe you have some leftover from the previous installation. You should also check the contents of $PATH with "echo $PATH", for the very same reason.
It seems like either sudo is broken or you are using a different version of glibc than the one your sudo binary was compiled against. It could also be a problem in glibc itself, but then your whole system would probably be unusable.
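One more sudo subtlety that could matter here: sudo usually resets the environment and enforces its own search path (the env_reset and secure_path settings in /etc/sudoers), so the PATH sudo searches can differ from the user's. A quick way to compare, assuming a stock sudoers:
Code:
$ echo $PATH                          # the user's search path
$ sudo sh -c 'echo $PATH'             # the path sudo actually searches
$ sudo grep secure_path /etc/sudoers  # is a fixed path enforced?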
Running a script without sudo simply gives me
Code:
[user@computer /]$ wifil
Code:
[user@computer /]$ ls -l ~/bin/ | grep wifil
Code:
[user@computer /]$ sudo wifil
Quote:
Check the output of "which sudo" and "ldd sudo"; maybe you have some leftover from the previous installation. You should also check the contents of $PATH with "echo $PATH", for the very same reason.
Code:
[user@computer /]$ which sudo
Code:
[user@computer /]$ sudo ldd /usr/bin/sudo
Code:
[user@computer /]$ echo $PATH
Quote:
It seems like either sudo is broken or you are using a different version of glibc than the one your sudo binary was compiled against.
Code:
$ sudo ~/bin/wifil
[EDIT: I *need* to use 'sudo bash (path)'. Sorry for this typo.]
That connected me so I could post this.

Note that all I ever did was:
- moving ~/ (let's call it "DIR") from the "storage" partition sda2 to an external disk
- deleting sda2
- creating a 70GB XFS filesystem on the new sda2
- moving DIR back to sda2
- creating a new symlink /home/user --> [sda2_mount_point]/DIR

The user's home directory is /home/user/, where /home/user is a symlink to a directory on the "storage" partition, which has in effect had a change of filesystem. I should mention that there was definitely corrupted data on the old filesystem, some of which resided in ~/. However, now I can fsck /dev/sda2 (which I couldn't with NTFS), and the checks show no errors. I find it hard to believe that certain programs really cause the problem, because they reside on their own 10GB ext4 partition. EDIT: As do all shared libraries.
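EDIT 2: For clarity, the migration boiled down to something like this (the mount points are from memory, and the exact flags may have differed):
Code:
# cp -a /mnt/sda2/DIR /mnt/external/   # move ~ off the storage partition
# umount /dev/sda2
# mkfs.xfs -f /dev/sda2                # new 70GB XFS filesystem
# mount /dev/sda2 /mnt/sda2
# cp -a /mnt/external/DIR /mnt/sda2/   # move it back
# ln -s /mnt/sda2/DIR /home/user       # recreate the home symlink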
Quote:
Quote:
Code:
$ sudo ~/bin/wifil
I'm beginning to believe you have a somewhat corrupted system; you say yourself it isn't rock solid, with frequent halts/kernel panics/whatever. How about free space? What mount options are used for your /home? Try moving the scripts to some location outside this new filesystem and see if that works better.
Quote:
Originally Posted by pingu
But you also said in first post:
And then:
Code:
$ sudo ~/bin/wifil
I do need to use
Code:
[user@computer /]$ sudo bash ~/bin/script
or
Code:
[user@computer ~]$ sudo bash script
fstab entry:
Code:
/dev/sda1 / ext4 defaults,noatime,discard 0 1
Code:
[user@computer /]$ df
I hoped to fix things by reformatting the affected partition, but there might be some physical damage on the disk. I don't know what to do. I'm terribly keen on keeping the data in ~/.
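Since it's the storage partition that changed filesystems, it might also be worth looking at how sda2 itself is mounted, not just the root entry; a quick sketch (the mount point name is a guess):
Code:
$ grep sda2 /etc/fstab   # is there an explicit entry for it at all?
$ mount | grep sda2      # the options the kernel actually applied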
FS corruption or a hardware problem could also be causing this. If the problem is inside a script, run it with '#!/bin/bash -x' as the header so you get verbose output and can tell what exactly inside the script is causing the problem.
If the fs got corrupted you should fsck it first, and then reinstall all the packages; your package manager should provide an easy way to do that.
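For example, with tracing switched on in the header, every command is echoed (prefixed with '+') before it runs; a minimal sketch, not the actual wifil script:
Code:
#!/bin/bash -x
# each command below is printed before execution
iface=wlan0                # hypothetical interface name
echo "bringing up $iface"
You can get the same trace without editing the file by running 'bash -x scriptfile'.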
Now wait a minute: is it so that you cannot execute any script unless you explicitly run it with bash? That is, typing "bash scriptfile" works but "scriptfile" alone does not?
If so, what is your default shell? Is it really bash? You can check that with "cat /etc/passwd |grep user"; it shows the default shell. Mine here is bash:
Code:
pingu@edgar:~$ cat /etc/passwd |grep ping
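If you prefer, getent shows the same field, and it also works when accounts come from NIS/LDAP rather than /etc/passwd:
Code:
$ getent passwd user   # the last field is the login shell
$ echo $SHELL          # the shell recorded for the current login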
That shouldn't matter as long as the script has a correct header, as it should. It's worth checking though. It's also worth checking what /usr/bin/sh points to, with ls -l.
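A quick way to check both things at once; note that a shebang only takes effect if '#!' are the very first two bytes of the file (and /usr/bin/sh may or may not exist, depending on the distribution):
Code:
$ head -1 ~/bin/wifil        # should read exactly: #!/bin/bash
$ ls -l /usr/bin/sh /bin/sh  # which shell does 'sh' really point to?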
Quote:
Try moving the scripts to some location outside this new filesystem and see if that works better.
Code:
[user@computer /]$ sudo cp ~/bin/wifil /mnt/3
What on earth?? I would never have thought of that. THANK YOU! But why? :)
Quote:
That shouldn't matter as long as the script has a correct header, as it should.
Code:
# ! /bin/bash
I changed it from /bin/sh after the errors started to appear.
Code:
[user@computer /]$ ls -l /bin/ | grep "sh \-"
Did you already try fsck?
Quote:
Did you already try fsck?
- log on as root
- umount /dev/sda2
- fsck /dev/sda2 (no output)
- mount /dev/sda2 /mnt/0 (no -t specification)
- exit
- log on as user
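One caveat with that session: on XFS, plain fsck is essentially a no-op (fsck.xfs exits without checking anything), which would explain the missing output. The real consistency check for XFS is xfs_repair, run against the unmounted device; a sketch:
Code:
# umount /dev/sda2
# xfs_repair -n /dev/sda2   # -n: inspect only, modify nothing
# mount /dev/sda2 /mnt/0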
My thought was that you might have /home mounted with options/flags not allowing you to execute.
But then it shouldn't work with "bash scriptfile" either? I have never used this possibility myself, but maybe it's worth digging into. It could be that different filesystems have different default mount options.
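If that's what is happening, it should show up in the active mount options; the 'user' fstab option, for instance, implies 'noexec' unless 'exec' is also given. A quick test along these lines (the mount point is a guess):
Code:
$ mount | grep sda2                # look for 'noexec' among the options
# mount -o remount,exec /dev/sda2  # temporarily allow execution again
$ ~/bin/wifil                      # does it run directly now?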
Quote:
But then it shouldn't work with "bash scriptfile" either?
When you open a script by doing "bash <filename>", there's virtually no difference between that and doing "oowrite my_file.doc". Bash will launch a new session and start parsing the file. You can quickly check by setting -x on any random script and then launching it with bash or sh.
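That would also explain the original symptom here: if the new filesystem is indeed mounted noexec, only direct execution goes through the kernel's exec machinery, which noexec blocks, while bash merely opens and reads the file. A small demonstration, assuming a script sitting on a noexec mount:
Code:
$ ./script        # fails with 'Permission denied' on a noexec mount
$ bash ./script   # works: the file is only read, never exec()ed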