BASH: Reading long filenames into an array using a loop
Hello, everybody.
I'm trying to write a BASH script to recursively change the permissions of files and directories, respectively. The reason for this is that, having moved about 50GB of data from a vfat partition (mounted with umask=000 set) onto a reiserfs partition, I want to change the permissions to something sane, as opposed to -rwxrwxrwx and drwxrwxrwx. Here's the code I'm having problems with (just an excerpt from my rather messy work-in-progress): Code:
#!/bin/bash
What it's doing wrong: when a file or directory such as "filename with spaces in it" is put into the array, it comes out in separate members of the array, such as testarray[0]=filename, testarray[1]=with, testarray[2]=spaces, etc. This is bad :-). Here's a sample of the output of just the ls/grep/sed command (run in a test directory): Code:
[dane@Orchestrator fubar]$ ls -a1 --color=none | grep -v ^'[.]\|[..]'$ | sed s/' '/'\\ '/g
Thanks for your time. --Dane |
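The behaviour described can be reproduced in a few lines (a minimal sketch with a made-up file name, not the poster's actual script): an unquoted command substitution is split on $IFS, which by default includes spaces.

```shell
#!/bin/bash
# Reproduce the problem: unquoted $(...) output is word-split on
# spaces, tabs, and newlines, so one filename becomes five array members.
rm -rf /tmp/splitdemo && mkdir /tmp/splitdemo && cd /tmp/splitdemo
touch "filename with spaces in it"
testarray=( $(ls -1) )        # word splitting happens here
echo "${#testarray[@]}"       # prints 5, not 1
```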
I have tried for ages to get "for i in" to work with spaces, but with no luck. I have tried to put "\" in the file names, put them in quotes, put them in quotes and backslash the spaces... But it seems that it treats spaces as separators regardless. I think that the find command can be used to run commands on files and dirs separately with:
find -type [f|d] -exec (command) {} \;
but I only remember it from a script I was looking at a while ago, so you may have to look at the man page of find. Hope that helps. P.S. if you use "ls -Al" instead of "ls -al" it will not display . and .. |
why not use "chmod --recursive"?
|
Well I was under the impression that files and dirs were to be treated differently in the script.
But if not then chmod -R would work, yes. |
why not
find -type d | xargs chmod
find -type f -exec chmod {} \; |
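For reference, that suggestion as two separate commands, in a form that also survives spaces in file names (the 755/644 modes are my assumption of what "sane" permissions means here):

```shell
#!/bin/bash
# Directories via a null-delimited pipe (safe for spaces in names),
# files via -exec; 755/644 are assumed target modes.
find . -type d -print0 | xargs -0 chmod 755   # drwxr-xr-x
find . -type f -exec chmod 644 {} \;          # -rw-r--r--
```

Note that plain `find | xargs` breaks on names with spaces, which is exactly the problem this thread is about; `-print0` with `xargs -0` delimits names with NUL bytes instead.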
Quote:
Code:
#! /bin/bash |
if only I knew that a while ago,
ah well, live and learn :p I'll have to write that down :) |
Thanks, everybody, for the replies!
I didn't understand the bit about xargs (I'm pretty new to BASH scripting), but inserting "IFS=$'\n'" before the for loop did the trick! I do, however, have another question about this script, if you all don't mind. I was wondering how to get it to output an error message and exit with status 1 if anything at all gets written to stderr. Here is the script in its mostly complete state: Code:
#!/bin/bash --Dane |
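Since the full script above didn't survive in this copy of the thread, here is a minimal sketch of the IFS=$'\n' fix in isolation (hypothetical file names):

```shell
#!/bin/bash
# With IFS set to a newline, word splitting happens only at line
# boundaries, so each filename lands in exactly one array member.
rm -rf /tmp/ifsdemo && mkdir /tmp/ifsdemo && cd /tmp/ifsdemo
touch "file with spaces" plain
IFS=$'\n'
testarray=( $(ls -A1) )       # -A already hides . and ..
echo "${#testarray[@]}"       # prints 2: one member per file
```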
I'd suggest using something like the following. It won't catch stderr, but you can query the exit status of each command:
Code:
# first option: simply check for errors and keep track of them in a variable |
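The code block above wasn't preserved, but the first option presumably looks something like this sketch (the commands shown are placeholders):

```shell
#!/bin/bash
# First option: run each command, record any failure in a variable,
# and exit with status 1 at the end if anything went wrong.
errors=0
touch /tmp/errdemo_file || errors=1       # placeholder command
chmod 644 /tmp/errdemo_file || errors=1   # placeholder command
if [ "$errors" -ne 0 ]; then
    echo "at least one command failed" >&2
    exit 1
fi
echo "all commands succeeded"
```

This catches non-zero exit statuses rather than stderr output, as the poster notes, but most well-behaved commands exit non-zero whenever they print an error.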
Thanks, spirit receiver, for the advice.
I ended up using option #2 like this: Code:
# Create a function for testing the exit status of a command. Cheers! --Dane |
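The function itself wasn't preserved in the thread; a typical shape for option #2 looks like this (the name `check` and the messages are my guesses, not the poster's code):

```shell
#!/bin/bash
# Option 2: wrap each command in a function that aborts the script
# with status 1 as soon as any command exits non-zero.
check() {
    "$@"
    local status=$?
    if [ "$status" -ne 0 ]; then
        echo "error: '$*' exited with status $status" >&2
        exit 1
    fi
}
check touch /tmp/checkdemo_file
check chmod 644 /tmp/checkdemo_file
echo "done"                   # only reached if every check passed
```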
Here is a solution that takes into account the $IFS environment variable
Here is a solution that takes into account the $IFS
environment variable: Code:
# Here is a solution that takes into account the $IFS |
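The code block above is incomplete in this copy of the thread, but one common shape for an $IFS-aware loop is to save the old value, switch to newline splitting, and restore it afterwards (the file names here are invented):

```shell
#!/bin/bash
# Save the caller's IFS, split on newlines for the loop, restore after.
rm -rf /tmp/ifssave && mkdir /tmp/ifssave && cd /tmp/ifssave
touch "a b" "c d"
OLDIFS=$IFS
IFS=$'\n'
count=0
for f in $(ls -A1); do
    count=$((count + 1))      # one iteration per file, spaces included
done
IFS=$OLDIFS
echo "$count"                 # prints 2
```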
Thanks!
Thank you for the helpful reply. I appreciate you posting even though it's been a few years since I wrote that; the info is still useful!
--Dane |
a little tip as regards playing with IFS
put it in subshell parentheses, so the change doesn't leak into the rest of the shell. For example, on the command line, list all executables: Code:
(IFS=:;ls -1 $PATH) |
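The point of the parentheses is that the IFS assignment happens in a subshell and vanishes when the subshell exits; a quick way to convince yourself:

```shell
#!/bin/bash
# IFS is modified only inside the ( ... ) subshell.
before=$IFS
(IFS=:; set -- $PATH; echo "$# PATH entries")   # $PATH split on ':'
[ "$IFS" = "$before" ] && echo "IFS unchanged in the parent shell"
```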