I'm usually a stickler for this kind of thing, but I actually don't think it's worth worrying too much about filenames with newlines in them, particularly when working directly on the command line or in other quick-and-dirty situations.
That said, when writing a script that will be used repeatedly, it is generally better to do it correctly the first time. Then you'll never have to worry about correcting it later.
The main issue I have with propofol's script is here:

```bash
list=($(ls -1 *.mp3))
```

This is a completely useless use of `ls`, even considering that `IFS` was changed to avoid the `ls`-parsing problem. All `ls` is doing here is acting as a glorified `echo`, since the shell expands the filenames into an argument list anyway before `ls` even runs.
I will also say that changing `IFS` globally is generally bad form; it can lead to non-transparent code and hard-to-diagnose errors.
As usual, simple globbing is all you generally need, either to set an array or to process the matches directly in a `for` loop.
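A minimal sketch of the globbing approach; the temporary directory and demo filenames here are my own, added only to make the example self-contained:

```bash
#!/bin/bash
# Plain globbing instead of parsing ls. The demo files below are
# created purely for illustration.
dir=$(mktemp -d) && cd "$dir"
touch "one.mp3" "two tracks.mp3" "a third.mp3"

shopt -s nullglob          # an unmatched glob expands to nothing, not itself
list=( *.mp3 )             # set an array straight from the glob
echo "${#list[@]} files"   # -> 3 files

for f in *.mp3; do         # or process the matches directly in a for loop
    printf '%s\n' "$f"
done
```

Note the `nullglob` option: without it, a glob that matches nothing is passed through literally, which is rarely what a script wants.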
The use of `find` is a bit more problematic, as it would depend on word splitting to work properly in the above. For that you should avoid command substitution entirely and use a `while`+`read` loop with null separators.
See: *How can I find and deal with file names containing newlines, spaces or both?*
```bash
while IFS='' read -r -d '' fname; do
    list+=( "$fname" )
done < <( find ~/mp3folder/ -iname '*.mp3' -print0 )
```
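Here is a self-contained, runnable version of that loop; the temporary directory and demo filenames (including one with a literal newline) are my own, not part of the original script:

```bash
#!/bin/bash
# find -print0 plus read -d '' handles any legal filename,
# including ones containing spaces or newlines.
dir=$(mktemp -d)
touch "$dir/track one.mp3" "$dir/odd
name.mp3"                     # a name containing a literal newline

list=()
while IFS='' read -r -d '' fname; do
    list+=( "$fname" )
done < <( find "$dir" -iname '*.mp3' -print0 )

echo "${#list[@]} files collected"   # -> 2 files collected
rm -rf "$dir"
```

Both names survive intact because `-print0` and `read -d ''` use the NUL byte, the only character that cannot appear in a pathname, as the separator.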
One more point that hasn't been mentioned yet: never use `#!/bin/sh` as the shebang unless you specifically need POSIX-compatible portability. If you intend to rely on Bash-specific features like arrays, then you need to call Bash explicitly with `#!/bin/bash`. Otherwise the script won't work properly on a system that doesn't have `bash` as its system shell.
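As a quick illustration (the array here is just a toy example), arrays are exactly the kind of bashism that breaks under the wrong shebang:

```bash
#!/bin/bash
# This works under bash, but on systems where /bin/sh is dash
# (e.g. Debian and Ubuntu), the array assignment below is a syntax error.
list=( "first track.mp3" "second track.mp3" )
echo "${#list[@]}"   # -> 2
```

Running the same file with `#!/bin/sh` on such a system fails immediately, which is why the shebang should declare the interpreter the script actually needs.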