Bash file filtering (B. Newbie)
I have this code:
Code:
#!/bin/bash
I ran the bash debugger ("~# bash -x ./pData"), which tells me why there are "too many arguments" on line 4: the code attempts to copy EVERY file on EVERY loop. Actually I just echo the name to the screen for testing. How do I get Bash to filter only those files I want to copy? (Yes, I read through a LOT of posts, but nothing had quite the right answer. That I found.)
Several points:
Matthew, thanks.
Firstly, I need to process each *.dta file individually, so your last option is informative but not applicable here. Some of your syntax is confusing, so I need to RTFM - I have only used C and Assembler so far. E.g. you use the " to enclose a variable without the $ prefix, so I guess Bash picks it up whatever. Your example #3 works just fine for me.;)
Since you only want to process *.dta files, using:
Code:
for file in *.dta; do
would work out better. However, check what you want to do with the file, because you are overwriting /var/www/html/scandata.dta for each file in the list.
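A minimal runnable sketch of that loop, assuming nothing beyond what the thread describes (the file names and scratch directory here are invented for the demo):

```shell
#!/bin/bash
# Sketch of the suggested loop: the glob limits iteration to .dta files only.
# Sample files are created in a scratch directory so the demo runs anywhere.
cd "$(mktemp -d)" || exit 1
touch scan1.dta scan2.dta notes.txt      # hypothetical file names

for file in *.dta; do
    [ -e "$file" ] || continue           # guard: skip the literal "*.dta" if nothing matches
    echo "would copy: $file"             # replace with the real cp/processing step
done
```

The `[ -e "$file" ]` guard matters because, by default, an unmatched glob is passed through literally rather than expanding to nothing.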
Shell is quite different from C and assembler. It's a lot cruder than C in many ways. As with any programming, it's just a matter of doing lots of different things and getting a feel for it. The good thing about shell is that you can use the same commands in the terminal as you do in scripts (mostly). This makes a very nice way to test out commands and little loops etc.
When you see something like *.dta, you should know that it's the shell expanding the pattern first, then using the result (e.g. using the list in a loop, or passing the list to a command). So when you do
Code:
ls *.dta
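A quick way to see that the shell, not the command, does the expansion: capture the expanded list as positional parameters (sample file names below are invented; the demo runs in a scratch directory):

```shell
#!/bin/bash
# Demonstration that the shell expands the glob before the command ever runs:
# "set -- *.dta" receives the already-expanded list as its arguments.
cd "$(mktemp -d)" || exit 1
touch a.dta b.dta c.txt              # hypothetical sample files

set -- *.dta                         # shell expands *.dta, then set runs
echo "the command receives $# arguments: $*"
```

So `ls *.dta` never sees the pattern at all; `ls` just receives the file names the shell already expanded.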
Now we're getting there:
Code:
#!/bin/bash
OK, I know that I'm overwriting 'scandata' every time, but that's exactly what I need. 'hbinterpret' processes the dta info and records it; then I can analyse it later. There are many dta files. I find the fact that I can use command-line code inside this shell script makes the job much easier. As you say, Matthew, I am getting my head round it!:)
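The script itself didn't survive the post, so here is a hedged reconstruction of the workflow as described (copy each .dta file over scandata.dta, then run hbinterpret). The paths and program name come from the thread; the scratch directory and the stub hbinterpret are stand-ins so the sketch can actually run:

```shell
#!/bin/bash
# Hedged reconstruction: copy each .dta file over scandata.dta, then run
# hbinterpret from its own directory. A scratch directory and a stub
# hbinterpret stand in for /var/www/html so the demo runs anywhere.
work=$(mktemp -d)
printf '#!/bin/sh\necho "interpreting scandata.dta"\n' > "$work/hbinterpret"
chmod +x "$work/hbinterpret"
mkdir "$work/data"
touch "$work/data/scan1.dta" "$work/data/scan2.dta"   # sample inputs (invented names)

cd "$work/data" || exit 1
for file in *.dta; do
    [ -e "$file" ] || continue
    cp "$file" ../scandata.dta      # overwrite scandata.dta every pass, as intended
    ( cd .. && ./hbinterpret )      # subshell: run in its own dir, cwd restored after
done
```

Running hbinterpret inside a `( ... )` subshell is one way to satisfy "it must run from its own directory" without having to `cd` back each iteration.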
Depending on whether or not this hbinterpret program cares about the current working directory setting when it is invoked, you might be able to replace these lines
Code:
cd .. # go up one dir to /var/www/html
with a direct relative-path invocation:
Code:
../hbinterpret
For this sort of task shell is really ideal. If you find yourself doing data processing in any significant way, you'd probably be better off using sed, perl, awk or something like that. Sed has quite a small instruction set, but is really powerful despite this. I discovered awk first, but then found perl did the same stuff faster, plus a whole lot more.
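The relative-path suggestion, sketched with a stub in a scratch directory (the real hbinterpret and /var/www/html paths come from the thread). The caveat in the lead sentence is real: this only works if the program does not care about its current working directory:

```shell
#!/bin/bash
# Sketch of the suggestion: call the program by relative path instead of
# cd'ing up. Only valid if hbinterpret does not depend on its cwd.
# A scratch directory and a stub binary stand in for the real layout.
work=$(mktemp -d)
printf '#!/bin/sh\necho "hbinterpret ran"\n' > "$work/hbinterpret"
chmod +x "$work/hbinterpret"
mkdir "$work/data" && cd "$work/data" || exit 1
touch scan1.dta                  # hypothetical input file

for file in *.dta; do
    cp "$file" ../scandata.dta   # still overwrites scandata.dta each pass
    ../hbinterpret               # no cd needed; cwd stays in data/
done
```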
Matthew, thanks again.
hbinterpret accesses other files in its own directory, so it fails on not finding them. When I run ../hbinterpret it junks out. I'll certainly have a look at perl, in the same way I'd like to look at Tcl. I don't know yet whether either or both can have their source protected, which I feel is necessary for the proprietary source.:scratch:
Tcl is quite a nice language. Really minimal syntax - you can learn to use it quite OK within a day. :)
What's really nice about Tcl is expect. expect is well worth a look. It uses Tcl as the base language and adds some really cool stuff. You can automate your whole job with it :D There are bindings for expect in other languages too, but I really like the simplicity of Tcl/expect.
With a slight leap of faith, I note you use Drupal on your site.
That looks good also, especially as Linux Format gives it the thumbs up:jawa: See you there.