Need help with a short Linux command line script - working with split files
First off, thanks :) in advance to anyone who can help me on this one.
The following is what I want to script. I know bits and pieces of it already :scratch:, but I am missing the more complicated sections that work with variables, etc. Here goes:
This seems a rather arbitrary thing to do. Can you explain what you are trying to accomplish?
Anyway, I don't think there is a portable way to pick a random number directly in plain shell (bash does have $RANDOM), so try something like:
We get a random number from Ruby, and grab a line from your file using said random number.
You will of course need to loop over this to grab as many names from your file as you have 'chunks'.
Also, this makes no provisions to make sure it doesn't grab the same name twice. You will have to keep track of your first and subsequent 'random' line numbers and try for another if you pull a duplicate.
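The idea above can be sketched as follows; this is just a sketch, with awk doing the random pick instead of Ruby, and names.txt standing in for whatever word list you actually use:

```shell
# Print COUNT distinct random lines from FILE.
# usage: pick_random_lines FILE COUNT
pick_random_lines() {
    awk -v count="$2" '
        { line[NR] = $0 }                  # slurp the whole file
        END {
            srand()                        # seed from time of day
            while (picked < count) {
                n = int(rand() * NR) + 1   # random line number, 1..NR
                if (!(n in used)) {        # skip duplicate line numbers
                    used[n] = 1
                    print line[n]
                    picked++
                }
            }
        }' "$1"
}

# pick_random_lines names.txt 5
```

On Linux, coreutils' `shuf -n 5 names.txt` does the same whole job (random, no duplicates) in one command.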
As for your 'find' line, have a look at the '-exec' command in the find man page. It will run a command for each file it finds.
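For example (the directory names here are just stand-ins):

```shell
# -exec runs the given command once per match; {} is replaced by the
# found path, and the trailing \; terminates the command.
mkdir -p demo/fooaaa demo/fooaab
find demo -maxdepth 1 -type d -name 'foo*' -exec chmod 755 {} \;
```

Ending with `{} +` instead of `{} \;` batches many paths into one chmod invocation, which is faster for lots of matches.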
This would really be _so_ much easier if you were using a 'real' language like Ruby, Python, or Perl...
Anyway, hopefully I have given you a few ideas. Work some more on it, and post back if you get stuck...
Thanks Bulliver, I came across this script in Perl which may help
Hi Bulliver, others,
Thanks for your tip about my script. I think Perl might be the way to go on this one, since it can be executed from the command line or as a cron job... Admittedly, I really have no clue how to code Perl, just a general understanding of it from countless trial-and-error efforts.
I've found a script for my first step, named splitfile.pl, which is invoked as follows:
[root@]# perl splitfile.pl -l 500 wordlist.txt file
Where -l gives the number of lines for each file, followed by the source file and the output file prefix.
I'd list the URL for the source, but I don't have sufficient privileges yet.
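For what it's worth, the stock coreutils `split` command does the same job as splitfile.pl, so step 1 may not need Perl at all (a tiny stand-in word list is used here):

```shell
# five-line stand-in for the real wordlist.txt
printf 'one\ntwo\nthree\nfour\nfive\n' > wordlist.txt

# -l 2 = two lines per chunk; 'file' is the output prefix,
# producing fileaa, fileab, fileac (the last chunk holds the remainder)
split -l 2 wordlist.txt file
```

With `-l 500` and your real wordlist you would get the fooaaa/fooaab-style names described below.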
In my next step, I originally wanted to call a command line function that counts the number of files created, then uses that count to pull random new folder names from a text list of possible names.
Bulliver pointed out the error with this step: there is a chance that the same new folder name could be pulled from the list, thus screwing up the whole process.
So now I will simplify it, but I don't know the Perl for doing the rest:
Step 2: Collect all file names created by the split process, for example: fooaaa, fooaab, fooaac but *without* the .txt extension for now - basename I think...
Step 3: Go to a selected web directory and create new folders named according to the filenames collected in step 2; for example /var/www/user1/web/fooaaa , then make /var/www/user1/web/fooaab , /var/www/user1/web/fooaac etc. Make sure to create a folder for each name...
Step 4: After all folders have been created, go to the first folder, fooaaa, and copy files from a predefined source on the server: for example, copy /home/sourcefilesandfolders/*.* to /var/www/user1/web/fooaaa/ .
Step 5: Among the folders and files copied to these new directories will be a sub-folder named: abc , such that the complete directory path to folder abc would be: /var/www/user1/web/fooaaa/abc . Now return to the directory containing the split files and move file fooaaa to /var/www/user1/web/fooaaa/abc/mylist.txt - making sure to rename it to mylist.txt .
Repeat in the same fashion for each remaining split file, moving it, renamed to mylist.txt, into the abc sub-directory of the folder that was created from its original "split name"...
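Steps 2-5 can be sketched in shell like this. The paths in the post are shown in the comment, but the function takes them as arguments so it can be tried anywhere; it assumes every split file starts with "foo" and that the copied source tree already contains the abc sub-folder:

```shell
# usage: provision_folders SPLITDIR WEBROOT SRCDIR
# e.g.   provision_folders /path/to/splits /var/www/user1/web /home/sourcefilesandfolders
provision_folders() {
    splitdir=$1; webroot=$2; src=$3
    for f in "$splitdir"/foo*; do
        name=$(basename "$f")               # step 2: filename without the path
        mkdir -p "$webroot/$name"           # step 3: create the matching folder
        cp -r "$src"/. "$webroot/$name"/    # step 4: copy source files and folders
        # step 5: move the split file into abc, renamed to mylist.txt
        mv "$f" "$webroot/$name/abc/mylist.txt"
    done
}
```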
In the end, we would have:
Step 6: Chmod all /var/www/user1/web/foo* folders to 755
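Step 6 is a one-liner, since chmod accepts the glob directly (demo directories stand in for /var/www/user1/web/foo* here):

```shell
mkdir -p web/fooaaa web/fooaab   # stand-ins for the real folders
chmod 700 web/fooaaa             # start from something else to show the change
chmod 755 web/foo*               # step 6: world-readable and traversable
```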
Step 7: Open crontab for user1 and begin with the following commands:
It is not likely that a full 24 hours' worth of new-folder commands will be needed, since that would be 12/hour x 24 hours = 288 unique commands, which is essentially 288 new foo* folders.
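For reference, a crontab entry that fires 12 times per hour (every 5 minutes, matching the arithmetic above) looks like this; the script path is purely hypothetical:

```
# m   h  dom mon dow  command
*/5   *  *   *   *    /home/user1/bin/make_next_folder.sh
```

It would be installed by editing user1's crontab with `crontab -e`.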
Thanks again for the help!
OK...I switched to Perl, and this is what I have so far, but I get an error
OK, I switched over to Perl and was able to scrounge up some snippets for the following process, which effectively takes care of steps 1-3 in my amended plan (*see the post immediately above*).
Now, however, I want to copy the contents of the path assigned to the following variable: $sourcefiles = "/home/source"; . Inside the folder "source" are all the files I wish to copy over to my new folders...
I tried to use "foreach" twice, being naive and not aware of any other way of copying (not moving) files...
I tried File::Copy, but there is no wildcard for copying all files as there is when copying from the command line...
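File::Copy's copy() only takes a single source file, so in Perl the usual pattern is to loop over glob("$sourcefiles/*") and call copy() on each entry. From the shell, though, the wildcard version is just cp (the directory names below are stand-ins for the real paths):

```shell
mkdir -p srcdemo destdemo            # stand-ins for /home/source and a new folder
touch srcdemo/a.txt srcdemo/b.txt
cp srcdemo/* destdemo/               # plain files; add -r to include sub-folders
```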