Re-creating a byte-column based file
Greetings,
I searched the forums for what I am trying to accomplish, but it was hard to describe and I didn't get any relevant results. Here is the situation: I have a file that stores employee login IDs, names, types, and permissions. Our software reads the information by byte columns, so a column can hold any ASCII character (spaces, letters, numbers, punctuation, etc.). I want to create a web interface for adding and removing users, with the data stored in a MySQL database. However, if I am generating the file from the MySQL output, I need a way to write data to specific byte columns in the file. I would like to use a scripting language, preferably C shell, to query MySQL for the data and write it to the correct columns of the file. I already wrote a script that takes the data from the file and loads it into the MySQL table, so maybe I can pad the remaining space in each table column with spaces. Any suggestions? |
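A minimal sketch of the write-back step described above, using printf's fixed-width padding. All field widths (10/30/4/8), field names, and sample values below are assumptions for illustration, not the real file layout:

```shell
# Sketch only: widths and field names are assumptions, not the real layout.
# printf's %-Ns left-justifies a value and pads it with spaces to exactly N bytes.
record=$(printf '%-10s%-30s%-4s%-8s' "jdoe" "John Doe" "EMP" "rw")
printf '%s\n' "$record"
printf '%s\n' "${#record}"    # 10+30+4+8 = 52 bytes per record
```

Each MySQL row would become one fixed-width record this way; a C shell wrapper would shell out to printf or awk the same way.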
You can use the "cut" command's -c option to extract by character positions.
The way I'd extract the data in bash/ksh is:
Code:
# column positions below are examples; adjust them to the real record layout
while read line
do
    id=`echo "$line" | cut -c1-10`
    name=`echo "$line" | cut -c11-40`
    type=`echo "$line" | cut -c41-44`
    perms=`echo "$line" | cut -c45-52`
    echo "$id $name $type $perms"
done < userfile

Of course, instead of just doing the final echo line, you would put in your own routine that writes the data the way you want. |
That is what I used to get the data out of the file and into MySQL, using cut. But since I will need to write any changes made in the MySQL table back to this file, I need it formatted exactly as before.
|
Code:
i="$1"
lines="$2"
# print lines $1 through $2 of file $3, one sed call per line (slow)
while [ $i -le $lines ]
do
    sed -n "${i}p" "$3"
    i=$((i + 1))
done

The same range can be pulled in one awk pass:
Code:
awk -v start="$1" -v end="$2" '{if(NR >= start && NR <= end) {print}}' $3

But I digress, just pointing that out ... |
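To try that awk range filter out, here is a throwaway demo (the five-line input file and its path are made up):

```shell
# Demo input: five lines, one word each
printf 'one\ntwo\nthree\nfour\nfive\n' > /tmp/range_demo.txt
# The same range filter, asking for lines 2 through 4
awk -v start=2 -v end=4 '{if(NR >= start && NR <= end) {print}}' /tmp/range_demo.txt
```

This prints the middle three lines (two, three, four) in a single pass, with no per-line process spawning.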
Sorry I misread - I thought you were trying to figure out how to get positions out of text into MySQL. You were going the other way.
Unfortunately I don't work with MySQL much, so I can't really guide you there. This link talks about writing to a text file from MySQL: http://www.wellho.net/forum/The-MySQ...text-file.html

You can use awk to pad fields. An example of that would be this test script:
Code:
#!/bin/bash
# pad each whitespace-separated field out to a fixed width with awk's printf
# (the widths here are examples)
awk '{ printf "%-20s%-16s\n", $1, $2 }' dhcpd.conf

run against a file containing a line such as:
Code:
fixed-address 1.2.3.4 |
Using a while read loop and calling external cut several times for each line is a terribly slow way of parsing a file. Use awk for efficiency (or use bash's internal string functions).
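As a sketch of the bash-internal route mentioned above, substring expansion extracts byte columns without forking any external process. The offsets and widths below are invented for illustration:

```shell
# ${var:offset:length} slices a string in-process, no external cut needed.
# Assumed layout: bytes 1-10 = login, bytes 11-40 = name (offsets are zero-based).
printf '%-10s%-30s\n' "jdoe" "John Doe" |
while IFS= read -r line; do
    login=${line:0:10}
    name=${line:10:30}
    printf 'login=[%s]\n' "$login"
done
```

For large files this avoids spawning one (or several) cut processes per line.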
|
Of course! I didn't think of using awk's printf to pad fields. Thanks a lot!
|
@ghostdog74 - it is slow; however, the cut process only needed to run once in my case. I just needed to get the existing data into the MySQL table; after that, the MySQL call will feed into the text file.
Is there an equivalent of cut's -b option in awk, i.e. a way to address a line by byte positions? |
you use substr() in awk.
|
the syntax for substr() is:
Code:
awk '{ print substr(a, b, c) }' file

a is the field (or $0 for the entire line, in my case)
b is the starting position
c is the length from the starting position |
Why don't you just keep all the data in one place, e.g. MySQL? Copying back and forth means you'll always have some window of time when the two are not synced.
Anyway, to extract from MySQL:
1. if the data is always the same length (in the DB), then use the concat() function to extract
2. if the data is of variable lengths (in the DB), then use rpad()
http://dev.mysql.com/doc/refman/5.0/...functions.html |
The easiest way for me was using awk's printf, specifying the length of each field and then the data to write into it. The data is not all the same length, since there are name strings. Also, since this will turn into a web-based tool: PHP's fwrite() combined with sprintf() offers similarly extensive formatting options. |