Linux - General
This Linux forum is for general Linux questions and discussion. If it is Linux-related and doesn't seem to fit in any other forum, then this is the place.
Greetings,
I searched the forums for what I am trying to accomplish, but it was hard to describe and I couldn't find any relevant results. Here is what I am trying to accomplish:
I have a file that stores employee login IDs, names, types, and permissions. Our software reads the information by byte column, so it treats each column as any ASCII character (spaces, letters, numbers, punctuation, etc.). I want to create a web interface for adding and removing users, storing the data in a MySQL database. However, if I am creating the file from the MySQL output, I need a way to write to specific column locations in the file ...
User ID: Columns 1-4
User Name: Columns 6-30
Type: Columns 32-40
Permissions: Columns 42-45
I want to use a scripting language, preferably C shell, to call MySQL for the data and write it to the correct columns of the file. I wrote a script that takes the data from the file and dumps it into the MySQL table, so maybe I can pad the remaining space in the table column with spaces ... any suggestions?
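For the MySQL-to-file direction, one possible sketch (the database, table, and column names here — mydb, employees, user_id, etc. — are hypothetical; adjust them to your schema) is to let awk's printf pad each field out to its byte width:

```shell
#!/bin/sh
# Sketch only: mydb/employees and the column names are assumptions.
# mysql -N -B prints tab-separated rows with no header line.
mysql -N -B -e 'SELECT user_id, user_name, type, perms FROM employees' mydb |
awk -F'\t' '{
    # %-Ns left-justifies and space-pads each value to N characters;
    # the literal spaces between fields supply columns 5, 31 and 41.
    printf "%-4s %-25s %-9s %-4s\n", $1, $2, $3, $4
}' > users.dat
```

With the widths 4, 25, 9 and 4 plus the three separator spaces, each line comes out exactly 45 bytes, matching the column layout above.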
You can use the "cut" command's -c option to select character positions.
The way I'd extract the data in bash/ksh is:
Code:
# Quote "$line" so the runs of spaces (and so the byte columns) survive.
while IFS= read -r line
do
    USER=$(echo "$line" | cut -c1-4)
    NAME=$(echo "$line" | cut -c6-30)
    TYPE=$(echo "$line" | cut -c32-40)
    PERMS=$(echo "$line" | cut -c42-45)
    echo "User ID is $USER User Name is $NAME Type is $TYPE Permissions are $PERMS"
done < FILE
Where FILE is the file that contains the original data.
Of course instead of just doing the final echo line you would want to put your routine that writes the data the way you want.
That is what I used to get the data out of the file and into MySQL, using cut, but since I will need to re-write any changes made to the MySQL table back to this file, I need it formatted as before.
This is what I had to do, because when you capture a line of a text file in a variable (in your case $line) and echo it, the runs of spaces collapse and it doesn't have columns anymore; I tried the exact code you posted and it wouldn't preserve the spaces as columns. The range command is simply:
where arg1 is the starting point, arg2 is the end, and arg3 is the file. The only way I could get the columns preserved was to stream the file itself, not a variable that captures a line of it.
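The difference is easy to demonstrate; this sketch assumes FILE is the fixed-column data file discussed above:

```shell
# Reading byte ranges straight from the file keeps the padding intact:
cut -c6-30 FILE                    # the name field, spaces preserved

# Inside a read loop the columns survive only if the variable is quoted;
# an unquoted echo re-splits the line and collapses the runs of spaces:
while IFS= read -r line
do
    echo $line   | cut -c6-30      # unquoted: spacing lost, wrong columns
    echo "$line" | cut -c6-30      # quoted: byte columns preserved
done < FILE
```

So streaming the file directly works, but so does the loop, provided the variable expansion is double-quoted.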
You can use awk to pad fields. An example of that would be this test script:
Code:
#!/bin/bash
# Example script I wrote for LQ to show use of padding in awk with printf.
# 07-Jul-2009 jlightne
#
while read -r label address
do
    echo "$address" | awk -F. '{printf "fixed-address %03d.%03d.%03d.%03d\n",$1,$2,$3,$4}'
done < awkprintf.test
Using a while-read loop that calls the external cut command four times for every line is a terribly slow way of parsing a file. Use awk for efficiency (or bash's built-in string functions, such as ${line:5:25}).
Maybe providing an actual example routine would be more helpful than simply saying the cut routine provided isn't efficient.
@ghostdog74 - it is slow; however, the cut pass only needed to run once in my case, just to get the existing data into the MySQL table. After that, the MySQL output will feed the text file.
What field delimiter do you use in awk to select a single byte, the way cut -b selects bytes?
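For fixed byte columns awk doesn't need a delimiter at all: substr() can pull each range directly (GNU awk additionally offers FIELDWIDTHS, but substr() works in any awk). A single awk pass over the file would replace the four cut calls per line, for example:

```shell
# substr(s, start, length) counts characters from 1, just like cut -c.
awk '{
    user  = substr($0,  1,  4)   # columns 1-4
    name  = substr($0,  6, 25)   # columns 6-30
    type  = substr($0, 32,  9)   # columns 32-40
    perms = substr($0, 42,  4)   # columns 42-45
    print "User ID is " user " User Name is " name \
          " Type is " type " Permissions are " perms
}' FILE
```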
Why don't you just keep all the data in one place, e.g. MySQL? Copying back and forth means there will always be times when the two are out of sync.
That is the goal, but the software that runs reads a column-delimited file that has to be formatted per its standards.
The easiest way for me was using awk's printf, specifying the length of each field and then the data to write into it. The data is not all the same length, since the name strings vary. Also, since this will become a web-based tool, PHP's sprintf() offers the same printf-style formatting before writing the line out with fwrite().
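One detail worth noting for the variable-length names: printf's precision field (%-25.25s) both pads short values and truncates over-long ones, so a name longer than 25 characters can't shift the later columns. A sketch using hypothetical values in shell variables:

```shell
#!/bin/sh
# Hypothetical values standing in for one row of MySQL output.
user_id='1001'
user_name='A Name Much Longer Than Twenty-Five Characters'
type='admin'
perms='rwx'

# %-N.Ns left-justifies, space-pads to N characters AND truncates at N,
# so an over-long name cannot push Type out of columns 32-40.
printf '%-4.4s %-25.25s %-9.9s %-4.4s\n' \
    "$user_id" "$user_name" "$type" "$perms" >> users.dat
```

The same precision syntax works in awk's printf and in PHP's sprintf(), so whichever layer builds the file can enforce the layout.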