I have two files. One is a generated report that lists jobs executed on our load sharing facility. It has a column with userids and how much CPU time was spent on each job. Any user could be listed one or more times, in no particular order. The other file maps each userid to the user's real name and department.
The goal is a script that modifies the first file so that the dept. and real name are added to each line. I can do this in a spreadsheet but would prefer a more automated method.
I can come up with something eventually, but what I'm wondering is whether there is some trivial awk or perl that would make it easy. I'm always looking for something easy. :-)
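To give a rough idea of the layout (the names and numbers here are made up):
Code:
# job report -- one line per job: userid, then CPU seconds
jsmith 1042
mjones 88
jsmith 371

# user map -- userid, quoted first and last name, department
jsmith "John" "Smith" Engineering
mjones "Mary" "Jones" Accounting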
Code:
#!/usr/bin/env python2.5
from __future__ import with_statement

users = {}
entries = []

# Build a userid -> {name, dept} lookup table from the map file.
with open('users.map', 'r') as user_map:
    for line in user_map:
        entry = [field.strip('"') for field in line.split()]
        users[entry[0]] = {'uname': entry[1] + ' ' + entry[2],
                           'dept': entry[3]}

# Append each user's real name and department to the job records.
with open('job.log', 'r') as jobs:
    for line in jobs:
        uid = line.split()[0]
        entries.append(line.strip() + ' ' +
                       users[uid]['uname'] + ' ' +
                       users[uid]['dept'])

print '\n'.join(entries)
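With files laid out like the ones sketched above, the output would look something like:
Code:
jsmith 1042 John Smith Engineering
mjones 88 Mary Jones Accounting
jsmith 371 John Smith Engineering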
I used Perl. This is what I came up with:
Code:
#!/usr/bin/perl
open(USERS, "user_tbl.csv") or die "Cannot open user_tbl.csv: $!";
open(CPUTIME, "sum_user_cputime.csv") or die "Cannot open sum_user_cputime.csv: $!";
while ($users = <USERS>)
{
    @listusers = split(/ /, $users);
    # Scan the whole CPU-time file for records matching this user.
    while ($cputime = <CPUTIME>)
    {
        @listcputime = split(/ /, $cputime);
        if ($listusers[0] eq $listcputime[0])
        {
            chomp $listusers[2];    # strip the trailing linefeed from the last field
            print "$listusers[0] $listusers[1] $listusers[2] $listcputime[1]";
        }
    }
    seek CPUTIME, 0, 0;    # rewind so the next user sees the whole file again
}
close USERS;
close CPUTIME;
How do you like Python? I hear a lot of good things about it - except from the Perl worshippers. :-(
What it is doing is testing whether the total record count (NR) equals the per-file record count (FNR), which is only true while the first file is being read. If they're equal, it stores fields 2, 3 and 4, joined with spaces, in the users array under an index of the first field, and goes on to the next record (so it never reaches the printing part). After the first file has been read it starts on the second (and now NR != FNR), so it doesn't hit next and does reach the printing part. That prints the first field, the value stored in the users array under the first field, then the second field.
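For reference, the script being described would have been along these lines (reconstructed from the explanation above; users.map and job.log are stand-in file names):
Code:
awk 'NR == FNR { users[$1] = $2 " " $3 " " $4; next }
     { print $1, users[$1], $2 }' users.map job.log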