How to output to different streams in Python?
Hi there,
I am reading a database and outputting data in CSV format for a data migration job. I have to create 3 separate outputs, each one in a different format. My first approach was creating 3 Python programs, each one outputting data in the desired format, but I realized that I would have to read the same data from the database in each run. So I was wondering if I can do something similar to this:

Code:
exportInventory.py > part_numbers.csv 2> exportErrors.txt 3> descriptions.csv 4> quantities.csv

How, in Python, should I create those additional streams (3 and 4), and how do I print to each one?
Since you have multiple file formats, open a file in Python for each and write whatever you want to it.
Code:
f1 = open("part_numbers.csv", "w")
f2 = open("exportErrors.txt", "w")
...
f1.write(...)

To do it in bash you would need separate Python programs. Opening and closing bash file descriptors works basically the same as in higher-level languages:

Code:
exec 3> "part_numbers.csv"
exec 4> "exportErrors.txt"
exportInventory.py >&3
exportWhatever.py >&4

I have never tried to write to a file descriptor directly from Python, but there is the os.fdopen function.
To expand on what I think Michael is getting at, you can pass the filenames into the script and open them with Python, then replace your current stdout/print calls with the relevant file handles.
For example: Code:
$ ./exportInventory.py part_numbers.csv descriptions.csv quantities.csv 2> exportErrors.txt
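A minimal sketch of what such an exportInventory.py might look like; the filenames come from sys.argv, and the write calls are placeholders rather than the poster's actual export logic:

```python
#!/usr/bin/env python3
"""Hypothetical sketch: take output filenames on the command line,
read the data once, and write each format to its own file.

Usage: ./exportInventory.py part_numbers.csv descriptions.csv quantities.csv
"""
import sys

def main(argv):
    part_file, desc_file, qty_file = argv[1:4]
    with open(part_file, "w") as parts, \
         open(desc_file, "w") as descs, \
         open(qty_file, "w") as qtys:
        # Replace the existing print() calls with writes to the right
        # handle.  These rows stand in for the real database data.
        parts.write("PN-0001\n")
        descs.write("left-handed widget\n")
        qtys.write("42\n")
```

Errors still go to stderr, so the `2> exportErrors.txt` redirection above keeps working unchanged.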
I would suggest you use the "with" statement, like this: https://stackoverflow.com/questions/...e-line-to-file
Code:
with open('file_to_write', 'w') as f:
Thank you all !
You are all right: opening several files for writing is more pythonic, and the way I was trying to achieve it, using multiple descriptors, is more "bashish"... I will do it that way.

But, just to be clear, conceptually speaking, in Linux, can a program have only 2 output streams? Only stdout and stderr? Is it not possible to have more than 2? I mean, is there a way a program can be called as "a_program >file_1 2>file_2 3>file_3 4>file_4"?

Thank you,
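(For reference, that redirection does work at the shell level; a minimal sketch with made-up filenames and messages:)

```shell
#!/bin/sh
# A process is not limited to stdout and stderr: the shell can open
# descriptors 3 and 4 just as it opens 1 and 2, and anything written
# to those descriptors lands in the redirected files.
{
  echo "part number"       # fd 1 (stdout)
  echo "description" >&3   # fd 3
  echo "quantity" >&4      # fd 4
} > file_1 3> file_3 4> file_4
```

The program itself still has to write to descriptors 3 and 4 explicitly; the replies below show the Python side of that.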
Quote:
Code:
with open('file_1', 'w') as f1:
If I understand what you are wanting:
Code:
#!/usr/bin/python

Also

Code:
#!/usr/bin/python
Quote:
There is not a single print to each file. Instead I read a db and collect the data (let's say it is something like "field_A; field_B; field_C; field_D; field_F"), then output a mix of these fields:

file_1 -> "field_a; field_b"
file_2 -> "field_a; field_c; field_f"
file_3 -> "field_a; field_d"
file_4 -> "field_a; field_e; field_f"

Code:
sql = "select field_a, field_b, field_c, field_d, field_e from table where whatever"

Instead,

Code:
I'm basically a beginner with python...
print("Hello world") is roughly equivalent to print("Hello world", file=sys.stdout); it prints to stdout. print("error message", file=sys.stderr) obviously prints to stderr. I think if you want to write to a file descriptor that isn't 1 or 2 you would need to open it with os.fdopen(fd, mode):

Code:
f3 = os.fdopen(3, "w+")
f3.write("something else")

Untested code, but something like:

Code:
exec 3> "part_numbers.csv"
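To make that concrete, here is a runnable sketch of os.fdopen. When the shell has already opened descriptor 3 for you (exec 3> ... or ./script 3> file), you would call os.fdopen(3, "w") directly; this example creates its own descriptor with os.open so it runs standalone (the filename is made up):

```python
import os

# Get a raw file descriptor.  In the redirection scenario the shell
# would have opened fd 3 for us already and we would skip this step.
fd = os.open("descriptions.csv", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)

stream = os.fdopen(fd, "w")  # wrap the raw descriptor in a file object
stream.write("a widget description\n")
stream.close()               # closing the stream closes the descriptor too
```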
Since this is a short-lived script, Python should close the files automatically when the script finishes.
(If it were part of a persistent application, that's when you need to worry about either using "with" or explicitly closing; if the latter, you also need suitable exception handling (e.g. "try/finally") to ensure the file gets closed if the unexpected occurs.)

In any case, if you're looping through a resultset, instead of writing to each file on every loop iteration, consider storing the results in a list per file, then writing that to the file once at the end. Quick example...

Code:
data1 = []
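The truncated example presumably continued along these lines; a sketch with made-up rows and field names:

```python
# Hypothetical sketch of the buffer-then-write pattern described above.
# "rows" stands in for the database resultset.
rows = [("a1", "b1"), ("a2", "b2")]

data1 = []
data2 = []
for field_a, field_b in rows:
    data1.append(f"{field_a};{field_b}\n")  # buffer file_1's lines
    data2.append(f"{field_a}\n")            # buffer file_2's lines

# One write per file at the end instead of one per loop iteration.
with open("file_1.csv", "w") as f1:
    f1.writelines(data1)
with open("file_2.csv", "w") as f2:
    f2.writelines(data2)
```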
See here: https://stackoverflow.com/questions/...in-python?rq=1

You can use one with statement to open several files.
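For instance (filenames made up), a single with statement can manage several files:

```python
# One "with" statement opening two files; both are closed automatically
# when the block exits, even if an exception is raised inside it.
with open("a.txt", "w") as fa, open("b.txt", "w") as fb:
    fa.write("first\n")
    fb.write("second\n")
```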