How to parse a text file to a set text column width and output to a new text file?
Hello...
I have a text file split into paragraphs. This is fine when viewing the file in a program with word-wrap capabilities; however, I am using a program called QCAD to produce architectural drawings and find that its text formatting (simply in terms of text column width) is very primitive.

What I would like to be able to do is take a text file containing paragraphs, parse through the text, and introduce a line break every x (say 80) characters, then output the result to a new text file formatted as required. The operation would be delimited by paragraphs and obviously would be applied to each paragraph one after the other. Unlike various scripts I have messed about with, I want the text file to remain left-aligned.

Any help, comments and/or pointers would be very much appreciated.

Thanks,
Jamie
So, what exactly does fmt do wrong for you?
Solved
Thank you.
fmt was the hint I needed - it does indeed do what I asked for, and on looking at the documentation related to it I found myself pointed to another utility (fold) that produces the exact output I was looking for. So easy when someone knows how!!! Thanks again.
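For anyone who finds this thread later, here is a minimal sketch of the fold approach (the file names are just placeholders, and 80 is the example width from the original question):

```shell
# Write a sample paragraph as one long line (input.txt is a placeholder name).
printf '%s\n' \
  "The quick brown fox jumps over the lazy dog while the architect annotates every drawing with long notes that QCAD cannot wrap on its own." \
  > input.txt

# fold splits any line longer than 80 characters; -s breaks at spaces so
# words are not cut in half, and the output stays left-aligned.
fold -s -w 80 input.txt > output.txt

# fmt -w 80 input.txt   # alternative: fmt also re-fills short lines into paragraphs
cat output.txt
```

fold only splits lines that are too long, while fmt re-flows whole paragraphs, so fold is the safer choice if all you want is a hard wrap.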
light at the end of the tunnel?
Hi Folks,
I got really excited when I saw the title of this topic ... and then realised that although it sounded like what I was looking for, sadly it was not. However ... it's just possible you guys can save my sanity!!

In my team we receive dozens of files every week. Generally speaking they are comma-delimited files (sometimes with a double inverted comma text qualifier). The application that we map them into isn't very smart ... so ... what I'm looking for (and can't seem to find anywhere) is a utility that will:
a) let me give it the filename, the delimiter, and the text qualifier if there is one;
b) scan through the file using the parameters I gave it;
c) finally, output the number of columns found and (here's the real target) the maximum width of each column found.

This would make my life sooooo much easier!! This seems to be such a basic task, and I'm sure I'm not the first person to want a tool like this. I spent literally two hours on Google yesterday looking for a tool or command-line app (Windows, DOS or Unix). All I could find was a program called UltraEdit that hinted that maybe it could do this. Alas, even if it can, my employer is not going to spend $$ on an editor simply to find column widths.

So here I am, rocking backwards and forwards in the corner, mumbling to myself " ..why is this so hard ..?? .."

please help.
KentMan, that's the sort of thing you probably won't find a standalone program for, because it's easy to cobble together with the standard unix command-line tools.
Awk would be a good tool for this sort of thing, and is available on almost any modern Linux distro you care to name, not to mention lots of other unix-like OSes. Some questions:

1. When you say text qualifier, what do you mean?
2. What should the output look like? (If you can provide a short example input and the expected output, that would be perfect.)
Thanks for your quick reply. Here are the answers to your questions.
Quote:
When you say text qualifier, what do you mean?

A column that contains text values will often have its value surrounded by inverted commas. Four-row example:

Code:
"CODE1","KentMan",200
"CODE2","KentMan hates text files",2000
"CODE3","KentMan likes bacon",201
"CODE4","KentMan wants beer",45000

The entry in the last column does not have "" around it because it is a numeric value. The others are text values and so have text qualifiers around them. A text qualifier is often used where the value can also contain the delimiter (a real pain when this happens). So, for example, if the value in column 2 of my example was actually "Kent,Man", the text qualifier tells the import routine to ignore any delimiters found between the quotes. If the person putting the file together had any sense, however, they would use tab or pipe delimiters to avoid this issue (ideal world scenario, I know).

Quote:
What should the output look like? (If you can provide a short example input and the expected output, that would be perfect.)

Ideal output would look like this (using the example above as the input):

Code:
1,5
2,24
3,5
3 columns found
4 rows tested

The first number is the column position, and the second number is the maximum width found. Then two lines confirming how many columns it found and how many rows it tested (these are for reassurance). The process should be able to handle a few thousand rows. If the file was bigger than, say, 5k rows, I would use something like the Unix head command to extract a sizeable sample from the top of the file for testing.

I hope I explained OK; please get back to me if you have more questions.

KM
Osor: yes. He is talking about columns, not lines.
Try this:
Code:
#!/usr/bin/awk -f

Save it to a file and set the permissions to 755, e.g. if the file is called mytest.awk and is in the same directory as the data file, called test_data, you would do this ($ is the prompt):

Code:
$ cd /where/your/files/are/located
$ chmod 755 mytest.awk
$ ./mytest.awk test_data
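This is not matthewg42's original script, but a minimal sketch in the same spirit that produces the output format KentMan asked for. The file names are placeholders, the delimiter is hard-coded to a comma, and embedded delimiters inside quoted fields are not handled:

```shell
# Save the width-reporting program (colwidths.awk is a placeholder name).
cat > colwidths.awk <<'EOF'
#!/usr/bin/awk -f
# Report the maximum width of each comma-delimited column, stripping a
# surrounding double-quote text qualifier before measuring each field.
BEGIN { FS = "," }
{
    if (NF > maxnf) maxnf = NF
    for (i = 1; i <= NF; i++) {
        f = $i
        gsub(/^"|"$/, "", f)          # drop the text qualifier, if any
        if (length(f) > w[i]) w[i] = length(f)
    }
    rows++
}
END {
    for (i = 1; i <= maxnf; i++) print i "," w[i]
    print maxnf " columns found"
    print rows " rows tested"
}
EOF

# The four-row sample from earlier in the thread.
cat > sample.csv <<'EOF'
"CODE1","KentMan",200
"CODE2","KentMan hates text files",2000
"CODE3","KentMan likes bacon",201
"CODE4","KentMan wants beer",45000
EOF

awk -f colwidths.awk sample.csv
# Prints:
# 1,5
# 2,24
# 3,5
# 3 columns found
# 4 rows tested
```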
Doing CSV processing manually in awk is pretty hard (especially if you want to have commas inside the fields or use escape sequences). It is easier to use something prewritten. For example, if you are fortunate enough to have access to Perl on your system, you may use the Text::CSV module. Here’s an example:
Code:
#!/usr/bin/perl -l
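To illustrate the point about using something prewritten, here is a rough sketch (guarded, because Text::CSV is not a core module and may not be installed). getline() hands back each record as a list of fields with the qualifiers already stripped, even when a quoted field contains the delimiter; the file name is a placeholder:

```shell
# Guarded sketch: check for Text::CSV rather than assuming it is installed.
if perl -MText::CSV -e 1 2>/dev/null; then
    cat > quoted.csv <<'EOF'
"CODE1","KentMan",200
"CODE2","Kent,Man",2000
EOF
    perl -MText::CSV -e '
        my $csv = Text::CSV->new({ binary => 1 });
        while (my $row = $csv->getline(\*STDIN)) {
            print join("|", @$row), "\n";   # quotes stripped, embedded comma kept
        }
    ' < quoted.csv
else
    echo "Text::CSV is not installed here"
fi
```

Note that the embedded comma in "Kent,Man" survives as data instead of splitting the field, which is exactly what naive FS="," splitting gets wrong.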
However, even though perl itself is pretty much ubiquitous on modern Linux distros, the standard set of modules is a little limited. If one has to rely on software which is already installed on the system, awk has the advantage of being more uniformly available, although the GNU version is not as common on commercial unixes as one might hope. Having said all that, I choose perl nine times out of ten over awk, because it's generally faster and I am better practised with it.
I continue to be amazed with what some people do with (and to) the *nix toolset - especially perl.
Well .. what can I say ... incredible.
Unfortunately I don't have access to Perl, but I do have gawk.

Matthewg42, your script is exactly the tool I have been looking for. Not only that, it rattles through 5000 rows in a blink, and I have yet to find a file it gets wrong. A method of passing in the delimiters would be the next logical step, but I'm not gonna push my luck and ask you to code it. I just wanted to drop you a note to let you know how much I appreciate your work. Yours too, osor, but unfortunately perl is not an option for me :(

The rubber room will have to wait for me; my sanity is recovered!!

Many many thanks!!

KM