Convert filtered Java keytool output into a CSV file
Hi,
I've used Ansible to go to a number of servers and export SSL certificate information back to the Ansible server, so I can pipe filtered output into a text file. There is a requirement to port this information into a CSV file (spreadsheet). A sample text file resembles:
Code:
ok: [server1] => {
server2 would look something like:
Code:
server2, alias1, Jul 11 2017
server2, ali2, Jul 11 2017
server2, alia3, Jul 11 2017
server1 would look something like:
Code:
server1, alias1, Jan 03 2018
I am relatively competent with sed/awk etc., however this is currently a little beyond me. I've previously converted data into a comma-separated list via
Code:
awk -vORS=, '{ print $1}'
The issue here is that I'm stumped on how I can accommodate that some servers may have one alias while others may have five or more. Thanks for any help. |
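The ORS trick the poster mentions can be seen in a tiny, self-contained example (the input data here is invented for illustration):

```shell
# awk's ORS (Output Record Separator) is printed after every record,
# so setting it to a comma joins each line's first field into one
# comma-separated run (note the trailing comma it leaves behind):
printf 'alpha 1\nbeta 2\ngamma 3\n' | awk -vORS=, '{print $1}'
# -> alpha,beta,gamma,
```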
Please remember to use [code][/code] tags when displaying code or data.
I am curious about the current output, as it sort of looks like JSON data, but there do not appear to be any closing curly braces in your example ... is this correct? Assuming the data is not what I am thinking and the example is correct, show us what you are thinking of when using awk to extract your data. As a hint, you would need to keep looking for aliases and end dates up until the next 'ok' :) |
With this InFile ...
Code:
ok: [server1] => {
Code:
awk '{gsub(/\"|,/,"")
Code:
server1, alias1, Jan 03 2018 |
Normally I would wait until the OP has presented his attempt, but as Daniel has jumped the gun:
Code:
awk '{gsub(/[[:punct:]]/,"")}/ok/{s=$NF}/Alias/{a=s","$NF}/until/{print a","$(NF-4),$(NF-3),$NF}' file |
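For readers following along, grail's one-liner can be tried on a minimal, made-up sample. The keytool-style lines below are illustrative stand-ins for the filtered output described earlier, not the poster's actual data:

```shell
# Invented sample: ansible "ok: [host]" header plus two keytool -v lines.
cat > /tmp/keytool_demo.txt <<'EOF'
ok: [server1] => {
Alias name: alias1
Valid from: Wed Jan 03 09:00:00 GMT 2018 until: Thu Jan 03 09:00:00 GMT 2019
EOF

# Strip punctuation, remember the server, pair it with each alias,
# and print one CSV line per "until" (expiry) line:
awk '{gsub(/[[:punct:]]/,"")}/ok/{s=$NF}/Alias/{a=s","$NF}/until/{print a","$(NF-4),$(NF-3),$NF}' /tmp/keytool_demo.txt
# -> server1,alias1,Jan 03 2019
```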
Quote:
Daniel B. Martin |
grail/danielbmartin - I can't tell you how grateful I am for this. I'm blown away by it; the awk statements are well beyond anything I could have written.
I wish I could say I fully understood the awk command there, but I certainly want to. If you have any links/tutorials which explain what you've done there, they'd be much appreciated - I've not used gsub or [[:punct:]] previously. Or, if you have the time, take the awk command and break down what each bit is doing. It works a treat, and for others: if you extract verbose keystore output and grep the alias and "Valid from" lines, the solution here is of fantastic help. In response to grail - I deliberately filtered out the closing } (it's there, but I left it out in the final pass/filter). Thanks, David. |
Quote:
Code:
awk '{gsub(/[[:punct:]]/,"")}
eliminates all punctuation marks to simplify the following steps.
Code:
/ok / {serverID=$NF}
says: if the current line contains the character string "ok ", take the last blank-delimited field and save it in the variable "serverID". NF always contains the Number of Fields in the current line, so $NF is the last field, $(NF-1) is the penultimate field, etc.
Code:
/Alias name/ {AliasName=$3}
says: if the current line contains the character string "Alias name", take the third blank-delimited field and save it in the variable "AliasName".
Code:
/until / {Expiry=$(NF-4)" "$(NF-3)" "$NF}
says: if the current line contains the character string "until ", pluck three blank-delimited fields and save them in the variable "Expiry". The selected fields are the fourth from the last, the third from the last, and the last. In the variable Expiry they are separated by blanks. At this point the stage is set for printing one line to the OutFile:
Code:
print serverID", "AliasName", "Expiry
If the test for the server name is not successful, that's okay; the code continues to use whatever value is already in the variable serverID. Daniel B. Martin |
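Daniel's complete code block did not survive in this copy of the thread, but from the walkthrough above it can be reconstructed roughly as follows. Treat this as a sketch, not his exact post; the InFile sample is invented:

```shell
# Invented sample input matching the format discussed in the thread.
cat > /tmp/daniel_demo.txt <<'EOF'
ok: [server1] => {
Alias name: alias1
Valid from: Wed Jan 03 09:00:00 GMT 2018 until: Thu Jan 03 09:00:00 GMT 2019
EOF

# Reconstruction (assumed, from the prose breakdown): strip punctuation,
# remember server and alias, then print one CSV line per expiry line.
awk '{gsub(/[[:punct:]]/,"")}
     /ok /        {serverID=$NF}
     /Alias name/ {AliasName=$3}
     /until /     {Expiry=$(NF-4)" "$(NF-3)" "$NF
                   print serverID", "AliasName", "Expiry}' /tmp/daniel_demo.txt
# -> server1, alias1, Jan 03 2019
```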
As far as links go, the only one I use is the manual page.
With the closing brace in the data, it is either JSON or perhaps a YAML-type output. If it is JSON, there is a command-line tool called jq which I have been told is very useful.
My explanation is obviously much the same as Daniel's, so here are some of the subtle differences:
/Alias/{a=s","$NF} :- Here I append the alias name (the last field on the line, hence $NF) after the comma. As these two items are the same for every line of a given server's output, the 'a' variable contains all the data needed prior to the date.
/until/{print a","$(NF-4),$(NF-3),$NF} :- The date fields are in the exact same positions as in Daniel's solution; the subtle difference here is that when you use a comma between items in the print command, awk automatically places the OFS (Output Field Separator) between each item. As I have not altered it, the default OFS is a space.
As for [[:punct:]], this is a regular expression term, and on the manual page you can see an explanation and a list of other such constructs.
As a final note, whenever confronted with columned data, awk is my first go-to command :D |
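The OFS behaviour described above is easy to demonstrate in isolation:

```shell
# With the comma form of print, awk inserts OFS between the items.
# OFS defaults to a single space:
echo "a b c" | awk '{print $1,$2}'              # -> a b
# Setting OFS changes the separator without touching the print statement:
echo "a b c" | awk -v OFS=',' '{print $1,$2}'   # -> a,b
```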
Hi,
Thanks for the explanations, and apologies it's taken me a while to come back to this. I've tried to have another go based on what I've taken in above (I get the usefulness of $NF and the [[:punct:]] feature). I had an issue yesterday, but after a reasonable night's sleep I've been able to figure it out. I am now attempting to take a JSON-like output such as below:
Code:
ok: [server1] => {
and turn it into output such as:
Code:
server1,alias1,08,Sep,2017,server1
using:
Code:
awk '{gsub(/[[:punct:]]/,"")}/ok/{s=($NF)}/Alias/{a=s","$NF}/until/{b=a","$(NF-3)","$(NF-4)","$NF}/DNSName/{print b","$NF}' /tmp/ssl4.txt | sed 's/mydomainlocal/.mydomain.local/g' | sort
Thanks again to all of you who helped me understand this - it's made a real difference to my day job. David. |
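For anyone wanting to reproduce the final pipeline, here is a self-contained sketch. All hostnames, aliases, and dates are invented; the DNSName line stands in for the subject-alternative-name output being grepped from the verbose keystore listing, and the sed step restores the dots that gsub's punctuation stripping removed from the domain name:

```shell
# Invented sample input for the extended (DNSName-aware) pipeline.
cat > /tmp/ssl_demo.txt <<'EOF'
ok: [server1] => {
Alias name: alias1
Valid from: Fri Sep 08 09:00:00 BST 2017 until: Sat Sep 08 09:00:00 BST 2018
DNSName: server1.mydomain.local
EOF

# Build the CSV prefix on the "until" line, print on the DNSName line,
# then put the dots back into the flattened domain name:
awk '{gsub(/[[:punct:]]/,"")}/ok/{s=($NF)}/Alias/{a=s","$NF}/until/{b=a","$(NF-3)","$(NF-4)","$NF}/DNSName/{print b","$NF}' /tmp/ssl_demo.txt \
  | sed 's/mydomainlocal/.mydomain.local/g' | sort
# -> server1,alias1,08,Sep,2018,server1.mydomain.local
```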