Linux - Software: This forum is for Software issues. Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
#1 | 05-05-2015, 11:05 AM | Member | Registered: Mar 2009 | Distribution: CentOS 6.5 / 7 | Posts: 119
Calculate average of column in units of time (HH:MM)
Here is an example of my data.
Code:
Group1,2015-04-21,05:05,PASS
Group1,2015-04-22,10:16,PASS
Group1,2015-04-23,05:25,PASS
Group1,2015-04-24,06:38,PASS
Group1,2015-04-25,06:27,PASS
Group1,2015-04-26,10:11,PASS
Group1,2015-04-27,05:48,PASS
Group1,2015-04-28,00:03,FAIL
Group1,2015-04-29,05:33,PASS
Group1,2015-04-30,05:30,PASS
Group1,2015-05-01,13:39,PASS
Group1,2015-05-02,06:53,PASS
Group1,2015-05-03,09:51,PASS
Group1,2015-05-04,06:01,PASS
What I want is an average of column 3 which should be easy, right?
Code:
awk -F',' '{ total += $3; count++ } END { print total/count }' test.txt
6.5
6.5 is not correct, and it is in decimal form. According to excel (yes, I am lazy) it should be 6 hours, 57 minutes, 9 seconds. What do I need to do differently to get the correct answer in the format HH:MM?
#2 | 05-05-2015, 11:27 AM | LQ Guru | Registered: May 2005 | Location: boston, usa | Distribution: fedora-35 | Posts: 5,326
#3 | 05-05-2015, 11:33 AM | LQ Addict | Registered: Mar 2012 | Location: Hungary | Distribution: debian/ubuntu/suse ... | Posts: 24,374
As excel recognizes the date format and calculates the average from it, you need to do the same thing: convert hours:minutes into a usable format and calculate the average. But I think you need to take the date into account too.
You may find this page useful for the conversion: http://www.theunixschool.com/2013/01...functions.html
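If the date did need to be taken into account, one way (just a sketch, assuming GNU date and the CSV layout shown above) is to convert each date+time pair to epoch seconds first:

```shell
# Convert each "date,HH:MM" pair to epoch seconds with GNU date.
# Note: the result depends on the local timezone, and spawning one
# date process per line is slow on large files.
while IFS=',' read -r group day hhmm status; do
    date -d "$day $hhmm" +%s
done < test.txt
```

Once everything is in seconds, averaging is plain arithmetic, though for this thread's question the date columns would only get in the way.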
#4 | 05-05-2015, 11:36 AM | Member | Registered: Apr 2015 | Distribution: Debian | Posts: 272
If it really is just the hours and minutes you are interested in, you probably want to multiply the hours by 60 and then add the minutes; then you will be dealing with one unit (minutes). As pan64 suggests, the date might be important?
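For a single HH:MM value, that conversion looks like this (a sketch of the arithmetic only, not the full averaging solution):

```shell
# 06:57 -> 6*60 + 57 = 417 minutes
echo "06:57" | awk -F':' '{ print $1 * 60 + $2 }'
```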
#5 | 05-05-2015, 11:42 AM | LQ Guru | Registered: May 2005 | Location: boston, usa | Distribution: fedora-35 | Posts: 5,326
Code:
[schneidz@hyper density]$ date +%s -d '2015-04-21 05:05'
1429607100
[schneidz@hyper density]$ date -d @1429607100
Tue Apr 21 05:05:00 EDT 2015
[schneidz@hyper density]$ man date
#6 | 05-05-2015, 07:56 PM | LQ Veteran | Registered: Aug 2003 | Location: Australia | Distribution: Lots ... | Posts: 21,399
The OP apparently is happy to ignore the date (based on the excel example).
Using [,:] as FS busts the time up into separate hour and minute fields, so the arithmetic becomes simple. If there is a lot of data, I'd be looking to avoid call-outs to date if I could.
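As a sketch of that idea (field positions assume the sample data above): with FS set to [,:], the hours land in $3 and the minutes in $4, so totalling needs nothing beyond awk's own arithmetic and no external date calls:

```shell
# FS "[,:]" splits Group1,2015-04-21,05:05,PASS into:
#   $1=Group1  $2=2015-04-21  $3=05  $4=05  $5=PASS
awk -F'[,:]' '{ total += $3 * 60 + $4 } END { print total, "minutes across", NR, "runs" }' test.txt
```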
#7 | 05-06-2015, 05:38 PM | Member (Original Poster) | Registered: Mar 2009 | Distribution: CentOS 6.5 / 7 | Posts: 119
Correct. There is a lot of data; this is a tiny snip from a much larger group of files. The date is not important in this scenario, just part of the report that I have to maneuver around to get what I need, which is averages of the hours and minutes it took to run. It looks like I am just failing maths.
Last edited by thealmightyos; 05-06-2015 at 06:22 PM.
Reason: duh moment
#8 | 05-06-2015, 08:15 PM | LQ Veteran | Registered: Aug 2003 | Location: Australia | Distribution: Lots ... | Posts: 21,399
I like to use things like this to expose the data - it makes it easy to see which fields you need to use. The hours are one field, the minutes the next. As suggested above, totalling them should be trivial. In your END clause do the final division - or (better) use modulo 60.
Code:
awk -F"[,:]" 'NR < 3 {print "\nRecord: " NR ; for (i=1; i<=NF; i++) print "Field "i":\t",$i}' file
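Putting those hints together, a minimal end-to-end sketch (assuming the Group,date,HH:MM,status layout from the original post, and truncating the average to whole minutes):

```shell
awk -F'[,:]' '
    { total += $3 * 60 + $4; count++ }     # running total in minutes
    END {
        avg = int(total / count)           # average, whole minutes
        printf "%02d:%02d\n", avg / 60, avg % 60
    }' test.txt
```

On the sample data above this prints 06:57, which matches the excel answer of 6h 57m once the seconds are truncated.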
1 member found this post helpful.
All times are GMT -5.