-   Linux - General
-   -   Share your oldest uname/uptime!

szboardstretcher 07-26-2013 01:30 PM

Share your oldest uname/uptime!
In the vein of the recent command-usage post, I thought it would be ultra-cool to show off your oldest server with the highest uptime.

I'm at a newer company, so mine is kind of pathetic:


[root@web ~]# uname -a
Linux 2.6.18-194.el5 #1 SMP Fri Apr 2 14:58:14 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
[root@web ~]# uptime
11:29:22 up 93 days, 27 min,  1 user,  load average: 1.57, 1.99, 2.26
[root@web ~]#

druuna 07-26-2013 02:01 PM

A professional server that has gone without a reboot for 93 days: you need to plan a reboot real soon now. Or are you 100% sure there haven't been any kernel-related (security) patches in the last 93 days?

Besides the kernel: the longer a machine goes without a reboot, the greater the chance that it will not boot or come up as expected. Scripts/config files that might have changed (long ago, and possibly forgotten about), HDs that "spontaneously" die after spinning for a long time. And there are probably other reasons. Not something I would like to run into when, for whatever reason, the machine does need to be rebooted.

A high uptime might look cool, but it isn't any kind of holy grail, and in my opinion it's of no interest in professional environments.

If this is a home environment: Go nuts, it is your box (but the above risks still apply).

szboardstretcher 07-26-2013 02:18 PM

This must be a reference to regular Linux/Dell/HP hardware, where you reboot to patch the kernel.

I remember Solaris boxes with hot-swappable... well, everything. In-place kernel patches, etc. So it's completely possible to have a secure/safe machine up for longer than 90 days.

"The longer a machine goes without a reboot, etc.": if it's in production you should have 3t+1 servers as backup, so this shouldn't be a problem. I hope everyone out there is replicating servers / load balancing for failure.

A high uptime isn't cool. Nor is it a status symbol. It is just interesting to see what people have, why they have it, and what they plan on doing.

And in Solaris environments, how long the equipment has been running is of great interest to professionals. I've had to turn in reports with server patchlevel/uptime listed, so it must be important to someone.
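For whoever has to produce those patchlevel/uptime reports, something like this sketch collects the raw data in one line (assuming a Linux box with /proc/uptime; the tab-separated layout and day/hour rounding are just my own preference, not any standard report format):

```shell
#!/bin/sh
# Print one report line: hostname, kernel release, uptime.
# /proc/uptime's first field is seconds since boot.
host=$(uname -n)
kernel=$(uname -r)
up=$(awk '{d=int($1/86400); h=int($1%86400/3600); printf "%dd %dh", d, h}' /proc/uptime)
printf '%s\t%s\t%s\n' "$host" "$kernel" "$up"
```

Run it on each box and concatenate the lines, and you have the table the auditors keep asking for.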

JLndr 07-26-2013 02:24 PM

Here is mine...


john@breezy:~$ uname -a
Linux breezy 2.6.32-042stab076.8 #1 SMP Tue May 14 20:38:14 MSK 2013 x86_64 GNU/Linux
john@breezy:~$ uptime
 15:24:32 up 14 days, 14:03,  1 user,  load average: 0.06, 0.01, 0.00

szboardstretcher 07-26-2013 02:31 PM

Man, that's a pretty idle system!

JLndr 07-26-2013 02:35 PM


Originally Posted by szboardstretcher (Post 4997449)
Man, that's a pretty idle system!

Yeah, haha. Just got it 14 days ago. :p

szboardstretcher 07-26-2013 02:37 PM

Here. This is the definition of "lots of uptime"... I even recognize that screen. I see it in my nightmares still. NetWare 4.11.

druuna 07-26-2013 02:48 PM

@szboardstretcher: It's more than having an HA/fault-tolerant environment.

The following is from my own experience:

I've worked with large Tandem NonStop environments. Great stuff to work with: everything hot-swappable, extra CPU/memory boards could be inserted into running systems, all hardware was doubled or quadrupled, and all of it was patchable on a running system. A system built to "never go down" (mind the quotes).

Uptimes went through the roof 'cause there was no need for a reboot.

At a certain point it was decided (for certification, if I remember correctly) to do a power-failure test. Simple in principle: pull the plug and see if the generators kick in before reserve power is drained. The generators did not kick in, and a bunch of Tandems went down. We are running on one leg at this moment.

The fact that "a few disks" might not come up correctly was foreseen: 25 brand-new disks were on site, 10% of the total amount. The fact that almost 90 disks died was not foreseen. The extra disks needed could not be sourced in Europe; no one had that many. The company had to rent a private jet to fly them in from the US (time is of the essence, still running on one leg).

The services provided never went down, so the customers didn't notice a thing. This whole incident did, however, cost millions.

From that moment on it was decided to do a controlled reboot every six months.

szboardstretcher 07-26-2013 02:55 PM

@druuna: very valid point.

I've personally never been asked to test-down any equipment that wasn't *brand spankin' new*.

Kind of goes with the old adage: "If it ain't broke, certainly don't reboot it."


DarkShadow 07-26-2013 03:20 PM

Personally, a computer's uptime is uninteresting to me. But as they say, to each his own.

suicidaleggroll 07-26-2013 04:00 PM

I had a couple of machines hovering around the 2-3 year mark not long ago (old machines, no mission-critical operations, OS long out of date, just acting as local data servers isolated from the internet until we decide to decommission them). About 3 months ago we had an extended power outage (~12 hours) that shut down the office, so now everything has a 101-day uptime.


$ uptime
 14:00:00 up 101 days,  6:32,  1 user,  load average: 0.01, 0.00, 0.00
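If you want to compare uptimes across boxes exactly rather than eyeballing the `uptime` string, /proc/uptime gives the raw number (a sketch, assuming Linux; the "(~N days)" formatting is my own):

```shell
#!/bin/sh
# First field of /proc/uptime is total seconds since boot; raw seconds
# sort numerically, unlike the human-readable "up 101 days, 6:32" string.
awk '{printf "%.0f seconds (~%.1f days)\n", $1, $1/86400}' /proc/uptime
```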
