LinuxQuestions.org
Share your oldest uname/uptime! (Linux - General forum)
Old 07-26-2013, 01:30 PM   #1
szboardstretcher
Senior Member
 
Registered: Aug 2006
Location: Detroit, MI
Distribution: GNU/Linux, LFS, Slitaz
Posts: 3,043
Blog Entries: 1

Rep: Reputation: 953
Share your oldest uname/uptime!


In the vein of the recent command-usage post, I thought it would be ultra to show off your oldest server with the highest uptime.

I'm at a newer company, so mine is kind of pathetic:

Code:
[root@web ~]# uname -a
Linux 2.6.18-194.el5 #1 SMP Fri Apr 2 14:58:14 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
[root@web ~]# uptime
11:29:22 up 93 days, 27 min,  1 user,  load average: 1.57, 1.99, 2.26
[root@web ~]#
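On Linux, the "up N days" figure in that output comes straight from the first field of /proc/uptime (seconds since boot). A minimal sketch reproducing uptime's arithmetic in shell, assuming a system with /proc mounted:

```shell
# /proc/uptime holds two numbers: seconds since boot and aggregate idle
# seconds. Rebuild uptime's "up D days, H:MM" display from the first field.
secs=$(cut -d. -f1 /proc/uptime)
days=$((secs / 86400))
hours=$(((secs % 86400) / 3600))
mins=$(((secs % 3600) / 60))
printf 'up %d days, %d:%02d\n' "$days" "$hours" "$mins"
```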
 
Old 07-26-2013, 02:01 PM   #2
druuna
LQ Veteran
 
Registered: Sep 2003
Posts: 10,532
Blog Entries: 7

Rep: Reputation: 2371
A professional server that has gone without a reboot for 93 days: You need to plan a reboot real soon now. Or are you 100% sure there haven't been any kernel related (security) patches in the last 93 days?

Besides the kernel: the longer a machine goes without a reboot, the greater the chance that it will not boot or come up as expected. Scripts or config files may have changed (long ago, and possibly forgotten about), and HDs that have been spinning for a long time can "spontaneously" die on power-cycle. There are probably other reasons too. Not something I would like to run into when, for whatever reason, the machine does need to be rebooted.

A high uptime might look cool, but it isn't a holy grail, and in my opinion it's of no interest at all in professional environments.

If this is a home environment: Go nuts, it is your box (but the above risks still apply).
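One way to answer the kernel-patch question above is to compare the running kernel against the newest installed kernel package. A rough sketch for RPM-based systems like the CentOS box earlier in the thread; the exact package naming varies across rpm versions, and dpkg-based systems would need to query their kernel packages instead, so treat this as an assumption-laden sketch rather than a reliable check:

```shell
# Compare the running kernel release (uname -r) against the newest
# installed kernel package. If they differ, the box has been patched
# but never rebooted into the new kernel.
running=$(uname -r)
newest=$(rpm -q --last kernel 2>/dev/null | head -n1 | awk '{print $1}' | sed 's/^kernel-//')
if [ -n "$newest" ] && [ "$running" != "$newest" ]; then
    echo "Reboot pending: running $running, newest installed is $newest"
else
    echo "No pending kernel update detected (running $running)"
fi
```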
 
Old 07-26-2013, 02:18 PM   #3
szboardstretcher
Senior Member
 
Registered: Aug 2006
Location: Detroit, MI
Distribution: GNU/Linux, LFS, Slitaz
Posts: 3,043
Blog Entries: 1

Original Poster
Rep: Reputation: 953
This must be a reference to regular Linux/Dell/HP hardware, where you reboot to patch the kernel.

I remember Solaris boxes with hot-swappable... well, everything. In-place kernel patches, etc. So it's completely possible to keep a machine secure and safe for longer than 90 days without a reboot.

As for "the longer a machine goes without a reboot, etc.": if it's in production you should have 3t+1 servers as backup, so this shouldn't be a problem. I hope everyone out there is replicating servers and load balancing for failure.

A high uptime isn't cool. Nor is it a status symbol. It is just interesting to see what people have, why they have it, and what they plan on doing.

And in Solaris environments, how long the equipment has been running is of great interest to professionals. I've had to turn in reports listing server patch level and uptime, so it must be important to someone.
 
Old 07-26-2013, 02:24 PM   #4
JLndr
Member
 
Registered: Jul 2013
Location: Brunswick, GA
Distribution: Debian 7.1
Posts: 47

Rep: Reputation: 3
Here is mine...

Code:
john@breezy:~$ uname -a
Linux breezy 2.6.32-042stab076.8 #1 SMP Tue May 14 20:38:14 MSK 2013 x86_64 GNU/Linux
john@breezy:~$ uptime
 15:24:32 up 14 days, 14:03,  1 user,  load average: 0.06, 0.01, 0.00
 
Old 07-26-2013, 02:31 PM   #5
szboardstretcher
Senior Member
 
Registered: Aug 2006
Location: Detroit, MI
Distribution: GNU/Linux, LFS, Slitaz
Posts: 3,043
Blog Entries: 1

Original Poster
Rep: Reputation: 953
Man, that's a pretty idle system!
 
Old 07-26-2013, 02:35 PM   #6
JLndr
Member
 
Registered: Jul 2013
Location: Brunswick, GA
Distribution: Debian 7.1
Posts: 47

Rep: Reputation: 3
Quote:
Originally Posted by szboardstretcher View Post
Man, that's a pretty idle system!
Yeah, haha. Just got it 14 days ago.
 
Old 07-26-2013, 02:37 PM   #7
szboardstretcher
Senior Member
 
Registered: Aug 2006
Location: Detroit, MI
Distribution: GNU/Linux, LFS, Slitaz
Posts: 3,043
Blog Entries: 1

Original Poster
Rep: Reputation: 953
Here. This is the definition of "lots of uptime"... I even recognize that screen. I still see it in my nightmares. Netware 4.11:

http://arstechnica.com/information-t...beat-16-years/
 
Old 07-26-2013, 02:48 PM   #8
druuna
LQ Veteran
 
Registered: Sep 2003
Posts: 10,532
Blog Entries: 7

Rep: Reputation: 2371
@szboardstretcher: It's more than having an HA/fault-tolerant environment.

The following is from my own experience:

I've worked with large Tandem NonStop environments. Great stuff to work with: everything hot-swappable, extra CPU/memory boards could be inserted into running systems, all hardware was doubled or quadrupled, and everything was patchable on a running system. A system built to "never go down" (mind the quotes).

Uptimes went through the roof 'cause there was no need for a reboot.

At a certain point it was decided (for certification, if I remember correctly) to do a power-failure test. Simple in principle: pull the plug and see if the generators kick in before reserve power is drained. The generators did not kick in, and a bunch of Tandems went down. We were now running on one leg.

The fact that "a few disks" might not come up correctly was foreseen: 25 brand-new disks were on site, 10% of the total amount. The fact that almost 90 disks died was not foreseen. The extra disks needed could not be sourced in Europe; no one had that many. The company had to rent a private jet to fly them in from the US (time being of the essence, with everything still running on one leg).

The services provided never went down, so the customers didn't notice a thing. This whole incident did, however, cost millions.

From that moment on it was decided to do a controlled reboot every six months.
 
Old 07-26-2013, 02:55 PM   #9
szboardstretcher
Senior Member
 
Registered: Aug 2006
Location: Detroit, MI
Distribution: GNU/Linux, LFS, Slitaz
Posts: 3,043
Blog Entries: 1

Original Poster
Rep: Reputation: 953
@druuna: Very valid point.

I've personally never been asked to test-down any equipment that wasn't *brand spankin' new*.

Kind of goes with the old adage: "If it ain't broke, certainly don't reboot it."

 
Old 07-26-2013, 03:20 PM   #10
DarkShadow
LQ Newbie
 
Registered: Jul 2013
Posts: 18

Rep: Reputation: 0
Personally, a computer's uptime is uninteresting to me. But as they say, to each his own.
 
Old 07-26-2013, 04:00 PM   #11
suicidaleggroll
Senior Member
 
Registered: Nov 2010
Location: Colorado
Distribution: OpenSUSE, CentOS
Posts: 2,721

Rep: Reputation: 975
I had a couple of machines hovering around the 2-3 year mark not long ago (old machines, no mission-critical operations, OS long out of date, just acting as local data servers isolated from the internet until we decide to decommission them). About 3 months ago we had an extended power outage (~12 hours) that shut down the office, so now everything has a 101-day uptime.

Code:
$ uptime
 14:00:00 up 101 days,  6:32,  1 user,  load average: 0.01, 0.00, 0.00
 