Old 05-05-2012, 06:59 AM   #1
Tarikc
Member
 
Registered: May 2009
Distribution: CentOS, RedHat, Ubuntu
Posts: 68

Rep: Reputation: 4
Swapping RAID 5 + strange LEDs on the hard disks.


Dear All,

I really hope someone can give me an answer.

The situation is like this:

I have an HP DL380 server with VMware ESXi 5.0 installed on it, and 4 x 146 GB disks in a RAID 5 configuration.

The 4th hard disk in the array has an amber LED. I researched the HP site, and it says this indicates a permanent fault and that the drive has to be replaced ASAP. So I bought another disk and was about to replace it.

I restarted the system and found that all the HDDs were blinking green, but not all with the same pattern. I managed to take a video of the blinking drives after the restart; you can find it here:

http://dl.dropbox.com/u/24591267/movie.mov

I kept monitoring the system while it rebooted, and the BIOS did not throw any error about whether the drive is faulty or not.

Now the situation is back to how it was, with the 4th drive steadily showing the amber LED.

Your help is highly appreciated.

Best Regards,

Last edited by onebuck; 05-06-2012 at 07:44 AM. Reason: Remove 'Urgent' from Thread title.
 
Old 05-05-2012, 09:20 AM   #2
sag47
Senior Member
 
Registered: Sep 2009
Location: Raleigh, NC
Distribution: Ubuntu, PopOS, Raspbian
Posts: 1,899
Blog Entries: 36

Rep: Reputation: 477
Are you able to access a terminal in ESXi? I'm not sure what distro it's using as I've never used it. If you do have access to a terminal and have the smartmontools package installed (namely smartctl), we can check the drives directly.

Let's interrogate the drives and analyze your logs. Please post the last thousand entries in /var/log/messages, excluding some services which tend to be noisy if you have them set up.
Code:
grep -v 'dhcp\|named\|smbd' /var/log/messages | tail -n 1000
Interrogate a good drive and a bad drive using smartmontools, and post the output of both commands here.
Code:
# good drive (x is a placeholder)
smartctl -a /dev/sdx
# bad drive (y is a placeholder)
smartctl -a /dev/sdy
Hopefully you have access to those utilities. If not, maybe boot into a live OS such as Ubuntu and interrogate the drives that way.
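
Once you have that output, the first attributes I'd check are the reallocated and pending sector counters. A minimal sketch of how I'd pull them out (/dev/sdb is just a placeholder device):
Code:
# Placeholder device; substitute the drive you interrogated.
# Non-zero raw values on these attributes usually mean the disk is on its way out.
smartctl -A /dev/sdb | grep -Ei 'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable|UDMA_CRC_Error'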

SAM

Last edited by sag47; 05-05-2012 at 11:38 AM.
 
Old 05-05-2012, 11:43 AM   #3
Tarikc
Member
 
Registered: May 2009
Distribution: CentOS, RedHat, Ubuntu
Posts: 68

Original Poster
Rep: Reputation: 4
Hello SAM,

Thank you very much for your reply.

I've managed to get only the syslog from the system; smartctl is not available on it.

Here is the log "starting from the 1st of May:
-Part1-

Code:
2012-05-01T00:00:01Z crond[2705]: USER root pid 503990 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T00:01:01Z crond[2705]: USER root pid 504020 cmd /sbin/auto-backup.sh
2012-05-01T00:03:15Z cimslp: Found 18 profiles in namespace root/interop
2012-05-01T01:00:01Z crond[2705]: USER root pid 506300 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T01:01:01Z crond[2705]: USER root pid 506330 cmd /sbin/tmpwatch.py
2012-05-01T01:01:01Z crond[2705]: USER root pid 506331 cmd /sbin/auto-backup.sh
2012-05-01T01:01:01Z tmpwatch: scanning
2012-05-01T01:01:01Z tmpwatch: done: removed 0 files.
2012-05-01T02:00:01Z crond[2705]: USER root pid 508681 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T02:01:01Z crond[2705]: USER root pid 508713 cmd /sbin/auto-backup.sh
2012-05-01T03:00:01Z crond[2705]: USER root pid 511041 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T03:01:01Z crond[2705]: USER root pid 511074 cmd /sbin/auto-backup.sh
2012-05-01T03:03:01Z cimslp: Found 18 profiles in namespace root/interop
2012-05-01T04:00:01Z crond[2705]: USER root pid 513446 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T04:01:01Z crond[2705]: USER root pid 513454 cmd /sbin/auto-backup.sh
2012-05-01T05:00:01Z crond[2705]: USER root pid 515718 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T05:01:01Z crond[2705]: USER root pid 515783 cmd /sbin/auto-backup.sh
2012-05-01T06:00:01Z crond[2705]: USER root pid 515992 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T06:01:01Z crond[2705]: USER root pid 518111 cmd /sbin/auto-backup.sh
2012-05-01T06:02:47Z cimslp: Found 18 profiles in namespace root/interop
2012-05-01T07:00:01Z crond[2705]: USER root pid 520442 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T07:01:01Z crond[2705]: USER root pid 520507 cmd /sbin/auto-backup.sh
2012-05-01T08:00:01Z crond[2705]: USER root pid 522734 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T08:01:01Z crond[2705]: USER root pid 522762 cmd /sbin/auto-backup.sh
2012-05-01T09:00:01Z crond[2705]: USER root pid 525136 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T09:01:01Z crond[2705]: USER root pid 525183 cmd /sbin/auto-backup.sh
2012-05-01T09:02:32Z cimslp: Found 18 profiles in namespace root/interop
2012-05-01T10:00:01Z crond[2705]: USER root pid 527496 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T10:01:01Z crond[2705]: USER root pid 527535 cmd /sbin/auto-backup.sh
2012-05-01T11:00:01Z crond[2705]: USER root pid 529912 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T11:01:01Z crond[2705]: USER root pid 529942 cmd /sbin/auto-backup.sh
2012-05-01T12:00:01Z crond[2705]: USER root pid 532209 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T12:01:01Z crond[2705]: USER root pid 532239 cmd /sbin/auto-backup.sh
2012-05-01T12:02:18Z cimslp: Found 18 profiles in namespace root/interop
2012-05-01T13:00:01Z crond[2705]: USER root pid 534591 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T13:01:01Z crond[2705]: USER root pid 534634 cmd /sbin/auto-backup.sh
2012-05-01T14:00:01Z crond[2705]: USER root pid 536931 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T14:01:01Z crond[2705]: USER root pid 536961 cmd /sbin/auto-backup.sh
2012-05-01T15:00:01Z crond[2705]: USER root pid 539294 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T15:01:01Z crond[2705]: USER root pid 539300 cmd /sbin/auto-backup.sh
2012-05-01T15:02:04Z cimslp: Found 18 profiles in namespace root/interop
2012-05-01T16:00:01Z crond[2705]: USER root pid 541647 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T16:01:01Z crond[2705]: USER root pid 541678 cmd /sbin/auto-backup.sh
2012-05-01T17:00:01Z crond[2705]: USER root pid 543992 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T17:01:01Z crond[2705]: USER root pid 544023 cmd /sbin/auto-backup.sh
2012-05-01T18:00:01Z crond[2705]: USER root pid 546386 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T18:01:01Z crond[2705]: USER root pid 546416 cmd /sbin/auto-backup.sh
2012-05-01T18:01:50Z cimslp: Found 18 profiles in namespace root/interop
2012-05-01T19:00:01Z crond[2705]: USER root pid 548684 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T19:01:01Z crond[2705]: USER root pid 548720 cmd /sbin/auto-backup.sh
2012-05-01T20:00:01Z crond[2705]: USER root pid 551083 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T20:01:01Z crond[2705]: USER root pid 551113 cmd /sbin/auto-backup.sh
2012-05-01T21:00:01Z crond[2705]: USER root pid 553399 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T21:01:01Z crond[2705]: USER root pid 553429 cmd /sbin/auto-backup.sh
2012-05-01T21:01:36Z cimslp: Found 18 profiles in namespace root/interop
2012-05-01T22:00:01Z crond[2705]: USER root pid 555786 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T22:01:01Z crond[2705]: USER root pid 555818 cmd /sbin/auto-backup.sh
2012-05-01T23:00:01Z crond[2705]: USER root pid 558120 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-01T23:01:01Z crond[2705]: USER root pid 558154 cmd /sbin/auto-backup.sh
2012-05-02T00:00:01Z crond[2705]: USER root pid 560518 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T00:01:01Z crond[2705]: USER root pid 560547 cmd /sbin/auto-backup.sh
2012-05-02T00:01:22Z cimslp: Found 18 profiles in namespace root/interop
2012-05-02T01:00:01Z crond[2705]: USER root pid 562860 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T01:01:01Z crond[2705]: USER root pid 562888 cmd /sbin/tmpwatch.py
2012-05-02T01:01:01Z crond[2705]: USER root pid 562889 cmd /sbin/auto-backup.sh
2012-05-02T01:01:01Z tmpwatch: scanning
2012-05-02T01:01:01Z tmpwatch: done: removed 0 files.
2012-05-02T02:00:01Z crond[2705]: USER root pid 565159 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T02:01:01Z crond[2705]: USER root pid 565193 cmd /sbin/auto-backup.sh
2012-05-02T03:00:01Z crond[2705]: USER root pid 567550 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T03:01:01Z crond[2705]: USER root pid 567580 cmd /sbin/auto-backup.sh
2012-05-02T03:01:07Z cimslp: Found 18 profiles in namespace root/interop
2012-05-02T04:00:01Z crond[2705]: USER root pid 569875 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T04:01:01Z crond[2705]: USER root pid 569905 cmd /sbin/auto-backup.sh
2012-05-02T05:00:01Z crond[2705]: USER root pid 572277 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T05:01:01Z crond[2705]: USER root pid 572317 cmd /sbin/auto-backup.sh
2012-05-02T06:00:01Z crond[2705]: USER root pid 574590 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T06:00:53Z cimslp: Found 18 profiles in namespace root/interop
2012-05-02T06:01:01Z crond[2705]: USER root pid 574625 cmd /sbin/auto-backup.sh
2012-05-02T07:00:01Z crond[2705]: USER root pid 577001 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T07:01:01Z crond[2705]: USER root pid 577031 cmd /sbin/auto-backup.sh
2012-05-02T08:00:01Z crond[2705]: USER root pid 577278 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T08:01:01Z crond[2705]: USER root pid 579356 cmd /sbin/auto-backup.sh
2012-05-02T09:00:01Z crond[2705]: USER root pid 581707 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T09:00:39Z cimslp: Found 18 profiles in namespace root/interop
2012-05-02T09:01:01Z crond[2705]: USER root pid 581752 cmd /sbin/auto-backup.sh
2012-05-02T10:00:01Z crond[2705]: USER root pid 584025 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T10:01:01Z crond[2705]: USER root pid 584054 cmd /sbin/auto-backup.sh
2012-05-02T11:00:01Z crond[2705]: USER root pid 586391 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T11:01:01Z crond[2705]: USER root pid 586420 cmd /sbin/auto-backup.sh
2012-05-02T12:00:01Z crond[2705]: USER root pid 588778 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T12:00:25Z cimslp: Found 18 profiles in namespace root/interop
2012-05-02T12:01:01Z crond[2705]: USER root pid 588815 cmd /sbin/auto-backup.sh
2012-05-02T13:00:01Z crond[2705]: USER root pid 591096 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T13:01:01Z crond[2705]: USER root pid 591129 cmd /sbin/auto-backup.sh
2012-05-02T14:00:01Z crond[2705]: USER root pid 593490 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T14:01:01Z crond[2705]: USER root pid 593520 cmd /sbin/auto-backup.sh
2012-05-02T15:00:01Z crond[2705]: USER root pid 595813 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T15:00:11Z cimslp: Found 18 profiles in namespace root/interop
2012-05-02T15:01:01Z crond[2705]: USER root pid 595850 cmd /sbin/auto-backup.sh
2012-05-02T16:00:01Z crond[2705]: USER root pid 598220 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T16:01:01Z crond[2705]: USER root pid 598250 cmd /sbin/auto-backup.sh
2012-05-02T17:00:01Z crond[2705]: USER root pid 600475 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T17:01:01Z crond[2705]: USER root pid 600542 cmd /sbin/auto-backup.sh
2012-05-02T17:59:57Z cimslp: Found 18 profiles in namespace root/interop
2012-05-02T18:00:01Z crond[2705]: USER root pid 602851 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T18:01:01Z crond[2705]: USER root pid 602929 cmd /sbin/auto-backup.sh
2012-05-02T19:00:01Z crond[2705]: USER root pid 605218 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T19:01:01Z crond[2705]: USER root pid 605287 cmd /sbin/auto-backup.sh
2012-05-02T20:00:01Z crond[2705]: USER root pid 607617 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T20:01:01Z crond[2705]: USER root pid 607689 cmd /sbin/auto-backup.sh
2012-05-02T20:59:42Z cimslp: Found 18 profiles in namespace root/interop
2012-05-02T21:00:01Z crond[2705]: USER root pid 609925 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T21:01:01Z crond[2705]: USER root pid 609954 cmd /sbin/auto-backup.sh
2012-05-02T22:00:01Z crond[2705]: USER root pid 612252 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T22:01:01Z crond[2705]: USER root pid 612282 cmd /sbin/auto-backup.sh
2012-05-02T23:00:01Z crond[2705]: USER root pid 614647 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-02T23:01:01Z crond[2705]: USER root pid 614677 cmd /sbin/auto-backup.sh
2012-05-02T23:59:28Z cimslp: Found 18 profiles in namespace root/interop
2012-05-03T00:00:01Z crond[2705]: USER root pid 616941 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T00:01:01Z crond[2705]: USER root pid 616972 cmd /sbin/auto-backup.sh
2012-05-03T01:00:01Z crond[2705]: USER root pid 619326 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T01:01:01Z crond[2705]: USER root pid 619359 cmd /sbin/tmpwatch.py
2012-05-03T01:01:01Z crond[2705]: USER root pid 619375 cmd /sbin/auto-backup.sh
2012-05-03T01:01:01Z tmpwatch: scanning
2012-05-03T01:01:01Z tmpwatch: done: removed 0 files.
2012-05-03T02:00:01Z crond[2705]: USER root pid 621686 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T02:01:01Z crond[2705]: USER root pid 621716 cmd /sbin/auto-backup.sh
2012-05-03T02:59:14Z cimslp: Found 18 profiles in namespace root/interop
2012-05-03T03:00:01Z crond[2705]: USER root pid 624094 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T03:01:01Z crond[2705]: USER root pid 624124 cmd /sbin/auto-backup.sh
2012-05-03T04:00:01Z crond[2705]: USER root pid 626389 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T04:01:01Z crond[2705]: USER root pid 626417 cmd /sbin/auto-backup.sh
2012-05-03T05:00:01Z crond[2705]: USER root pid 628719 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T05:01:01Z crond[2705]: USER root pid 628792 cmd /sbin/auto-backup.sh
2012-05-03T05:59:00Z cimslp: Found 18 profiles in namespace root/interop
2012-05-03T06:00:01Z crond[2705]: USER root pid 631115 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T06:01:01Z crond[2705]: USER root pid 631143 cmd /sbin/auto-backup.sh
2012-05-03T06:32:10Z ImageConfigManager: [2012-05-03 06:32:10,686 BootBankInstaller.pyc INFO] Unrecognized value "title=Loading VMware ESXi" in boot.cfg

2012-05-03T06:32:10Z ImageConfigManager: [2012-05-03 06:32:10,788 BootBankInstaller.pyc INFO] Unrecognized value "title=Loading VMware ESXi" in boot.cfg

2012-05-03T06:32:10Z ImageConfigManager: [2012-05-03 06:32:10,876 vmware.runcommand INFO] runcommand called with: args = '['/sbin/bootOption', '-rp']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.

2012-05-03T06:32:10Z ImageConfigManager: [2012-05-03 06:32:10,986 root     DEBUG] <?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
 xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
 xmlns:xsd="http://www.w3.org/2001/XMLSchema"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<soapenv:Header>
<taskKey xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:vim25" versionId="dev2" xsi:type="string">haTask--vim.host.ImageConfigManager.queryHostImageProfile-132358227</taskKey>
</soapenv:Header>
<soapenv:Body>
<HostImageConfigGetProfile xmlns="urn:vim25"><_this type="HostImageConfigManager">ha-image-config-manager</_this></HostImageConfigGetProfile>
</soapenv:Body>
</soapenv:Envelope>

2012-05-03T06:32:10Z ImageConfigManager: [2012-05-03 06:32:10,991 vmware.runcommand INFO] runcommand called with: args = '['/sbin/esxcfg-advcfg', '-U', 'host-acceptance-level', '-G']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.

2012-05-03T06:32:11Z ImageConfigManager: [2012-05-03 06:32:11,028 imageprofile INFO] Adding VIB VMware_locker_tools-light_5.0.0-0.3.474610 to ImageProfile ESXi-5.0.0-20111104001-standard

2012-05-03T06:32:11Z ImageConfigManager: [2012-05-03 06:32:11,028 root     DEBUG] <?xml version="1.0" encoding="UTF-8"?><soapenv:Envelope xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<soapenv:Body><HostImageConfigGetProfileResponse xmlns='urn:vim25'><returnval><name>ESXi-5.0.0-20111104001-standard</name><vendor>VMware, Inc.</vendor></returnval></HostImageConfigGetProfileResponse></soapenv:Body></soapenv:Envelope>

2012-05-03T07:00:01Z crond[2705]: USER root pid 633493 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T07:01:01Z crond[2705]: USER root pid 633524 cmd /sbin/auto-backup.sh
2012-05-03T07:04:28Z ImageConfigManager: [2012-05-03 07:04:28,102 BootBankInstaller.pyc INFO] Unrecognized value "title=Loading VMware ESXi" in boot.cfg

2012-05-03T07:04:28Z ImageConfigManager: [2012-05-03 07:04:28,190 BootBankInstaller.pyc INFO] Unrecognized value "title=Loading VMware ESXi" in boot.cfg

2012-05-03T07:04:28Z ImageConfigManager: [2012-05-03 07:04:28,277 vmware.runcommand INFO] runcommand called with: args = '['/sbin/bootOption', '-rp']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.

2012-05-03T07:04:28Z ImageConfigManager: [2012-05-03 07:04:28,295 root     DEBUG] <?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
 xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
 xmlns:xsd="http://www.w3.org/2001/XMLSchema"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<soapenv:Header>
<taskKey xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:vim25" versionId="dev2" xsi:type="string">haTask--vim.host.ImageConfigManager.queryHostAcceptanceLevel-132358299</taskKey>
</soapenv:Header>
<soapenv:Body>
<HostImageConfigGetAcceptance xmlns="urn:vim25"><_this type="HostImageConfigManager">ha-image-config-manager</_this></HostImageConfigGetAcceptance>
</soapenv:Body>
</soapenv:Envelope>

2012-05-03T07:04:28Z ImageConfigManager: [2012-05-03 07:04:28,296 vmware.runcommand INFO] runcommand called with: args = '['/sbin/esxcfg-advcfg', '-U', 'host-acceptance-level', '-G']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.

2012-05-03T07:04:28Z ImageConfigManager: [2012-05-03 07:04:28,435 root     DEBUG] <?xml version="1.0" encoding="UTF-8"?><soapenv:Envelope xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<soapenv:Body><HostImageConfigGetAcceptanceResponse xmlns='urn:vim25'><returnval>partner</returnval></HostImageConfigGetAcceptanceResponse></soapenv:Body></soapenv:Envelope>

2012-05-03T08:00:01Z crond[2705]: USER root pid 635868 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T08:01:01Z crond[2705]: USER root pid 635900 cmd /sbin/auto-backup.sh
2012-05-03T08:58:46Z cimslp: Found 18 profiles in namespace root/interop
2012-05-03T08:58:46Z cimslp: Callback Code -7
2012-05-03T08:58:46Z cimslp: Error registering service with slp 0
2012-05-03T08:58:46Z cimslp: Error registering service with slp -7
2012-05-03T08:58:46Z cimslp: Error registering https with slpd ... I will try again next interval.
2012-05-03T09:00:01Z crond[2705]: USER root pid 638211 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T09:01:01Z crond[2705]: USER root pid 638300 cmd /sbin/auto-backup.sh
2012-05-03T10:00:01Z crond[2705]: USER root pid 640605 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T10:01:01Z crond[2705]: USER root pid 640635 cmd /sbin/auto-backup.sh
2012-05-03T11:00:01Z crond[2705]: USER root pid 642895 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T11:01:01Z crond[2705]: USER root pid 642931 cmd /sbin/auto-backup.sh
2012-05-03T11:58:32Z cimslp: Found 18 profiles in namespace root/interop
2012-05-03T12:00:01Z crond[2705]: USER root pid 645300 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T12:01:01Z crond[2705]: USER root pid 645330 cmd /sbin/auto-backup.sh
2012-05-03T13:00:01Z crond[2705]: USER root pid 647616 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T13:01:01Z crond[2705]: USER root pid 647651 cmd /sbin/auto-backup.sh
2012-05-03T14:00:01Z crond[2705]: USER root pid 650002 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T14:01:01Z crond[2705]: USER root pid 650031 cmd /sbin/auto-backup.sh
2012-05-03T14:58:17Z cimslp: Found 18 profiles in namespace root/interop
2012-05-03T15:00:01Z crond[2705]: USER root pid 652338 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T15:01:01Z crond[2705]: USER root pid 652370 cmd /sbin/auto-backup.sh
2012-05-03T16:00:01Z crond[2705]: USER root pid 654736 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T16:01:01Z crond[2705]: USER root pid 654766 cmd /sbin/auto-backup.sh
2012-05-03T17:00:01Z crond[2705]: USER root pid 657064 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T17:01:01Z crond[2705]: USER root pid 657094 cmd /sbin/auto-backup.sh
2012-05-03T17:58:03Z cimslp: Found 18 profiles in namespace root/interop
2012-05-03T18:00:01Z crond[2705]: USER root pid 659397 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T18:01:01Z crond[2705]: USER root pid 659427 cmd /sbin/auto-backup.sh
2012-05-03T19:00:01Z crond[2705]: USER root pid 661758 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T19:01:01Z crond[2705]: USER root pid 661788 cmd /sbin/auto-backup.sh
2012-05-03T20:00:01Z crond[2705]: USER root pid 664092 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T20:01:01Z crond[2705]: USER root pid 662074 cmd /sbin/auto-backup.sh
2012-05-03T20:57:49Z cimslp: Found 18 profiles in namespace root/interop
2012-05-03T21:00:01Z crond[2705]: USER root pid 666510 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T21:01:01Z crond[2705]: USER root pid 666545 cmd /sbin/auto-backup.sh
2012-05-03T22:00:01Z crond[2705]: USER root pid 668813 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T22:01:01Z crond[2705]: USER root pid 668843 cmd /sbin/auto-backup.sh
2012-05-03T23:00:01Z crond[2705]: USER root pid 671214 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-03T23:01:01Z crond[2705]: USER root pid 671242 cmd /sbin/auto-backup.sh
2012-05-03T23:57:35Z cimslp: Found 18 profiles in namespace root/interop
 
Old 05-05-2012, 11:44 AM   #4
Tarikc
Member
 
Registered: May 2009
Distribution: CentOS, RedHat, Ubuntu
Posts: 68

Original Poster
Rep: Reputation: 4
-Part2-

Code:
2012-05-05T00:00:01Z crond[2705]: USER root pid 730037 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-05T00:01:01Z crond[2705]: USER root pid 730067 cmd /sbin/auto-backup.sh
2012-05-05T01:00:01Z crond[2705]: USER root pid 732369 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-05T01:01:01Z crond[2705]: USER root pid 732413 cmd /sbin/tmpwatch.py
2012-05-05T01:01:01Z crond[2705]: USER root pid 732414 cmd /sbin/auto-backup.sh
2012-05-05T01:01:01Z tmpwatch: scanning
2012-05-05T01:01:01Z tmpwatch: done: removed 0 files.
2012-05-05T02:00:01Z crond[2705]: USER root pid 734779 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-05T02:01:01Z crond[2705]: USER root pid 734809 cmd /sbin/auto-backup.sh
2012-05-05T02:55:27Z cimslp: Found 18 profiles in namespace root/interop
2012-05-05T03:00:01Z crond[2705]: USER root pid 737075 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-05T03:01:01Z crond[2705]: USER root pid 737103 cmd /sbin/auto-backup.sh
2012-05-05T04:00:01Z crond[2705]: USER root pid 739474 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-05T04:01:01Z crond[2705]: USER root pid 737454 cmd /sbin/auto-backup.sh
2012-05-05T05:00:01Z crond[2705]: USER root pid 741790 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-05T05:01:01Z crond[2705]: USER root pid 741818 cmd /sbin/auto-backup.sh
2012-05-05T05:55:13Z cimslp: Found 18 profiles in namespace root/interop
2012-05-05T06:00:01Z crond[2705]: USER root pid 744169 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-05T06:01:01Z crond[2705]: USER root pid 744204 cmd /sbin/auto-backup.sh
2012-05-05T07:00:01Z crond[2705]: USER root pid 746507 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-05T07:01:01Z crond[2705]: USER root pid 746537 cmd /sbin/auto-backup.sh
2012-05-05T08:00:01Z crond[2705]: USER root pid 748900 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-05T08:01:01Z crond[2705]: USER root pid 748936 cmd /sbin/auto-backup.sh
2012-05-05T08:04:10Z ImageConfigManager: [2012-05-05 08:04:10,305 BootBankInstaller.pyc INFO] Unrecognized value "title=Loading VMware ESXi" in boot.cfg

2012-05-05T08:04:10Z ImageConfigManager: [2012-05-05 08:04:10,430 BootBankInstaller.pyc INFO] Unrecognized value "title=Loading VMware ESXi" in boot.cfg

2012-05-05T08:04:10Z ImageConfigManager: [2012-05-05 08:04:10,517 vmware.runcommand INFO] runcommand called with: args = '['/sbin/bootOption', '-rp']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.

2012-05-05T08:04:10Z ImageConfigManager: [2012-05-05 08:04:10,579 root     DEBUG] <?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/"
 xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
 xmlns:xsd="http://www.w3.org/2001/XMLSchema"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<soapenv:Header>
<taskKey xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:vim25" versionId="dev2" xsi:type="string">haTask--vim.host.ImageConfigManager.queryHostImageProfile-132358941</taskKey>
</soapenv:Header>
<soapenv:Body>
<HostImageConfigGetProfile xmlns="urn:vim25"><_this type="HostImageConfigManager">ha-image-config-manager</_this></HostImageConfigGetProfile>
</soapenv:Body>
</soapenv:Envelope>

2012-05-05T08:04:10Z ImageConfigManager: [2012-05-05 08:04:10,584 vmware.runcommand INFO] runcommand called with: args = '['/sbin/esxcfg-advcfg', '-U', 'host-acceptance-level', '-G']', outfile = 'None', returnoutput = 'True', timeout = '0.0'.

2012-05-05T08:04:10Z ImageConfigManager: [2012-05-05 08:04:10,621 imageprofile INFO] Adding VIB VMware_locker_tools-light_5.0.0-0.3.474610 to ImageProfile ESXi-5.0.0-20111104001-standard

2012-05-05T08:04:10Z ImageConfigManager: [2012-05-05 08:04:10,621 root     DEBUG] <?xml version="1.0" encoding="UTF-8"?><soapenv:Envelope xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<soapenv:Body><HostImageConfigGetProfileResponse xmlns='urn:vim25'><returnval><name>ESXi-5.0.0-20111104001-standard</name><vendor>VMware, Inc.</vendor></returnval></HostImageConfigGetProfileResponse></soapenv:Body></soapenv:Envelope>

2012-05-05T08:54:59Z cimslp: Found 18 profiles in namespace root/interop
2012-05-05T09:00:01Z crond[2705]: USER root pid 751317 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-05T09:01:01Z crond[2705]: USER root pid 751347 cmd /sbin/auto-backup.sh
2012-05-05T10:00:01Z crond[2705]: USER root pid 753645 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-05T10:01:01Z crond[2705]: USER root pid 753732 cmd /sbin/auto-backup.sh
2012-05-05T11:00:01Z crond[2705]: USER root pid 753998 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-05T11:01:01Z crond[2705]: USER root pid 756076 cmd /sbin/auto-backup.sh
2012-05-05T11:17:17Z DCUI: pam_per_user: create_subrequest_handle(): doing map lookup for user "root"
2012-05-05T11:17:17Z DCUI: pam_per_user: create_subrequest_handle(): creating new subrequest (user="root", service="system-auth-generic")
2012-05-05T11:17:17Z DCUI: Authentication of user root succeeded
2012-05-05T11:17:17Z DCUI: User root logged in
2012-05-05T11:17:22Z DCUI: Rebooting host
2012-05-05T11:17:22Z DCUI: Initializing vobuser library
2012-05-05T11:17:37Z sfcb-vmware_base[3360]: HTTP Request Error. Code: 7 Message: couldn't connect to host
2012-05-05T11:17:37Z sfcb-vmware_base[3360]: VICimProvider exiting on Hostd exit!!
2012-05-05T11:17:37Z watchdog-hostd: 'hostd ++min=0,swap,group=hostd /etc/vmware/hostd/config.xml' exited after 1151895 seconds 0
2012-05-05T11:17:37Z watchdog-hostd: Executing 'hostd ++min=0,swap,group=hostd /etc/vmware/hostd/config.xml'
2012-05-05T11:17:37Z init: starting pid 756863, tty '': '/usr/lib/vmware/vmksummary/log-bootstop.sh stop'
2012-05-05T11:17:37Z init: starting pid 754820, tty '': '/sbin/shutdown.sh'
2012-05-05T11:17:37Z VMware[shutdown]:  Stopping VMs
2012-05-05T11:17:37Z root: init Running wsman stop
2012-05-05T11:17:38Z root: openwsmand Stopping openwsmand
2012-05-05T11:17:38Z watchdog-openwsmand: Watchdog for openwsmand is now 3128
2012-05-05T11:17:38Z watchdog-openwsmand: Terminating watchdog process with PID 3128
2012-05-05T11:17:38Z watchdog-openwsmand: [3128] Signal received: exiting the watchdog
2012-05-05T11:17:39Z root: init Running sfcbd stop
2012-05-05T11:17:39Z root: init Running sfcbd-watchdog stop
2012-05-05T11:17:39Z sfcbd-watchdog: Terminating watchdog process with PID 3072
2012-05-05T11:17:39Z sfcbd-watchdog: stopping sfcbd pid
2012-05-05T11:17:39Z sfcbd: Stopping sfcbd
2012-05-05T11:17:39Z root: init Running usbarbitrator stop
2012-05-05T11:17:39Z watchdog-usbarbitrator: Watchdog for usbarbitrator is now 3033
2012-05-05T11:17:39Z watchdog-usbarbitrator: Terminating watchdog process with PID 3033
2012-05-05T11:17:39Z watchdog-usbarbitrator: [3033] Signal received: exiting the watchdog
2012-05-05T11:17:40Z root: init Running vpxa stop
2012-05-05T11:17:40Z watchdog-vpxa: Watchdog for vpxa is now 2939
2012-05-05T11:17:40Z watchdog-vpxa: Terminating watchdog process with PID 2939
2012-05-05T11:17:40Z watchdog-vpxa: [2939] Signal received: exiting the watchdog
2012-05-05T11:17:40Z Unknown: [truncated Japanese VMware event text, roughly: "...this period can be lengthened by lowering the monitoring sensitivity in the VM monitoring section. The application heartbeat turned red; this can happen when an application configured to send heartbeats fails or stops responding. Check whether the application stopped sending heartbeats because of a configuration error and correct the problem."]
2012-05-05T11:17:41Z root: init Running vobd stop
2012-05-05T11:17:42Z watchdog-vobd: Watchdog for vobd is now 2905
2012-05-05T11:17:42Z watchdog-vobd: Terminating watchdog process with PID 2905
2012-05-05T11:17:42Z watchdog-vobd: [2905] Signal received: exiting the watchdog
2012-05-05T11:17:43Z root: init Running cdp stop
2012-05-05T11:17:43Z watchdog-cdp: Watchdog for cdp is now 2886
2012-05-05T11:17:43Z watchdog-cdp: Terminating watchdog process with PID 2886
2012-05-05T11:17:43Z watchdog-cdp: [2886] Signal received: exiting the watchdog
2012-05-05T11:17:44Z root: init Running dcbd stop
2012-05-05T11:17:44Z watchdog-dcbd: Watchdog for dcbd is now 2866
2012-05-05T11:17:44Z watchdog-dcbd: Terminating watchdog process with PID 2866
2012-05-05T11:17:44Z watchdog-dcbd: [2866] Signal received: exiting the watchdog
2012-05-05T11:17:45Z root: init Running memscrubd stop
2012-05-05T11:17:45Z root: init Running slpd stop
2012-05-05T11:17:45Z root: slpd Stopping slpd
2012-05-05T11:17:45Z root: init Running lbtd stop
2012-05-05T11:17:45Z watchdog-net-lbt: Watchdog for net-lbt is now 2824
2012-05-05T11:17:45Z watchdog-net-lbt: Terminating watchdog process with PID 2824
2012-05-05T11:17:45Z watchdog-net-lbt: [2824] Signal received: exiting the watchdog
2012-05-05T11:17:46Z root: init Running storageRM stop
2012-05-05T11:17:46Z watchdog-storageRM: Watchdog for storageRM is now 2804
2012-05-05T11:17:46Z watchdog-storageRM: Terminating watchdog process with PID 2804
2012-05-05T11:17:46Z watchdog-storageRM: [2804] Signal received: exiting the watchdog
2012-05-05T11:17:47Z root: init Running vprobed stop
2012-05-05T11:17:47Z watchdog-vprobed: Watchdog for vprobed is now 2783
2012-05-05T11:17:47Z watchdog-vprobed: Terminating watchdog process with PID 2783
2012-05-05T11:17:47Z watchdog-vprobed: [2783] Signal received: exiting the watchdog
2012-05-05T11:17:48Z root: init Running hostd stop
2012-05-05T11:17:48Z watchdog-hostd: Watchdog for hostd is now 2761
2012-05-05T11:17:48Z watchdog-hostd: Terminating watchdog process with PID 2761
2012-05-05T11:17:48Z watchdog-hostd: [2761] Signal received: exiting the watchdog
2012-05-05T11:17:49Z root: init Running sensord stop
2012-05-05T11:17:49Z watchdog-sensord: Watchdog for sensord is now 2739
2012-05-05T11:17:49Z watchdog-sensord: Terminating watchdog process with PID 2739
2012-05-05T11:17:49Z watchdog-sensord: [2739] Signal received: exiting the watchdog
2012-05-05T11:17:50Z root: init Running SSH stop
2012-05-05T11:17:50Z inetd[2704]: authd/tcp6: socket: Address family not supported by protocol
2012-05-05T11:17:50Z doat: Stopped wait on component RemoteShell.disable
2012-05-05T11:17:50Z doat: Stopped wait on component RemoteShell.stop
2012-05-05T11:17:50Z root: init Running DCUI stop
2012-05-05T11:17:50Z root: DCUI Disabling DCUI logins
2012-05-05T11:21:39Z jumpstart: unable to create session: Operation not permitted
2012-05-05T11:21:39Z jumpstart: dependencies for plugin 'restore-host-cache' not met (missing: vcfs)
2012-05-05T11:21:39Z vmkmicrocode: Warning: Line size is greater than expected size 242
2012-05-05T11:21:39Z vmkmicrocode: File microcode_amd_0x100fa0.bin does not contain a valid microcode update for any of the processors
2012-05-05T11:21:39Z vmkmicrocode: File microcode-1027.dat does not contain a valid microcode update for any of the processors
2012-05-05T11:21:39Z vmkmicrocode: File m4010676860C0001.dat does not contain a valid microcode update for any of the processors
2012-05-05T11:21:39Z vmkmicrocode: File m03106a5.dat does not contain a valid microcode update for any of the processors
2012-05-05T11:21:39Z vmkmicrocode: Warning: Directory /etc/vmware/microcode/ does not contain a valid microcode update for this processor
2012-05-05T11:21:50Z vmkdevmgr: DriverMap: 60 maps loaded. 734 entries.
2012-05-05T11:21:58Z jumpstart: StorageInfo: Unable to name LUN mpx.vmhba0:C0:T0:L0. Cannot set display name on this device.  Unable to guarantee name will not change across reboots or media change.
2012-05-05T11:21:59Z jumpstart: VmKernelNicInfo::LoadConfig: Storing previous management interface:'vmk0'
2012-05-05T11:21:59Z jumpstart: IpSecConfig: Ipv6 not Enabled
2012-05-05T11:21:59Z jumpstart: No iBFT data present in the BIOS
2012-05-05T11:21:59Z iscsid: Notice: iSCSI Database already at latest schema. (Upgrade Skipped).
2012-05-05T11:21:59Z iscsid: iSCSI MASTER Database opened. (0xffa4d008)
2012-05-05T11:21:59Z iscsid: LogLevel = 0
2012-05-05T11:21:59Z iscsid: LogSync  = 0
2012-05-05T11:22:00Z iscsid: DISCOVERY: transport_name=bnx2i-441ea13a8482 Pending=0 Failed=0
2012-05-05T11:22:00Z iscsid: DISCOVERY: transport_name=bnx2i-441ea13a8482 Pending=0 Failed=0
2012-05-05T11:22:00Z iscsid: DISCOVERY: transport_name=bnx2i-441ea13a8484 Pending=0 Failed=0
2012-05-05T11:22:00Z iscsid: DISCOVERY: transport_name=bnx2i-441ea13a8484 Pending=0 Failed=0
2012-05-05T11:22:00Z iscsid: DISCOVERY: transport_name=bnx2i-441ea13a8486 Pending=0 Failed=0
2012-05-05T11:22:00Z iscsid: DISCOVERY: transport_name=bnx2i-441ea13a8486 Pending=0 Failed=0
2012-05-05T11:22:00Z iscsid: DISCOVERY: transport_name=bnx2i-441ea13a8488 Pending=0 Failed=0
2012-05-05T11:22:00Z iscsid: DISCOVERY: transport_name=bnx2i-441ea13a8488 Pending=0 Failed=0
2012-05-05T11:22:00Z jumpstart: StorageInfo: Unable to name LUN mpx.vmhba0:C0:T0:L0. Cannot set display name on this device.  Unable to guarantee name will not change across reboots or media change.
2012-05-05T11:22:01Z jumpstart: Using policy dir /etc/vmware/secpolicy
2012-05-05T11:22:01Z jumpstart: Parsed all objects
2012-05-05T11:22:01Z jumpstart: Objects defined and obsolete objects removed
2012-05-05T11:22:01Z jumpstart: Parsed all domain names
2012-05-05T11:22:01Z jumpstart: Domains defined and obsolete domains removed
2012-05-05T11:22:01Z jumpstart: Domain policies parsed and syntax validated
2012-05-05T11:22:01Z jumpstart: Constraints check for domain policies succeeded
2012-05-05T11:22:01Z jumpstart: Domain policies set
2012-05-05T11:22:01Z jumpstart: Parsed all the tardisk policy files
2012-05-05T11:22:01Z jumpstart: Set all the tardisk labels and policy
2012-05-05T11:22:01Z jumpstart: Parsed all file label mappings
2012-05-05T11:22:01Z jumpstart: Set all file labels
2012-05-05T11:22:01Z jumpstart: System security policy has been set successfully
2012-05-05T11:22:01Z jumpstart: using /vmfs/volumes/4f54f0e4-bec50480-5d55-441ea13a8484 as /scratch
2012-05-05T11:22:01Z jumpstart: Using /locker/packages/5.0.0/ as /productLocker
2012-05-05T11:22:01Z jumpstart: using /store as /locker
2012-05-05T11:22:01Z jumpstart: current bootstate is BOOT_STATE_VALID
2012-05-05T11:22:01Z inetd[2702]: authd/tcp6: socket: Address family not supported by protocol
2012-05-05T11:22:01Z crond[2703]: crond 1.9.1-VMware-visor-6030 started, log level 8
2012-05-05T11:22:01Z init: starting pid 2704, tty '': '/sbin/services.sh start'
2012-05-05T11:22:01Z root: init Running DCUI start
2012-05-05T11:22:01Z root: DCUI Enabling DCUI login: runlevel =
2012-05-05T11:22:01Z root: init Running SSH start
2012-05-05T11:22:02Z inetd[2702]: ssh/tcp6: socket: Address family not supported by protocol
2012-05-05T11:22:02Z inetd[2702]: authd/tcp6: socket: Address family not supported by protocol
2012-05-05T11:22:02Z root: init Running sensord start
2012-05-05T11:22:02Z watchdog-sensord: [2737] Begin '/usr/lib/vmware/bin/sensord ++min=0,max=10', min-uptime = 60, max-quick-failures = 1, max-total-failures = 1000000, bg_pid_file = ''
2012-05-05T11:22:02Z watchdog-sensord: Executing '/usr/lib/vmware/bin/sensord ++min=0,max=10'
2012-05-05T11:22:03Z root: init Running hostd start
2012-05-05T11:22:03Z watchdog-hostd: [2759] Begin 'hostd ++min=0,swap,group=hostd /etc/vmware/hostd/config.xml', min-uptime = 60, max-quick-failures = 1, max-total-failures = 1000000, bg_pid_file = ''
2012-05-05T11:22:03Z watchdog-hostd: Executing 'hostd ++min=0,swap,group=hostd /etc/vmware/hostd/config.xml'
2012-05-05T11:22:04Z root: init Running vprobed start
2012-05-05T11:22:04Z watchdog-vprobed: [2813] Begin '/sbin/vprobed ++group=vprobed', min-uptime = 60, max-quick-failures = 1, max-total-failures = 1000000, bg_pid_file = ''
2012-05-05T11:22:04Z watchdog-vprobed: Executing '/sbin/vprobed ++group=vprobed'
2012-05-05T11:22:05Z root: init Running storageRM start
2012-05-05T11:22:05Z watchdog-storageRM: [2871] Begin '/sbin/storageRM ++group=sioc', min-uptime = 60, max-quick-failures = 1, max-total-failures = 1000000, bg_pid_file = ''
2012-05-05T11:22:05Z watchdog-storageRM: Executing '/sbin/storageRM ++group=sioc'
2012-05-05T11:22:06Z root: init Running lbtd start
2012-05-05T11:22:06Z watchdog-net-lbt: [2897] Begin '/sbin/net-lbt ++min=0,swap,group=lbt', min-uptime = 1000, max-quick-failures = 100, max-total-failures = 100, bg_pid_file = ''
2012-05-05T11:22:06Z watchdog-net-lbt: Executing '/sbin/net-lbt ++min=0,swap,group=lbt'
2012-05-05T11:22:06Z Unknown: [truncated Japanese VMware event text, roughly: "...this period can be lengthened by lowering the monitoring sensitivity in the VM monitoring section. The application heartbeat turned red; this can happen when an application configured to send heartbeats fails or stops responding. Check whether the application stopped sending heartbeats because of a configuration error and correct the problem."]
2012-05-05T11:22:07Z root: init Running slpd start
2012-05-05T11:22:07Z root: slpd Starting slpd
2012-05-05T11:22:07Z root: init Running memscrubd start
2012-05-05T11:22:07Z root: init Running dcbd start
2012-05-05T11:22:07Z watchdog-dcbd: [2939] Begin '/usr/sbin/dcbd ++group=net-daemons', min-uptime = 60, max-quick-failures = 1, max-total-failures = 5, bg_pid_file = ''
2012-05-05T11:22:07Z watchdog-dcbd: Executing '/usr/sbin/dcbd ++group=net-daemons'
2012-05-05T11:22:07Z dcbd: [info]     Main loop running.
2012-05-05T11:22:08Z root: init Running cdp start
2012-05-05T11:22:08Z watchdog-cdp: [2959] Begin '/usr/sbin/net-cdp ++group=net-daemons', min-uptime = 60, max-quick-failures = 1, max-total-failures = 1000000, bg_pid_file = ''
2012-05-05T11:22:08Z watchdog-cdp: Executing '/usr/sbin/net-cdp ++group=net-daemons'
2012-05-05T11:22:09Z root: init Running vobd start
2012-05-05T11:22:09Z watchdog-vobd: [2984] Begin '/usr/lib/vmware/vob/bin/vobd ++min=0,max=100,group=uwdaemons', min-uptime = 60, max-quick-failures = 1, max-total-failures = 1000000, bg_pid_file = ''
2012-05-05T11:22:09Z watchdog-vobd: Executing '/usr/lib/vmware/vob/bin/vobd ++min=0,max=100,group=uwdaemons'
2012-05-05T11:22:10Z root: init Running vpxa start
2012-05-05T11:22:12Z watchdog-vpxa: [3022] Begin '/usr/lib/vmware/vpxa/bin/vpxa ++min=0,swap,group=vpxa -D /etc/vmware/vpxa', min-uptime = 60, max-quick-failures = 1, max-total-failures = 1000000, bg_pid_file = ''
2012-05-05T11:22:12Z watchdog-vpxa: Executing '/usr/lib/vmware/vpxa/bin/vpxa ++min=0,swap,group=vpxa -D /etc/vmware/vpxa'
2012-05-05T11:22:13Z root: init Running usbarbitrator start
2012-05-05T11:22:13Z usbarbitrator: evicting objects on USB from OC
2012-05-05T11:22:13Z usbarbitrator: unclaiming USB devices
2012-05-05T11:22:13Z usbarbitrator: rescanning to complete removal of USB devices
2012-05-05T11:22:14Z iscsid: DISCOVERY: transport_name=bnx2i-441ea13a8482 Pending=0 Failed=0
2012-05-05T11:22:14Z iscsid: DISCOVERY: transport_name=bnx2i-441ea13a8484 Pending=0 Failed=0
2012-05-05T11:22:14Z iscsid: DISCOVERY: transport_name=bnx2i-441ea13a8486 Pending=0 Failed=0
2012-05-05T11:22:14Z iscsid: DISCOVERY: transport_name=bnx2i-441ea13a8488 Pending=0 Failed=0
2012-05-05T11:22:14Z watchdog-usbarbitrator: [3082] Begin '/usr/lib/vmware/bin/vmware-usbarbitrator ++min=0,max=25,group=uwdaemons -t', min-uptime = 60, max-quick-failures = 1, max-total-failures = 5, bg_pid_file = ''
2012-05-05T11:22:14Z watchdog-usbarbitrator: Executing '/usr/lib/vmware/bin/vmware-usbarbitrator ++min=0,max=25,group=uwdaemons -t'
2012-05-05T11:22:15Z root: init Running sfcbd-watchdog start
2012-05-05T11:22:15Z root: init Running sfcbd start
2012-05-05T11:22:15Z sfcbd-watchdog: Watchdog active: interval 60 seconds, pid 3101
2012-05-05T11:22:15Z root: init Running wsman start
2012-05-05T11:22:15Z root: openwsmand Starting openwsmand
2012-05-05T11:22:15Z sfcbd-watchdog: starting sfcbd
2012-05-05T11:22:15Z watchdog-openwsmand: [3157] Begin '/sbin/openwsmand ++min=0,swap,group=wsman,securitydom=4 --syslog=3 --foreground-process', min-uptime = 60, max-quick-failures = 5, max-total-failures = 10, bg_pid_file = ''
2012-05-05T11:22:15Z sfcbd: Starting sfcbd
2012-05-05T11:22:15Z watchdog-openwsmand: Executing '/sbin/openwsmand ++min=0,swap,group=wsman,securitydom=4 --syslog=3 --foreground-process'
2012-05-05T11:22:15Z openwsmand: [wrn][3167:/build/mts/release/bora-434227/oss-cim/common/..//openwsman/2.2.1/src/server/wsmand.c:316:main] nsswitch.conf successfully stat'ed
2012-05-05T11:22:16Z sfcb-sfcb[3307]: --- Log syslog level: 3
2012-05-05T11:22:16Z init: starting pid 3320, tty '': '/sbin/apply-host-profiles'
2012-05-05T11:22:16Z init: starting pid 3321, tty '': '/usr/lib/vmware/vmksummary/log-bootstop.sh boot'
2012-05-05T11:22:16Z esxcfg-dumppart: DiagnosticPartition: Unable to copy the dump partition: Couldn't find a valid VMKernel dump file. Dump partition might be uninitialized.
2012-05-05T11:22:16Z init: starting pid 3326, tty '': '/sbin/vmdumper -g 'Boot Successful''
2012-05-05T11:22:16Z init: starting pid 3327, tty '': '/bin/sh ++min=0,swap,group=host/vim/vimuser/terminal/shell /etc/rc.local'
2012-05-05T11:22:16Z init: starting pid 3328, tty '': '/sbin/esxcfg-init --set-boot-progress done'
2012-05-05T11:22:16Z init: starting pid 3329, tty '': '/sbin/vmware-autostart.sh start'
2012-05-05T11:22:16Z init: starting pid 3332, tty '/dev/tty1': '/sbin/initterm.sh tty1 /sbin/techsupport.sh'
2012-05-05T11:22:16Z init: starting pid 3333, tty '/dev/tty2': '-/sbin/initterm.sh tty2 /sbin/dcuiweasel'
2012-05-05T11:22:16Z VMware[startup]:  Starting VMs
2012-05-05T11:22:16Z DCUI: Starting DCUI
2012-05-05T11:22:17Z DCUI: NotifyDCUI: Notifying the DCUI of configuration change
2012-05-05T11:22:17Z DCUI: NotifyDCUI: Skipping DCUI notify, since we are in DCUI
2012-05-05T11:22:21Z cimslp: --- Using /etc/sfcb/sfcb.cfg
2012-05-05T11:22:24Z cimslp: Found 18 profiles in namespace root/interop
2012-05-05T11:23:41Z sfcb-vmware_raw[3375]: IpmiIfcSelGetAll: no record count
2012-05-05T12:00:01Z crond[2703]: USER root pid 4767 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-05T12:01:01Z crond[2703]: USER root pid 4799 cmd /sbin/auto-backup.sh
2012-05-05T13:00:01Z crond[2703]: USER root pid 7134 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-05T13:01:01Z crond[2703]: USER root pid 7181 cmd /sbin/auto-backup.sh
2012-05-05T14:00:01Z crond[2703]: USER root pid 9486 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-05T14:01:01Z crond[2703]: USER root pid 9545 cmd /sbin/auto-backup.sh
2012-05-05T14:22:10Z cimslp: Found 18 profiles in namespace root/interop
2012-05-05T15:00:01Z crond[2703]: USER root pid 11854 cmd /usr/lib/vmware/vmksummary/log-heartbeat.py
2012-05-05T15:01:01Z crond[2703]: USER root pid 11882 cmd /sbin/auto-backup.sh
~ #
 
Old 05-05-2012, 12:11 PM   #5
sag47
Senior Member
 
Registered: Sep 2009
Location: Raleigh, NC
Distribution: Ubuntu, PopOS, Raspbian
Posts: 1,899
Blog Entries: 36

Rep: Reputation: 477
Nothing in your logs particularly stands out to me. Are you able to boot into a live OS (or is this an HA host you can't take down)?

How is the RAID set up? Is it using mdadm? If so, perhaps try to get some information on what the array is doing.

Code:
cat /proc/mdstat
mdadm --detail --scan
# the scan shows my array on /dev/md0
mdadm -D /dev/md0
# where /dev/sdx is a device in the array /dev/md0; examine each device in the array
mdadm --examine /dev/sdx
Please post the output of the commands for me to review.

In my experience, hard drives are cheap and data is expensive. So if you can't take the system down for a thorough check, your best bet is to replace the drive and run smartctl on the old drive from another machine. When you're able to do that, post the output from smartctl -a and I'll help you analyze it. It won't directly tell us whether the disk is failing, but the S.M.A.R.T. diagnostics will give us an idea of what has been happening to it.
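
If you get the pulled drive into another machine, you can also run a long self-test and read back the result afterwards. A sketch, with /dev/sdx again standing in for the real device:
Code:
# Start an extended offline self-test (takes a while, but the drive stays usable)
smartctl -t long /dev/sdx
# ...once it finishes, read the self-test log
smartctl -l selftest /dev/sdx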

After googling around for quite a while, it seems ESXi is very limited in its available tool set from the terminal. So what I previously mentioned might be one of your few options.

Last edited by sag47; 05-05-2012 at 12:28 PM.
 
Old 05-05-2012, 12:43 PM   #6
lithos
Senior Member
 
Registered: Jan 2010
Location: SI : 45.9531, 15.4894
Distribution: CentOS, OpenNA/Trustix, testing desktop openSuse 12.1 /Cinnamon/KDE4.8
Posts: 1,144

Rep: Reputation: 217
Hi Tarikc,

I know how frustrating failed hardware can be. I check the status of my HP arrays with the HPACUCLI software from HP:

Code:
#/usr/sbin/hpacucli ctrl all show status

Smart Array 6i in Slot 0 (Embedded)
   Controller Status: OK
   Cache Status: OK


#/usr/sbin/hpacucli ctrl all show config

Smart Array 6i in Slot 0 (Embedded)     

   array A (Parallel SCSI, Unused Space: 0 MB)


      logicaldrive 1 (135.7 GB, RAID 5, OK)

      physicaldrive 2:0   (port 2:id 0 , Parallel SCSI, 72.8 GB, OK)
      physicaldrive 2:1   (port 2:id 1 , Parallel SCSI, 72.8 GB, OK)
      physicaldrive 2:2   (port 2:id 2 , Parallel SCSI, 72.8 GB, OK)
      physicaldrive 2:3   (port 2:id 3 , Parallel SCSI, 72.8 GB, OK, spare)
If a drive fails or behaves strangely, the output will look like this:
Code:
# output when errors
----------------------------------
Smart Array 6i in Slot 0 (Embedded)
   Controller Status: OK
   Cache Status: OK



Smart Array 6i in Slot 0 (Embedded)     

   array A (Parallel SCSI, Unused Space: 0 MB)


      logicaldrive 1 (135.7 GB, RAID 5, Recovering, 3% complete)

      physicaldrive 2:0   (port 2:id 0 , Parallel SCSI, ??? GB, Failed)
      physicaldrive 2:1   (port 2:id 1 , Parallel SCSI, 72.8 GB, OK)
      physicaldrive 2:2   (port 2:id 2 , Parallel SCSI, 72.8 GB, OK)
      physicaldrive 2:3   (port 2:id 3 , Parallel SCSI, 72.8 GB, Rebuilding, active spare)

I hope this helps you find out what's going on with your server.
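
If you want more detail on a single drive (firmware, serial number, and so on), you can drill down per drive. The slot and drive ID below are from my box, so adjust them to yours:
Code:
# Full detail for one physical drive on the controller in slot 0
/usr/sbin/hpacucli ctrl slot=0 pd 2:3 show detail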

good luck

Last edited by lithos; 05-05-2012 at 12:46 PM.
 
Old 05-06-2012, 02:11 AM   #7
Tarikc
Member
 
Registered: May 2009
Distribution: CentOS, RedHat, Ubuntu
Posts: 68

Original Poster
Rep: Reputation: 4
Dear All,

Thank you a million for your great help.

I googled a little more and found out that HP has something called an offline bundle for ESXi 5.0.
It contains the HPACUCLI that lithos mentioned, and some other tools that can be used to monitor the health of the system.
I will try to install the software today and will let you know the results.
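
If I've read the HP docs right, the install should go roughly like this from the ESXi shell (untested on my side yet, and the bundle filename below is only a placeholder for whatever HP ships):
Code:
# Put the host into maintenance mode first, then install the offline bundle
esxcli software vib install -d /vmfs/volumes/datastore1/hp-esxi5.0-bundle.zip
# Reboot afterwards so the new tools and CIM providers get loaded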

Thank you very much again.

Will keep in touch.
 
Old 05-06-2012, 07:48 AM   #8
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,925
Blog Entries: 44

Rep: Reputation: 3159
Moderator response

Hi,

Quote:
Originally Posted by Tarikc View Post
Dear All,

Thank you a million for your great help.

I googled a little more and found out that HP has something called an offline bundle for ESXi 5.0.
It contains the HPACUCLI that lithos mentioned, and some other tools that can be used to monitor the health of the system.
I will try to install the software today and will let you know the results.

Thank you very much again.

Will keep in touch.
I can understand your frustrations with hardware issues, but please do not post 'Urgent' or the like in thread titles; it is considered rude. Our members are volunteers who provide assistance for free, so there is no urgency for them. If you need quick turnaround, I suggest looking at paid maintenance or on-demand support services.
 
Old 05-06-2012, 12:55 PM   #9
Tarikc
Member
 
Registered: May 2009
Distribution: CentOS, RedHat, Ubuntu
Posts: 68

Original Poster
Rep: Reputation: 4
Hello onebuck,

I would like to apologize for this mistake.

I honestly didn't mean to be rude, but as you said, I was frustrated.

I apologize again for this incident.

Best wishes to all.
 
Old 05-06-2012, 02:05 PM   #10
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,925
Blog Entries: 44

Rep: Reputation: 3159
Moderator response

Hi,

Quote:
Originally Posted by Tarikc View Post
Hello onebuck,

I would like to apologize for this mistake.

I honestly didn't mean to be rude, but as you said, I was frustrated.

I apologize again for this incident.

Best wishes to all.
Thank you for understanding.
 
Old 05-11-2012, 05:29 AM   #11
Tarikc
Member
 
Registered: May 2009
Distribution: CentOS, RedHat, Ubuntu
Posts: 68

Original Poster
Rep: Reputation: 4
Dear Lithos,

I have successfully installed hpacucli on the VMware host and followed your commands; here is the output:

Code:
~ # /opt/hp/hpacucli/bin/hpacucli ctrl all show status

Smart Array P410i in Slot 0 (Embedded)
Controller Status: OK
Cache Status: OK
Battery/Capacitor Status: OK


~ # /opt/hp/hpacucli/bin/hpacucli ctrl all show config

Smart Array P410i in Slot 0 (Embedded) (sn: 500143801756F3F0)

array A (SAS, Unused Space: 0 MB)


logicaldrive 1 (410.1 GB, RAID 5, Interim Recovery Mode)

physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)
physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK)
physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 147.0 GB, Failed)

SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 500143801756F3FF)

Please tell me: is it safe to hot-swap the failed drive, or do I have to bring the server down again to change it?

I hope to get your reply soon.

Best Regards,
 
Old 05-11-2012, 06:13 AM   #12
lithos
Senior Member
 
Registered: Jan 2010
Location: SI : 45.9531, 15.4894
Distribution: CentOS, OpenNA/Trustix, testing desktop openSuse 12.1 /Cinnamon/KDE4.8
Posts: 1,144

Rep: Reputation: 217
Hi,

I'm sorry to see that the drive failed.
Yes, you can (and should, as soon as possible) pull the failed drive out. BE CAREFUL to take the right one! Then replace it with a new drive.
The array should go into recovery automatically (see HP's manuals: "Automatic data recovery with rapid rebuild technology").
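
If you want to keep an eye on the rebuild after inserting the new drive, just re-run the status command until the logical drive goes back to OK (slot 0 assumed, as in your output):
Code:
# Shows "Recovering, N% complete" while rebuilding, then "OK" when finished
/opt/hp/hpacucli/bin/hpacucli ctrl slot=0 ld all show status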

Last edited by lithos; 05-11-2012 at 06:15 AM.
 
Old 05-19-2012, 12:53 AM   #13
Tarikc
Member
 
Registered: May 2009
Distribution: CentOS, RedHat, Ubuntu
Posts: 68

Original Poster
Rep: Reputation: 4
Dear Lithos,

I would like to express my deepest gratitude for your wonderful help and precise instructions.

I've swapped the drives, and the array rebuilt onto its new brother in 27 minutes:

Code:
~ # /opt/hp/hpacucli/bin/hpacucli ctrl all show config

Smart Array P410i in Slot 0 (Embedded) (sn: 500143801756F3F0)

array A (SAS, Unused Space: 0 MB)


logicaldrive 1 (410.1 GB, RAID 5, OK)

physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)
physicaldrive 1I:1:3 (port 1I:box 1:bay 3, SAS, 146 GB, OK)
physicaldrive 1I:1:4 (port 1I:box 1:bay 4, SAS, 147.0 GB, OK)

SEP (Vendor ID PMCSIERA, Model SRC 8x6G) 250 (WWID: 500143801756F3FF)

Thank you very much for your assistance and wonderful help; you've made my day!

Best Regards,
 
Old 05-19-2012, 07:33 AM   #14
lithos
Senior Member
 
Registered: Jan 2010
Location: SI : 45.9531, 15.4894
Distribution: CentOS, OpenNA/Trustix, testing desktop openSuse 12.1 /Cinnamon/KDE4.8
Posts: 1,144

Rep: Reputation: 217
Splendid,
I'm glad you did it.
 
  

