LinuxQuestions.org > Forums > Linux Forums > Linux - Distributions > Debian
Debian - This forum is for the discussion of Debian Linux.
Old 03-03-2011, 02:14 PM   #1
carlosinfl
Senior Member
 
Registered: May 2004
Location: Orlando, FL
Distribution: Debian
Posts: 2,900

Rep: Reputation: 73
Debian Reports Non-optimal RAID Status


I checked /var/log/messages today and it's being flooded with:

Code:
Mar  3 14:58:36 mail mpt-statusd: detected non-optimal RAID status
The system is a fresh install and not built on RAID. It's a single disk on a VM (VMware) and I don't think 'mdadm' is even installed:

Code:
mail:/var/log# apt-cache policy mdadm
mdadm:
  Installed: (none)
  Candidate: 3.1.4-1+8efb9d1
  Version table:
     3.1.4-1+8efb9d1 0
        500 http://ftp.us.debian.org/debian/ squeeze/main amd64 Packages
I searched before I posted and found an old bug from 2009:

http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=539266

I can't understand why I'm seeing this on a fresh Debian 6.0 system with no RAID configured.
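A quick way to check whether mpt-statusd even has a Fusion-MPT controller to talk to is to look for the driver. A minimal sketch, with the caveat that the module names (mptsas, mptspi, mptscsih) are an assumption; VMware's default emulated "LSI Logic" SCSI adapter typically loads one of them, which is enough to make the daemon poll for RAID status even with no array:

```shell
# Look in /proc/modules for a loaded Fusion-MPT driver (mptsas, mptspi, ...).
# If one is present, mpt-statusd has a controller to poll, RAID volume or not.
if grep -q '^mpt' /proc/modules 2>/dev/null; then
    mpt_loaded=yes
else
    mpt_loaded=no
fi
echo "Fusion-MPT driver loaded: $mpt_loaded"
```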
 
Old 03-03-2011, 03:49 PM   #2
wpeckham
Member
 
Registered: Apr 2010
Location: USA
Distribution: Debian, Ubuntu, Fedora, RedHat, DSL, Puppy, CentOS, Knoppix
Posts: 778

Rep: Reputation: 173
Single disk raid?

Interesting? RAID is not optimal when there are no RAID volumes, eh?
I have no answers, but am sure looking forward to seeing some.

Single volume raid: boggles the mind.
 
Old 03-04-2011, 09:16 AM   #3
murmur101
LQ Newbie
 
Registered: Apr 2010
Posts: 15

Rep: Reputation: 1
Got the same here..

debian 2.6.32-5-amd64
used atomic layout (no raid) in setup in a VM on an esxi 4.0
my /var/log/messages is peppered with " mpt-statusd: detected non-optimal RAID status"

I re-installed 5 times.. same thing

looking forward to seeing an answer to this one..
 
Old 03-04-2011, 09:43 AM   #4
carlosinfl
Senior Member
 
Registered: May 2004
Location: Orlando, FL
Distribution: Debian
Posts: 2,900

Original Poster
Rep: Reputation: 73
I've looked all over and don't understand it. You configure the VM with x amount of disk space, and that should be transparent to the OS (Debian 6.0) whether it's backed by local or remote RAID storage. If you have a RAID controller, it should present the storage as a single logical disk, and then you can carve it up as you see fit.

Super confused.

I've tried RHEL 6, Arch Linux, Ubuntu Server 10.10, Slackware, & Gentoo, and Debian is the only one with this problem. It's flooding my logs to the point that I can't even run this as an OS. I also reinstalled 3 times.
 
Old 03-07-2011, 12:52 PM   #5
carlosinfl
Senior Member
 
Registered: May 2004
Location: Orlando, FL
Distribution: Debian
Posts: 2,900

Original Poster
Rep: Reputation: 73
If you disable 'mpt-status' in init.d scripts, that seems to work:

Code:
/etc/init.d/mpt-status stop
Anyone know how to disable this after I reboot so it doesn't start again?
 
Old 03-08-2011, 02:10 AM   #6
murmur101
LQ Newbie
 
Registered: Apr 2010
Posts: 15

Rep: Reputation: 1
Hi,

I think this is correct
Code:
/etc/init.d/mpt-statusd stop
I disabled it using


Code:
update-rc.d-insserv -f mpt-statusd remove
works for me at least
 
1 members found this post helpful.
Old 03-10-2011, 01:32 PM   #7
carlosinfl
Senior Member
 
Registered: May 2004
Location: Orlando, FL
Distribution: Debian
Posts: 2,900

Original Poster
Rep: Reputation: 73
I don't know if there are any negative implications to disabling this service, or why Debian even elected to install and use it in the first place. Anyone know? Obviously I'm running this on a server with a hardware RAID controller from Dell (PERC 5i), so there is no need for software RAID or 'mdadm'.
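A cleaner way to keep the daemon off across reboots than removing its rc links is to tell the init script not to start it. This is a sketch under the assumption (true for the squeeze-era packaging, but worth verifying against your /etc/init.d/mpt-statusd) that the init script sources a defaults file and honors a RUN_DAEMON flag:

```shell
# /etc/default/mpt-statusd -- assumption: /etc/init.d/mpt-statusd sources
# this file and skips starting the poller when RUN_DAEMON is set to "no".
RUN_DAEMON=no
```

With this in place the package stays installed and upgradable, but the status poller never starts.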
 
Old 03-23-2011, 03:18 PM   #8
wmussatto
LQ Newbie
 
Registered: Mar 2011
Posts: 1

Rep: Reputation: 0
I may be missing something, but mpt-statusd deals with hardware RAID. If you are getting messages from your RAID, then perhaps the array is either being rebuilt or needs to be? mdadm deals with software RAID. I think at least some of the Dell hardware RAID products use LSI chips or boards, and that is what mpt-statusd is looking at.

I know that on our server, during bootup, I had to manually force the LSI card to start the rebuild of our array. It continued to do so through a series of reboots until the job was done. I got an email saying it was in "non-optimal status" and then one saying it was fixed. Hope this helps.
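To see what the controller itself thinks of the array, the mpt-status tool can query it directly. A hedged sketch (it assumes the mpt-status package is installed, the mptctl device node exists, and the volume sits at SCSI id 0; adjust the id to whatever the probe reports), guarded so it degrades gracefully on machines with no controller:

```shell
# Query the Fusion-MPT controller for RAID volume status, but only if the
# mpt-status binary and the mptctl control device are actually present.
if [ -e /dev/mptctl ] && command -v mpt-status >/dev/null 2>&1; then
    mpt-status -p          # probe: reports the SCSI id of the RAID volume
    mpt-status -i 0        # detailed status for the volume at SCSI id 0
    probed=yes
else
    echo "mpt-status or /dev/mptctl not available on this system"
    probed=no
fi
```

On a box that is genuinely rebuilding, the detailed output is where you would expect to see a degraded/resyncing state rather than OPTIMAL.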
 
  

