LinuxQuestions.org
Linux - Software This forum is for Software issues.
Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.

Old 04-24-2013, 08:33 PM   #1
Genesys
LQ Newbie
 
Registered: Mar 2010
Location: USA
Distribution: NST
Posts: 4

Rep: Reputation: 0
LVM and Raid... Volume keeps dropping...


I'm not really sure how to start this question other than: why does my LVM volume group keep dropping?

A little background on the situation...

I have a NAS running Debian (Lenny). It has two 1TB drives configured for Raid 0. It had been running perfectly for weeks, but now for some reason, my main storage LV (i.e. the one with all of my data) keeps dropping.

So... a little data:

Quote:
NAS:~# vgdisplay -v vg0
Using volume group(s) on command line
Finding volume group "vg0"
--- Volume group ---
VG Name vg0
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.81 TB
PE Size 4.00 MB
Total PE 475564
Alloc PE / Size 475564 / 1.81 TB
Free PE / Size 0 / 0
VG UUID BGlHD2-tm1d-VItN-8NA8-pBch-D2zd-83oR0b

--- Logical volume ---
LV Name /dev/vg0/lv0
VG Name vg0
LV UUID J9t0QV-bGov-Fvsr-Fvtv-S2B0-RiGx-wl8lBY
LV Write Access read/write
LV Status available
# open 1
LV Size 1.81 TB
Current LE 475564
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

--- Physical volumes ---
PV Name /dev/md3
PV UUID f48Xwm-3ecz-2cmQ-1Fym-1kw1-DEiE-jfnyhH
PV Status allocatable
Total PE / Free PE 475564 / 0
So, I'm hoping that someone will be able to help me track down the error[s]...

Edit: more data...

Quote:
NAS:~# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md3 : active raid0 sda4[0] sdb4[1]
1947912064 blocks super 1.2 64k chunks

md1 : active raid1 sdb2[1] sda2[0]
1044800 blocks [2/2] [UU]

md2 : active raid1 sdb3[1] sda3[0]
521408 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
1043840 blocks [2/2] [UU]

unused devices: <none>

Last edited by Genesys; 04-24-2013 at 08:36 PM. Reason: add more data
 
Old 04-24-2013, 10:01 PM   #2
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: Fedora
Posts: 1,531

Rep: Reputation: 374
Check /var/log/messages for md or disk errors.
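A filter along these lines would surface them. This is a sketch run on a fabricated log excerpt so the filter's effect is visible; in practice you would grep /var/log/messages itself, and the device names are the ones from the mdstat output above:

```shell
# Pull md- and disk-error lines out of a log stream, dropping unrelated
# noise. The printf just fabricates a sample; the real command would be
#   grep -iE 'md[0-9]+|i/o error' /var/log/messages
printf '%s\n' \
  'Apr 25 02:50:37 NAS kernel: md: md3 stopped.' \
  'Apr 25 02:50:37 NAS dhclient: DHCPACK from 192.168.1.1' \
  'Apr 25 02:50:38 NAS kernel: end_request: I/O error, dev sda, sector 1234' |
  grep -iE 'md[0-9]+|i/o error'
# Keeps the "md3 stopped" and "I/O error" lines; the DHCP line is dropped.
```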
 
Old 04-25-2013, 03:46 PM   #3
Genesys
LQ Newbie
 
Registered: Mar 2010
Location: USA
Distribution: NST
Posts: 4

Original Poster
Rep: Reputation: 0
I was going through /var/log/messages and looking for something that would stand out... but I didn't see anything. I'll probably keep looking tonight after I get home...

Although, I was wondering: in addition to looking for disk errors, would there be any messages regarding system reboots?
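One way I can think of to check, assuming the kernel's boot banner ends up in /var/log/messages via klogd: every boot logs a "Linux version ..." line, so counting those gives a rough reboot history. A sketch on a fabricated excerpt:

```shell
# Each kernel boot writes a "Linux version ..." banner to the log, so
# counting those lines approximates the reboot history. Fabricated
# excerpt here; the real target would be /var/log/messages.
printf '%s\n' \
  'Apr 20 08:00:01 NAS kernel: Linux version 2.6.26-2-686 (Debian)' \
  'Apr 25 02:50:37 NAS kernel: md: md3 stopped.' \
  'Apr 25 02:55:10 NAS kernel: Linux version 2.6.26-2-686 (Debian)' |
  grep -c 'Linux version'
# Prints 2: two boot banners, hence one reboot in between.
```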
 
Old 04-27-2013, 08:59 AM   #4
Genesys
LQ Newbie
 
Registered: Mar 2010
Location: USA
Distribution: NST
Posts: 4

Original Poster
Rep: Reputation: 0
So, I looked in /var/log/messages and I don't see anything that stands out...

Quote:
NAS:/var/log# cat messages | grep md3
Apr 25 02:50:37 NAS kernel: md: md3 stopped.
Apr 25 02:50:37 NAS kernel: md3: setting max_sectors to 128, segment boundary to 32767
Just for S&Gs, here's a copy of my fstab... I'm definitely not an expert on such things, but it looks OK to me:

Quote:
NAS:/# cat /etc/fstab
# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
/dev/md0 / ext3 errors=remount-ro 0 1
/dev/md1 none swap sw 0 0
/dev/md2 /mnt/md2 ext3 defaults 0 2

#/dev/mapper/vg0-lv0 /mnt/vg0-lv0 ext3 defaults 0 2

/dev/vg0/lv0 /media/Data ext3 defaults 0 0
 
Old 04-27-2013, 11:43 AM   #5
Genesys
LQ Newbie
 
Registered: Mar 2010
Location: USA
Distribution: NST
Posts: 4

Original Poster
Rep: Reputation: 0
Just checked my mdadm.conf, and to my surprise, it seemed like it was incomplete...

Original:
Quote:
NAS:/etc/mdadm# cat mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR blah

# definitions of existing MD arrays

# This file was auto-generated on Sat, 18 Dec 2010 10:26:52 +0100
# by mkconf $Id$
Modified:
Quote:
NAS:/etc/mdadm# cat mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR blah

# definitions of existing MD arrays

# This file was auto-generated on Sat, 18 Dec 2010 10:26:52 +0100
# by mkconf $Id$
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b07083e0:2d78a9c5:d33c2718:aa7ffeb2
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=c0f83d5e:284d2995:0eaac66e:5b712a01
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=55edfb9f:fdde3c2c:842050b0:081237cd
ARRAY /dev/md/3 level=raid0 metadata=1.2 num-devices=2 UUID=f10c6406:a53b8d28:3141e2db:f3386637 name=3
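For what it's worth, rather than typing the ARRAY lines in by hand, the usual Debian approach is to let mdadm generate them from the currently running arrays and then rebuild the initramfs so early-boot assembly picks up the change (a sketch; assumes the stock /etc/mdadm/mdadm.conf location shown above):

```shell
# Append ARRAY definitions for all currently assembled arrays to the
# config, then rebuild the initramfs so the updated file is used at boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```

Without the initramfs rebuild, the copy of mdadm.conf baked into the initrd can still lack the md3 definition, which would explain the array (and with it the PV, VG, and LV) disappearing after a reboot.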
 
  

