I'm not really sure how to start this question other than.... why does my LV group keep dropping?
A little background on the situation...
I have a NAS running Debian (Lenny). It has two 1TB drives configured as RAID 0. It had been running perfectly for weeks, but now, for some reason, my main storage LV (i.e. the one with all of my data) keeps dropping.
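Since the LV sits on top of an md RAID device, the first thing worth checking when it drops is whether the underlying array lost a member. A minimal sketch (the helper name and sample file are my own inventions; on the NAS itself you would point it at /proc/mdstat, or just run `cat /proc/mdstat` and `mdadm --detail /dev/md3` directly):

```shell
#!/bin/sh
# check_mdstat: report whether an mdstat-format file shows a degraded array.
# In /proc/mdstat a healthy two-disk mirror shows [UU]; a missing member
# shows an underscore in its place, e.g. [U_] or [_U].
check_mdstat() {
    if grep -q '\[U*_' "$1"; then
        echo degraded
    else
        echo healthy
    fi
}
```

Usage on the real box would be `check_mdstat /proc/mdstat`; anything other than `healthy` means an array lost a disk, which would take the PV (and the VG on top of it) down with it.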
So... a little data:
Quote:
NAS:~# vgdisplay -v vg0
Using volume group(s) on command line
Finding volume group "vg0"
  --- Volume group ---
  VG Name               vg0
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1.81 TB
  PE Size               4.00 MB
  Total PE              475564
  Alloc PE / Size       475564 / 1.81 TB
  Free  PE / Size       0 / 0
  VG UUID               BGlHD2-tm1d-VItN-8NA8-pBch-D2zd-83oR0b

  --- Logical volume ---
  LV Name               /dev/vg0/lv0
  VG Name               vg0
  LV UUID               J9t0QV-bGov-Fvsr-Fvtv-S2B0-RiGx-wl8lBY
  LV Write Access       read/write
  LV Status             available
  # open                1
  LV Size               1.81 TB
  Current LE            475564
  Segments              1
  Allocation            inherit
  Read ahead sectors    auto
  - currently set to    256
  Block device          253:0

  --- Physical volumes ---
  PV Name               /dev/md3
  PV UUID               f48Xwm-3ecz-2cmQ-1Fym-1kw1-DEiE-jfnyhH
  PV Status             allocatable
  Total PE / Free PE    475564 / 0
So, I'm hoping that someone will be able to help me track down the error[s]...
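In the meantime, when the VG drops, this is roughly the sequence that should bring it back without a reboot (wrapped in a function so nothing touches the disks until it is actually called; `vg0` and `/dev/vg0/lv0` come from the vgdisplay output above, and the mount point is my own assumption):

```shell
#!/bin/sh
# reactivate_vg: rescan LVM metadata and reactivate a dropped volume group.
# Run as root on the NAS; defined as a function here for illustration only.
reactivate_vg() {
    vg=$1
    pvscan             # rediscover physical volumes (should find /dev/md3)
    vgscan             # rebuild the volume group cache
    vgchange -ay "$vg" # activate every LV in the group
}
# Example (mount point is an assumption):
# reactivate_vg vg0 && mount /dev/vg0/lv0 /mnt/storage
```

If `pvscan` no longer sees /dev/md3 at that point, the problem is below LVM, in the md layer, rather than in LVM itself.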
I was going through /var/log/messages and looking for something that would stand out... but I didn't see anything. I'll probably keep looking tonight after I get home...
Also, I was wondering: in addition to disk errors, would /var/log/messages show anything indicating system reboots?
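There are a couple of ways to spot reboots: `last reboot` reads the records in /var/log/wtmp, and the kernel logs one "Linux version" banner line per boot, so counting those in a syslog-style file shows how many times the box came up during the period the file covers. A small sketch (the helper name and file paths are assumptions; on Lenny the kernel banner normally lands in /var/log/messages or /var/log/syslog):

```shell
#!/bin/sh
# count_boots: count kernel boot banners in a syslog-style file.
# Each boot logs exactly one "Linux version ..." line, so a count that
# rises between checks means the machine rebooted in between.
count_boots() {
    grep -c 'Linux version' "$1"
}
# Example: count_boots /var/log/messages
```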
Just checked my mdadm.conf, and to my surprise, it seemed like it was incomplete...
Original:
Quote:
NAS:/etc/mdadm# cat mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR blah
# definitions of existing MD arrays
# This file was auto-generated on Sat, 18 Dec 2010 10:26:52 +0100
# by mkconf $Id$
Modified:
Quote:
NAS:/etc/mdadm# cat mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR blah
# definitions of existing MD arrays
# This file was auto-generated on Sat, 18 Dec 2010 10:26:52 +0100
# by mkconf $Id$
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b07083e0:2d78a9c5:d33c2718:aa7ffeb2
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=c0f83d5e:284d2995:0eaac66e:5b712a01
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=55edfb9f:fdde3c2c:842050b0:081237cd
ARRAY /dev/md/3 level=raid0 metadata=1.2 num-devices=2 UUID=f10c6406:a53b8d28:3141e2db:f3386637 name=3
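The usual way to fill in those missing ARRAY lines is to let mdadm generate them itself with `mdadm --examine --scan` and merge the result into the config, then rebuild the initramfs so early boot sees the same definitions. A hedged sketch of the merge step (the function is my own wrapper, written so already-present lines are not duplicated):

```shell
#!/bin/sh
# append_missing: append scan lines from stdin to a config file,
# skipping any line that is already present verbatim.
append_missing() {   # usage: append_missing <conf-file>  (lines on stdin)
    while read -r line; do
        grep -qxF -- "$line" "$1" || printf '%s\n' "$line" >> "$1"
    done
}
# On the NAS, as root:
# mdadm --examine --scan | append_missing /etc/mdadm/mdadm.conf
# update-initramfs -u   # so the initramfs copy of the config matches
```

Without the `update-initramfs -u` step, the copy of mdadm.conf baked into the initramfs can stay stale, which is a classic cause of an array (and the VG on it) not coming up after a reboot.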