03-19-2007, 02:20 PM | #1
Member | Registered: Jul 2005 | Distribution: Fedora6 x86_64 | Posts: 118
To use a RAID0 created by a SuSE install
Folks,
I'm using FC6 (x86_64) on a system that has a RAID0 created by a previous SuSE 10.0 installation. SuSE is still installed, so I can launch it and compare with FC6. I do not know how to mount and use this RAID0 with FC6. I compared both systems and they look the same in many respects; for example, dmraid -r returns this on both:
/dev/sda: nvidia, "nvidia_ffcdjdfc", stripe, ok, 398297086 sectors, data@ 0
/dev/sdb: nvidia, "nvidia_ffcdjdfc", stripe, ok, 398297086 sectors, data@ 0
There is a /dev/md0 device in FC6. Moreover, the following exists in FC6 in /dev/mapper/:
crw------- 1 root root 10, 63 mar 19 11:05 control
brw-rw---- 1 root disk 253, 0 mar 19 11:05 nvidia_ffcdjdfc
brw-rw---- 1 root disk 253, 1 mar 19 11:05 nvidia_ffcdjdfcp1
Problem is, when I try to mount it, it reports:
mount -t ext3 /dev/mapper/nvidia_ffcdjdfc /raid/
mount: /dev/mapper/nvidia_ffcdjdfc already mounted or /raid/ busy
/raid is not mounted when checked using 'df'.
On both systems there is no fstab entry for the raid. It looks like not much is missing. I'm a bit afraid of trying things for fear of losing data on the raid. Can anyone help? Thanks!
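In the meantime, here are a few read-only checks that should be safe to run (nothing here writes to the raid; dmsetup and blkid are the stock FC6 tools, and the device names are the ones listed above):
Code:
# Show which mappings device-mapper knows about
/sbin/dmsetup ls
# Open count and status of the whole-set mapping (to see why mount says "busy")
/sbin/dmsetup info nvidia_ffcdjdfc
# Probe for filesystem signatures without mounting anything
/sbin/blkid /dev/mapper/nvidia_ffcdjdfc
/sbin/blkid /dev/mapper/nvidia_ffcdjdfcp1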
03-19-2007, 02:48 PM | #2
LQ Guru | Registered: Aug 2001 | Location: Fargo, ND | Distribution: SuSE AMD64 | Posts: 15,733
Here is the init.d boot.md service that runs when SuSE boots. It may serve as a reference. SuSE uses mdadm for raid devices. I don't know about Fedora, but SuSE will check for one type of raid and, if it is found, the script finishes without checking for the other. So if you have two types of raid, the second won't be checked for. You might want to check the Fedora scripts to see whether they do the same thing.
Code:
#!/bin/sh
#
# SUSE system startup script for MD Raid autostart
# Copyright (C) 1995--2005 Kurt Garloff, SUSE / Novell Inc.
# Copyright (C) 2006 Marian Jancar, SUSE / Novell Inc.
#
# This library is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or (at
# your option) any later version.
#
# This library is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307,
# USA.
#
### BEGIN INIT INFO
# Provides: boot.md
# Required-Start: boot.udev boot.rootfsck
# X-SUSE-Should-Start: boot.scsidev boot.multipath
# Default-Start: B
# Default-Stop:
# Short-Description: Multiple Device RAID
# Description: Start MD RAID
# RAID devices are virtual devices created from two or more real block devices.
# This allows multiple devices (typically disk drives or partitions there-of)
# to be combined into a single device to hold (for example) a single filesystem.
# Some RAID levels include redundancy and so can survive some degree of device failure.
### END INIT INFO
# Source LSB init functions
# providing start_daemon, killproc, pidofproc,
# log_success_msg, log_failure_msg and log_warning_msg.
# This is currently not used by UnitedLinux based distributions and
# not needed for init scripts for UnitedLinux only. If it is used,
# the functions from rc.status should not be sourced or used.
#. /lib/lsb/init-functions
# Shell functions sourced from /etc/rc.status:
# rc_check check and set local and overall rc status
# rc_status check and set local and overall rc status
# rc_status -v be verbose in local rc status and clear it afterwards
# rc_status -v -r ditto and clear both the local and overall rc status
# rc_status -s display "skipped" and exit with status 3
# rc_status -u display "unused" and exit with status 3
# rc_failed set local and overall rc status to failed
# rc_failed <num> set local and overall rc status to <num>
# rc_reset clear both the local and overall rc status
# rc_exit exit appropriate to overall rc status
# rc_active checks whether a service is activated by symlinks
. /etc/rc.status
# Reset status of this service
rc_reset
# Return values acc. to LSB for all commands but status:
# 0 - success
# 1 - generic or unspecified error
# 2 - invalid or excess argument(s)
# 3 - unimplemented feature (e.g. "reload")
# 4 - user had insufficient privileges
# 5 - program is not installed
# 6 - program is not configured
# 7 - program is not running
# 8--199 - reserved (8--99 LSB, 100--149 distrib, 150--199 appl)
#
# Note that starting an already running service, stopping
# or restarting a not-running service as well as the restart
# with force-reload (in case signaling is not supported) are
# considered a success.
mdadm_BIN=/sbin/mdadm
mdrun_BIN=/sbin/mdrun
mdadm_CONFIG="/etc/mdadm.conf"
mdadm_SYSCONFIG="/etc/sysconfig/mdadm"
# udev integration
if [ -x /sbin/udevsettle ] ; then
    [ -z "$MDADM_DEVICE_TIMEOUT" ] && MDADM_DEVICE_TIMEOUT=60
else
    MDADM_DEVICE_TIMEOUT=0
fi

function _rc_exit {
    [ "x$2" != x"" ] && echo -n $2
    rc_failed $1
    rc_status -v
    rc_exit
}

case "$1" in
    start)
        echo -n "Starting MD Raid "
        # Check for existence of needed config file and read it
        [ -r $mdadm_SYSCONFIG ] || _rc_exit 6 "... $mdadm_SYSCONFIG not existing "
        # Read config
        . $mdadm_SYSCONFIG
        [ "x$MDADM_CONFIG" != x"" ] && mdadm_CONFIG="$MDADM_CONFIG"
        # Check for missing binaries (stale symlinks should not happen)
        [ -x $mdadm_BIN ] || _rc_exit 5 "... $mdadm_BIN not installed "
        [ -x $mdrun_BIN ] || _rc_exit 5 "... $mdrun_BIN not installed "
        # Try to load md_mod
        [ ! -f /proc/mdstat -a -x /sbin/modprobe ] && /sbin/modprobe -k md_mod 2>&1 | :
        [ -f /proc/mdstat ] || _rc_exit 5 "... no MD support in kernel "
        # Wait for udev to settle
        if [ "$MDADM_DEVICE_TIMEOUT" -gt 0 ] ; then
            /sbin/udevsettle --timeout="$MDADM_DEVICE_TIMEOUT"
        fi
        # Fallback to mdrun when $mdadm_CONFIG missing or mdadm exits with an error
        [ "$BOOT_MD_USE_MDADM_CONFIG" = "yes" -a -s "$mdadm_CONFIG" ]
        [ $? = 0 ] && { $mdadm_BIN -A -s -c $mdadm_CONFIG || rc_failed 1; }
        [ $? = 0 ] || $mdrun_BIN
        # Remember status and be verbose
        rc_status -v
        ;;
    stop)
        echo -n "Shutting down MD Raid "
        # Remember status and be verbose
        rc_status -v
        ;;
    status)
        rc_failed 4
        rc_status -v
        ;;
    *)
        echo "Usage: $0 {start|stop|status}"
        exit 1
        ;;
esac
rc_exit
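Boiled down, the commands that script actually runs are roughly these (just a sketch of its logic; whether mdadm or mdrun is present on the Fedora side, and whether either recognizes the nvidia metadata, I don't know):
Code:
# What boot.md does when /etc/mdadm.conf is present and non-empty
/sbin/mdadm -A -s -c /etc/mdadm.conf
# Its fallback when the config is missing or mdadm fails
/sbin/mdrun
# Either way, the assembled arrays show up here
cat /proc/mdstat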
03-19-2007, 05:19 PM | #3
Member (Original Poster) | Registered: Jul 2005 | Distribution: Fedora6 x86_64 | Posts: 118
As far as I can see, Fedora does not use md_mod but rather dm-mod:
Code:
/sbin/lsmod | grep dm
dm_multipath 30033 0
dm_snapshot 27569 0
dm_zero 10945 0
dm_mirror 33089 0
dm_mod 77073 9 dm_multipath,dm_snapshot,dm_zero,dm_mirror
Makes one wonder what's going on exactly in the Linux raid world. Why both 'md' and 'dm'?
Nevertheless, I read a bit of the Fedora rc.sysinit script and did a few tests to see how it works. It certainly ain't as straightforward as the SuSE raid stuff! After tracing the resolve_dm_name() and get_numeric_dev() functions found below 'RAID setup' in rc.sysinit (line 402), I obtained the following command, which I ran:
Code:
/sbin/dmraid -ay -i -p nvidia_ffcdjdfc
RAID set "nvidia_ffcdjdfc" already active
So it's already active, or so it seems. The script then goes on to call something called /sbin/kpartx, and I did not go there.
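For reference, this is roughly how kpartx is meant to be used there, going by its man page (I have not run these against the raid):
Code:
# List the partition mappings kpartx would create for the set
/sbin/kpartx -l /dev/mapper/nvidia_ffcdjdfc
# Create (or re-create) them under /dev/mapper
/sbin/kpartx -a /dev/mapper/nvidia_ffcdjdfc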
Thing is, I never told Fedora during install where to mount any RAID0 or otherwise. I ignored the raid part (if there was any - I don't remember) simply because I did not want the installer to format any raid drive.
Now that dmraid reports that the raid is already active, how can I mount it?
There are two raid-related /dev/mapper devices created by Fedora:
Code:
crw------- 1 root root 10, 63 mar 19 16:18 control
brw-rw---- 1 root disk 253, 0 mar 19 16:18 nvidia_ffcdjdfc
brw-rw---- 1 root disk 253, 1 mar 19 16:18 nvidia_ffcdjdfcp1
The 'p1' must be partition 1 of the RAID (there's only one partition). So when I try to mount it:
Code:
mount -t ext3 /dev/mapper/nvidia_ffcdjdfcp1 /raid/
mount: wrong fs type, bad option, bad superblock on /dev/mapper/nvidia_ffcdjdfcp1,
Or:
Code:
mount /dev/mapper/nvidia_ffcdjdfcp1 /raid/
mount: wrong fs type, bad option, bad superblock on /dev/mapper/nvidia_ffcdjdfcp1,
The raid itself is still in good condition: if I reboot into SuSE it works with a simple
mount -t ext3 /dev/md0 /raid/
The same mount, run on Fedora FC6, reports wrong fs type.
It seems so near to a working state.
Does anyone know, at this stage, how to mount the raid in Fedora FC6?
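If it helps, the same information could be compared on both sides without mounting anything (a sketch only; dmsetup table prints each mapping's start offset and length in sectors, and blockdev is the stock util-linux tool):
Code:
# Under FC6: offset and length of each dmraid mapping
/sbin/dmsetup table nvidia_ffcdjdfc
/sbin/dmsetup table nvidia_ffcdjdfcp1
# Under SuSE: size in sectors of the md device that mounts cleanly
/sbin/blockdev --getsize /dev/md0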
03-20-2007, 02:32 AM | #4
LQ Guru | Registered: Aug 2001 | Location: Fargo, ND | Distribution: SuSE AMD64 | Posts: 15,733
I use SuSE but not raid. Anyway, I looked at my lsmod listing and there is a dm_mod module there too. SuSE uses mdadm to create the arrays, however. I don't know how compatible this would be with Fedora. Can you install mdadm in Fedora? Since it is a clean install, maybe it is worth a try.
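If it isn't installed already, it should just be a matter of (assuming Fedora ships it as a package named mdadm, which I believe it does):
Code:
yum install mdadm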
From the mdadm man page:
Code:
DEVICE NAMES
While entries in the /dev directory can have any format you like, mdadm
has an understanding of 'standard' formats which it uses to guide its
behaviour when creating device files via the --auto option.
The standard names for non-partitioned arrays (the only sort of md
array available in 2.4 and earlier) are either of
/dev/mdNN
/dev/md/NN
where NN is a number. The standard names for partitionable arrays (as
available from 2.6 onwards) are one of
/dev/md/dNN
/dev/md_dNN
Partition numbers should be indicated by adding "pMM" to these, thus
"/dev/md/d1p2".
Are there any devices that look like those indicated in the howto? Is this array a single partition?
Does the Fedora partitioner program allow you to assemble and mount raid arrays? There may be a "format" checkbox you can uncheck.
Since you can mount the drive in a SuSE live distro, perhaps you could back up the data to an external drive or a spare drive, or to DVDs, and start over.
Here is an extract from a gentoo howto:
Code:
mount /dev/mapper/nvidia_abiccada4 /mnt/gentoo
mkdir /mnt/gentoo/boot
mount /dev/mapper/nvidia_abiccada1 /mnt/gentoo/boot
Try "file -s /dev/mapper/nvidia_ffcdjdfcp1" and/or "fdiskk -l /dev/mapper/nvidia_ffcdjdfc" and see what they say.
Also double check what /proc/mdstat says.
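Code:
file -s /dev/mapper/nvidia_ffcdjdfcp1
fdisk -l /dev/mapper/nvidia_ffcdjdfc
cat /proc/mdstat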
Last edited by jschiwal; 03-20-2007 at 02:33 AM.