Fedora: This forum is for the discussion of the Fedora Project.
I have a 3ware 7506-4LP with 3 drives in a RAID5 setup. One drive is degraded, but my understanding is that I should still be able to mount the array in degraded mode; it just won't be fault tolerant. I want to mount it so I can copy the data off the RAID to another location, so that when I get a new drive I don't risk destroying the data upon rebuilding... and for that matter, I have some users who need access to the data sooner than I seem to be able to give it to them.
I cannot find a way to mount it, though. Running the command-line client for the RAID lets me see that it is there and that it has a degraded drive, but I cannot figure out how to actually mount it... and amazingly, I can't seem to google anything useful for my situation.
It wasn't mounted in this install; it's a fresh FC5 install. The array was on an FC3 system that I had to take down due to other complications. So I installed the new system from scratch, planning to copy over the old info from a backup. The array in question isn't backed up, and it's not required by the system; only the users want their data.
The plan was to put the array in after the system install, and I figured that at worst it would show up and be mountable. I can't tell that it's showing up as a device at all, even though the CLI is able to speak to the card and check its status as an array in general.
Also, check the 3ware support site and verify that the FC5 kernel already supports the card. Otherwise, you will need to supply drivers for the card to function.
Do you see the card BIOS setup options as you start the boot? If no, then you have a problem. If yes, then look through the card BIOS settings and see if something is messed up.
fdisk -l only shows the main system raid, a SCSI one.
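For what it's worth, fdisk -l only lists disks that already have device nodes; a couple of other places to look (a sketch, assuming a 2.6/FC5-era system with /proc and sysfs mounted):

```shell
# Every SCSI device the kernel has attached, with or without a /dev node:
cat /proc/scsi/scsi 2>/dev/null || true

# sysfs view of attached sd* block devices (the glob stays literal and the
# fallback message prints if none exist):
models=$(grep . /sys/block/sd*/device/model 2>/dev/null \
         || echo "no sd* block devices found")
echo "$models"
```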
The 3ware site does not show FC5 as being officially supported; at any rate, there's nothing to download for it specifically.
As for the card BIOS settings, yes, they come up, and nothing looks out of the ordinary. That is where it reported the degraded drive and let me attempt the rebuild (unsuccessfully).
Other people were pointing out that there really shouldn't be a problem mounting this based on what I'm seeing... so it's really confusing.
I have done a modprobe 3w-9xxx (per a suggestion), and there's already a 3w-xxxx in the list when I run lsmod. So I'm assuming driver support is in there.
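One hedged note on the module names: as far as I know, 3w-xxxx covers the older 5000/6000/7000/8000-series cards (which includes a 7506), while 3w-9xxx is only for the 9000 series, so seeing 3w-xxxx already loaded is actually what you want here. A sketch of the mapping:

```shell
# Map a 3ware model number to the driver family it should use.
# (Assumption: 3w-xxxx = 5000/6000/7000/8000 series, 3w-9xxx = 9000 series.)
card="7506-4LP"
case "$card" in
  9*)          driver="3w-9xxx" ;;
  5*|6*|7*|8*) driver="3w-xxxx" ;;
  *)           driver="unknown" ;;
esac
echo "expected driver for $card: $driver"
# Then confirm it is actually loaded:
#   lsmod | grep 3w
```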
I'm burning a Knoppix 5 DVD right now and will boot to that to see if Knoppix has better luck mounting the array... but I'm still looking for some semblance of a reasonable way to get this working in the environment it's going to be running in.
I'm an idiot!
I did a cat on /proc/scsi/scsi.
Here is the result you were looking for:
# cat /proc/scsi/scsi
Attached devices:
Host: scsi2 Channel: 00 Id: 01 Lun: 00
Vendor: 3ware Model: Logical Disk 1 Rev: 1.2
Type: Direct-Access ANSI SCSI revision: ffffffff
So, it's in /proc just fine... now I suppose it needs to be in /dev somehow to be mountable... but does being in /proc mean we're on the right path? Now what?
thanks a ton for this help too...
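For what it's worth, the Host/Channel/Id/Lun numbers in that /proc/scsi/scsi listing are exactly what the 2.6 kernel's old hot-add interface wants. A sketch (the echo into /proc needs root, so it's shown commented out):

```shell
# Values read straight off the /proc/scsi/scsi listing above:
#   Host: scsi2  Channel: 00  Id: 01  Lun: 00
host=2; channel=0; id=1; lun=0
cmd="scsi add-single-device $host $channel $id $lun"
echo "$cmd"
# As root, this asks the kernel to (re)attach the unit so a /dev/sd? appears:
#   echo "$cmd" > /proc/scsi/scsi
```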
Getting a system to recognize an undetected SCSI device seems to always go this way. What you're doing is "teaching" the OS how to do it the first time, and then it will "remember" how the next time.
If the raid isn’t mountable, then reboot and try mounting it again.
And it's always possible that sda isn't the right device (i.e., also try sdb and sdc if sda fails), since it is detected as scsi2 instead of scsi0. Do you have some USB drives already plugged in?
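That trial-and-error can be scripted; a sketch that walks the candidates in order and stops at the first partition that mounts (it assumes /backup exists, ext3 is the right type, and you run it as root):

```shell
candidates="sda1 sdb1 sdc1"   # order to try; adjust as needed
mounted=""
for p in $candidates; do
  # Failures are silenced so the loop just moves on to the next candidate.
  if mount -t ext3 "/dev/$p" /backup 2>/dev/null; then
    mounted="/dev/$p"
    break
  fi
done
echo "${mounted:-no candidate mounted}"
```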
It keeps dropping all the sda devs.
I created them all, rebooted, and they were all gone.
I then created them again after adding the other lines you gave, and just after that I tried to mount:
# mount -t ext3 /dev/sda1 /backup/
The operation seems to have hung. I opened another shell and took a look in /dev, and there are NO sda, sda1, etc. nodes there.
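For the record, on FC5 /dev is managed by udev, so hand-made nodes vanish whenever udev repopulates /dev (and at every boot); they only stick around while the kernel actually has the device attached. If you do recreate them, the numbering is fixed: SCSI disks use block major 8, with 16 minors reserved per disk. A sketch of the arithmetic (the mknod itself needs root, so only the command is printed):

```shell
# sda=(8,0) sda1=(8,1) ... sdb=(8,16) sdb1=(8,17) ...
disk_index=0   # 0 for sda, 1 for sdb, 2 for sdc
part=1         # 0 would mean the whole disk
minor=$((disk_index * 16 + part))
echo "mknod /dev/sda$part b 8 $minor"   # run the printed command as root
```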
Curiouser and curiouser...
I can FEEL it... it seems like we're almost right there.
Oddly, even though it hung, if I then look in /backup, it does have at least part of one of the directories... with files dated before the original take-down of the array (yum-related stuff I moved there to save space).
So, it APPEARS it started to mount it, then killed off the whole shebang.
However...
#mount
doesn't show it mounted at all.
uh-huh
-edit-
Never mind... I was looking at the wrong system. The system in question is NOT showing part of the mount... oops.
So I eventually got the hang to stop (Ctrl-C)... and ran it again.
Here is the output:
# mount -t ext3 /dev/sda1 /backup/
mount: wrong fs type, bad option, bad superblock on /dev/sda1,
missing codepage or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
[root@dmdlnew /]# dmesg | tail
3w-xxxx: scsi2: Command failed: status = 0xc7, flags = 0x40, unit #1.
3w-xxxx: scsi2: Command failed: status = 0xc7, flags = 0x40, unit #1.
3w-xxxx: scsi2: Command failed: status = 0xc7, flags = 0x40, unit #1.
3w-xxxx: scsi2: Command failed: status = 0xc7, flags = 0x40, unit #1.
sd 2:0:1:0: SCSI error: return code = 0x8000002
sda: Current: sense key: Medium Error
Additional sense: Unrecovered read error
Info fld=0x0
end_request: I/O error, dev sda, sector 65
EXT3-fs: unable to read superblock
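That trace says the drive threw a medium error at sector 65, which lines up with the primary ext3 superblock (a partition that starts at the traditional sector 63 has its superblock, bytes 1024 onward, in sectors 65-66), and that is why mount gives up. ext3 keeps backup superblocks, and you can point mount or e2fsck at one; a sketch (mke2fs -n only prints what it would do and writes nothing, but double-check the flag before running anything against real data):

```shell
# List where the backup superblocks *should* be for this partition
# (-n = dry run: print the layout mke2fs would use, write nothing):
mke2fs -n /dev/sda1 2>/dev/null || true

# With a 4 KiB block size the first backup lives at block 32768.
# mount's sb= option wants that offset expressed in 1 KiB units:
blocksize=4096
backup_block=32768
sb_kib=$((backup_block * blocksize / 1024))
echo "try: mount -t ext3 -o sb=$sb_kib /dev/sda1 /backup"

# Or let e2fsck rebuild from the backup (this WRITES to the disk -- last
# resort, and only after copying off whatever you can):
#   e2fsck -b 32768 -B 4096 /dev/sda1
```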
Update: still not there, but closer... I THINK.
I called 3ware and figured out how to get the degraded array rebuilt... and it is now properly rebuilt, AFAIK.
There was also a problem with IRQ conflicts and an ACPI issue, so I told the kernel to boot with ACPI disabled.
So now the array can be found and looks great, but I can't mount it because nothing can tell what filesystem it is supposed to be. I believe it's ext3, but the system claims it's not; qtparted says it's "unknown". SO... that's the new problem. I think if I could get its filesystem type sorted out, it should be relatively smooth going... (err... I shouldn't even try to say that)... but I don't know how to repair its filesystem type.
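A few read-only ways to ask what is actually on the partition before repairing anything (a sketch; all three commands just read the first blocks and write nothing):

```shell
# Guess the format from the on-disk signatures:
file -s /dev/sda1 2>/dev/null || true
# Print TYPE="ext3" (etc.) if a known filesystem signature is found:
blkid /dev/sda1 2>/dev/null || true
# Dump only the ext2/ext3 superblock header, if one is readable:
dumpe2fs -h /dev/sda1 2>/dev/null || true

# For reference: the ext2/ext3 superblock starts 1024 bytes into the
# partition, and its magic number (0xEF53) sits 56 bytes into the superblock:
magic_offset=$((1024 + 56))
echo "ext2/3 magic expected at byte offset $magic_offset"
```

If none of these see an ext3 signature, that points back at the superblock itself being damaged rather than the type being "wrong", which is what the backup-superblock route is for.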