Installing openSUSE 11 on RAID 0
Hey there. I got stuck in a weird situation. First of all, here are my specs.
CPU: AMD AM2, 64-bit
RAM: 8 GB DDR2
RAID controller card (PCI-E): RocketRAID 3120, true hardware RAID
HDD: 2x 320 GB Seagate SATA2, configured as RAID 0 (striping)
HDD: 1x 750 GB SATA2, connected to the motherboard's SATA2 port, used for backup
The problem: So I start the installation of openSUSE 11. No problems, the RAID gets recognized, and I leave the default partitions provided by openSUSE, which are:
2 GB for swap, 20 GB for / (root), and the rest, which is about 500 GB, for /home.
So sda = SCSI hard drive (RAID 0 setup)
sda1 = 2 GB swap
sda2 = 20 GB as / (ext3)
sda3 = 530 GB as /home (ext3)
Then I start the install process. The partitions are created, and:
sda1 gets formatted as swap
sda2 gets formatted as /
but when it gets to sda3, this is what happens:
Formatting sda3 as /home with ext3
_____________ 17% and that's where it stops.
So while formatting the /home partition with ext3, it stops at 17%. Then there is nothing I can do; everything is frozen.
Now if I change it from ext3 to XFS, it will format all the way, but when it comes to copying the X11 images (4), it stalls again at about 84%.
I thought there was something wrong with the setup, so I formatted the RAID 0 array with fdisk from a floppy disk, and it never stalled; it went all the way through. So it only stops when formatting the /home partition for openSUSE.
I also installed openSUSE 11 onto an IDE HDD, so there is nothing wrong with the DVD (installation source).
Has anyone come across anything like this before? Any suggestions on how I would go about getting past this?
Any help would be greatly appreciated.
530 GB is pretty big. Maybe the mkfs algorithm to calculate the cluster size and the number of inodes fails to provide a workable solution. You could try stipulating those yourself when you manually create the file system. Finding a good balance might be challenging with a partition that large. More inodes will use up disk space before you get any data stored on the disk. Larger cluster size will waste disk space on each file. Lots of inodes with large cluster size is the worst solution but that may be what you end up using to make the entire partition accessible.
You could also try booting a different live CD just to format the 530 GB partition.
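To make the suggestion above concrete, here is a sketch of specifying the block size and bytes-per-inode ratio manually with mkfs.ext3. It practices on a small file-backed image first rather than the real /dev/sda3; the image path and the sizes are just example values, not from this thread:

```shell
# Practice on a small file-backed image instead of the real partition
# (path and sizes are examples only).
dd if=/dev/zero of=/tmp/ext3-test.img bs=1M count=64

# -b sets the block (cluster) size in bytes, -i sets the bytes-per-inode
# ratio (a larger ratio means fewer inodes and more usable space);
# -F lets mkfs operate on a plain file.
mkfs.ext3 -F -b 4096 -i 16384 /tmp/ext3-test.img
```

On the real array the equivalent would be something like `mkfs.ext3 -b 4096 -i 16384 /dev/sda3`, adjusting the -b and -i values to taste for a 530 GB partition.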
Thank you, stress_junkie. I would have to say you are absolutely right. I booted into the rescue system, then used mkfs.ext3 to create a filesystem. I did an export MKE2FS_SYNC=10 first, then mkfs -t ext3 /dev/sda; it went just fine until it got to "writing inode table: 3820/4768", and that's where it froze on me. I'm not exactly sure how that works, but I will try changing the inode size. Is there a howto on setting up such a filesystem? I tried Google, but haven't really found much.
Also, as I mentioned in my first post, if I format with XFS it will get through the format, but as soon as it starts copying the X11 images right after the formatting, it stalls again. Any ideas why that happens?
Use JFS ... it will not waste space like ext3. Also, make one or two more partitions.
Thank you for all your help so far. Greatly appreciated.
OK, so I tried a few things. I managed to get hold of an openSUSE LiveCD and booted from there. I also changed the size of the RAID: I created 3 RAID arrays and used the smallest one, which was 50 GB, for Linux. I tried installing, and the same thing happened: it stops while copying the root filesystem.
So I changed the filesystem type from ext3 to JFS, then from JFS to XFS, then from XFS to ReiserFS, and then it got through the copying part. But when it got to writing the boot sector to the disk, it froze again. So I just connected a new SATA2 drive to the computer and installed Linux onto that.
Then I booted into Linux and formatted the 50 GB RAID array as XFS, since it's the fastest in my experience. I mounted it, copied a 600 MB folder from a CD to my home folder, and from there tried to copy it to the XFS-formatted 50 GB RAID array. It transferred in the blink of an eye, but as soon as the transfer was done, everything froze completely. Since I have to restart and can't see the error that occurred, is there anywhere that error gets logged? Or is there a command-line tool I could use to see what error it gives me? I used cp -R but got nothing.
Any ideas would be great. Thank you.
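On the question of where such errors get logged: kernel I/O errors and SCSI/SATA resets go to the kernel ring buffer, and openSUSE also writes them to /var/log/messages. A quick way to look after rebooting from a freeze (the grep pattern is just an example; "hpt" should match messages from the HighPoint driver):

```shell
# Recent kernel messages -- I/O errors, bus resets, driver complaints.
dmesg | tail -n 50

# Persistent syslog on openSUSE; filter for the drives or the controller.
# The path may differ on other distros.
grep -iE 'sda|sdc|hpt|i/o error' /var/log/messages 2>/dev/null | tail -n 20
```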
So I decided to go all the way back. I went into the BIOS and changed the RAID 0 array to RAID 1 (mirroring). Then I booted into Linux, and it was recognized as /dev/sdc, so: one primary partition, /dev/sdc1, formatted as XFS and mounted as /media/HDD.
Then I copied the same 600 MB folder onto it. It worked just fine: nothing froze, and I was able to access the files off of it. But after a few minutes the drive just went away. I could still see /media/HDD and the folder in it, but it gave me no response. I tried to delete the files, but nothing. This time the system did not freeze; only the drive gave no response. So I went back to the partition tool to see if I could format it again, but to my surprise the drive was not present anymore. All I had was /dev/sda and /dev/sdb; /dev/sdc was gone, almost as if it were a removable drive that had been unplugged, only it isn't a removable drive.
Questions: 1. Is it possible for the PCI-E RocketRAID 3120 controller to just power off the 2 drives connected to it?
2. What should I test/try to see what exactly is happening to the drives?
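For question 2, one thing worth trying from the command line is a SMART health check, assuming smartmontools is installed (the device name is an example; with the drives behind the HighPoint card, smartctl may need the drives moved to motherboard SATA ports to see them):

```shell
# SMART health summary, attributes, and error log for a drive (run as root).
# "|| true" keeps the sketch going if SMART is unsupported on this device.
smartctl -a /dev/sda || true

# A read-only surface scan is also an option -- non-destructive, but slow
# on a 320 GB drive:
#   badblocks -sv /dev/sda
```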
OK, I do have a solution for this problem; it has been quite a while since this thread was started.

First, I separated the drives from the RAID controller and connected each to a SATA port on my motherboard. Then I booted from the Ultimate Boot CD, loaded TESTDISK.EXE, and scanned the hard drives. To my surprise, there were all kinds of partitions on there, apparently from way before: at least 20 XFS partitions, and an HFS+ partition (an Apple filesystem) that I was more than certain I did not create. So I decided to erase the hard drives. I ran MHDD32.EXE from the Ultimate Boot CD and issued the ERASE command. It started erasing the hard drive, and it took about 2 hours or so on a 320 GB SATA2 hard drive.

Meanwhile, I downloaded the new firmware upgrade from the HighPoint website and copied it to a floppy together with the HPT update tool. As soon as both hard drives were erased, I reconnected them to the HighPoint 3120 controller, inserted the floppy, changed the boot drive from the CD-ROM to the A: drive, and ran the HPT firmware/BIOS updater. Then I removed all media, inserted the SUSE 11.0 install DVD, rebooted the computer, and configured the hard drives connected to the RAID controller. The installation began and ran flawlessly. No freezing, no nothing.

I'm not entirely sure whether this had to do with the new firmware or the freshly erased hard drives, but it worked out, and it was one of the two. So thank you for all of your help; it was greatly appreciated.
Thank you.