04-19-2012, 07:46 PM   #1
kitek
Member
Registered: Apr 2005
Posts: 252
SCSI Questions


I have some Dell PowerEdge 2850s with SCSI drives; each has two. I have always configured them as RAID 1, or RAID 5 when there are more than three drives. I want to know a little more about the SCSI side of it. By setting the adapter to plain SCSI instead of RAID, what are the benefits besides speed? If a drive fails, can you simply pull it out, plug another in, and be good? Is SCSI redundant like regular RAID? I just want to make sure I understand this. If I need more space, can I just add a drive? All of these will run CentOS.

 
04-20-2012, 01:48 AM   #2
xeleema
Member
Registered: Aug 2005
Location: D.i.t.h.o, Texas
Distribution: Slackware 13.x, RHEL 3/5, Solaris 8-10 (SPARC), HP-UX 11.x (PA-RISC)
Posts: 988
Greetingz!

Quote:
Originally Posted by kitek
I have always configured them as RAID 1 or RAID 5....
When you say this, do you mean in the firmware of the controllers (hardware-based RAID), or in the OS itself (software-based RAID)?

Quote:
Originally Posted by kitek
By setting the adapter to plain SCSI instead of RAID, what are the benefits besides speed?
Any speed benefit depends entirely on what RAID type you're using (hardware vs. software, a difference that is usually negligible on modern hardware) and what level (RAID 0, RAID 1, RAID 5, RAID 0+1, RAID 10).


Quote:
Originally Posted by kitek
If a drive fails, can you simply pull it out, plug another in, and be good?
Depends on the controller and the server. Hot-pluggable is hot-pluggable. Normally, if the controller supports hot-plugging and the drives are in little removable caddies, you're good.
Conversely, if you have to open the case and take a screwdriver to a drive, you should power off the server first.
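If the mirror is Linux software RAID rather than the controller's firmware RAID, you also have to tell mdadm about the swap. Roughly like this (just a sketch; /dev/md0 and /dev/sdb1 are made-up names):

Code:
# Mark the failed member faulty and pull it out of the array
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
# Physically swap the drive, partition it to match, then re-add it
mdadm --manage /dev/md0 --add /dev/sdb1
# Watch the mirror rebuild
cat /proc/mdstat

With a hardware RAID controller, the rebuild usually kicks off on its own as soon as the new drive is seated.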

Quote:
Originally Posted by kitek
Is SCSI redundant like regular RAID?
SCSI is just a protocol that defines how devices interact on a bus (like IDE, SATA, or FC-AL). SCSI itself is *not* redundant; for that you need a RAID level other than RAID 0.
I think you'd find the Wikipedia page on RAID pretty handy.
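You can see for yourself that the kernel just enumerates whatever is sitting on the bus, redundant or not:

Code:
# List every device the kernel found on the SCSI bus
cat /proc/scsi/scsi
# On systems that have it installed, lsscsi shows the same more readably
lsscsi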

Quote:
Originally Posted by kitek
If I need more space, can I just add a drive?
Typically, it's not that easy. You'll need a logical volume manager of some sort (like Veritas Storage Foundation, if you want a commercial product with support).
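To give you an idea, growing onto an extra disk with LVM2 looks roughly like this (a sketch only; /dev/sdc1 and the vg00/data names are made up):

Code:
# Label the new disk (or RAID device) as an LVM physical volume
pvcreate /dev/sdc1
# Fold it into the existing volume group
vgextend vg00 /dev/sdc1
# Grow the logical volume into the new free space
lvextend -l +100%FREE /dev/vg00/data
# Grow the filesystem to match (resize2fs handles ext2/ext3)
resize2fs /dev/vg00/data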


Quote:
Originally Posted by kitek
All of these will run CentOS.
Ah! In this case, you'll want to read up on the mdadm command and take a look at the documentation for LVM2 (Logical Volume Manager).

Example: Here's a layout that is very common amongst servers.

Code:
Filesystem (<<< that thing you "df")
|
+--Logical Volume (via LVM2's lvcreate / lvdisplay commands)
    |
    +--Volume Group (via LVM2's vgcreate / vgdisplay commands)
        |
        +--Physical Volume (via LVM2's pvcreate / pvdisplay commands)
            |
            +--Partition (via fdisk / sfdisk / cfdisk / parted / gparted)
                |
                +--Hard Drive (or RAID device built using 'mdadm')
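Built from the bottom up, that stack would look something like this (a sketch only; every device and volume name here is hypothetical):

Code:
# Mirror two partitions with software RAID 1
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# Layer LVM2 on top of the mirror
pvcreate /dev/md0
vgcreate vg00 /dev/md0
lvcreate -n data -L 60G vg00
# Put a filesystem on the logical volume and mount it
mkfs.ext3 /dev/vg00/data
mount /dev/vg00/data /srv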

 
04-20-2012, 02:25 AM   #3
kitek
Member (Original Poster)
Registered: Apr 2005
Posts: 252
Thanks for the great response. Let me answer these for you.

Quote:
When you say this, do you mean in the firmware of the controllers (hardware-based RAID), or in the OS itself (software-based RAID)?
In the firmware of the BIOS and the controllers.

Quote:
Depends on the controller and the server. Hot-pluggable is hot-pluggable. Normally, if the controller supports hot-plugging and the drives are in little removable caddies, you're good.
Conversely, if you have to open the case and take a screwdriver to a drive, you should power off the server first.
Yes, they are in caddies: Dell PowerEdge 2850 and 2600.

Quote:
SCSI is just a protocol that defines how devices interact on a bus (like IDE, SATA, or FC-AL). SCSI itself is *not* redundant; for that you need a RAID level other than RAID 0.
I think you'd find the Wikipedia page on RAID pretty handy.
I had looked at this; there is a lot of good information there. As I read it, RAID sets up an array of disks so that several smaller drives look like one disk. That seemed like RAID 1 or RAID 5 to me, which is what made me think it might be redundant as well, and that you could maybe add space...?

Quote:
Ah! In this case, you'll want to read up on the mdadm command and take a look at the documentation for LVM2 (Logical Volume Manager).

Example: Here's a layout that is very common amongst servers.

Code:
Filesystem (<<< that thing you "df")
|
+--Logical Volume (via LVM2's lvcreate / lvdisplay commands)
    |
    +--Volume Group (via LVM2's vgcreate / vgdisplay commands)
        |
        +--Physical Volume (via LVM2's pvcreate / pvdisplay commands)
            |
            +--Partition (via fdisk / sfdisk / cfdisk / parted / gparted)
                |
                +--Hard Drive (or RAID device built using 'mdadm')

I do need to understand Linux partitioning better; Windows servers are no problem.


So it seems I am better off in my case just going with RAID 1, since I would have some redundancy. When I installed CentOS on one machine with two SCSI drives set up in the BIOS, it presented them as one large drive. I built another machine of the exact same model with the same BIOS settings, and it sees the drives as two; I understand why.

One thing I still don't grasp: why have a server that can hold eight small hot-swappable SCSI drives running Linux or Windows when, if any one of them fails, you're done without a tape backup or something? And if you had eight IBM drives, all the same model, with the same flaw, it seems there is more to go wrong, potentially all at the same time. I just don't want to lose the data on these machines; I want to configure them for the long run, and I do not plan to use tapes. In your opinion, should I simply set up RAID 1 in the BIOS for this setup?

Let me add to this: the system BIOS allows me to configure the controller for either RAID or SCSI. Does this mean that when it operates in plain SCSI mode, I cannot use a RAID configuration?

 
  

