Old 10-17-2010, 11:55 PM   #1
deity_me
LQ Newbie
 
Registered: Sep 2003
Posts: 28

Rep: Reputation: 0
Drives configured as JBOD but Linux still sees the disks as separate


So I configured a couple of hard drives as JBOD in the BIOS. The boot-up screen says they're in a JBOD configuration.

However, when I boot into Ubuntu 10.10, I still see them as two separate disks, and I mount and write to each of them separately.

I've never used JBOD before, but I assumed it would be like RAID, where the disks show up as one volume.
Is my assumption wrong?

If not, how do I fix this problem?

Thanks
 
Old 10-18-2010, 03:58 AM   #2
Timothy Miller
Moderator
 
Registered: Feb 2003
Location: Arizona, USA
Distribution: Debian, EndeavourOS, OpenSUSE, KDE Neon
Posts: 4,003
Blog Entries: 26

Rep: Reputation: 1521
No, JBOD means Just a Bunch Of Disks. In other words, it should see them separately. The only way it SHOULD see them as one volume is if they were configured as RAID. Ubuntu is seeing them correctly for how you have them configured.
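
If you want to confirm what the kernel actually sees, a couple of standard commands will list the drives individually (output varies with your hardware):
Code:
# list every block device the kernel has detected
cat /proc/partitions
# show the partition table of every drive (run as root)
fdisk -l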
 
Old 10-18-2010, 04:20 AM   #3
bulls_i3
LQ Newbie
 
Registered: Jan 2010
Posts: 20

Rep: Reputation: 2
I'm not sure how you can configure your drives as JBOD in the BIOS. Could you elaborate on how you did that?

JBOD isn't a specific file-system or technology. It refers to spanning several disk drives with a single file-system, without redundancy, and there's more than one way of doing it. This is a filesystem feature and, AFAIK, has nothing to do with the BIOS. In Linux, the standard filesystem for creating a JBOD, again AFAIK, is AUFS. If you have your disks set up with AUFS, you should see them mounted as aufs filesystems in /proc/mounts.
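
For example, an aufs union mount looks roughly like this (just a sketch: it assumes the aufs module is available, and the device and branch paths are made up):
Code:
# mount each disk's own filesystem first
mount /dev/sdb1 /mnt/disk1
mount /dev/sdc1 /mnt/disk2
# union the two branches into a single tree; create=mfs sends
# new files to whichever branch has the most free space
mount -t aufs -o br=/mnt/disk1=rw:/mnt/disk2=rw,create=mfs none /mnt/pool
# the union then shows up in /proc/mounts with type "aufs"
grep aufs /proc/mounts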
 
Old 10-18-2010, 08:26 AM   #4
Skaperen
Senior Member
 
Registered: May 2009
Location: center of singularity
Distribution: Xubuntu, Ubuntu, Slackware, Amazon Linux, OpenBSD, LFS (on Sparc_32 and i386)
Posts: 2,681
Blog Entries: 31

Rep: Reputation: 176
Quote:
Originally Posted by bulls_i3 View Post
I'm not sure how you can configure your drives as JBOD in the BIOS. Could you elaborate on how you did that?
Most RAID controllers provide a basic level of configuration as part of the BIOS. For example, with the 3ware 9650SE, you press Alt+3 while the controller is initializing and it puts you into the controller's configuration menu, where you can configure it manually. Many others do the same.

Unfortunately, the 9xxx series RAID controllers do not support JBOD. You can configure separate drives, but the controller still writes a configuration prefix to each drive, which makes those drives incompatible for exchange with machines that use other controllers.

Quote:
Originally Posted by bulls_i3 View Post
JBOD isn't a specific file-system or technology. It refers to spanning several disk drives with a single file-system, without redundancy, and there's more than one way of doing it. This is a filesystem feature and, AFAIK, has nothing to do with the BIOS. In Linux, the standard filesystem for creating a JBOD, again AFAIK, is AUFS. If you have your disks set up with AUFS, you should see them mounted as aufs filesystems in /proc/mounts.
JBOD means "Just a Bunch Of Disks". It means making each disk (at least the ones configured as JBOD ... you don't have to make them all JBOD if your controller supports mixing) show up "as itself". If you have 4 drives and make them all JBOD, you'd probably have /dev/sda, /dev/sdb, /dev/sdc, and /dev/sdd. The OS can then do whatever it wishes with these individual drives, such as partitioning them (typically done, but not required) and formatting them (typically done, but not required) with any filesystem the OS supports.
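
For example, preparing one such drive might look like this (the device name is hypothetical):
Code:
# create a partition on the drive, e.g. /dev/sdb1
fdisk /dev/sdb
# format it with any filesystem the OS supports, then mount it
mkfs.ext4 /dev/sdb1
mount /dev/sdb1 /mnt/data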

JBOD is a specific configuration some RAID controllers provide. AUFS has nothing to do with it.

RAID presents two or more physical disks as a single unit (or multiple instances of this where desired). Originally it was intended for redundancy, hence the name "Redundant Array of Independent (or Individual, or Inexpensive) Drives". The simplest example is mirroring (level 1). Pure striping is considered RAID level 0, though it has no redundancy.
 
Old 10-18-2010, 12:51 PM   #5
bulls_i3
LQ Newbie
 
Registered: Jan 2010
Posts: 20

Rep: Reputation: 2
Quote:
Originally Posted by Skaperen View Post
JBOD means "Just a Bunch Of Disks". It means making each disk ... show up "as itself". ... JBOD is a specific configuration some RAID controllers provide. AUFS has nothing to do with it.

You are, for the most part, correct. But as I said, there's more than one way to implement "just a bunch of disks", and RAIDing them is definitely NOT the easy way to do it. You may RAID0 them, use AUFS or UnionFS, or others.

The difference in this case is how the data spans the disks. With RAID0, your data is split at the block or sector level (each file may have parts of it on different physical disks), whereas with AUFS it is split at the file level (every file lives entirely on one disk, but files in the same folder may come from different disks).
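
To make that concrete, a software RAID0 stripe would be built something like this (a sketch with example device names; mdadm is the usual Linux software-RAID tool):
Code:
# stripe two whole disks into a single block device
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
# one filesystem across both disks, split at the block level
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/pool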

IMHO, AUFS is the easier choice to set up, as there is no RAID controller involved. It's easier to maintain, since you don't have to rebuild the array when you add a disk, and it is "safer" (compared to RAID0): corruption on one of your disks with RAID0 will render both drives useless, while corruption on one of your disks with AUFS will still let you get to your files, although you will be missing about half of them. Also, RAID0 effectively requires the disks to be of identical size (any difference in size is wasted); this is not an issue for AUFS.

If you want to access all your disks as one big disk, without redundancy, without a RAID controller, and without worrying about varying disk sizes, then RAID is simply not an option. You're left with AUFS or UnionFS, in which case your disks will come up as separate devices in Linux (unlike RAID), but they will all be united under the same mount point with FS type "aufs".

I have never personally heard of RAID0 referred to as a kind of "JBOD". "Just a bunch of disks" implies the disks are not related or dependent on each other, which is by definition NOT true for any RAID configuration.

Last edited by bulls_i3; 10-18-2010 at 01:00 PM.
 
Old 10-18-2010, 01:24 PM   #6
TobiSGD
Moderator
 
Registered: Dec 2009
Location: Germany
Distribution: Whatever fits the task best
Posts: 17,148
Blog Entries: 2

Rep: Reputation: 4886
JBOD is in some cases a misunderstood term, as Wikipedia notes:
Quote:
A single drive is referred to as a SLED (Single Large Expensive Drive), to contrast with RAID, while an array of drives without any additional control (accessed simply as independent drives) is referred to as (a) JBOD (Just a Bunch Of Disks). Simple concatenation mode is referred to as SPAN, BIG, or sometimes as JBOD, though this latter is proscribed in careful use, due to ambiguity with the alternative meaning just cited.
 
Old 10-18-2010, 03:25 PM   #7
Skaperen
Senior Member
 
Registered: May 2009
Location: center of singularity
Distribution: Xubuntu, Ubuntu, Slackware, Amazon Linux, OpenBSD, LFS (on Sparc_32 and i386)
Posts: 2,681
Blog Entries: 31

Rep: Reputation: 176
Quote:
Originally Posted by bulls_i3 View Post
You are, for the most part, correct. But as I said, there's more than one way to implement "just a bunch of disks", and RAIDing them is definitely NOT the easy way to do it. You may RAID0 them, use AUFS or UnionFS, or others.
If it's JBOD, it's not RAID. It might be done by a controller that can do RAID, but JBOD is a case of turning RAID off, even if only for selected drives (out of a larger set of them). If a controller does not support RAID at all, then what you have is JBOD without the nomenclature.

Quote:
Originally Posted by bulls_i3 View Post
The difference in this case is how the data spans the disks. With RAID0, your data is split at the block or sector level (each file may have parts of it on different physical disks), whereas with AUFS it is split at the file level (every file lives entirely on one disk, but files in the same folder may come from different disks).
AUFS, and a few others, do support spanning multiple devices. These can be any block device: you can span across partitions, whole devices, or even RAID arrays themselves (which appear to the FS layer as one device).

One could use software RAID and then put AUFS on top of that RAID array, but given AUFS's benefits, that's an undesirable way to do it. If you have the disks as independent disks and want to use AUFS, leaving them independent is generally the way to go; let AUFS deal with them intelligently.
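
For completeness, plain concatenation (the SPAN/BIG mode TobiSGD quoted) can also be done in software with mdadm's linear mode; a sketch with example device names:
Code:
# concatenate (not stripe) the disks end-to-end into one device
mdadm --create /dev/md1 --level=linear --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md1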

Quote:
Originally Posted by bulls_i3 View Post
IMHO, AUFS is the easier choice to set up, as there is no RAID controller involved. It's easier to maintain, since you don't have to rebuild the array when you add a disk, and it is "safer" (compared to RAID0): corruption on one of your disks with RAID0 will render both drives useless, while corruption on one of your disks with AUFS will still let you get to your files, although you will be missing about half of them. Also, RAID0 effectively requires the disks to be of identical size (any difference in size is wasted); this is not an issue for AUFS.
AUFS can certainly be justified for this. But just because someone configures a RAID controller into JBOD mode (even if they misunderstood what JBOD meant, as the OP did) does not mean they automatically get AUFS. They could have "Just a Bunch of File Systems" (JBFS) of any type (example: ext3). My server at home is configured that way so I can explicitly manage what goes where.

Quote:
Originally Posted by bulls_i3 View Post
If you want to access all your disks as one big disk, without redundancy, without a RAID controller, and without worrying about varying disk sizes, then RAID is simply not an option. You're left with AUFS or UnionFS, in which case your disks will come up as separate devices in Linux (unlike RAID), but they will all be united under the same mount point with FS type "aufs".
I've been using RAID because I want redundancy. Drives fail. As long as they don't fail too close together in time, I won't lose anything.

What I'd like is a file system that can span drives and also do redundancy. A few of these now exist. Being able to do N+1 redundancy, so I don't have to use N times the space, would be a nice trick. RAID5 and RAID6 do this ... with a "write penalty" in performance (rather severe for arrays with a small number of drives).
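
Btrfs is one example headed in that direction (still experimental as of this writing); a hypothetical two-disk setup with mirrored data would be:
Code:
# one filesystem spanning two disks, data and metadata mirrored
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
# mounting any member device brings up the whole set
mount /dev/sdb /mnt/pool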

Quote:
Originally Posted by bulls_i3 View Post
I have never personally heard of RAID0 referred to as a kind of "JBOD". "Just a bunch of disks" implies the disks are not related or dependent on each other, which is by definition NOT true for any RAID configuration.
Right, although many RAID controllers can do it. It's just terminology, more or less equivalent to "revert to behaving like a non-RAID controller, at least for these selected drives".

But having JBOD doesn't imply having AUFS. If you need a large aggregate filesystem instead of a JBFS, then AUFS or UnionFS could be the way to go.
 
Old 10-18-2010, 09:36 PM   #8
deity_me
LQ Newbie
 
Registered: Sep 2003
Posts: 28

Original Poster
Rep: Reputation: 0
Okay, so that's a lot of information to take in.

So if JBOD doesn't let me see the disks together as a single drive, what is the difference between having them configured as JBOD and having them configured normally? Right now, having them in JBOD looks and feels just like when they were not in JBOD.

Thanks
 
Old 10-19-2010, 08:19 AM   #9
Skaperen
Senior Member
 
Registered: May 2009
Location: center of singularity
Distribution: Xubuntu, Ubuntu, Slackware, Amazon Linux, OpenBSD, LFS (on Sparc_32 and i386)
Posts: 2,681
Blog Entries: 31

Rep: Reputation: 176
Quote:
Originally Posted by deity_me View Post
Okay, so that's a lot of information to take in.

So if JBOD doesn't let me see the disks together as a single drive, what is the difference between having them configured as JBOD and having them configured normally? Right now, having them in JBOD looks and feels just like when they were not in JBOD.

Thanks
JBOD is just a term used to describe NOT putting independent disks together into a larger "virtual" drive. When configuring a RAID controller that supports this (not all do), JBOD is a choice in lieu of one of the RAID levels.

RAID meant redundancy, but there are two schemes that have none. Striping without redundancy got the designation "RAID level 0". They should have dropped the "R" for this, but they didn't.

Then there is an even lesser scheme, which is to not organize the disks into an array at all and just leave them as they are. Note that this decision can usually be made on a disk-by-disk basis. When these drives are presented to the OS, they are presented as "just a bunch of disks" rather than as one big disk array. The term JBOD stuck.

To understand how this applies, one must think of it as what gets presented to the OS. If there are 4 real disks, and the OS sees 4 disks with no reserved space taken from them, it is JBOD. Think of JBOD as "unRAID".

RAID 0 is striping without redundancy making many smaller disks appear as one big disk.

RAID 1 is mirroring between disks where one disk is an exact replica of another. Usually this is just 2 disks, but conceptually it can go higher.

RAID 2 is bit level striping with Hamming code parity (usually the 7,4 code, the simplest Hamming code). This level is not used for disks any more.

RAID 3 is byte-level striping. It is rarely used (I've never seen it used or supported by any controller).

RAID 4 is block level striping with a specific disk used for parity. It is rarely used because of the bottleneck on the parity disk. It can function and rebuild when one drive fails (rebuilding after the failed drive is replaced).

RAID 5 is like RAID 4 but with the parity distributed so that different block groups have their parity on different drives. This allows the workload for writing to be statistically distributed better than RAID 4. There are at least 4 different variations on how to distribute the parity and the ordering of blocks around the disks (I've seen 4, but cannot rule out more). There remains a write performance penalty, but it is not as bad as in RAID 4.

RAID 6 is dual parity, with the parity distributed as in RAID 5. Its write performance penalty is worse than RAID 5's, but it can function and rebuild when TWO drives have failed.

RAID 10 is a stripe of mirrors. You start with a RAID 1 mirror of 2 drives. Then you have another RAID 1 mirror of 2 more drives. Repeat as desired. Then stripe all of these mirror sets together. If a drive fails, its mirror set can still function and rebuild. If a drive in a different mirror set fails at the same time, the array still functions and rebuilds. The rebuilds are the duty of the individual mirror sets and can be done in parallel, independently. One or more drives can fail and function is maintained, as long as the failure combination does not take out an entire mirror set.
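
In Linux software RAID, that layout can be had with md's single "raid10" level (a sketch with example device names; md implements the equivalent of the nested layout internally):
Code:
# four disks -> mirrored pairs, striped together
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde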

RAID 01 is the reverse of RAID 10 by being a mirror of stripes. The number of combinations of two drive failures that can still work is fewer than for RAID 10.

RAID 50 is two or more RAID 5 arrays striped together. Each RAID 5 subunit can handle 1 drive failure, limiting multiple drive failures to separate RAID 5 units.

RAID 60 is like RAID 50 but with the underlying units being RAID 6 instead of RAID 5, so 2 drives can fail in each RAID 6 subunit.

If the controller is doing nested RAID (e.g. 10, 50, or 60) and the OS then stripes multiple controllers together in software, append an extra 0 to the level number: e.g. 100, 500, and 600. For example, if you have 3 sets of RAID 5 striped together as RAID 50 by a RAID controller, and do exactly the same thing on a 2nd RAID controller, and stripe them together in the OS to make one really big logical drive, then you would have RAID 500.

A number of other combinations are also possible, and the "standards" are really fuzzy (practically non-existent) at this point.

If you just want all the disks to be themselves as a non-RAID multi-drive controller would do, that's JBOD.
 
1 member found this post helpful.
Old 06-05-2012, 11:20 AM   #10
itz4vj
LQ Newbie
 
Registered: May 2012
Posts: 8
Blog Entries: 1

Rep: Reputation: Disabled
Could someone please help with a quick clarification on JBOD?

I have a bunch of disks assigned to a Linux server; the disks are assigned individually, non-RAIDed.
In the OS, I grouped them into a single VG and created a single Logical Volume.

So, is this considered a JBOD configuration?
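
Roughly what I did (the device names and the VG/LV names here are examples, not my real ones):
Code:
# mark the individual disks as LVM physical volumes
pvcreate /dev/sdb /dev/sdc /dev/sdd
# group them into one volume group
vgcreate datavg /dev/sdb /dev/sdc /dev/sdd
# create one logical volume concatenated across all of them
lvcreate -l 100%FREE -n datalv datavg
mkfs.ext4 /dev/datavg/datalv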

Any info would be helpful. Thanks.
 
  

