Linux - General: This Linux forum is for general Linux questions and discussion.
I have just recently installed RedHat Enterprise v4 on a Dell PowerEdge 1850. I am looking to attach a 2.5TB Fibre Channel RAID drive to it. I have gotten far enough that the system sees the RAID.
Code:
fdisk -l /dev/sdb
Disk /dev/sdb: 2500.5 GB, 2500509827072 bytes
255 heads, 63 sectors/track, 304003 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
The problem comes when I attempt to partition this RAID
Code:
fdisk /dev/sdb
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-36654, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-36654, default 36654):
I cannot create a partition greater than cylinder 36654.
Perhaps I am a total newbie, but I thought the limit was 8TB in RHEL 4.
Any help is greatly appreciated. I am probably missing something here... aren't I...
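For what it's worth, the odd 36654 figure looks consistent with fdisk doing its sector arithmetic in 32 bits, which is also the ceiling of the msdos/MBR partition table: 2^32 512-byte sectors, i.e. 2TiB. A quick sanity check using the geometry fdisk printed above:

```shell
# Total sectors on the array, from fdisk's geometry line
total_sectors=$(( 304003 * 16065 ))          # 4883808195 sectors, past 2^32
# What survives a 32-bit wraparound, expressed back in cylinders
leftover=$(( total_sectors % 4294967296 ))   # 2^32 = 4294967296
echo "$(( leftover / 16065 )) cylinders"     # 36653 full cylinders, i.e.
                                             # fdisk's rounded-up limit of 36654
```

So fdisk isn't enforcing an 8TB limit; its counter simply wrapped, which is why the alternatives below (GPT via parted, or LVM on the whole device) come up.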
Not an expert myself on Linux so just noting this as it may give you something to think about.
Here is what I see on my systems that have Fibre-attached Clariion arrays.
cat /proc/partitions shows all the partitions. We have 5 items there that were presented from the array to the host. We did NOT fdisk any of these from the array (only the internal raid disks). That is to say, if I do "fdisk -l" on these, they show a display similar to yours but then say the device "doesn't contain a valid partition table".
Despite this we just did the mkfs to the device itself then mounted it rather than any sub-partitions. I'm not sure if this was because we're using EMC's powerpath or just the way RAID to Linux works. If it is the latter you can try doing a mount.
If you need to subdivide it you may want to add the device into Logical Volume Manager (LVM) and create logical volumes (LVs - can be thought of as virtual partitions).
man lvm = LVM overview
man pvcreate = How to prepare a device for inclusion in a Volume Group (VG)
man vgcreate = How to create a VG.
man lvcreate = How to create a logical volume (LV).
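As a sketch of what that sequence looks like end to end - device and names here are hypothetical, adjust for your setup, and note that pvcreate destroys whatever is on the device:

```shell
pvcreate /dev/sdb                   # mark the whole RAID device as an LVM physical volume
vgcreate raidvg /dev/sdb            # create a volume group named "raidvg" from it
lvcreate -L 500G -n data1 raidvg    # carve out a 500GB logical volume (size is an example)
mkfs -t ext3 /dev/raidvg/data1      # the LV behaves like any other block device
mount /dev/raidvg/data1 /mnt/data1
```

You can later carve more LVs out of the remaining space in the VG, or grow/shrink the ones you have, which is the whole point of the exercise.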
“...fdisk doesn't understand GUID Partition Table (GPT) and it is not designed for large partitions. In particular case use more advanced GNU parted(8)...”
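To actually act on that advice with this array, the rough sequence would be the following - untested here, it destroys the existing label, and GPT also needs the kernel's EFI partition support, which the RHEL 4 kernel should have:

```shell
parted /dev/sdb mklabel gpt                     # replace the msdos label with GPT (wipes the table!)
parted /dev/sdb mkpart primary 0 1200000        # first ~1.2TB; units are MB in this parted version
parted /dev/sdb mkpart primary 1200000 2384672  # the rest, up to the 2384672 MB geometry shown later
parted /dev/sdb print
```

The partition boundaries above are made-up examples; the key part is the gpt label, which removes the 2TiB msdos ceiling.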
jlightner: Even on relatively small software raids in which the underlying partitions were created with fdisk, “fdisk -l” gives the "no partition table" warning when reporting the raid devices.
I’m sure that fdisk has lots of limitations that most of us never see, except for the 15 partition maximum I ran into the other day.
Last edited by WhatsHisName; 10-24-2005 at 11:29 AM.
That'll teach me not to review my notes in detail.
On looking back you are correct. We DID do fdisk on these partitions. We just set up a single partition on each. The largest was 300GB. However, using fdisk now I can't see any partitions, despite the fact they show up in /proc/partitions. We're using these for OCFS (Oracle Cluster Filesystem), so I'm not sure if that has anything to do with the way it appears, since they're currently mounted.
FYI: I don't see the comments about GPT in the fdisk man page on any of my servers (RHEL 3, RHL 9 or Debian). It's also not on the LinuxQuestions man pages. What distro did you see it on?
Thanks a bunch for your replies. I actually just tried mkfs on the device instead of a partition. And it actually works.
Code:
[root@localhost ~]# mkfs /dev/sdb
mke2fs 1.35 (28-Feb-2004)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
max_blocks 4294967295, rsv_groups = 131072, rsv_gdb = 878
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
305250304 inodes, 610476032 blocks
30523801 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
18631 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776
Writing inode tables: done
inode.i_blocks = 147512, i_size = 4243456
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 23 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
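Incidentally, on a filesystem this size those forced periodic checks take a very long time; per the message above, they can be disabled (at your own risk) with the tool it names:

```shell
tune2fs -c 0 -i 0 /dev/sdb   # 0 = never trigger fsck by mount count or by time interval
```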
I tried parted before this and it would accept a cylinder above the default value, but it wouldn't actually partition with the chosen value; it would fall back to the default.
So I will have to play around with this a bit, because I will need partitions...
What are the implications of doing this? It seems like a workaround.
Code:
(parted) select /dev/sdb
Using /dev/sdb
(parted) p
Disk geometry for /dev/sdb: 0.000-2384672.000 megabytes
Disk label type: loop
Minor Start End Filesystem Flags
1 0.000 2384672.000 ext2
Subliminal_2: There is a “List of partition utilities” in the Wikipedia Partition (computing) section that you could pick from: http://en.wikipedia.org/wiki/Partition_%28computing%29 (Anyone who has a different favorite should add it to the list.)
Since I already own a copy of it (actually, maybe 5-6 copies), I would try PartitionMagic8 next and see what happened. If you have access to the PM installation CD, you can boot the system from it and run PM8 that way. It really doesn’t matter how the partitions are (filesystem) formatted, since you will be overwriting them with mkfs, but you may want to use fdisk to change the partition types to something appropriate (raid auto detect?, type fd) when you are done.
QTparted run from a Knoppix disk would be a good idea, too.
The more I think about it, can you really partition a hardware raid device?
You can make it all one big happy filesystem or you can make it into an LVM that can be broken up and better managed, but can you really partition a hardware raid? It strikes me that if each drive were to be divided into smaller partitions, as we do in Linux software raid, then that would need to be done before creating the hardware raid array, and probably by utilities associated with the raid controller, not the OS.
For sure, I cannot partition /dev/md0 created with mdadm, but I can put a filesystem or an LVM on it.
For the “cheap” 3ware hardware raid controllers (and I use the word “cheap” very loosely), there seems to be no provision for anything smaller than the full drive. Of course, when you’re running a bunch of 36GB SCSI drives, dividing one up doesn’t make a lot of sense anyway. Instead, you would create a couple of different raids from the same controller to get some subdivision of the drives.
When you’re doing cheap raid with a few huge SATA or PATA drives, it might make some traditional sense to divide up each drive, but even then, you could let the LVM do the physical allocation of space, instead of dividing up each drive physically with partitions. It’s sure a lot easier to change your mind later on with an LVM than it is with physical partitioning.
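Changing your mind later with LVM really is just a couple of commands. A hypothetical example, assuming a volume group with free space and an ext3 logical volume named as below:

```shell
lvextend -L +50G /dev/vg00/lvhome   # grow the LV by 50GB out of the VG's free space
ext2online /dev/vg00/lvhome         # grow a mounted ext3 fs on RHEL 4; resize2fs if unmounted
```

Compare that with physically repartitioning a disk that already has data on it.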
So, back to my question: Can you really partition a hardware raid device?
Don't backpedal now, WhatsHisName - especially after I recanted my original statement
Actually in addition to the single partition I created on each of our Clariion array presented devices there was another one that I did partition into 3 small devices - we used those as raw devices (no mkfs) as they were required for the Oracle RAC installation we were doing.
For large systems, though, I suspect one would just want to use the entire raid set (seen as a single device within Linux) as a filesystem or within LVM. Of course there can be (as in my case) multiple raid sets presented, each appearing to be a separate "device" within Linux.
On Unix it is fairly common to have arrays, present the storage that way, and then use a volume manager tool to "slice" it up. It's even common to add multiple RAID sets into a single volume group and then slice it back up.
HP-UX has LVM natively, like Linux - I gather AIX has its own form of LVM, and Solaris can have the Solstice DiskSuite add-on - and of course there's Veritas Volume Manager (VxVM), which runs on various platforms including HP-UX and Solaris. VxVM is actually designed for full software RAID, so it is a little more robust than LVM, which essentially just concatenates disks.
When I first ran into LVM on HP-UX years ago it took me some time to see the benefit. I thought to myself, "Why combine everything together just to break it up again?" Now of course I realize that one can get both scalability and granularity out of volume managers that one can't get out of simple partitioning.
LVMs do give you a lot of flexibility, but people seem to be very resistant to using them on smaller systems.
I always had trouble guessing how big the partitions should be for the various directories. After a few months of usage, bad guesses would come back to haunt me and resizing the too-small partitions had become problematic. With LVMs, bad guesses are easy to fix.
After you stop and “play” with LVMs for a while on some empty disks, they become exceptionally easy to understand and to manage. And on a similar point, software raids set up with mdadm are so simple to do that the terse commands are actually difficult to understand at first. Both mdadm raids and LVMs are so simple to set up from the command line that I prefer creating them before doing the OS installation.
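For the mdadm side, creating a raid from the command line really is a one-liner (device names here are hypothetical, and the member partitions would typically be type fd as mentioned earlier):

```shell
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
cat /proc/mdstat    # watch the initial resync
# /dev/md0 can then be mkfs'ed directly, or handed to pvcreate for LVM
```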
I found that the OS was able to recognize /dev/sdb (mounted, of course) as a drive to write files to, but applications fail or don't recognize it at all.
So I am now trying to make partitions with LVM to see if this will help, but the first thing I try gives an error:
Code:
[root@machine]# pvcreate /dev/sdb
Failed to wipe new metadata area
/dev/sdb: Format-specific setup of physical volume failed.
Failed to setup physical volume "/dev/sdb"
/dev/sdb: close failed: Bad file descriptor
/dev/sdb: close failed: Bad file descriptor
/dev/sdb: close failed: Bad file descriptor
/dev/sdb: close failed: Bad file descriptor
/dev/sdb: close failed: Bad file descriptor
/dev/sdb: close failed: Bad file descriptor
............... on and on and on and on and on... you get the point
“...For whole disk devices only the partition table must be erased, which will effectively destroy all data on that disk. This can be done by zeroing the first sector with:
[root@machine ~]# dd if=/dev/zero of=/dev/sdb bs=512 count=8
8+0 records in
8+0 records out
[root@machine ~]#
[root@machine ~]# pvcreate -f --metadatacopies 2 /dev/sdb
Failed to wipe new metadata area
/dev/sdb: Format-specific setup of physical volume failed.
Failed to setup physical volume "/dev/sdb"
/dev/sdb: close failed: Bad file descriptor
................
Is it possible that the LSI drivers for the Fibre card aren't installed properly, and that this is why I get this error...? Just a thought.