Old 10-23-2005, 04:39 PM   #1
Subliminal_2
LQ Newbie
 
Registered: Oct 2005
Posts: 8

Rep: Reputation: 0
Disk Limitations? - Help with fdisk


Hi there,

I have just recently installed RedHat Enterprise v4 on a Dell PowerEdge 1850. I am looking to attach a 2.5TB Fibre Channel RAID drive to it. I have gotten far enough that the system sees the RAID:

Code:
fdisk -l /dev/sdb 
Disk /dev/sdb: 2500.5 GB, 2500509827072 bytes
255 heads, 63 sectors/track, 304003 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
The problem comes when I attempt to partition this RAID:
Code:
fdisk /dev/sdb
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p 
Partition number (1-4): 1
First cylinder (1-36654, default 1): 
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-36654, default 36654):
I cannot create a partition greater than cylinder 36654.

Perhaps I am a total newbie, but I thought the limit was 8TB in RHEL 4.

Any help is greatly appreciated. I am probably missing something here... aren't I...

Thanks,
Subliminal
 
Old 10-24-2005, 10:54 AM   #2
MensaWater
LQ Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 7,831
Blog Entries: 15

Rep: Reputation: 1669
I'm not an expert on Linux myself, so I'm just noting this as it may give you something to think about.

Here is what I see on my systems that have Fibre-attached Clariion arrays.

cat /proc/partitions shows all the partitions. We have 5 items there that were presented from the array to the host. We did NOT fdisk any of these from the array (only the internal raid disks). That is to say, if I do "fdisk -l" on these, they show a display similar to yours but then say the device "doesn't contain a valid partition table".

Despite this, we just ran mkfs on the device itself and mounted that, rather than any sub-partitions. I'm not sure if this was because we're using EMC's PowerPath or if it's just the way RAID presents to Linux. If it is the latter, you can try doing a mount.

If you need to subdivide it, you may want to add the device into the Logical Volume Manager (LVM) and create logical volumes (LVs - these can be thought of as virtual partitions). A rough sketch of the full sequence follows the man page pointers below.

man lvm = LVM overview
man pvcreate = How to prepare a device for inclusion in a Volume Group (VG)
man vgcreate = How to create a VG.
man lvcreate = How to create a logical volume (LV).
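
Roughly, the whole sequence would look like this (just a sketch - the volume group name "raidvg", the LV name "data1" and the sizes are made-up examples, so adjust for your setup):

Code:
# prepare the whole device as an LVM physical volume
pvcreate /dev/sdb

# create a volume group from it
vgcreate raidvg /dev/sdb

# carve out a logical volume - the 500G is arbitrary
lvcreate -L 500G -n data1 raidvg

# put a filesystem on the LV and mount it
mkfs.ext3 /dev/raidvg/data1
mount /dev/raidvg/data1 /RAID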
 
Old 10-24-2005, 11:17 AM   #3
WhatsHisName
Senior Member
 
Registered: Oct 2003
Location: /earth/usa/nj (UTC-5)
Distribution: RHEL, AltimaLinux, Rocky
Posts: 1,151

Rep: Reputation: 46
Subliminal_2: Have you tried parted or QTparted?

From the fdisk man page:

“...fdisk doesn't understand GUID Partition Table (GPT) and it is not designed for large partitions. In particular case use more advanced GNU parted(8)...”
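
An untested sketch of what that might look like (the mkpart syntax varies a bit between parted versions, and the end value is just your disk size in megabytes, taken from the byte count in your fdisk output):

Code:
# parted /dev/sdb
(parted) mklabel gpt
(parted) mkpart primary ext2 0.000 2384672.000
(parted) print

A GPT disk label sidesteps the 2TB ceiling of the old msdos partition table, which is what fdisk writes.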
 
Old 10-24-2005, 11:28 AM   #4
WhatsHisName
Senior Member
 
Registered: Oct 2003
Location: /earth/usa/nj (UTC-5)
Distribution: RHEL, AltimaLinux, Rocky
Posts: 1,151

Rep: Reputation: 46
jlightner: Even on relatively small software raids in which the underlying partitions were created with fdisk, “fdisk -l” gives the "no partition table" warning when reporting the raid devices.

I’m sure that fdisk has lots of limitations that most of us never see, except for the 15 partition maximum I ran into the other day.

Last edited by WhatsHisName; 10-24-2005 at 11:29 AM.
 
Old 10-24-2005, 11:58 AM   #5
MensaWater
LQ Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 7,831
Blog Entries: 15

Rep: Reputation: 1669
That'll teach me not to review my notes in detail.

On looking back, you are correct. We DID do fdisk on these partitions; we just set up a single partition on each. The largest was 300 GB. However, using fdisk now I can't see any partitions, despite the fact that they show up in /proc/partitions. We're using these for OCFS (Oracle Cluster Filesystem), so I'm not sure if that has anything to do with how they appear, since they're currently mounted.

FYI: I don't see the comments about GPT in the fdisk man page on any of my servers (RHEL 3, RHL 9 or Debian). It's also not in the LinuxQuestions man pages. What distro did you see it on?
 
Old 10-24-2005, 12:07 PM   #6
Subliminal_2
LQ Newbie
 
Registered: Oct 2005
Posts: 8

Original Poster
Rep: Reputation: 0
Hey guys.

Thanks a bunch for your replies. I just tried mkfs on the device instead of a partition - and it actually works.
Code:
[root@localhost ~]# mkfs /dev/sdb 
mke2fs 1.35 (28-Feb-2004)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y    
max_blocks 4294967295, rsv_groups = 131072, rsv_gdb = 878
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
305250304 inodes, 610476032 blocks
30523801 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
18631 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks: 
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 
        102400000, 214990848, 512000000, 550731776

Writing inode tables: done                            
inode.i_blocks = 147512, i_size = 4243456
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 23 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
Code:
[root@localhost ~]# df -h 
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       66G  1.5G   61G   3% /
/dev/sda1              99M   14M   80M  15% /boot
none                  2.0G     0  2.0G   0% /dev/shm
/dev/sdb              2.3T   73M  2.2T   1% /RAID
I tried parted before this; it would accept a cylinder above the default value, but it wouldn't actually partition with the chosen value - it just fell back to the default.

So I will have to play around with this a bit, because I will need partitions...

What are the implications of doing this? It seems like a workaround.

Code:
(parted) select /dev/sdb
Using /dev/sdb
(parted) p
Disk geometry for /dev/sdb: 0.000-2384672.000 megabytes
Disk label type: loop
Minor    Start       End     Filesystem  Flags
1          0.000 2384672.000  ext2
What is "loop"???

Thanks again!!!
 
Old 10-24-2005, 12:28 PM   #7
WhatsHisName
Senior Member
 
Registered: Oct 2003
Location: /earth/usa/nj (UTC-5)
Distribution: RHEL, AltimaLinux, Rocky
Posts: 1,151

Rep: Reputation: 46
Subliminal_2: There is a “List of partition utilities” in the Wikipedia Partition (computing) section that you could pick from: http://en.wikipedia.org/wiki/Partition_%28computing%29 (Anyone who has a different favorite should add it to the list.)

Since I already own a copy of it (actually, maybe 5-6 copies), I would try PartitionMagic8 next and see what happens. If you have access to the PM installation CD, you can boot the system from it and run PM8 that way. It really doesn't matter how the partitions are (filesystem) formatted, since you will be overwriting them with mkfs, but you may want to use fdisk to change the partition types to something appropriate (raid autodetect?, type fd) when you are done.
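
The type change is just the t command inside fdisk - something like this (the partition number here is only an example):

Code:
# fdisk /dev/sdb
Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w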

QTparted run from a Knoppix disk would be a good idea, too.

jlightner: The fdisk man page comment came from an FC4 installation, but it can also be found online here: http://www.die.net/doc/linux/man/man8/fdisk.8.html (and probably other places, too).

Last edited by WhatsHisName; 10-24-2005 at 12:33 PM.
 
Old 10-24-2005, 06:50 PM   #8
WhatsHisName
Senior Member
 
Registered: Oct 2003
Location: /earth/usa/nj (UTC-5)
Distribution: RHEL, AltimaLinux, Rocky
Posts: 1,151

Rep: Reputation: 46
The more I think about it, can you really partition a hardware raid device?

You can make it all one big happy filesystem or you can make it into an LVM that can be broken up and better managed, but can you really partition a hardware raid? It strikes me that if each drive were to be divided into smaller partitions, like we do in Linux software raid, then that would need to be done before creating the hardware raid array, and probably by utilities associated with the raid controller, not the OS.

For sure, I cannot partition /dev/md0 created with mdadm, but I can put a filesystem or an LVM on it.
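
For example, a software raid build never touches a partition table on the md device itself (a sketch - the member devices and mount point are made up):

Code:
# create a two-disk mirror from existing partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# no partitioning of /dev/md0 - the filesystem goes straight onto it
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/raid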

For the “cheap” 3ware hardware raid controllers (and I use the word “cheap” very loosely), there seems to be no provision for anything smaller than the full drive. Of course, when you’re running a bunch of 36GB SCSI drives, dividing one up doesn’t make a lot of sense anyway. Instead, you would create a couple of different raids from the same controller to get some subdivision of the drives.

When you’re doing cheap raid with a few huge SATA or PATA drives, it might make some traditional sense to divide up each drive, but even then, you could let the LVM do the physical allocation of space, instead of dividing up each drive physically with partitions. It’s sure a lot easier to change your mind later on with an LVM than it is with physical partitioning.

So, back to my question: Can you really partition a hardware raid device?
 
Old 10-25-2005, 07:40 AM   #9
MensaWater
LQ Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, CoreOS, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 7,831
Blog Entries: 15

Rep: Reputation: 1669
Don't backpedal now, WhatsHisName - especially after I recanted my original statement!

Actually, in addition to the single partition I created on each of our Clariion-presented devices, there was another one that I did partition into 3 small devices - we used those as raw devices (no mkfs), as they were required for the Oracle RAC installation we were doing.

For large systems, though, I suspect one would just want to use the entire raid set (seen as a single device within Linux) as a filesystem or within LVM. Of course there can be (as in my case) multiple raid sets presented, each appearing to be a separate "device" within Linux.

On Unix it is fairly common to present the storage that way and then use a volume manager tool to "slice" it up. It's even common to add multiple RAID sets into a single volume group and then slice it back up.
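
In Linux LVM terms that would be something like this (a sketch with made-up names):

Code:
# a second presented raid set joins the existing volume group
pvcreate /dev/sdc
vgextend raidvg /dev/sdc

# then the combined space gets sliced back up into LVs
lvcreate -L 200G -n data2 raidvg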

HP-UX has LVM natively, like Linux - I gather AIX has its own form of LVM, and Solaris can have the Solstice DiskSuite add-on - and of course there's Veritas Volume Manager (VxVM), which runs on various platforms including HP-UX and Solaris. VxVM is actually designed for full software RAID, so it is a little more robust than LVM, which essentially just concatenates disks.

When I first ran into LVM on HP-UX years ago, it took me some time to see the benefit. I thought to myself, "Why combine everything together just to break it up again?" Now of course I realize that one can get both scalability and granularity out of volume managers that one can't get out of simple partitioning.
 
Old 10-25-2005, 01:59 PM   #10
WhatsHisName
Senior Member
 
Registered: Oct 2003
Location: /earth/usa/nj (UTC-5)
Distribution: RHEL, AltimaLinux, Rocky
Posts: 1,151

Rep: Reputation: 46
LVMs do give you a lot of flexibility, but people seem to be very resistant to using them on smaller systems.

I always had trouble guessing how big the partitions should be for the various directories. After a few months of use, bad guesses would come back to haunt me, and resizing the too-small partitions became problematic. With LVMs, bad guesses are easy to fix (see the sketch below).

Once you stop and "play" with LVMs for a while on some empty disks, they become exceptionally easy to understand and to manage. On a similar point, software raids set up with mdadm are so simple to do that it's actually difficult to believe you've understood the commands at first. mdadm raids and LVMs are both so simple to set up from the command line that I prefer creating them before doing the OS installation.
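
For instance, growing a too-small LV takes only a couple of commands (a sketch with made-up names; resize2fs grows the filesystem offline, and I believe RHEL 4 also ships ext2online for growing mounted filesystems):

Code:
# grow the logical volume by 10GB
lvextend -L +10G /dev/datavg/datalv

# then grow the ext3 filesystem to match
umount /data
resize2fs /dev/datavg/datalv
mount /dev/datavg/datalv /data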
 
Old 10-29-2005, 12:00 PM   #11
Subliminal_2
LQ Newbie
 
Registered: Oct 2005
Posts: 8

Original Poster
Rep: Reputation: 0
Hey guys me again...

I found that the OS recognizes /dev/sdb (mounted, of course) as a drive it can write files to, but applications fail or don't recognize it at all.

So I am now trying to make partitions with LVM to see if this will help, but on the first thing I try I get an error:


Code:
[root@machine]# pvcreate /dev/sdb 
Failed to wipe new metadata area
/dev/sdb: Format-specific setup of physical volume failed.
Failed to setup physical volume "/dev/sdb"
/dev/sdb: close failed: Bad file descriptor
/dev/sdb: close failed: Bad file descriptor
/dev/sdb: close failed: Bad file descriptor
/dev/sdb: close failed: Bad file descriptor
/dev/sdb: close failed: Bad file descriptor
/dev/sdb: close failed: Bad file descriptor

............... on and on and on and on and on... you get the point
Thanks a million
 
Old 10-29-2005, 03:46 PM   #12
WhatsHisName
Senior Member
 
Registered: Oct 2003
Location: /earth/usa/nj (UTC-5)
Distribution: RHEL, AltimaLinux, Rocky
Posts: 1,151

Rep: Reputation: 46
From “man pvcreate”:

“...For whole disk devices only the partition table must be erased, which will effectively destroy all data on that disk. This can be done by zeroing the first sector with:

dd if=/dev/zero of=PhysicalVolume bs=512 count=1...”

That may fix the problem.
 
Old 10-29-2005, 04:36 PM   #13
Subliminal_2
LQ Newbie
 
Registered: Oct 2005
Posts: 8

Original Poster
Rep: Reputation: 0
Hi,

Yes - sorry, I forgot to mention that I gave that a try:

Code:
[root@machine ~]# dd if=/dev/zero of=/dev/sdb bs=512 count=1
1+0 records in
1+0 records out
The same thing noted in my previous post occurs.

Thanks
 
Old 10-29-2005, 08:21 PM   #14
WhatsHisName
Senior Member
 
Registered: Oct 2003
Location: /earth/usa/nj (UTC-5)
Distribution: RHEL, AltimaLinux, Rocky
Posts: 1,151

Rep: Reputation: 46
Since you could browse the raid with an ext3 filesystem on it, I’m stumped as to why pvcreate isn’t working.

Lacking anything else to try, I would probably try writing a huge block of zeros to sdb and see if anything changed.

dd if=/dev/zero of=/dev/sdb bs=128M count=8

Followed by:

pvcreate -f /dev/sdb
--OR--
pvcreate -ff /dev/sdb
--OR--
pvcreate -f --metadatacopies 2 /dev/sdb
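
If one of those finally takes, a quick sanity check:

Code:
# should report the new physical volume and its size
pvdisplay /dev/sdb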
 
Old 10-30-2005, 10:58 AM   #15
Subliminal_2
LQ Newbie
 
Registered: Oct 2005
Posts: 8

Original Poster
Rep: Reputation: 0
Thanks again for the attempt but still no go...

Code:
[root@machine ~]# dd if=/dev/zero of=/dev/sdb bs=512 count=8
8+0 records in
8+0 records out
Code:
[root@machine ~]# pvcreate -f --metadatacopies 2 /dev/sdb
Failed to wipe new metadata area
/dev/sdb: Format-specific setup of physical volume failed.
Failed to setup physical volume "/dev/sdb"
/dev/sdb: close failed: Bad file descriptor
................

Is it possible that the LSI drivers for the Fibre Card aren't installed properly and this is why I get this error...? Just a thought.

Thanks.
 
  

