
TorinGnom 07-30-2008 04:27 AM

fedora 8 can't use more than 1.2 TB
Hi everybody! Can you help me?
We got a new server and made a RAID-5 out of 6 HDDs of 700 GB each, totalling 3.4 TB. When I tried to install Fedora 8 with the default layout (LVM with a single 3.2 TB / partition), it wouldn't boot at all. An error appeared:
init[1]: segfault at ... rip ... rsp ...
error 4

The second time I tried to install Fedora I made two partitions, / and /home, with 20 and 100 GB; the rest of the disk space was left unused.
Installation and booting went smoothly, and Fedora is now up and running. Then I tried to use system-config-lvm (a GUI for managing LVM via pvcreate, lvcreate, etc.) to create another partition, /data, with 3.2 TB. It gave an error:
lvcreate: device-mapper: reload ioctl failed: Invalid argument
Then I tried to make a smaller partition, /data00, with 1 TB - that worked OK. But when I tried to make another 1 TB partition, /data01, it gave the same error! Even when I tried to make a partition of only 500 GB it gave the same error :(

So currently only 1.2 TB is allocated and about 2.2 TB is left unused.
I'm desperate to fix this problem!

What could be the reason for that error? Disk problems? Bad drivers for the RAID? An old kernel or other Fedora components?

I checked the Adaptec website; it says there that drivers for the Adaptec RAID controller are not provided for Fedora 7 and higher, since they are embedded in the distribution.

Pearlseattle 07-30-2008 05:33 AM

From another thread:

Now look in your kernel message log for a slightly more detailed error message.

...provide the output of 'pvs -o +dev_size'

TorinGnom 07-30-2008 05:55 AM

Sorry for the stupid questions, I'm quite a newbie.

Where can I get the kernel message log?

[root@jupiter antonk]# pvs -o +dev_size
bash: pvs: command not found
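A side note for anyone who hits the same "command not found" as root: the LVM and disk tools live in /sbin and /usr/sbin, and a plain `su` (without the dash) keeps the previous user's PATH, which often lacks those directories - the `[root@jupiter antonk]#` prompt above suggests exactly that. A minimal sketch, assuming that is the cause:

```shell
# pvs/lvcreate/fdisk live in /sbin and /usr/sbin; a plain `su`
# keeps the previous user's PATH, which may not include them.
export PATH="/sbin:/usr/sbin:$PATH"
echo "$PATH" | cut -d: -f1,2   # -> /sbin:/usr/sbin
```

Alternatively, log in with `su -` or call the tools by full path, e.g. /usr/sbin/pvs.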

TorinGnom 07-30-2008 06:12 AM

I think I got the kernel message log:

Jul 30 12:45:06 jupiter kernel: device-mapper: table: device 8:2 too small for target
Jul 30 12:45:06 jupiter kernel: device-mapper: table: 253:5: linear: dm-linear: Device lookup failed
Jul 30 12:45:06 jupiter kernel: device-mapper: ioctl: error adding target to table

and here is output from pvs:

[root@jupiter ~]# pvs -o +dev_size
  PV         VG     Fmt  Attr PSize PFree DevSize
  /dev/sda2  RAID00 lvm2 a-   3.41T 2.30T 1.41T
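Those two columns already contain the contradiction: PSize is what the LVM metadata claims the PV holds, while DevSize is what the kernel actually sees. A rough sanity check with the values above, converted to GiB, shows why one 1 TB volume succeeded and the next allocation failed:

```shell
# Approximate GiB values taken from the pvs output above:
psize=3492     # PSize   3.41T claimed by the LVM metadata
pfree=2355     # PFree   2.30T
devsize=1444   # DevSize 1.41T actually visible to the kernel
allocated=$(( psize - pfree ))      # space already handed out to LVs
headroom=$(( devsize - allocated )) # what the real device still has
echo "$allocated $headroom"         # -> 1137 307
```

About 1.1 TiB was already handed out (the "1.2 TB allocated" mentioned earlier), leaving only ~300 GiB of real device - so even the 500 GB lvcreate ran past the end and failed.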

Pearlseattle 07-30-2008 08:35 AM

Looks like you have exactly the same problem as this guy - what do you think? :study:

Jul 30 12:45:06 jupiter kernel: device-mapper: table: device 8:2 too small for target
Yes, the kernel log is shown (at least on the Gentoo distribution) with "less /var/log/messages" and/or with the command "dmesg".

TorinGnom 07-30-2008 11:49 PM

Hi Pearlseattle! Thanx for the hint!!

It looks like I have the same problem.

My pvs gives the following output:

  PV         VG     Fmt  Attr PSize PFree DevSize
  /dev/sda2  RAID00 lvm2 a-   3.41T 2.30T 1.41T

Does that mean the Device Size is only 1.4 T? But that's not true!!
I have a RAID of 6 HDDs, 3.4 TB total.
And there is no solution to the problem on the page you gave :(
And the partition table was not edited, of course. I wish I knew how to do that...

Finally, is this a problem with the device-mapper?
The version is
Library version: 1.02.22 (2007-08-21)
Driver version: 4.13.0
I'm trying to find a newer one...

Pearlseattle 07-31-2008 02:10 AM

Cool - that's already a step forward.
Hmm, I don't think the problem is related to the version of the device-mapper. Usually it works fine, and there are only a few changes from version to version.
I think we need more information:

What about "fdisk -l", as mentioned here? Does it list a proper partition type "83" / Linux for all the drives involved? Perhaps during the creation of the partitions something went wrong and the correct partition type was set on only 3 out of 6 HDDs, leaving you with 2x700GB of capacity + 1 parity HDD? :rolleyes:

What about the output of "cat /proc/mdadm"? By the way, a guy here mentions that "linear" is supposed to appear in its output - but I wouldn't be so sure.

And you're really not the only one having this problem.

Pearlseattle 07-31-2008 02:17 AM

And sorry, which kernel version are you using?

lazlow 07-31-2008 03:16 AM

Maybe a silly question, but is your RAID card (the hardware) able to handle an array this big?

TorinGnom 07-31-2008 05:35 AM

Weeehaaa! I got it! :cool:

But first, replies to your questions:

The RAID can handle an array this size.

Linux #1 SMP Tue Oct 30 13:18:33 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux

bash: fdisk: command not found

cat: /proc/mdadm: No such file or directory

We installed another drive (on the IDE bus) with 180 GB and installed Linux (also Fedora 8) on it. After that we had four devices:

/dev/cda1 - /boot partition on small IDE HDD (actually ~10 MB)
/dev/cda2 - / partition on small IDE HDD (actually ~160 GB)
/dev/cdb1 - /boot partition on RAID (actually ~10 MB)
/dev/cdb2 - / partition on RAID (actually ~3.4 TB)

and with this configuration, too, only 1.4 T was accessible at /dev/cdb2.
Only after we removed the boot partition /dev/cdb1 did we manage to allocate all 3.4 TB on this device.

After that I created another volume group based only on /dev/cdb, and created a 3 TB logical volume without a problem!
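The steps that worked can be sketched roughly as follows. The device, VG, and LV names are illustrative (the thread's /dev/cdb naming is unusual; adjust to your system), and the DRYRUN guard only prints the commands so nothing is wiped by accident:

```shell
#!/bin/sh
# Sketch of the recovery steps; DRYRUN=1 (the default) only prints.
DRYRUN=${DRYRUN:-1}
run() { if [ "$DRYRUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run pvcreate /dev/cdb             # whole device: no partition table, no MBR
run vgcreate DATA /dev/cdb        # hypothetical VG name
run lvcreate -L 3T -n data DATA   # the 3 TB logical volume
run mkfs.ext3 /dev/DATA/data      # ext3 was the Fedora 8 default
```

Run it once as-is to review the commands, then with DRYRUN=0 to execute.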

A similar problem is known in MS Windows: the disk which contains the MBR (master boot record, located on track 0, comparable to the Linux /boot) cannot be used beyond ~2 TB. If the disk doesn't contain an MBR, it can be used to its full extent.
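The ~2 TB figure isn't arbitrary: an MBR partition entry stores its sector offsets and counts as 32-bit numbers, so with 512-byte sectors it tops out at 2^32 * 512 bytes = 2 TiB. Strikingly, the 1.41T DevSize reported earlier is exactly 3.41T minus 2 TiB, consistent with the partition's sector count having wrapped around. A quick check of the arithmetic:

```shell
# Largest size an MBR partition entry can describe:
# 2^32 sectors * 512 bytes/sector = 2 TiB.
max_bytes=$(( 4294967296 * 512 ))
echo "$max_bytes"                     # -> 2199023255552 bytes
echo $(( max_bytes / 1099511627776 )) # -> 2 (TiB; 1 TiB = 2^40 bytes)

# The 3.41T partition's sector count wraps modulo 2 TiB, which lands
# exactly on the 1.41T DevSize the kernel reported (values in GiB):
echo "$(( 3492 - 2048 )) GiB"         # -> 1444 GiB (= 1.41T)
```

Putting the PV on a bare disk with no partition table, as done above, sidesteps the limit; so does a GPT disk label (e.g. created with parted).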

Thank you guys for your interest and for your prompt replies/questions!

Pearlseattle 07-31-2008 08:51 AM

Huh? But even my grandmother has fdisk - weird.
OK, nice that you found out that thing with the boot partition - I would never have thought of it.

us_ed 08-17-2008 01:21 AM

Hi all!
After updating the kernel, at boot time I get:

init[1]: segfault at 6 ip 0000..6 sp bfd782b8 error 4 in [110000+1b000]
