Linux - General
This Linux forum is for general Linux questions and discussion. If it is Linux-related and doesn't seem to fit in any other forum, this is the place.
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
I've just added another four drives to my RAID, migrated the array, resized the partition, and resized the ext3 filesystem. The new RAID size should be 3 TB+, but it's only 2048 GB once mounted. Does anyone know why I can't seem to use more than 2 TB? Here's the dumpe2fs output:
dumpe2fs 1.39 (29-May-2006)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: d81e832d-9ab8-4e0c-b7ea-75e20c9c09d2
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal resize_inode dir_index filetype needs_recovery sparse_super large_file
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 268435456
Block count: 536870202
Reserved block count: 16106106
Free blocks: 191907569
Free inodes: 268198089
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 896
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 16384
Inode blocks per group: 512
Filesystem created: Tue Jul 3 13:16:01 2007
Last mount time: Fri Jul 6 10:42:52 2007
Last write time: Fri Jul 6 10:42:52 2007
Mount count: 3
Maximum mount count: 37
Last checked: Fri Jul 6 08:32:23 2007
Check interval: 15552000 (6 months)
Next check after: Wed Jan 2 09:32:23 2008
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Journal inode: 8
Default directory hash: tea
Directory Hash Seed: 722122c6-1a70-4ff0-b9dd-2a5db8fb1cdf
Journal backup: inode blocks
Journal size: 128M
Disk /dev/sdb: 3499.9 GB, 3499925438464 bytes
255 heads, 63 sectors/track, 425508 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            1      267349  2147480811   83  Linux
I just realized fdisk isn't creating a big enough partition: the maximum acceptable cylinder is 267349, while the disk has 425508 cylinders. Is it possible to create one partition larger than this?
I think so. I'm using a Fedora Core 7 compiled kernel, 2.6.21-1.3228.fc7. I think the problem is that fdisk created a DOS-type partition table. Reading the fdisk manual, it says:
In a DOS type partition table the starting offset and the size of each partition is stored in two ways: as an absolute number of sectors (given in 32 bits) and as a Cylinders/Heads/Sectors triple (given in 10+8+6 bits). The former is OK - with 512-byte sectors this will work up to 2 TB. The latter has two different problems. First of all, these C/H/S fields can be filled only when the number of heads and the number of sectors per track are known. Secondly, even if we know what these numbers should be, the 24 bits that are available do not suffice. DOS uses C/H/S only, Windows uses both, Linux never uses C/H/S.
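The 2 TB ceiling in that quote is easy to verify: a 32-bit sector count at 512 bytes per sector comes out to exactly 2 TiB, which matches the ~2 TiB (2147480811-block) partition fdisk produced above:

```shell
# Maximum partition size addressable by the 32-bit sector fields in a
# DOS/MBR partition entry, assuming 512-byte sectors:
max_sectors=$((2**32))
echo $((max_sectors * 512))   # prints: 2199023255552 (exactly 2 TiB)
```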
I think I need to change the disklabel type. Does anyone know if this is the problem, and whether changing it can be done safely without data loss?
OK, from what I've read, I need to change the msdos partition table to something like a GPT partition table. The catch is that I need to retain the data already on the RAID, and from what I've heard, rewriting the partition table would destroy the ability to read that data. I now realize I should have used the parted command instead of fdisk, which by default creates an msdos-type partition table.
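For anyone hitting this later: a GPT label removes the 32-bit limit because GPT stores 64-bit LBAs, but writing a new label does destroy the existing partition table, so the data has to be backed up first and restored afterwards. A rough sketch with parted (the device name is just the example from the fdisk output above, and the mkpart syntax assumes a reasonably recent parted):

```shell
# DESTRUCTIVE: mklabel replaces the partition table -- back up the data first.
# /dev/sdb is the example device from the fdisk output above.
parted /dev/sdb mklabel gpt                     # write a fresh GPT label
parted /dev/sdb mkpart primary ext3 0% 100%     # one partition spanning the disk
mkfs.ext3 /dev/sdb1                             # new filesystem; old data is gone
```

After that, the partition (and the filesystem grown onto it) can use the full 3.5 TB of the array.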