Linux - General
This Linux forum is for general Linux questions and discussion. If it is Linux related and doesn't seem to fit in any other forum, then this is the place.
It has taken me more than 5 days of trying to make an ext3 partition, and I still haven't succeeded.
I have a CentOS 5 x86_64 system with 4 HDDs (2x 146GB RAID 1 15k rpm and 2x 146GB RAID 1 15k rpm).
I have recently added another 2x 300GB RAID 1 15k rpm HDDs, and I want to create an ext3 partition on the 300GB array and mount it at /mnt/disk3.
How do I do that?
The CentOS 5 system doesn't have a GUI; I only have the CLI, over PuTTY.
How do I create an ext3 partition step by step?
I've found lots of tutorials but didn't succeed with any of them.
$ ls /dev
$ su
Password:
# fdisk /path/to/the/new/disk
: n
<choose primary and let the partition take up the entire disk>
: p
<check the partition table>
: w   (writes the table and exits fdisk)
# mke2fs /path/to/the/new/partition       (for ext2)
# mke2fs -j /path/to/the/new/partition    (for ext3)
# vi /etc/fstab
Note: the angle-bracketed lines are descriptions of what to do, not text to type. Run mke2fs on the partition (e.g. /dev/sdb1), not on the whole disk.
What do you mean by "can't make an ext3 partition"? Can't make a partition, or can't make a filesystem? These are two different steps that are not directly related (aside from the fact that the partition needs to exist before the filesystem can be made).
What have you tried so far? Where are you getting stuck?
Disk /dev/sda: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 17769 142625070 8e Linux LVM
Disk /dev/sdb: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Disk /dev/sdc: 912.6 GB, 912680550400 bytes
255 heads, 63 sectors/track, 110960 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 110960 891286168+ 8e Linux LVM
[root@xxx ~]#
[root@xxx ~]# fdisk /dev/sdb
The number of cylinders for this disk is set to 17769.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4):
Value out of range.
Partition number (1-4): 1
First cylinder (1-17769, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-17769, default 17769):
Using default value 17769
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@xxx ~]# mke2fs /dev/sdb
sdb sdb1
[root@xxx ~]# mke2fs /dev/sdb ext2
mke2fs 1.39 (29-May-2006)
mke2fs: invalid blocks count - ext2
[root@xxx ~]# mke2fs /dev/sdb1 ext2
mke2fs 1.39 (29-May-2006)
mke2fs: invalid blocks count - ext2
[root@xxx ~]# mke2fs /dev/sdb1 <ext2>
-bash: syntax error near unexpected token `newline'
[root@xxx ~]# mke2fs /dev/sdb <ext2>
-bash: syntax error near unexpected token `newline'
[root@xxx ~]#
[root@xxx ~]# fdisk -l
Disk /dev/sda: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 17769 142625070 8e Linux LVM
Disk /dev/sdb: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 17769 142729461 83 Linux
Disk /dev/sdc: 912.6 GB, 912680550400 bytes
255 heads, 63 sectors/track, 110960 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 110960 891286168+ 8e Linux LVM
[root@xxx ~]#
What's the next command? Because mke2fs /path/to/fdisked/device <ext2> doesn't work, as you can see.
Please advise.
[root@xxx ~]# fdisk -l
Disk /dev/sda: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 17769 142625070 8e Linux LVM
Disk /dev/sdb: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 17769 142729461 83 Linux
Disk /dev/sdc: 912.6 GB, 912680550400 bytes
255 heads, 63 sectors/track, 110960 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 110960 891286168+ 8e Linux LVM
[root@xxx ~]#
[root@xxx ~]# mke2fs -j /dev/sdb
mke2fs 1.39 (29-May-2006)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
/dev/sdb is apparently in use by the system; will not make a filesystem here!
[root@xxx ~]#
[root@xxx ~]# mke2fs -j /dev/sdb1
mke2fs 1.39 (29-May-2006)
/dev/sdb1 is apparently in use by the system; will not make a filesystem here!
[root@xxx ~]#
You've got most of it correct - however, mke2fs acts on partitions (i.e. /dev/sdb1), not whole devices (/dev/sdb).
The correct form would be "mkfs -t ext3 /dev/sdb1" (or equivalently "mke2fs -j /dev/sdb1") - see "man mke2fs" for the details.
Last edited by syg00; 04-09-2010 at 11:55 PM.
Reason: misread ext2 requirement.
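To keep the two invocations straight, here is a small sketch (a hypothetical helper, not something from the thread) that builds the right command line for each filesystem type:

```shell
#!/bin/sh
# Hypothetical helper: print the mke2fs invocation for a given filesystem
# type and partition. On the e2fsprogs 1.39 shown in this thread, the -j
# flag is what turns an ext2 filesystem into ext3 (it adds the journal).
mkfs_cmd() {
    fstype=$1   # ext2 or ext3
    part=$2     # partition device, e.g. /dev/sdb1 (never the whole disk)
    case "$fstype" in
        ext2) echo "mke2fs $part" ;;
        ext3) echo "mke2fs -j $part" ;;
        *)    echo "unsupported filesystem type: $fstype" >&2; return 1 ;;
    esac
}
```

For example, `mkfs_cmd ext3 /dev/sdb1` prints `mke2fs -j /dev/sdb1`.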
I deleted the partition: using fdisk /dev/sdb I selected it and deleted it with the d command. When I ran d again, it told me no partition is defined yet, so it obviously was deleted. I started from scratch and it still doesn't work.
[root@xxx ~]# fdisk -l
Disk /dev/sda: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 17769 142625070 8e Linux LVM
Disk /dev/sdb: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
Disk /dev/sdc: 912.6 GB, 912680550400 bytes
255 heads, 63 sectors/track, 110960 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 110960 891286168+ 8e Linux LVM
[root@xxx ~]#
[root@xxx ~]# fdisk /dev/sdb
The number of cylinders for this disk is set to 17769.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-17769, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-17769, default 17769):
Using default value 17769
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@xxx ~]#
[root@xxx ~]# mkfs -t ext3 /dev/sdb1
mke2fs 1.39 (29-May-2006)
/dev/sdb1 is apparently in use by the system; will not make a filesystem here!
[root@xxx ~]#
[root@xxx ~]# fdisk -l
Disk /dev/sda: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 17769 142625070 8e Linux LVM
Disk /dev/sdb: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 17769 142729461 83 Linux
Disk /dev/sdc: 912.6 GB, 912680550400 bytes
255 heads, 63 sectors/track, 110960 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 110960 891286168+ 8e Linux LVM
[root@xxx ~]#
You said you had two existing RAID arrays, right? They would be sda and sdb. The fdisk commands you ran just destroyed the partition table on sdb, but the kernel will not load the new table because sdb is active.
What is the output of
Code:
mount
If anything there has /dev/sdb in it, you have to back up all the data on those mount points. Use an external USB disk, a network drive, or anything that does not require a reboot.
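As a sketch of that check (a hypothetical helper; the real test is simply reading the output of `mount` yourself):

```shell
#!/bin/sh
# Hypothetical helper: given the output of `mount` on stdin, report whether
# any mounted filesystem is backed by the named disk (e.g. sdb). If it
# prints "yes", back up those mount points before repartitioning.
disk_in_use() {
    disk=$1
    if grep -q "^/dev/${disk}"; then
        echo "yes"
    else
        echo "no"
    fi
}
```

Usage would be `mount | disk_in_use sdb`. Matching on the `/dev/<disk>` prefix catches the disk's partitions (sdb1, sdb2, ...) as well as the whole device.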
Mmmm. Doesn't really make sense. The /dev/sdb device started off with no partition according to post #5. However, there does seem to be some confusion over the disks installed:
Disk /dev/sda: 146.1 GB, 146163105792 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 17769 142625070 8e Linux LVM
Disk /dev/sdb: 146.1 GB, 146163105792 bytes
Device Boot Start End Blocks Id System
Disk /dev/sdc: 912.6 GB, 912680550400 bytes
Device Boot Start End Blocks Id System
/dev/sdc1 1 110960 891286168+ 8e Linux LVM
There are supposed to be six disks, in pairs, so this is hardware RAID. Two sets are already in use and partitioned as LVM volumes; the third isn't.
But sdb is being reported as 146G, whereas the OP tells us that he installed a pair of 300G disks and says that volume is already mounted?
I said I have a 300GB RAID 1 (mirrored) array, but I hadn't installed it yet.
At the moment I have one RAID 1 (2x 146GB), another RAID 1 (2x 146GB), and another RAID 1 (2x 300GB).
The 900GB space is something else; don't bother with it.
[root@xxx ~]# fdisk -l
Disk /dev/sda: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 17769 142625070 8e Linux LVM
Disk /dev/sdb: 146.1 GB, 146163105792 bytes
255 heads, 63 sectors/track, 17769 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 17769 142729461 83 Linux
Disk /dev/sdc: 299.4 GB, 299439751168 bytes
255 heads, 63 sectors/track, 36404 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 912.6 GB, 912680550400 bytes
255 heads, 63 sectors/track, 110960 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdd1 1 110960 891286168+ 8e Linux LVM
[root@xxx ~]#
[root@xxx ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
262G 152G 97G 62% /
/dev/sda1 99M 20M 75M 21% /boot
tmpfs 2.0G 0 2.0G 0% /dev/shm
[root@xxx ~]#
[root@xxx ~]# mount
/dev/mapper/VolGroup00-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
[root@xxx ~]#
[root@xxx ~]#
OK. I just want to make sure everything is safe. I'm still seeing a few confusing things; perhaps you can clarify what I see.
Your original request was:
Quote:
I have a CentOS 5 x86_64 system with 4 HDDs (2x 146GB RAID 1 15k rpm and 2x 146GB RAID 1 15k rpm).
I have recently added another 2x 300GB RAID 1 15k rpm HDDs, and I want to create an ext3 partition on the 300GB array and mount it at /mnt/disk3.
That would lead me to believe you have two existing 146G arrays that should not be touched, and that you want to partition and format the new 300G array as ext3.
Looking at selected lines of the output of fdisk -l, I see:
Code:
Disk /dev/sda: 146.1 GB, 146163105792 bytes
Disk /dev/sdb: 146.1 GB, 146163105792 bytes
Disk /dev/sdc: 299.4 GB, 299439751168 bytes
Disk /dev/sdc doesn't contain a valid partition table
Disk /dev/sdd: 912.6 GB, 912680550400 bytes
sda == 146G array #1
sdb == 146G array #2
sdc == the new 300G array (Also notice that it is uninitialized without a partition table)
sdd == 900G disk that is out of the equation.
The disturbing part of what I see is:
Code:
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
and
Code:
[root@xxx ~]# mkfs -t ext3 /dev/sdb1
mke2fs 1.39 (29-May-2006)
/dev/sdb1 is apparently in use by the system; will not make a filesystem here!
That means that sdb is currently in use by something, somewhere on the system. I'll get to that in a sec.
Before I go on, I want to say that on a powered-on, running machine, partition data lives in two places. The first place is in the first sector of the physical disk; the second is in kernel memory. Once the machine is booted, the data on the physical disk is unimportant and will never be looked at unless specifically commanded to (fdisk, partprobe, etc.). You can view the partition data that is in memory by looking at /proc/partitions, and you can view the data on the physical disk by using fdisk. The point being: you can wipe the partition table on the disk, but the OS will still have its own copy in memory and in use.
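For example, the kernel's in-memory view can be listed like this (a sketch; the field layout assumed here is the usual /proc/partitions format on kernels of this era: major, minor, #blocks, name):

```shell
#!/bin/sh
# Sketch: print the device/partition names the running kernel currently
# knows about, by parsing /proc/partitions (one header line, one blank
# line, then: major minor #blocks name). Compare this with `fdisk -l`,
# which reads the on-disk tables instead.
kernel_partitions() {
    # stdin = contents of /proc/partitions
    awk 'NR > 2 && NF == 4 { print $4 }'
}
```

Usage: `kernel_partitions < /proc/partitions`. If a name shows up here but not in fdisk's output (or vice versa), the two copies of the partition table have diverged, which is exactly what the "will be used at the next reboot" warning is about.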
From the output of df I see that you are using LVM and that the root partition seems to be an LV slice out of a roughly ~292G volume group.
I assume LogVol01 is swap, accounting for the rest of the space (30 gig is a lot, but accounting for GiB vs GB and the inherent inefficiency of partitioning, it should be less).
Assuming the 900G disk is to be ignored, the only way this can happen is if sdb1 is of type LVM and part of VolGroup00.
If that is the case, it would explain the warning coming from fdisk and the error message coming from mkfs. It would also explain why LogVol00 is larger than either of the first two physical disks.
Please run:
Code:
pvscan
See if it returns something like this (illustrative output only; ignore the actual sizes):
Code:
PV /dev/sda2   VG VolGroup00   lvm2 [146.00 GB / 0    free]
PV /dev/sdb1   VG VolGroup00   lvm2 [146.09 GB / 0    free]
Total: 2 [292.09 GB] / in use: 2 [292.09 GB] / in no VG: 0 [0   ]
There are two outcomes I can see. The first, which I fear is the case, is that pvscan returns something like the above. The other is that sdd1 is part of VolGroup00.
As the 300G disk (array) has come and gone, it looks like the system has been rebooted a few times. LVM is smart enough that it will find PVs on both the "Linux" and "Linux LVM" partition types. It looks like each time the system was rebooted, sdb was left with a single partition that used all available space.
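For reference, the partition type is the hex code fdisk prints in the Id column; here is a sketch (a hypothetical lookup covering only the types seen in this thread) of the mapping:

```shell
#!/bin/sh
# Hypothetical lookup: map an fdisk partition Id (the hex code in the Id
# column of `fdisk -l`) to its name, for the types that appear in this
# thread. 8e is the type that marks a partition as an LVM physical volume.
part_type() {
    case "$1" in
        83) echo "Linux" ;;
        8e) echo "Linux LVM" ;;
        82) echo "Linux swap" ;;
        *)  echo "unknown ($1)" ;;
    esac
}
```

So in the fdisk listings above, sda2, sdc1 (the LVM entries) carry Id 8e, while the freshly created sdb1 got the default Id 83.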