Linux - Enterprise: This forum is for all items relating to using Linux in the Enterprise.
I have a hard disk of 144 GB. How do I make use of the remaining space? The df -h output below shows that I only have a GB here and there. I want to take out about 50 GB to mount as /software. Hope you can help me.
[root@xxxxxxx local]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/cciss/c0d0p3 2.0G 514M 1.4G 27% /
/dev/cciss/c0d0p1 99M 21M 74M 22% /boot
none 2.0G 0 2.0G 0% /dev/shm
/dev/mapper/Volume00-home 2.0G 36M 1.9G 2% /home
/dev/mapper/Volume00-opt 2.0G 36M 1.9G 2% /opt
/dev/mapper/Volume00-usr 6.0G 823M 4.9G 15% /usr
/dev/mapper/Volume00-var 4.0G 69M 3.7G 2% /var
Why do you want to do that? The whole point of LVM is to add physical drives to what looks like an existing partition, not to partition a drive into smaller "bits". If you really want separate partitions for /home, etc., create them with a partitioning tool (fdisk, parted, gparted, etc.). The other thing you could do is delete the partitions you don't need, and use something like gparted (again) to move the other partitions around so you have one decent sized free block, then create a partition with that and mount it.
It may be better to show your starting point and desired end point, then someone can suggest the best way to get there.
Edit - I re-read your post, and I think you are talking more about my final suggestion.
Last edited by billymayday; 03-26-2008 at 12:52 AM.
I don't really understand terms like volume group.
Let me illustrate my situation as I understand it.
I have 2 x 144 GB physical hard disks. I mirrored them, so they are treated as 1 usable disk.
This disk is made into 1 Volume Group.
Out of this volume group, I have
ACTIVE '/dev/Volume00/home' [2.00 GB] inherit
ACTIVE '/dev/Volume00/opt' [2.00 GB] inherit
ACTIVE '/dev/Volume00/usr' [6.00 GB] inherit
ACTIVE '/dev/Volume00/var' [4.00 GB] inherit
The total allocated is only about 14 GB.
Therefore I want to create another 'partition' (is that the correct term?) to make use of some of the remaining space.
I think you need to give yourself some basic background first. Do some googling, but here's a start on LVM
In very short form, we divide disks into partitions, so they appear to be separate disks. We might do this to separate critical parts of the system such as programs and data (/ and /home partitions for example) or we might want separate environments, such as having a partition to boot Windows and another with Linux.
LVM, on the other hand, aggregates disks (strictly speaking it aggregates partitions, but I'm writing from my head), so if I needed a really big disk, I could have 2 in a logical volume that would look like one big disk to the system. Useful if I decide to add another disk to a system at some point. A bit like RAID, but not the same.
Does that give a basic view? I leave the rest to you for now.
It may help you to think of Logical Volume Management as it relates to disk-level partitioning. The closest thing in LVM to the traditional disk "partition" (BSD: "slice") is a Physical Volume (PV). In fact you must first create partitions on the disk to assign as PVs. A Volume Group (VG) is one or more Physical Volumes joined together. Keep in mind that setting 3 disks next to each other and calling them a VG does not make it so; you must use your distribution's disk manager or the vgcreate command to join PVs. Once you have established your Volume Group you can do what you earlier referred to as "partitioning" by creating Logical Volumes (LVs).
In your situation you already have your PVs joined as VG 'Volume00', and as you demonstrated you have plenty of unused capacity in that VG. I strongly encourage you to type 'man lvcreate' and 'man lvm' to check the documentation for further details, but if you have blind faith in total strangers, here is the command for what you want...
lvcreate -L 50g -n software Volume00
To have your new LV mount to a persistent directory on boot-up: create a filesystem on it first (the lvcreate above only allocates raw space), create the directory '/software' ('mkdir /software'), and add a line to /etc/fstab using the path '/dev/mapper/Volume00-software'. Again, I strongly recommend typing 'man fstab' for details.
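Pulling those steps together, here is a hedged sketch of the whole sequence, assuming Volume00 has at least 50 GB free (check with 'vgdisplay Volume00') and using ext3 purely as an example filesystem; run as root on the actual system:

```shell
# Sketch only -- names follow this thread (VG Volume00, LV "software").
lvcreate -L 50G -n software Volume00   # carve a 50 GB LV out of the VG
mkfs -t ext3 /dev/Volume00/software    # lvcreate only allocates raw space
mkdir /software                        # the mount point
mount /dev/Volume00/software /software
# For a persistent mount on boot, add this line to /etc/fstab
# (see 'man fstab' for what each of the six fields means):
#   /dev/mapper/Volume00-software  /software  ext3  defaults  1  2
```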
LVM is just an additional abstraction layer that makes administration and design changes easier once a system is in place. If you wish to add usable storage to an LVM system with no free space there are four steps.
Add physical drive and create one or more disk-level partitions.
Assign PVs to disk-level partitions.
Join PVs to existing VG or create new VG(s).
Create more LVs on available space.
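Sketched as commands, the four steps look roughly like this (purely illustrative: the new disk /dev/sdb, partition /dev/sdb1, and LV name 'extra' are invented for the example):

```shell
fdisk /dev/sdb                     # 1. partition the newly added drive -> /dev/sdb1
pvcreate /dev/sdb1                 # 2. label the partition as a Physical Volume
vgextend Volume00 /dev/sdb1        # 3. join the PV to the existing VG
                                   #    (or 'vgcreate newvg /dev/sdb1' for a new one)
lvcreate -L 20G -n extra Volume00  # 4. create an LV on the newly available space
```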
The "whole point" of LVM is that Logical Volumes are abstracted from the physical disks and can be arranged somewhat independently from the underlying physical infrastructure.
Thanks very much for the replies, guys. But what is the difference if I use LVM or fdisk? Are they basically doing the same thing? Or is fdisk the 1st layer and LVM the 2nd layer of administration? Can I totally ignore the fdisk command?
cghcgh, as with several of your earlier posts you are ahead of the game by answering most of your own questions.
fdisk operates directly on the disk, setting boundaries as to where the platters can be written. LVM operates "above" that level but is still tied 1-to-1 via Physical Volumes to the disk-level partitions. In practical terms that means there are no changes you can make via LVM that will alter the physical boundaries set by fdisk; however, any change made via fdisk to a partition in use by LVM will disrupt your LVM setup and most likely lead to an unbootable system and/or data loss.
There is nothing wrong with foregoing LVM and just using old school hard partitioning schemes but the system you have described is already configured using LVM and you DO NOT want to go making changes to existing disk-level partitions with fdisk.
You have plenty of available space in your Volume Group and if you need more usable storage space you can either create new Logical Volumes using a variant of the command I supplied earlier or resize an existing LV using the 'lvresize' command. Expanding Logical Volumes is trivially easy, considerably easier than shrinking them, so only add what you need when you need it.
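For the lvresize route, a minimal hedged sketch (growing the /usr LV from this thread by 2 GB; resize2fs assumes the LV holds an ext2/ext3 filesystem):

```shell
lvresize -L +2G /dev/Volume00/usr  # grow the Logical Volume: 6 GB -> 8 GB
resize2fs /dev/Volume00/usr        # then grow the filesystem to fill the LV
```

Note the order: when growing, resize the LV first and the filesystem second; shrinking must be done in the reverse order (and is far riskier).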
Just to make sure that I dispel some very misleading information provided earlier in this thread, I will try to summarize LVM one more time...
LVM is built upon the disk-level partitions created by utilities like fdisk.
The foundation of Logical Volume Management is the "Physical Volume". Physical Volumes are created by labeling disk-level partitions with the 'pvcreate' command. Just like "traditional" partitions, a physical disk can have multiple Physical Volumes, but a Physical Volume cannot exceed the maximum size of the disk it occupies. Unlike a traditional partition, a Physical Volume is not formatted with a file system; it is just there to support a Volume Group.
Volume Groups are composed of one or more Physical Volumes. Even if you have only one disk and create only one disk-level partition, resulting in a single Physical Volume, you must still create a Volume Group composed of that single volume. Physical Volumes can be added to and removed from Volume Groups, but Volume Groups cannot be resized arbitrarily; they are the sum of their underlying Physical Volumes. It is with Volume Groups that you get your first real layer of abstraction from the disk hardware: you can have multiple Volume Groups on a single physical disk, or multiple physical disks in a single Volume Group.
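As a concrete (hypothetical) illustration of that abstraction, here are two partitions on different physical disks pooled into one VG; the names /dev/sda2, /dev/sdb1, and 'bigvg' are invented for the example:

```shell
pvcreate /dev/sda2 /dev/sdb1        # label both partitions as PVs
vgcreate bigvg /dev/sda2 /dev/sdb1  # one VG spanning two physical disks
vgdisplay bigvg                     # reported VG size = sum of the two PVs
```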
Logical Volumes are the functional equivalent of traditional partitions; they are where the file system resides. When you asked earlier about creating new partitions, this is what you meant. Logical Volumes reside within Volume Groups (nothing says a Logical Volume couldn't consume an entire Volume Group), and your overall file system tree can be assembled from Logical Volumes in multiple Volume Groups, but a single Logical Volume cannot span multiple Volume Groups.
As for the earlier comments about RAID, RAID operates on disk-level partitions and presents the resulting usable space as a single "device". That "RAID device" resides below LVM and can be labeled as a Physical Volume just like a disk-level partition.