
danimalz 12-27-2006 01:04 AM

New Volumes - mounting

I have a RAID-1 file server. Most of the shared (samba) storage is mounted as /var/share/..whatever...

I am running out of space.

What are the implications of putting in a new drive, and adding it in as: /var/share/newspace

IE - I'd be adding a new physical device under an existing mountpoint.

This gets more complicated.... My shared resources are part of a RAID-1 array. So: /var/share/... is already raid'd...

Bottom line: What's the best way to add additional storage capacity...???


Mutation_1101 12-27-2006 01:57 AM

I am not sure about RAID, but I once added a hard drive as an extension to my home directory. I did that by formatting the new drive with the same filesystem as the current /home, then adding an entry in /etc/fstab, then mounting the new drive.
Good luck.
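The steps above can be sketched roughly like this; the device name /dev/sdb1, the mount point /home/extra, and the ext3 filesystem are assumptions to substitute with your own:

```shell
# Assuming the new drive shows up as /dev/sdb (check dmesg or fdisk -l)
# and your /home filesystem is ext3 -- adjust to match your setup.

# 1. Partition the new drive (create one partition, /dev/sdb1) and format it
fdisk /dev/sdb
mkfs.ext3 /dev/sdb1

# 2. Create the mount point and record it in /etc/fstab
mkdir -p /home/extra
echo '/dev/sdb1  /home/extra  ext3  defaults  0 2' >> /etc/fstab

# 3. Mount it now (it will also mount automatically at boot)
mount /home/extra
```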

archtoad6 12-27-2006 06:51 AM

If /var/share/<whatever> is working for the shared (samba) storage, what makes you think /var/share/<newspace> won't work for the new drive?

It's the same RAID array, & the shared (samba) storage is just as external isn't it?

What am I missing here?

rhoekstra 12-27-2006 10:27 AM

Be aware that /var/share/newspace will be available through samba, but won't be protected by the RAID1 configuration (it is not part of the RAID array).

Mounting filesystems on top of other filesystems is done all the time, so other than the questions the RAID config raises, there's nothing different from any other generic situation...

danimalz 12-29-2006 02:37 AM

Thanks for the replies, folks!

Let me break it down into two questions...

1. Let's say I have a single drive, with all the normal directories - /home /var /usr /root /boot ...etc. ...

... over time, this drive can become full. I know how to add another disk, move /home onto it, then delete /home on the original disk and thus free up space on it & have /home nicely segregated & increased; I've done this.

I am interested in opinions on simply keeping the original disk as is, and adding a new disk as, say, /home/username/

I know it can be done because I've done it. This does strike me as kind of difficult to manage, but are there any other risks?

2. With RAID1 - the array that I built for the entire system, with several partitions - how can one add new storage & keep it all RAID'd? Is this possible?

rhoekstra 12-29-2006 03:10 AM

To answer the questions (in my opinion):

1. You COULD create disks / partitions as /home/<username>, but in my opinion that would be a waste of disks on single users.
Consider LVM, with which you can add disks to your volume group, making it possible to extend any volume that lives in there. Say you have /home on a volume in your volume group, and /home is 90% full: you can extend the volume using unallocated space from your volume group. When your volume group runs out of space, you can add disks to your system, add them to your volume group, and grow further.
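As a rough sketch of that LVM workflow; the disk /dev/sdc, the volume group name vg0, and the logical volume name home are assumptions, while the commands themselves are the standard LVM tools:

```shell
# Assuming a new disk /dev/sdc, a volume group "vg0", and a logical
# volume "home" (mounted at /home, ext3) -- adjust names to your setup.

# 1. Make the new disk an LVM physical volume and add it to the group
pvcreate /dev/sdc
vgextend vg0 /dev/sdc

# 2. Grow the logical volume by, say, 50 GB of the new free space
lvextend -L +50G /dev/vg0/home

# 3. Grow the filesystem to fill the enlarged volume
resize2fs /dev/vg0/home
```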

2. Any two disks you add can be configured as a RAID set, and as a mirror they can be added to your system (either as a plain disk, or as a physical volume joining a volume group).
Besides, adding more physical disks to the same RAID1 array won't increase storage; it will increase redundancy (and thus availability of - the same - data).
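For example, a new pair of disks could be mirrored with mdadm and then either mounted directly or handed to LVM as a physical volume; the device names /dev/sdd, /dev/sde, /dev/md1 and the group name vg0 are assumptions:

```shell
# Assuming two new identical disks, /dev/sdd and /dev/sde.

# 1. Create a new RAID1 (mirror) array from the pair
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd /dev/sde

# 2a. Either format and mount /dev/md1 as an ordinary disk, or...
# 2b. ...add it to an existing LVM volume group (here "vg0")
pvcreate /dev/md1
vgextend vg0 /dev/md1
```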

archtoad6 12-29-2006 05:41 AM

How big is the current RAID-1 array?

How much additional space can you afford to add (as RAID-1)?

LVM is a fine idea for managing stuff like this, especially if you are regularly going to make changes. On the other hand, if you have no experience w/ it yet, it may be way more trouble than it's worth.

I myself am in the middle of setting up a new 500 GB server. I have 4 250 GB drives that I am putting into a pair of (hardware) RAID-1 arrays. I have considered joining them w/ LVM, & have decided against it, simply because I don't understand LVM well enough to know what I would have to do to rebuild if^H^H^H when 1 of the drives fails. I hope that by the time that happens the cost of storage will have "Moored" a couple of more decrements & it will be the proverbial new ball game.

So, w/ 4 identical drives, why did I choose 2 x RAID 1, rather than RAID 01, RAID 10, or RAID 5? -- Simplicity! I believe 2 x RAID 1 has the best chance of easy data recovery after the eventual 1st hardware failure, be it drive or RAID card. Furthermore, before I trust any important data to it I am going to simulate failure & recovery to ensure that I understand what I have.

trickykid 12-29-2006 06:59 AM

Simply put, you have /var/share and you want to mount a new disk at /var/share/newdisk

Simply create the directory:

mkdir /var/share/newdisk

Add your new drive, add the proper mount info in /etc/fstab, and then mount it. If it's not mounted, /var/share/newdisk is just an empty directory; with it mounted, you get your new disk's extra space without affecting your existing drive configuration, etc.
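Concretely, the whole procedure might look like this; the device name /dev/sdb1 and the ext3 filesystem are assumptions to replace with your actual drive and filesystem:

```shell
# Create the mount point under the existing samba share
mkdir /var/share/newdisk

# Format the new drive's partition (assuming it appears as /dev/sdb1)
mkfs.ext3 /dev/sdb1

# Record it in /etc/fstab so it mounts at every boot
echo '/dev/sdb1  /var/share/newdisk  ext3  defaults  0 2' >> /etc/fstab

# Mount it now
mount /var/share/newdisk
```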

rhoekstra 12-29-2006 07:17 AM

With RAID1 disks, you won't normally have to phase out LVM physical volumes.. if you do, though, LVM provides a powerful tool to move data blocks off one physical volume so it can be removed from your volume group..

It's like this:

  disk level:  physical disk 1 (like hda, or md0)             physical disk 2
               |---------------------------------------|      |---------------...
  PV level:     boot     physical volume 1 (PV1)               PV2
               |------|  |-----------------------------|      |---------------...
               (each PV is divided into 32MB blocks, called Physical Extents, or PEs)
  VG level:    volume group 0 (VG0), consisting of 1 or more PVs
               (additional VGs can exist, using other PVs)
  LV level:    logical volumes (LVs), consisting of PEs spread over the VG,
               and thus over the PVs
               |--------|  |--------|  |---------------|      |--------|  etc etc

Because LVs can be grown or shrunk, they will consist of PEs spread over the PVs, scattering the data around. To the system that is no problem, but it can become a problem if you want to remove a disk (and thus a PV) for upgrading or whatever reason.

To remove one PV, LVM lets you see how many PEs are used on each PV, so you can make sure the other PVs have enough free PEs to hold the used PEs of the PV that is to be removed. You can add a PV to the VG to gain the extra space for this purpose..

With LVM you can then move the used PEs from that PV onto the other PVs, leaving the PV to be removed (in your case, a RAID1 set) empty and ready to be taken out.
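That evacuation step can be sketched with the standard LVM commands; the PV name /dev/md0 and the volume group name vg0 are assumptions:

```shell
# See how many extents are used/free on each physical volume
pvdisplay

# Move all used extents off the PV you want to retire
# (the other PVs in the group must have enough free extents)
pvmove /dev/md0

# Remove the now-empty PV from the volume group
vgreduce vg0 /dev/md0

# Wipe the LVM label so the device can be reused elsewhere
pvremove /dev/md0
```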

I hope this helps you further.
