Linux - Server: This forum is for the discussion of Linux software used in a server-related context.
I have an opportunity to redesign how I present SAN LUNs to a Linux Oracle server. Currently our process is to present new LUNs as disk needs grow, roll them under Veritas Volume Manager control, carve them up into volume groups, and mount them or extend existing mounts. I have approval to remove the Veritas software, primarily because it carries a yearly licensing charge and we just don't need it.
I will be using multipathd for multipathing of the new SAN LUNs.
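For illustration, a minimal /etc/multipath.conf for that setup might look like the fragment below. This is an untested sketch: the WWID, alias, and blacklist entry are placeholders you would replace with values from your own environment.

```
# /etc/multipath.conf - illustrative fragment only
defaults {
    user_friendly_names yes
}
blacklist {
    devnode "^sda$"          # keep the local boot disk out of multipath
}
multipaths {
    multipath {
        wwid  360060e80xxxxxxxxxxxxxxxxxxxxxxxxx   # placeholder WWID
        alias ora_data01                           # placeholder alias
    }
}
```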
My first inclination is to use LVM, but I am questioning whether it is even needed.
I think it would be more efficient to create a LUN on the SAN for each mount, then, as disk needs grow, extend that LUN (none of our LUNs is anywhere close to 2 TB, which is our SAN's limit), and use standard fdisk to partition those disks and mount them where they are needed.
The only drawback is that I am somewhat "scared" of the method of extending disks under Linux when using just fdisk for the partitioning - that is, delete the partition, create a larger partition, and then resize2fs the filesystem. It seems dangerous to do with live data (Oracle DBs).
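Sketched out, that fdisk-based grow procedure would look roughly like the following. Device names here are placeholders, and this is a sketch rather than a tested runbook; the nerve-wracking part is step 2, where the partition must be recreated at the exact same starting sector.

```shell
# Hedged sketch: grow an fdisk-partitioned disk after the SAN LUN behind
# it has been extended. /dev/sdb and /dev/sdb1 are placeholder names.

# 1. Make the kernel re-read the (now larger) LUN size.
echo 1 > /sys/block/sdb/device/rescan

# 2. In fdisk: delete /dev/sdb1, recreate it with the SAME starting
#    sector and a larger ending sector, then write the table. If the
#    start sector moves, the filesystem is destroyed - hence the fear.
fdisk /dev/sdb

# 3. Pick up the new partition table, then grow the filesystem.
partprobe /dev/sdb
resize2fs /dev/sdb1   # ext3 can grow online on reasonably recent kernels
```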
We use LVM and an ext3 filesystem on an Oracle data warehouse.
For another Oracle pass-through database we're using ASM (on raw devices), so no LVM for the EMC LUNs - just a single partition on each emcpower device.
For an earlier Oracle pass-through database we're using OCFS (v1), so, same as above - no LVM, a single partition on each emcpower device.
LVM gives you a lot of flexibility in how you combine and divide storage, so if you're using filesystems I'd recommend it.
VxVM has the benefit that it works on multiple platforms and lets you do things like add LUNs from a new array to an existing disk group and evacuate disks from the old array to the new one while Oracle is still running. On our OCFS system we had to move storage from one Clariion CX700 to another, and it was a much more complicated process because we weren't using VxVM and sancopy had limitations of which we weren't previously aware.
ext3 reportedly doesn't perform well on extremely large filesystems, and it sounds like you're planning extremely large filesystems, so you might want to look at other fstypes such as OCFS2, which was created by Oracle for RAC environments but can be used on standalone servers as well.
Thank you for the advice.
For the Oracle servers that I have set up to date at this office, I have set up ext3 filesystems on LVM LVs and simply used LVM to combine the SAN LUNs into a single VG. They work well and I don't have any problems with them, except that it is somewhat "messy", as our Oracle DBAs request new storage space in small chunks - 5 GB here, 10 GB there. So, on the SAN it is getting inefficient because I have a lot of small LUNs presented to the hosts, and I can't trace a LUN from a Linux mount back to a SAN LUN.
This morning I found and tested a way to extend a PV under LVM, so maybe that is the best of both worlds for me.
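Assuming that is the pvresize route, the sequence goes roughly like this. All device, VG, and LV names below are placeholders, and the size is made up - treat it as a sketch of the flow, not a tested procedure.

```shell
# Sketch: grow an existing LVM PV in place after the SAN LUN is extended.
# mpath0, vg_ora, and lv_data are placeholder names.

echo 1 > /sys/block/sdX/device/rescan    # repeat for each path device
multipathd -k'resize map mpath0'         # let multipathd pick up the new size
pvresize /dev/mapper/mpath0              # PV grows to fill the larger LUN
lvextend -L +20G /dev/vg_ora/lv_data     # hand the new space to an LV
resize2fs /dev/vg_ora/lv_data            # grow the ext3 filesystem online
```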
I also use an IBM SAN Volume Controller (SVC), so transparent disk migrations to different storage frames are handled by that device (which I love!) - that is another reason I don't need the VxVM drivers. I also don't have the option of using Oracle raw devices, as the app owner doesn't want anything changed in that space.
Short of any other suggestions, I think I'll stick with LVM in that case.
The beauty of LVM is that you don't have to add 5 GB, 10 GB, and 1 GB LUNs to it piecemeal. You can add a 100 GB LUN each time, then create the 5 GB, 10 GB, or 1 GB logical volume (LV) they requested - they don't need to know you have more space pre-allocated to the VG. When the next request comes in, you just create another LV and don't have to touch the array at all until you've used up that 100 GB LUN.
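As a sketch of that workflow (the LUN path, VG, and LV names here are made up for illustration):

```shell
# Present one larger LUN up front, then carve per-request LVs from it.
pvcreate /dev/mapper/mpath1          # the new 100 GB LUN (placeholder name)
vgextend vg_ora /dev/mapper/mpath1   # add it to the existing VG

# DBA asks for 5 GB, then 10 GB - no SAN work needed for either:
lvcreate -L 5G  -n lv_req1 vg_ora
mkfs -t ext3 /dev/vg_ora/lv_req1
lvcreate -L 10G -n lv_req2 vg_ora
mkfs -t ext3 /dev/vg_ora/lv_req2

vgs vg_ora                           # the VFree column shows what's left
```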
Yeah, and I used to do that, but the way we cost-allocate back to the business units is per GB as reported from the SAN LUNs allocated. So, if the SAN reports 100 GB allocated to a server, the business wants to see that 100 GB mounted up.
Annoying, I know, but the "bean counters" (accountants) rule the roost where I work :-)
With my latest code update to the SVC, though, it supports thin provisioning of LUNs, so that may be my savior.