LinuxQuestions.org: Linux - Server
This forum is for the discussion of Linux software used in a server-related context.
Old 02-29-2008, 06:09 PM   #1
l_long_island
LQ Newbie
 
Registered: Feb 2008
Posts: 25

Rep: Reputation: 0
Can Red Hat's GFS and/or Cluster Suite implement an iSCSI client/server design?


The plan:

Client nodes (1..6) want to access a data storage array. They do so by invoking some client/server protocol and talk to the server(s) over a LAN.
The servers (1..2; one active, one used for failover) communicate with the storage array over a LAN using iSCSI, and communicate back to the clients via the client/server protocol.

Does Red Hat's GFS and Cluster Suite toolset lend itself to implementing this configuration? If so, can you explain how?

Code:
Client Node 1    Client Node 2   ...   Client Node 6
      |                |                     |
---------------------------------------------- LAN
             |                   |
         Server A            Server B
             |                   |
---------------------------------------------- LAN
                      |
                    Cloud
                 |         |
        Storage Array 1   Storage Array 2


Thanks,
l_long_island (Newbie to linux and servers)

Last edited by l_long_island; 02-29-2008 at 09:37 PM. Reason: Suggest viewing in edit mode.
 
Old 03-01-2008, 11:20 AM   #2
rayfordj
Member
 
Registered: Feb 2008
Location: Texas
Distribution: Fedora, RHEL, CentOS
Posts: 475

Rep: Reputation: 73
Absolutely.

Red Hat offers a clustering and storage class that I definitely recommend if you are looking to do this. It is well worth the $$$. The instruction was great, and the class (and labs) really clarified some of the questions I had from doing this before I took the class. It also brought to light why I had some of the problems I had. Their on-line documentation is really helpful too.

I've actually got something similar running as a "proof-of-concept" and "reference" for demonstrating its operation to support technicians.



For a two-node HA cluster you'll want to configure quorum to avoid split-brain situations where the nodes keep shooting each other. I've actually got a 3-node cluster, so I didn't have to deal with this in my lab. In addition to the HA I also use LVS for load-balancing. This particular instance is used to demonstrate HTTP load-balancing with pLVS (the primary balancer; bLVS is the backup balancer), and Servers A, B, and C have concurrent access to a GFS volume so that they may all present the same content. I also use CDPN to demonstrate on the client that the balancer is actually doing its job.
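For reference, a minimal sketch of the two-node quorum setting in /etc/cluster/cluster.conf (cluster and node names here are hypothetical, and fencing is omitted for brevity):

```xml
<!-- sketch of a two-node RHCS cluster.conf; names are hypothetical -->
<cluster name="cluster1" config_version="1">
  <!-- two_node="1" lets the cluster remain quorate with a single vote,
       which is the special case needed to avoid losing quorum when one
       of only two nodes fails -->
  <cman two_node="1" expected_votes="1"/>
  <clusternodes>
    <clusternode name="serverA" nodeid="1"/>
    <clusternode name="serverB" nodeid="2"/>
  </clusternodes>
  <fencedevices/>
</cluster>
```

With three or more nodes (as in the lab above), two_node is not needed and a simple vote majority decides quorum.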

Using your diagram, mine looks something like this:
Code:
           Clients
          |   |   |
------------------------------ LAN
       |             |
      pLVS          bLVS
       |             |
------------------------------ LAN
    |         |         |
 ServerA   ServerB   ServerC
    |         |         |
------------------------------ LAN
              |
            Cloud
              |
           Storage1
I used luci (management) and ricci (agent on the servers) for all of the configuration; it made it extremely easy to implement (as compared to my limited experience with prior versions of RHEL).

A single iSCSI target is presented to all three servers, using a single partition for clustered LVM and then GFS on an LV. Anything created on one server is immediately available to the others. While mine is a very simplistic example using HTTP, it works well for demonstrations.

Without the balancers (LVS) it is even easier to provide other services (less configuration, because you don't have to create rules on the balancers). Your configuration, with an active/backup pair, lends itself even more easily to hosting services like NFS, FTP, ... since only one system will ever be active at a time for any given service. I'm not well versed with LVS, so for things like FTP and NFS I tend to take LVS out of the equation for demonstrations. You'll need a floating IP that the client(s) connect to for your hosted service(s), with a DNS entry distinct from your servers'.
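A minimal sketch of that storage stack, assuming the iSCSI LUN is already visible as /dev/sdb on each node (the device, VG, cluster, and mount-point names here are hypothetical):

```shell
# On one node: create a clustered VG and an LV on the shared iSCSI disk
pvcreate /dev/sdb1
vgcreate -c y vg_shared /dev/sdb1        # -c y marks the VG as clustered (CLVM)
lvcreate -L 100G -n lv_web vg_shared

# Make a GFS filesystem with one journal per node (3 nodes here);
# -t is ClusterName:FSName and must match the name in cluster.conf
gfs_mkfs -p lock_dlm -t cluster1:web -j 3 /dev/vg_shared/lv_web

# On every node: mount it; writes on one node are visible on the others
mount -t gfs /dev/vg_shared/lv_web /var/www/html
```

These commands require a running cluster (cman, clvmd) and shared storage, so treat them as an outline rather than a copy-paste recipe.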
 
Old 03-01-2008, 10:56 PM   #3
l_long_island
LQ Newbie
 
Registered: Feb 2008
Posts: 25

Original Poster
Rep: Reputation: 0
Re: Can Red Hat's GFS and/or Cluster Suite implement an iSCSI client/server design?

Thanks for the reply, rayfordj. I have a lot of reading ahead of me on this topic; I'm new to this kind of stuff. My understanding is that GFS would take the place of NFS, is this correct? Would GFS logic be required in both the clients and the servers? Would adding Red Hat's GNBD make things easier, or would it add undue complexity? I would think that GNBD would be in the servers, since they (not the clients) will be clustered. Am I close?
Thanks.
 
Old 03-02-2008, 08:16 AM   #4
rayfordj
Member
 
Registered: Feb 2008
Location: Texas
Distribution: Fedora, RHEL, CentOS
Posts: 475

Rep: Reputation: 73
Quote:
My understanding is that GFS would take the place of NFS, is this correct?
GFS would replace ext3. You can then export the GFS filesystem via NFS as you would ext3 (or whatever your fs of choice). GFS is a more robust filesystem, designed for concurrent access, and more scalable.
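For example (a sketch only; the paths and addresses here are hypothetical), the active server could export a mounted GFS volume exactly as it would an ext3 one:

```shell
# /etc/exports on the active cluster node
/mnt/gfs_data  192.168.1.0/24(rw,sync)

# On a client, mount via the cluster's floating service IP
mount -t nfs 192.168.1.100:/mnt/gfs_data /mnt/data
```

The clients never know or care that the underlying filesystem is GFS; they just see NFS.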

Quote:
Would GFS logic be required in both the clients and servers?
Just the servers would need the GFS packages. The server(s) would then provide the service to your clients as they do now. I'm guessing from your first question that this is via NFS...

Quote:
Would adding Red Hat's GNBD make things easier or would it add undue complexity?
I think you'd just be adding to the complexity. I've not seen many implementations using GNBD (not to say they do not exist). It is my understanding that iSCSI more or less does away with the need for GNBD. You'd want to use one or the other, and the industry seems to be moving to iSCSI. If you are already intimately familiar with GNBD it might be a better choice for you than iSCSI, but if you will be learning either, I say go with iSCSI. Plus, from your original post it sounded like you already have an iSCSI "fabric" going for this site, so you'd want to take advantage of that.
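For reference, attaching an iSCSI LUN with open-iscsi looks roughly like this (the target IP and IQN below are hypothetical):

```shell
# Discover targets advertised by the storage array
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in; the LUN then appears as a local SCSI disk (e.g. /dev/sdb)
iscsiadm -m node -T iqn.2008-03.com.example:storage1 -p 192.168.1.50 --login
```

Once logged in, the disk is usable like any local block device, which is why GNBD becomes largely redundant.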
 
Old 03-02-2008, 12:35 PM   #5
l_long_island
LQ Newbie
 
Registered: Feb 2008
Posts: 25

Original Poster
Rep: Reputation: 0
Rayford, very nice response; many precise answers to many dubious questions. Thanks much. I've got an interesting couple of weeks ahead of me. I'll let you know how it goes.
Appreciatively yours,
l_long_island

Last edited by l_long_island; 03-02-2008 at 12:45 PM. Reason: spelling
 
Old 03-07-2008, 07:23 PM   #6
l_long_island
LQ Newbie
 
Registered: Feb 2008
Posts: 25

Original Poster
Rep: Reputation: 0
2-server system (1 failover): not for GFS?

It seems that in a system such as the one I laid out, with two servers not running concurrently (server B is used only in a failover scenario), the power of GFS is diminished. Cluster management can still be administered using Red Hat Cluster Suite, but the block-level data sharing that GFS provides is not relevant to this system. Is this a correct analysis?
l_long_island
 
Old 03-07-2008, 09:39 PM   #7
rayfordj
Member
 
Registered: Feb 2008
Location: Texas
Distribution: Fedora, RHEL, CentOS
Posts: 475

Rep: Reputation: 73
2-server system (1 failover) not for GFS? I do not see why GFS could not be utilized. It has many robust features in addition to being a shared filesystem: global, directory, and file-level tuning (direct I/O); CDPN; dynamic inode allocation; larger filesystem and file sizes; ...



Yes, ext3 (or whatever fs) may be implemented in an active/backup HA cluster where only one system will ever access the filesystem at any given time, and there are facilities within the cluster management to accommodate this configuration.

I wouldn't say that it isn't relevant, but it may be more than what you are after at the moment. There may still be features of GFS that you want to take advantage of even if you are not using it for concurrent/shared access among hosts. Depending on what is being done, the ability to "grow" inodes on the fly and "reclaim" metadata may still put GFS at an advantage over ext3, since the only way to increase the inode count on ext3 (aside from re-formatting the fs) is to grow the storage and resize the fs.
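As a hedged sketch of what that looks like in practice (the mount point and LV names are hypothetical; gfs_grow and gfs_tool are the GFS1-era tools):

```shell
# Grow a GFS filesystem after extending its LV;
# gfs_grow is run on one node only, against the mounted filesystem
lvextend -L +50G /dev/vg_shared/lv_web
gfs_grow /mnt/gfs_data

# Reclaim unused metadata blocks back into the free-space pool
gfs_tool reclaim /mnt/gfs_data
```

By contrast, ext3's inode count is fixed at mkfs time, so running out of inodes there means growing the device and resizing, or re-formatting.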

Ultimately the decision is yours (the implementer, maintainer, administrator, ...) as to what would work best for your environment, taking into account the expected workload and future growth demands (and I'm sure many other factors). Will you ever need to add more servers/services and share the workload while providing shared access to a given filesystem? Is the number and/or size of files stored to remain constant for the filesystem's life-span? I'm sure there are any number of other questions along these lines that you'll be considering. I'm not wanting to pry, but rather to get you thinking about what you currently need to meet "right now"'s demands as well as to be able to scale if/when needed in the future. If you have no clear business need to implement GFS and find that ext3 will clearly meet your requirements, then go for ext3; save learning the GFS tools for a later date.


Long story short: evaluate what you need, what's at your disposal, and then identify and utilize the right tool(s) for the job.







Quote:
The point to all this is that you must use the right tool for the right job. Think of each solution as a different tool in a toolbox. If you were asked what one tool you would use to build something with, answering “hammer” would sound quite foolish. Instead, having a selection of tools that fit the current challenge is the best recipe for success.
...from
Internet Software Development: Choosing the Right Tool
 
Old 03-10-2008, 06:51 PM   #8
l_long_island
LQ Newbie
 
Registered: Feb 2008
Posts: 25

Original Poster
Rep: Reputation: 0
Good News / Bad News

Rayford,
Thanks again for taking the time to write back.

There is very little chance we would be adding servers to the network. And regarding some of the other positives you spoke about (CDPN), I am unsure of the value they add to the planned use of the system (I am new to this project).
However, there is some good news / bad news regarding the possibility of still going with GFS.
I was just made aware that the servers (in the system laid out) will be running applications that will [also] be writing to the storage media. This will be in addition to the server function (honoring NFS requests) that I thought was the only disk activity. With this new information, GFS becomes a stronger candidate, since both servers (not just the primary) will be accessing the disks and sharing data is more pertinent. This is the good news.
The bad news is that the user has a requirement to remove the disks, take them to another machine, and transfer the data to another disk for later analysis. It is my understanding that accessing this data on another machine may be a problem, since the second machine would have no knowledge of the GFS filesystem.

I like your advice on picking the right tool for the job. But what usually happens is that we are given assignments without fully understanding what needs to be done. Milestones are put into place (requirements, design, test dates, etc.) before the proper understanding can be achieved. More often than not, we go with our best guess and hope we weren't too far off. Pushing back on the schedule makers may be the right answer, but can be a career-ending approach. Then again, so can guessing on an approach that IS too far off.

Thanks again,
l_long_island.
 
Old 03-13-2008, 07:32 PM   #9
rayfordj
Member
 
Registered: Feb 2008
Location: Texas
Distribution: Fedora, RHEL, CentOS
Posts: 475

Rep: Reputation: 73
The tertiary system will also need the GFS tools to manage the GFS filesystem.

Why "remove the disk(s)"? This is an iSCSI target, no?
Why not add the tertiary system to the GFS-cluster and let it access the data that way?
other options may include
- taking a snapshot of the iSCSI target and presenting it to the tertiary system
- if using LVM, you may be able to snapshot the GFS LV and then "backup" the data for analysis
- depending on the iSCSI device it may offer other features for cloning, mirroring, or otherwise presenting a distinct but identical copy of the target presented to your HA-Cluster
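For the LVM-snapshot option, a rough sketch (all names hypothetical; note that snapshots of clustered LVs had restrictions in this era, so this assumes the snapshot can be taken, e.g. with the cluster quiesced):

```shell
# Take a snapshot of the GFS LV for out-of-band analysis
lvcreate -s -L 10G -n lv_web_snap /dev/vg_shared/lv_web

# Mount the snapshot read-only on the analysis host without cluster locking;
# lock_nolock overrides the lock_dlm protocol baked in at mkfs time
mount -t gfs -o ro,lockproto=lock_nolock /dev/vg_shared/lv_web_snap /mnt/snap
```

The lock_nolock override is what lets a standalone (non-cluster) machine read a GFS volume, which also speaks to the "tertiary system" question above.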




I'm all too familiar with starting down a project only to have the scope change completely mid-stream. There is a fine line on what to push back on, when to push back on it, and when to just suck it up and hope for the best...


Hopefully you'll be able to nail down a solution that meets all of the requirements with some flexibility to shift as dictated.
 
Old 03-14-2008, 05:26 PM   #10
l_long_island
LQ Newbie
 
Registered: Feb 2008
Posts: 25

Original Poster
Rep: Reputation: 0
deleted duplicate entry

Last edited by l_long_island; 03-14-2008 at 05:40 PM. Reason: remove duplicate entry
 
Old 03-14-2008, 05:31 PM   #11
l_long_island
LQ Newbie
 
Registered: Feb 2008
Posts: 25

Original Poster
Rep: Reputation: 0
Full steam ahead with GFS!

More good news (does it ever stop?). I was just informed that I can assume the customer will configure his/her tertiary system to be GFS2-compatible. So now it's full steam ahead with GFS. I'll let you know how it works out. Should get back to you in about 5 years.
Thanks again for all your help.
l_long_island
 
Old 03-14-2008, 10:02 PM   #12
frndrfoe
Member
 
Registered: Jan 2008
Distribution: RHEL, CentOS
Posts: 375

Rep: Reputation: 38
I am running a system like this on RHEL3AS, and we finally had to move from GFS to LVM because we had many problems with the lock manager balling up the system. We routinely had to stop the cluster so that we could restart GFS, because the lock manager lost its brains. Life with LVM has been harmonious ever since.
The issues we had may be fixed in modern versions, but keep CLVM2 in mind when you start beating on the system.
 
Old 03-17-2008, 04:57 PM   #13
l_long_island
LQ Newbie
 
Registered: Feb 2008
Posts: 25

Original Poster
Rep: Reputation: 0
Is LVM a substitute for GFS?

I am confused. Do GFS and LVM provide different functionality? I thought LVM provides a mechanism to dynamically partition the disk(s). GFS uses LVM to provide sharable blocks on the disks. GFS makes use of LVM but I think their goals are different(??)
 
Old 03-17-2008, 06:00 PM   #14
rayfordj
Member
 
Registered: Feb 2008
Location: Texas
Distribution: Fedora, RHEL, CentOS
Posts: 475

Rep: Reputation: 73
Quote:
Do GFS and LVM provide different functionality?
Yes. LVM is an abstraction layer over your storage; GFS is a filesystem.

You may use LVM without GFS and, likewise, GFS without LVM.

Red Hat's RHEL5 Clustered-LVM guide illustrates this better, but basically LVM sits between a partition and your filesystem. Rather than placing a fs on a partition, you format a logical volume.

https://www.redhat.com/docs/manuals/enterprise/

LVM Administrator's Guide
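A minimal sketch of that layering (device names and sizes hypothetical) shows why LVM and GFS answer different questions:

```shell
# LVM sits between the block device and whatever filesystem you choose
pvcreate /dev/sdb1                    # mark the partition as a physical volume
vgcreate vg_data /dev/sdb1            # pool PVs into a volume group
lvcreate -L 20G -n lv_home vg_data    # carve out a logical volume

# The filesystem (ext3 here, but it could be GFS) goes on the LV,
# not on the raw partition
mkfs.ext3 /dev/vg_data/lv_home
mount /dev/vg_data/lv_home /home
```

Swap mkfs.ext3 for gfs_mkfs (on a clustered VG) and the LVM layer stays exactly the same, which is the sense in which they are independent.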
 
Old 03-20-2008, 11:48 AM   #15
l_long_island
LQ Newbie
 
Registered: Feb 2008
Posts: 25

Original Poster
Rep: Reputation: 0
Still Confused

Thanks. If that's the case, I'm having trouble interpreting frndrfoe's post (03-14-2008, 09:02 PM) relating the move from GFS (a filesystem) to LVM (not a filesystem, I think).
 
  

