LinuxQuestions.org
Old 08-01-2006, 11:31 PM   #1
xmdms
Member
 
Registered: Oct 2003
Posts: 134

Rep: Reputation: 15
RHAS 3.0 i64, Oracle 10g, Oracle RAC, and ASM


Greetings,

I was just curious whether anyone on this forum is running the setup mentioned in the subject line. Can Oracle RAC and ASM be set up for redundancy and a standby DB?

Please shed some light on this topic.

Thank you in advance
 
Old 01-04-2007, 01:37 PM   #2
MensaWater
Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 5,773
Blog Entries: 4

Rep: Reputation: 697
We run RHAS 3, Oracle 10g, and Oracle RAC, but use the Oracle Cluster Filesystem (OCFS) for our data filesystems instead of ASM raw devices. Even with that we had to use ASM to set up raw devices for Oracle Cluster Ready Services (CRS). One reason we opted for OCFS over ASM was our misperception that we could do filesystem backups. Since OCFS is a clustered filesystem, standard tools (tar, cp, etc.) don't work on it, though Oracle does provide OCFS-enabled RPMs that help.

However, for backup the only way Oracle and NetBackup support it is with RMAN. It was an attempt to avoid RMAN that led us to OCFS, so if I had it to do over again I'd go the ASM route instead. You of course have to use RMAN for that as well.
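For anyone following along, a minimal RMAN full backup through a tape channel (the way NetBackup drives it) might look like the sketch below. This is an assumption-laden illustration, not the poster's actual script: the channel name is a placeholder and the media-management library is assumed to already be configured.

```shell
# Sketch of the RMAN script we would feed to "rman target /" for a full
# backup through NetBackup's SBT tape interface. Channel name "t1" is a
# placeholder; the SBT media-management config is assumed to exist.
rman_script() {
  cat <<'EOF'
RUN {
  ALLOCATE CHANNEL t1 DEVICE TYPE 'SBT_TAPE';
  BACKUP DATABASE PLUS ARCHIVELOG;
  RELEASE CHANNEL t1;
}
EOF
}

# To actually run it (requires an Oracle install and ORACLE_SID set):
#   rman_script | rman target /
rman_script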

P.S. Missed this post until today when I saw it as one of the "similar" posts to a more recent thread.

Last edited by MensaWater; 01-04-2007 at 01:38 PM.
 
Old 01-05-2007, 10:51 AM   #3
xmdms
Member
 
Registered: Oct 2003
Posts: 134

Original Poster
Rep: Reputation: 15
Question

Thanks for the reply.

We're using both methods at the moment and they seem to be working fine. I often wonder what the best practices would be for setting up an OCFS partition, though. Do you know the answer?

Thanks,
Bob

 
Old 01-05-2007, 01:52 PM   #4
MensaWater
Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 5,773
Blog Entries: 4

Rep: Reputation: 697
I don't know that I've seen a "best practices" white paper, but Oracle has a site for Oracle on Linux at:
http://oss.oracle.com/projects/coreutils/files/EL3/

On the left you'll see information regarding OCFS. As I mentioned, there are bundles for coreutils and tar that allow the use of tar, mv, etc. with the OCFS filesystem. They do work to a certain extent (at least better than what you get by default).

Note that the OCFS you can use with the 2.4 kernel (the one RHEL AS 3 has) is not the latest one. From a presentation I saw some months back, I gather there are some significant improvements in the later OCFS version (OCFS2) that runs on 2.6 kernels (RHEL AS 4 and higher). I haven't used that.

We do OCFS on a RAID 5 built on a Clariion CX700 using fibre drives over a SAN for our production systems.

I did actually set up OCFS on a standalone server that had a PERC controller covering its internal disks in a RAID 5.

Recently we had to migrate the production environment from the original CX700 to a replacement CX700. We came up with a procedure for that, but it is rather specific to the fact that we were using QLogic fibre cards and a CX700 (with EMC Navisphere on the host to talk to the array), as well as EMC PowerPath to handle multipathing for the two fibre cards and two SPs being used.
 
Old 01-05-2007, 03:37 PM   #5
xmdms
Member
 
Registered: Oct 2003
Posts: 134

Original Poster
Rep: Reputation: 15
We used OCFS and raw devices to share the mount point on multiple nodes. And yes, we're currently using RHAS 3.0 in our production and test environments. We also set up ASM, and by doing so we can present the ASM storage to a third server for our production standby. We're also using Oracle RAC, not OS clustering, at this time.

However, we mainly used RAID 10 for production and RAID 0 for the test environment. Would you know the syntax to set up an OCFS partition?
 
Old 01-05-2007, 05:16 PM   #6
MensaWater
Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 5,773
Blog Entries: 4

Rep: Reputation: 697
We use Oracle RAC as well (more properly, we're using Oracle Cluster Ready Services [CRS]). I didn't mean to imply we were doing OS clustering - we aren't.

Our install included mkfs.ocfs, which has a man page.

We also got ocfstool, which is a graphical tool for managing OCFS.

I'm not in my office at the moment or I could give you the exact syntax I used for setting up the filesystems. One of the flags is -u (uid), and you do have to set it to the oracle user to allow the Oracle DB to use the filesystem - I recall early on I tried to set it to root and it didn't like that at all.

Basically you have to use ASM for the CRS voting disk and two other raw devices. You can use either ASM for raw data devices or OCFS for filesystem data devices that are shared between the nodes.
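For reference, on RHEL 3 the raw devices themselves are usually bound to block devices at boot through /etc/sysconfig/rawdevices, applied by the rawdevices init script. A sketch with placeholder block devices (the actual partitions depend on what your shared storage presents; these names are not from the thread):

```
# /etc/sysconfig/rawdevices -- raw bindings applied at boot by
# "service rawdevices restart" (sketch; block devices are placeholders)
# <raw device>      <block device>
/dev/raw/raw1       /dev/emcpowerc1    # Oracle Cluster Registry (OCR)
/dev/raw/raw2       /dev/emcpowerc2    # CRS voting disk
```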

Packages we have installed with ocfs in the name:
ocfs-2.4.21-EL-1.0.11-1
setupOCFS-1.0.0-2
ocfs-support-1.0.10-1
ocfs-tools-1.0.10-1
ocfs-2.4.21-EL-smp-1.0.11-1

As mentioned, we also downloaded and installed the replacement tar and coreutils from the Oracle link I sent earlier.
 
Old 01-05-2007, 06:06 PM   #7
xmdms
Member
 
Registered: Oct 2003
Posts: 134

Original Poster
Rep: Reputation: 15
Sounds like we're taking the same approach but with different hardware. As for ocfstool (the graphical tool), where do I get hold of it? Is it shareware, or something you have to buy?

I'd greatly appreciate it if you have the time to reply back with the OCFS syntax and the OCFS graphical tool. I love learning about new tools.

Have a great day!!
 
Old 01-05-2007, 06:41 PM   #8
MensaWater
Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 5,773
Blog Entries: 4

Rep: Reputation: 697
I believe both mkfs.ocfs and ocfstool are free with the bundles from the Oracle link I mentioned above. The packages I show installed are the ones that contain them and should be available at that site. We actually got them installed from a Dell deployment CD, as our RAC was originally installed from that, and the documentation that came with it gave me the specific syntax.

You can get the syntax for mkfs.ocfs with "man mkfs.ocfs".

You use the ocfstool GUI to tell it which nodes are sharing the OCFS filesystems you create.

The actual command syntax used was like the following:
mkfs.ocfs -F -b 128 -L u01 -m /database -u 500 -g 500 -p 0775 /dev/emcpowera1
mkfs.ocfs -F -b 128 -L u02 -m /database/archive -u 500 -g 500 -p 0775 /dev/emcpowerb1

The /dev/emcpower devices were the PowerPath pseudo-devices. You'd have to use whatever your shared storage is instead. (For example, on the standalone node we created for testing I used /dev/sda13, which was not shared with another host.)

The meaning of the above flags:
-F = Force format
-b 128 = Block size of 128 K
-L u01 = Allow mount via volume label (u01 being the label for the first command). This isn't strictly necessary - in fact I don't mount by volume label - but it doesn't hurt to put it there.
-m /database = Mount point directory (/database for that command - notice it is /database/archive for the second one, which is submounted under the first)
-u 500 = Set UID to 500 (the UID for oracle in /etc/passwd)
-g 500 = Set GID to 500 (the GID for oinstall in /etc/group)
-p 0775 = Permissions to set (rwxrwxr-x)

I actually have 5 filesystems but the above should be enough to give you the idea.
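For completeness, once the filesystems are made they get mounted on each node. A sketch of what matching /etc/fstab entries might look like - the devices are the PowerPath pseudo-devices from the commands above, and the _netdev option (which delays the mount until networking is up) is illustrative, not taken from the thread:

```
# /etc/fstab entries matching the two mkfs.ocfs commands above (sketch)
/dev/emcpowera1  /database          ocfs  _netdev  0 0
/dev/emcpowerb1  /database/archive  ocfs  _netdev  0 0
```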
 
Old 01-05-2007, 07:25 PM   #9
xmdms
Member
 
Registered: Oct 2003
Posts: 134

Original Poster
Rep: Reputation: 15
As for the 128 K block size (-b 128): do you normally use 128 K, or does it depend on the RAID configuration? What would the best practice be, I wonder?

Thanks much!

 
Old 01-05-2007, 08:55 PM   #10
MensaWater
Guru
 
Registered: May 2005
Location: Atlanta Georgia USA
Distribution: Redhat (RHEL), CentOS, Fedora, Debian, FreeBSD, HP-UX, Solaris, SCO
Posts: 5,773
Blog Entries: 4

Rep: Reputation: 697
I've used 128 K because that was what was specified in the deployment guide. At the time we did this, OCFS was brand new to us. We've not seen I/O performance as an issue so much as limited shared memory. If we saw I/O performance as an issue we'd likely look more closely at the block size. On our HP-UX systems, where we run much larger Oracle (non-RAC) databases, we do use a different block size (8192) for the Veritas filesystems (VxFS).

This year I'll be building a second machine for the test environment so I can do some more in-depth testing (it's only taken me a year and a half to convince the powers that be that the test system for a cluster environment should also be a cluster). I couldn't just load another system because I needed to get SAN storage and fibre cards for both nodes.

It appears a hugemem kernel would help with shared memory (and therefore allow for a larger SGA in Oracle), but I wasn't willing to test it on the live production system.
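The shared-memory ceiling mentioned here is visible in the kernel tunables. A quick way to check them (a sketch - the 2 GB value in the comment is just an example, not a recommendation from the thread):

```shell
#!/bin/sh
# Inspect the kernel shared-memory limits that cap the Oracle SGA size.
cat /proc/sys/kernel/shmmax   # max size in bytes of one shared-memory segment
cat /proc/sys/kernel/shmall   # total shared-memory pages the system allows

# To raise shmmax persistently (e.g. 2 GB), add to /etc/sysctl.conf:
#   kernel.shmmax = 2147483648
# and apply with: sysctl -p
```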
 
Old 01-05-2007, 11:07 PM   #11
xmdms
Member
 
Registered: Oct 2003
Posts: 134

Original Poster
Rep: Reputation: 15
Our block size is set at 8192 on HP SANs. You should look into pinning and parallelism - it's quite a performance lift.
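For anyone curious what those two suggestions look like in practice, here is a hedged SQL*Plus sketch. The package name, table name, and parallel degree are placeholders chosen for illustration, not details from this thread:

```shell
#!/bin/sh
# SQL sketch for the two tuning techniques mentioned above: pinning a
# package in the shared pool and enabling parallel query on a table.
# All object names and the degree of 4 are illustrative placeholders.
tuning_sql() {
  cat <<'EOF'
-- Pin a hot package so it is not aged out of the shared pool
EXEC DBMS_SHARED_POOL.KEEP('SYS.STANDARD', 'P');
-- Let full scans of a large table use 4 parallel query slaves
ALTER TABLE app.big_table PARALLEL 4;
EOF
}

# To actually run it (requires an Oracle install):
#   tuning_sql | sqlplus "/ as sysdba"
tuning_sql
```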
 
  

