Red Hat: This forum is for the discussion of Red Hat Linux.
Distribution: Gentoo, RHEL (Fedora, CentOS, OEL), Ubuntu, FreeBSD, Solaris 10
Posts: 170
RHEL5 running as a VMware guest
Hi All,
Red Hat Enterprise Linux 5 kernel supports four I/O schedulers:
- cfq (Completely Fair Queuing)
- deadline
- noop
- anticipatory
I read in documentation that the recommended kernel line settings for 64-bit Red Hat Enterprise Linux 5 running as a VMware guest are:
divider=10 notsc iommu=soft elevator=noop
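For reference, a grub.conf kernel line carrying those settings might look like this (the kernel version and root device below are examples, not taken from the documentation):

```
title Red Hat Enterprise Linux Server (2.6.18-128.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-128.el5 ro root=/dev/VolGroup00/LogVol00 divider=10 notsc iommu=soft elevator=noop
        initrd /initrd-2.6.18-128.el5.img
```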
But for single-instance databases with dedicated storage the deadline scheduler is recommended. The deadline scheduler reorders I/O to optimize disk head movement, and it caps the maximum latency per request to prevent resource starvation for I/O-intensive processes.
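If it helps, the scheduler can also be inspected and switched per device at runtime through sysfs, without rebooting (the device name sda below is an example; a sysfs write only lasts until reboot, so the kernel line is still needed for persistence):

```shell
dev=sda                                    # example device name; substitute yours
sched_file=/sys/block/$dev/queue/scheduler
if [ -r "$sched_file" ]; then
    cat "$sched_file"                      # the scheduler shown in [brackets] is active
fi
if [ -w "$sched_file" ]; then
    echo deadline > "$sched_file"          # switch this device to deadline immediately
fi
```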
I have an Oracle instance on RHEL5 running as a VMware (ESX) guest with dedicated storage. Which scheduler is better in my case?
We have several x86_64 RHEL 5.3 systems that are Oracle servers running under ESX 3.5. All have just "divider=10 notsc". I haven't looked into iommu-- did that come from RHEL or VMware or Oracle? The docs from the different sources seem to say something different every few months, and if nothing is broken I don't obsess over trying to keep up with the latest changes.
I tried an experiment several months ago, probably on RHEL 4.6 and an earlier ESX, and I could not tell any significant difference between the performance of the different I/O scheduler options. When we moved most all of our storage to a fiber-connected SAN, the SAN controller's buffering washed out any kernel optimizations and all the schedulers gave about the same results. I leave it at the default. YMMV depending on specifics of your storage.
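For what it's worth, a crude way to repeat that kind of comparison is to time a direct-I/O write under each scheduler (a rough sketch only; sda and /u01 are example names, and your real Oracle workload is the benchmark that actually matters):

```shell
dev=sda                        # example device; substitute yours
testdir=/u01                   # example filesystem backed by that device
for s in noop deadline cfq; do
    if [ -w /sys/block/$dev/queue/scheduler ] && [ -d "$testdir" ]; then
        echo "$s" > /sys/block/$dev/queue/scheduler
        sync
        # dd's final status line reports throughput for this scheduler
        dd if=/dev/zero of="$testdir/ddtest" bs=1M count=512 oflag=direct 2>&1 | tail -1
        rm -f "$testdir/ddtest"
    fi
done
```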
Original Poster
Quote:
Originally Posted by jonesr
We have several x86_64 RHEL 5.3 systems that are Oracle servers running under ESX 3.5. All have just "divider=10 notsc". I haven't looked into iommu-- did that come from RHEL or VMware or Oracle? ...
Hello,
Thank you for your attention to my question. I found it in the Red Hat documentation:
Oracle 10g Server on Red Hat®
Enterprise Linux® 5
Deployment Recommendations
Version 1.2
November 2008
Can you recommend any documentation about Oracle on VMware?
...Can you tell me about the "noatime" mount option for ext2 and ext3 file systems. Does it really improve I/O?
noatime improves I/O by avoiding writes. When it is set, the file system does not stamp directory and file inodes with the time they were last accessed, which saves the disk writes needed to update those inodes.
People rarely care about the atime of an Oracle tablespace, so in those file systems this lets you give the system permission to skip that work.
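The option goes in the fourth field of /etc/fstab and can be applied with a remount (the device and mount point below are examples):

```shell
# /etc/fstab entry for an Oracle datafile filesystem:
#   /dev/sdb1   /u01   ext3   defaults,noatime   1 2
#
# Apply to an already-mounted filesystem without a reboot:
#   mount -o remount,noatime /u01
#
# Quick check that the option took effect: read a file and see whether
# its access time moved (it should stay put under noatime).
f=$(mktemp)
before=$(stat -c %X "$f")      # atime as seconds since the epoch
sleep 1
cat "$f" > /dev/null
after=$(stat -c %X "$f")
echo "atime before=$before after=$after"
rm -f "$f"
```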