LinuxQuestions.org > Forums > Linux Forums > Linux - General
Old 02-09-2006, 05:18 PM   #1
meatbiscuit
LQ Newbie
 
Registered: Feb 2006
Posts: 2

Rep: Reputation: 0
Kernel Memory Management


Hi all,

I haven't posted here before, but I will now be frequenting the board a lot, as I am somewhat inexperienced in Linux and am constantly finding that this board is a great resource with a ton of great knowledge.

I recently took a job as a Software Engineer, however, a large portion of my job entails a lot of systems administration work.

With that in mind: the company that I work for currently has a number of different software titles that rely heavily on memory. These applications are fairly graphics-intensive and do a lot of 3D image manipulation, so they are constantly writing large data sets into physical memory and, even more so, accessing the data in memory.

We are running Red Hat 8 on an intel x86 architecture with 1 GB of DDR 400 RAM, configured to support dual channel data transmission, and an intel D865 board.

There is an application that I have been testing extensively in order to maximize performance. As more image data is loaded into this application, the process grows larger and larger in memory. The strange deal here is that when the process reaches ~256 MB in size, the performance of the application takes an exponential decline. To be more specific, when a request is made to manipulate some image data, rendering time goes from approximately 1-2 seconds to more like 4-8 seconds.

There appears to be somewhat of a threshold; that's why I mention the ~256 MB mark. When the process is 240 MB, rendering time is fine, approximately 1-2 seconds. Add one more data set to the mix, making the process more like ~265 MB in size, and we see this huge performance decrease.

I am almost positive this is not a hardware issue; I have tested different boards, CPUs, different RAM sticks, etc. Another strange addition to the problem is that when the application is run on a system with 2 GB of memory, this exponential decline in performance goes away completely. Now, I know what you're all thinking: the Kernel is swapping this data out to disk. Here is the catch: there is no swapping going on at all. I have tested this extensively as well, and the Kernel does not start swapping until an inordinate amount of image data is loaded, at which point rendering time is much worse, at 12-15 seconds.
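For anyone wanting to reproduce the swap check, here is a minimal sketch. On the 2.4 kernel in question you would watch the si/so columns of `vmstat 1`; the counters read below live in /proc/vmstat, which only appeared in 2.6, so treat the field names as an assumption for newer kernels:

```shell
# Sample the kernel's cumulative swap-in/swap-out counters before and
# after a rendering run; a zero delta means the slowdown is not swap.
read_swap() { awk '/^pswp(in|out)/ {s += $2} END {print s+0}' /proc/vmstat; }
before=$(read_swap)
sleep 1   # replace the sleep with the actual rendering workload
after=$(read_swap)
echo "swap operations during the interval: $((after - before))"
```

If the printed delta stays at 0 while rendering time degrades, swapping is ruled out, which matches what I'm seeing.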

My final conclusion is that this is an issue of the way the Kernel is managing memory. I believe this is the case because I recompiled the Kernel such that Highmem was disabled (/proc/meminfo then shows ~130 MB less in RAM). When disabling Highmem, this exponential decrease in performance goes away, and there is a nice linear decline in performance as more image data is progressively loaded into this application, which is what I should expect to see.
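As a side note, the before/after highmem numbers are easy to capture. A quick sketch (the HighTotal/LowTotal fields only show up on kernels where highmem is relevant, so their presence itself is informative):

```shell
# With CONFIG_HIGHMEM enabled, HighTotal reports the RAM above the
# ~896 MB line; with it disabled, HighTotal is 0 (or absent) and
# MemTotal shrinks by roughly that amount.
grep -E '^(MemTotal|LowTotal|LowFree|HighTotal|HighFree)' /proc/meminfo
```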

My thoughts on this are that when disabling Highmem, we take away ZONE_HIGHMEM (896 MB and above) completely and force the Kernel to exclusively use ZONE_NORMAL (16-896 MB) as its space for memory allocation. On the x86 platform the Kernel otherwise sort of folds ZONE_HIGHMEM onto ZONE_NORMAL to effectively create one zone for memory (that is how I understand it), and I suspect that folding is where the overhead is coming from.

Now that all of that is out of the way, my question is this: Does anyone know if there are some more tunable parameters that I can play with to adjust the way the Kernel is paging or managing memory? I am confident that somewhere, there is some sort of configuration file or something that I can change to optimize performance. Modifying the source is my preferred last resort, as the source code for this is pretty nasty looking. Maybe there is some way to increase the efficiency of the TLB?
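Not an answer to my own question, but for reference: the VM knobs I have found so far live under /proc/sys/vm (the exact set of files varies between 2.4 versions). A sketch of poking at one of them; page-cluster (log2 of the number of pages read per swap I/O) is one that exists on both 2.4 and later kernels:

```shell
# List whatever VM tunables this kernel exposes
ls /proc/sys/vm/
# Inspect one; writes require root and take effect immediately
cat /proc/sys/vm/page-cluster
# echo 4 > /proc/sys/vm/page-cluster   # uncomment (as root) to change it
```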

I could just leave Highmem disabled, but then I lose all of my upper-memory (~130 MB worth). Any help would be greatly appreciated.

Also, I am running the 2.4.21 Kernel.

Thanks a lot,

Meat
 
Old 02-09-2006, 07:11 PM   #2
amosf
Senior Member
 
Registered: Jun 2004
Location: Australia
Distribution: Mandriva/Slack - KDE
Posts: 1,672

Rep: Reputation: 46
Sounds like an argument for going to a 64bit kernel and OS
 
Old 02-09-2006, 11:32 PM   #3
foo_bar_foo
Senior Member
 
Registered: Jun 2004
Posts: 2,553

Rep: Reputation: 53
i always preface discussing these complex topics with (i might get some of this wrong).
the 2.4 series kernels had this fault of redirecting highmem through
zone normal via overhead-intensive bounce buffers.
Fortunately Linux is blessed with a very good mind concerning vm, and
some of this structure was beginning to be addressed by Andrea Arcangeli around kernel version 2.4.23;
you might even find patches he made for earlier versions.
with the modern 2.6 series kernels he fixed it and this is no longer a problem.
larger page size in the new kernels gives room for permanent mappings in the 128 MB reserved for kernel housekeeping and not mapped to physical RAM, or there is the new option to map highmem page tables in highmem itself. that has slight overhead, but with smaller lookaside buffers caused by larger pages (fewer pages), combined with the new reverse-map PTE chain, it's not noticeable.
 
Old 02-09-2006, 11:35 PM   #4
foo_bar_foo
Senior Member
 
Registered: Jun 2004
Posts: 2,553

Rep: Reputation: 53
Quote:
Originally Posted by amosf
Sounds like an argument for going to a 64bit kernel and OS
yes, 64bit's lack of limits on memory bandwidth is a HUGE advance.
 
Old 02-10-2006, 06:41 PM   #5
foo_bar_foo
Senior Member
 
Registered: Jun 2004
Posts: 2,553

Rep: Reputation: 53
another potential solution is to use one of the patches on the kernel that change the kernel-to-user space balance from 1 GB/3 GB to 2 GB/2 GB,
if you don't mind losing 1 GB of user virtual address space.
http://www.kernel.org/pub/linux/kern...2.4/2.4.23aa1/
the patch for adding the alternate split config options is the first one:
00_3.5G-address-space-5
i don't think that one made it into the main kernel, not sure.
there are other patches there also that address highmem in 2.4.23;
some might be in the main kernel, i'm not sure.
you can certainly see he was working on the issue at this point.
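one way to see which split a kernel is actually running (a sketch; the addresses assume a 32-bit kernel: with the stock 3G/1G split user mappings top out just under 0xc0000000, with 2G/2G just under 0x80000000 -- on 64-bit kernels you will see much larger addresses):

```shell
# Print the end address of the highest mapping of a process;
# $$ (this shell) stands in here for the rendering application.
tail -1 /proc/$$/maps | awk -F'[- ]' '{print "highest mapping ends at 0x" $2}'
```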
 
  

