LinuxQuestions.org > Forums > Non-*NIX Forums > Programming
Old 12-14-2012, 01:08 PM   #1
NetProgrammer
Member
 
Registered: Aug 2012
Posts: 30

Rep: Reputation: Disabled
Any experience in allocating 1 TB of RAM?


Dear all,
I'm studying the feasibility of a project in which I have to allocate 1 TB of RAM in C under 64-bit Red Hat Linux.
Now, apart from all the theoretical rules, I'd like to know whether the memory-allocating functions (such as malloc, calloc, ...) will work without trouble, or whether there are considerations I have to keep in mind.
Does anyone have such experience?
Any guidance will be appreciated.
 
Old 12-14-2012, 02:24 PM   #2
Pearlseattle
Member
 
Registered: Aug 2007
Location: Zurich, Switzerland
Distribution: Gentoo
Posts: 999

Rep: Reputation: 142
Absolutely no experience in that area.
But if you try to directly allocate a 1TB chunk of RAM with a single call, you'll probably fail: if even 1 byte within that chunk is already being used, you'll probably get back a failure because such a contiguous block cannot be allocated?
Just delete this post if I did not understand your question...
 
Old 12-14-2012, 02:34 PM   #3
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by NetProgrammer View Post
I have to allocate 1 TB of RAM in C under 64-bit Red Hat Linux.
Now, apart from all the theoretical rules, I'd like to know whether the memory-allocating functions (such as malloc, calloc, ...) will work without trouble, or whether there are considerations I have to keep in mind.
I haven't tried it, but I am sure all the 64-bit functions can handle that with no problems.

But the default overcommit settings in Linux require that the anonymous memory used by a single process be available as either RAM or swap space. The total anonymous memory of all processes (by default) can go far beyond the size of RAM plus swap, but you would need to adjust the overcommit settings to let a single process do that.

If you just want to allocate it, or you want to allocate it contiguously and then use it sparsely, then change the overcommit settings.
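[Editor's note: the overcommit knobs referred to here live under /proc/sys/vm. A sketch of inspecting and, as root, changing them; mode 1 ("always overcommit") is what lets a single process reserve far more than RAM+swap.]

```shell
# Show the current policy: 0 = heuristic (default), 1 = always allow,
# 2 = strict accounting against swap + RAM * overcommit_ratio
cat /proc/sys/vm/overcommit_memory

# Allow any allocation request to succeed (root required).
# This persists only until reboot; add the line to /etc/sysctl.conf to keep it.
sysctl -w vm.overcommit_memory=1
```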

If you want to really use the 1TB of memory, then you obviously need the ram+swap.

I'm curious. Did you mean just to allocate it? Or to fully use it? If the latter, how much ram and how much swap space do you have?

I don't get to use such systems. But others where I work use systems with over 256GB of actual RAM. For some access patterns, such systems could use 1TB dynamic arrays without hopeless performance.

Quote:
Originally Posted by Pearlseattle View Post
you'll probably get back a failure because such a contiguous block cannot be allocated?
Contiguous in what sense? It needs to be contiguous in the process's private virtual address space. In X86-64, finding 1TB contiguous in process virtual space should be trivial. In physical ram, the 1TB doesn't need to exist at all, much less be contiguous. It is "demand zero" when allocated and becomes scattered as used.

Last edited by johnsfine; 12-14-2012 at 02:42 PM.
 
1 members found this post helpful.
Old 12-14-2012, 03:22 PM   #4
dugan
LQ Guru
 
Registered: Nov 2003
Location: Canada
Distribution: distro hopper
Posts: 11,235

Rep: Reputation: 5320
Quote:
Originally Posted by Pearlseattle View Post
Just delete this post if I did not understand your question...
Only moderators can delete posts. And they'll only do it in very exceptional circumstances (e.g. spam)

EDIT: realized after posting that I probably should have PMed this.

Last edited by dugan; 12-14-2012 at 03:24 PM.
 
Old 12-14-2012, 06:52 PM   #5
Pearlseattle
Member
 
Registered: Aug 2007
Location: Zurich, Switzerland
Distribution: Gentoo
Posts: 999

Rep: Reputation: 142
Quote:
Contiguous in what sense? It needs to be contiguous in the process's private virtual address space. In X86-64, finding 1TB contiguous in process virtual space should be trivial. In physical ram, the 1TB doesn't need to exist at all, much less be contiguous. It is "demand zero" when allocated and becomes scattered as used.
Not sure what you mean - I just know that, in my experience, allocating 100MB of RAM with "new" while 500MB were free often did not work on Windows, so I assumed the same might happen on Linux.
 
Old 12-14-2012, 07:01 PM   #6
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by Pearlseattle View Post
Not sure what you mean - I just know that, in my experience, allocating 100MB of RAM with "new" while 500MB were free often did not work on Windows, so I assumed the same might happen on Linux.
These things are fundamentally the same in Windows as in Linux. So I am pretty sure you are not talking about a Linux vs. Windows difference.

I'm pretty sure you are talking about a 32-bit vs. 64-bit difference.

I'm not sure what you mean by "having 500MB free". I expect you don't really know what you mean either (what "free" memory means is a more complicated question than you might think and what various tools mean by various reports of "free" memory may be very misleading). For some meaning of "having 500MB free" there is probably some way to get a 64-bit application to fail a 100MB allocation. But it would be a very obscure situation and I don't believe you ever hit that. For any definition of "having 500MB free" there is some plausible (even common) situation in which a 32-bit application would fail a 100MB allocation (equally plausible in Windows or in Linux). So I deduce that you are describing some failure of a 32-bit application.

Quote:
Originally Posted by NetProgrammer View Post
I have to
allocate 1 TB of RAM in C under 64 bit RedHat Linux.
You can run a 32-bit application under 64-bit RedHat. You obviously cannot allocate a 1TB memory area in a 32-bit application, even if the OS is 64-bit.

So I pretty much assumed at the start of this thread that NetProgrammer is talking about a 64-bit application.

Regardless of whether the OS is 64-bit or 32-bit, I'm pretty sure Pearlseattle is talking about a failure specific to 32-bit applications.

I guess you might consider that a Windows vs. Linux difference in that most applications on 64-bit Windows are 32-bit applications, while 32-bit applications on 64-bit Linux are supported but rare.

Last edited by johnsfine; 12-14-2012 at 07:06 PM.
 
Old 12-14-2012, 11:24 PM   #7
NetProgrammer
Member
 
Registered: Aug 2012
Posts: 30

Original Poster
Rep: Reputation: Disabled
Thank you all.
I think I must add some more explanation to my question.
I'll write the code myself; that is, I'll run 64-bit code under a 64-bit OS.
I'll use all of the allocated RAM.
I'll deactivate swap, because I want to be sure that every read/write happens in main memory.

Now, please give me your advice!

Last but not least, all of your comments are valuable, and even if I could, I would never delete any of them.

thank you again.
 
Old 12-15-2012, 07:01 AM   #8
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Quote:
Originally Posted by NetProgrammer View Post
I'll use all of the allocated RAM.
I'll deactivate swap, because I want to be sure that every read/write happens in main memory.
You have a computer with over 1TB of physical ram?
 
Old 12-15-2012, 07:07 AM   #9
NetProgrammer
Member
 
Registered: Aug 2012
Posts: 30

Original Poster
Rep: Reputation: Disabled
Yes, I do.
I know that the OS occupies some memory.
 
Old 12-15-2012, 07:45 AM   #10
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
If you want best performance on very large data structures that will not be swapped, you should use "hugepages" rather than ordinary allocation. I haven't done so myself, so I can't tell you any details, but it is easy to look up with Google.

Some of the OS size depends on physical ram size. If you have over 1TB of physical ram, I think your OS size will be pretty big.

If it is questionable whether you have enough beyond 1TB physical to accomplish the task, either start testing with a large swap area enabled or start testing with a somewhat smaller allocation.

Even if you plan to use hugepages, that is complicated enough that you probably should test first with ordinary allocation and only switch to hugepages after you have the basics working.

I'm not certain, but I believe properly configuring and using hugepages will greatly reduce the amount of physical ram the OS itself uses for managing the 1TB of application memory. So if your computer has only a little more than 1TB, you may need a slightly smaller allocation while using ordinary allocation and may be able to use a larger allocation after configuring for hugepages (and I think rebooting).

Even if you have far more than 1TB physical, it is worth learning how to use hugepages. The improvement in average memory access time will be worth the trouble if you do anything like random access to very large data structures. Any reduction in the kernel's own memory requirements may be a trivial benefit compared to the speed improvement. (Though if your algorithms do a very good job of localizing access, the performance benefits of hugepages might be as low as a fraction of one percent.)

Last edited by johnsfine; 12-15-2012 at 07:53 AM.
 
  

