Programming
This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
Dear all,
I'm studying the feasibility of a project in which I have to
allocate 1 TB of RAM in C under 64-bit RedHat Linux.
Now, apart from all theoretical rules, I'd like to know whether
the memory-allocating functions (such as malloc, calloc or ...) will work easily, or whether there are some considerations I have to obey.
Does anyone have such experience?
Any guidance will be appreciated.
Absolutely no experience in that area.
But if you try to directly allocate a 1TB chunk of RAM with a single call, you'll probably fail: if even 1 byte within that range is already in use, the allocation will probably fail because such a contiguous block cannot be found?
Just delete this post if I did not understand your question...
Quote:
Originally Posted by NetProgrammer
I have to allocate 1 TB of RAM in C under 64 bit RedHat Linux.
Now, apart from all theoretical rules, I'd like to know if the memory allocating functions (such as malloc, calloc or ...) will work easily or there are some considerations I have to obey
I haven't tried it, but I am sure all the 64-bit functions can handle that with no problems.
But the default overcommit settings in Linux require that the anonymous memory used by a single process be available as either ram or swap space. The total anonymous memory of all processes can (by default) go far beyond the size of ram plus swap, but you would need to adjust the overcommit settings to let a single process do that.
If you just want to allocate it, or you want to allocate it contiguously and then use it sparsely, then change the overcommit settings.
If you want to really use the full 1TB of memory, then you obviously need the ram+swap to cover it.
I'm curious. Did you mean just to allocate it? Or to fully use it? If the latter, how much ram and how much swap space do you have?
I don't get to use such systems. But others, where I work, use systems with over 256GB of actual ram. For some access patterns, such systems could use 1TB dynamic arrays without hopeless performance.
Quote:
Originally Posted by Pearlseattle
you'll probably get back a failure because such a contiguous block cannot be allocated?
Contiguous in what sense? It needs to be contiguous in the process's private virtual address space. In X86-64, finding 1TB contiguous in process virtual space should be trivial. In physical ram, the 1TB doesn't need to exist at all, much less be contiguous. It is "demand zero" when allocated and becomes scattered as used.
Not sure what you mean - I just know that, in my experience, allocating 100MB of RAM with "new" while 500MB were free often did not work on Windows => I assumed the same might happen in Linux.
These things are fundamentally the same in Windows as in Linux. So I am pretty sure you are not talking about a Linux vs. Windows difference.
I'm pretty sure you are talking about a 32-bit vs. 64-bit difference.
I'm not sure what you mean by "having 500MB free". I expect you don't really know what you mean either (what "free" memory means is a more complicated question than you might think, and what various tools mean by various reports of "free" memory may be very misleading).
For some meaning of "having 500MB free" there is probably some way to get a 64-bit application to fail a 100MB allocation. But it would be a very obscure situation and I don't believe you ever hit that.
For any definition of "having 500MB free" there is some plausible (even common) situation in which a 32-bit application would fail a 100MB allocation (equally plausible in Windows or in Linux). So I deduce that you are describing some failure of a 32-bit application.
Quote:
Originally Posted by NetProgrammer
I have to
allocate 1 TB of RAM in C under 64 bit RedHat Linux.
You can run a 32-bit application under 64-bit RedHat. You obviously cannot allocate a 1TB memory area in a 32-bit application, even if the OS is 64-bit.
So I pretty much assumed at the start of this thread that NetProgrammer is talking about a 64-bit application.
Regardless of whether the OS is 64-bit or 32-bit, I'm pretty sure Pearlseattle is talking about a failure specific to 32-bit applications.
I guess you might consider that a Windows vs. Linux difference in that most applications on 64-bit Windows are 32-bit applications, while 32-bit applications on 64-bit Linux are supported but rare.
Thank you all.
I think I must add some more explanation to my question.
I'll write the code myself; that is, I'll run 64-bit code under a 64-bit OS.
I'll use the total of the allocated ram.
I'll deactivate swap, because I want to be sure that every read/write happens in main memory.
Now, please give me your advice!
Last but not least, all of your comments are valuable, and if I could I would never delete any of them.
If you want best performance on very large data structures that will not be swapped, you should use "hugepages" rather than ordinary allocation. I haven't done so myself, so I can't tell you any details, but it is easy to look up with google.
Some of the OS's own memory use depends on physical ram size. If you have over 1TB of physical ram, I think the OS's overhead will be pretty big.
If it is questionable whether you have enough beyond 1TB physical to accomplish the task, either start testing with a large swap area enabled or start testing with a somewhat smaller allocation.
Even if you plan to use hugepages, that is complicated enough that you probably should test first with ordinary allocation and only switch to hugepages after you have the basics working.
I'm not certain, but I believe properly configuring and using hugepages will greatly reduce the amount of physical ram the OS itself uses for managing the 1TB of application memory. So if your computer has only a little more than 1TB, you may need a slightly smaller allocation while using ordinary allocation and may be able to use a larger allocation after configuring for hugepages (and I think rebooting).
Even if you have far more than 1TB physical, it is worth learning how to use hugepages. The improvement in average memory access time will be worth the trouble if you do anything like random access to very large data structures. Any reduction in the kernel's own memory requirements may be a trivial benefit compared to the speed improvement. (Though if your algorithms do a very good job of localizing access, the performance benefits of hugepages might be as low as a fraction of one percent.)