LinuxQuestions.org
Go Back   LinuxQuestions.org > Forums > Linux Forums > Linux - Software
Linux - Software This forum is for Software issues.
Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.

Old 07-15-2009, 02:20 PM   #1
darsunt
Member
 
Registered: Jan 2007
Location: Huntington Beach, CA
Distribution: Red Hat
Posts: 51

Rep: Reputation: 15
Does Linux have a size limit on a process?


I've been told that Microsoft Vista has a 2GB size limit on an individual process.

Does Linux have such a limit? I suspect it does, but I hope it is more flexible. I need as large a process size as possible.

If Linux is too limited, are there alternatives?

Thanks
 
Old 07-15-2009, 02:28 PM   #2
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1195
Quote:
Originally Posted by darsunt View Post
I've been told that Microsoft Vista has a 2GB size limit on an individual process.
Microsoft 32 bit XP defaults to a limit of 2GB per process, but has a boot time option to make it 3GB instead.

I'm not sure about 32 bit Vista. Maybe that is the same.

Microsoft 64 bit XP and Vista both allow 4GB for 32 bit processes and far more than that for 64 bit processes. But the 64 bit version of XP or Vista costs more than the 32 bit version.

Quote:
Does Linux have such a limit?
32 bit Linux, by default, has a limit of 3GB per process. There is a kernel build time option to make that either 2GB or 4GB (the 4GB choice may have significant extra overhead).

64 bit Linux (like 64 bit Windows) allows 4GB for 32 bit processes (without the extra overhead that a 4GB process has in 32 bit Linux) and allows far more than 4GB for 64 bit processes. Unlike Windows, there is no extra charge for the 64 bit version of the kernel (32 bit is free, 64 bit costs twice that much).

Quote:
I need as large a process size as possible.
Use a 64 bit OS.

Since you listed Red Hat, I should explain the above pricing info does not apply to Red Hat. If you want free 64 bit, you might need to select Centos (or other) distribution instead of Red Hat. (I don't know any details about Red Hat pricing, availability, or memory restrictions).
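To tell which case applies on a given machine, you can check the pointer size of the running process. A minimal Python sketch (the language doesn't matter; the point is just the size of a pointer):

```python
import struct

# Pointer size of the current process, in bits: 32 or 64.
# A 32-bit process is the one subject to the 2GB/3GB/4GB limits
# discussed above, even when the kernel itself is 64 bit.
bits = struct.calcsize("P") * 8
print(bits)
```

`uname -m` at a shell answers the same question for the kernel (e.g. x86_64).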

Last edited by johnsfine; 07-15-2009 at 02:33 PM.
 
Old 07-15-2009, 04:02 PM   #3
darsunt
Member
 
Registered: Jan 2007
Location: Huntington Beach, CA
Distribution: Red Hat
Posts: 51

Original Poster
Rep: Reputation: 15
I understand that in Unix you can set the process size to unlimited.

Is this possible in 64 bit Linux? And would that setting have any negative consequences?
 
Old 07-15-2009, 04:31 PM   #4
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 19,970

Rep: Reputation: 3633
You don't need to set anything - a 64-bit process will be effectively unlimited.
If you really need huge sizes, addressability will be the least of your problems.
 
Old 07-15-2009, 04:44 PM   #5
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1195
Quote:
Originally Posted by darsunt View Post
I understand in unix you can set process size to unlimited.
I think you are referring to a different kind of limit than the one I'm talking about, a limit that normally wouldn't be set and would default to unlimited.

The 2GB, 3GB and 4GB limits on 32 bit processes I described in my previous post are more fundamental and would apply even if this other kind of limit were set or defaulted to unlimited.
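That settable kind of limit (what `ulimit -v` shows in a shell) can also be read programmatically. A small Python sketch using the standard `resource` module (Unix only):

```python
import resource

# RLIMIT_AS caps the total virtual address space of the process;
# RLIM_INFINITY means "unlimited", which is the usual default.
soft, hard = resource.getrlimit(resource.RLIMIT_AS)
print("unlimited" if soft == resource.RLIM_INFINITY else soft)
```

Lowering the soft limit with `resource.setrlimit` makes allocations beyond it fail, which is occasionally useful for testing how a program behaves when memory runs out.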

Quote:
Is this possible in linux 64bit? And would that setting have any negative consequences?
The default behavior for 64 bit processes in 64 bit Windows or 64 bit Linux is to allow amounts of virtual memory far beyond what could be practical for almost any purpose.

If you actually try to use that much virtual memory, the actual limit would be the total of physical RAM plus swap space, and that total will be far lower than any theoretical limit enforced by the OS kernel design.
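That practical ceiling (RAM plus swap) is easy to check. A Python sketch that assumes a Linux-style /proc/meminfo:

```python
# Sum physical RAM and swap from /proc/meminfo (values are in kB).
totals = {}
with open("/proc/meminfo") as f:
    for line in f:
        key, _, rest = line.partition(":")
        if key in ("MemTotal", "SwapTotal"):
            totals[key] = int(rest.split()[0])

ceiling_kb = totals["MemTotal"] + totals.get("SwapTotal", 0)
print(ceiling_kb, "kB of RAM + swap")
```

`free -m` at a shell reports the same totals.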

There are reasons one might want a 64 bit application to reserve absurd amounts of virtual memory that it won't actually use. In that case, I don't know the limits nor what you might be able to do to change them. I think those extreme limits are higher in 64 bit Linux than in 64 bit Windows, but I'm not sure.

AMD, in the design of the present (first) generation of x86_64 CPUs, put in a limit of 131072GB. Nothing the OS can do could get you beyond that. I assume your swap partition is at most a tiny fraction of that size, so if your process's address space isn't incredibly sparse, that limit is effectively unlimited.

I have read that 64 bit Windows limits you to "only" 8192GB, but I'm not sure that was from a reliable source nor that I understood it correctly. I haven't had occasion to test that. Even with a limit of 8192GB, your process address space would need to be sparse for the limit to matter.

I don't recall seeing anything that says Linux 64 bit sets any theoretical limit lower than the 131072GB limit set by the hardware. But I haven't tested any of that.

Last edited by johnsfine; 07-15-2009 at 04:55 PM.
 
Old 07-15-2009, 04:47 PM   #6
karamarisan
Member
 
Registered: Jul 2009
Location: Illinois, US
Distribution: Fedora 11
Posts: 374

Rep: Reputation: 55
Just out of curiosity, would you be willing to share what it is that you're doing that will take so much memory?
 
Old 07-15-2009, 05:06 PM   #7
darsunt
Member
 
Registered: Jan 2007
Location: Huntington Beach, CA
Distribution: Red Hat
Posts: 51

Original Poster
Rep: Reputation: 15
Research programs for physics graduate students.

Enormous calculations, linear equations and all that. I understand that their need for memory is nearly insatiable.
 
Old 07-15-2009, 05:20 PM   #8
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1195
Quote:
Originally Posted by darsunt View Post
Enormous calculations, linear equations
More likely you mean linearized equations (iteratively recreating a system of linear equations that are successively closer approximations of the intractable non-linear system you're actually trying to solve).

Quote:
I understand that their need for memory is nearly insatiable.
But the limits placed on process size by the OS only matter if they are smaller than the limits placed on process size by the available hardware.

In a 32 bit OS, the limits placed by the OS are lower than the limits placed by moderately expensive hardware.

In a 64 bit OS, the limits placed by the OS are higher than those placed by even absurdly expensive hardware. So for practical purposes the limits placed by the OS don't matter.

When you write those research programs you always need to find the compromise between what you want to compute and what you can compute.
 
Old 07-15-2009, 09:38 PM   #9
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 9,351
Blog Entries: 4

Rep: Reputation: 3334
32-bit implementations divided memory into "kernel space" and "user space" ... setting the boundaries between them at some location which was thought to be "unbelievably large" at the time. (And for nearly everyone these days, it still is.) They did it because it made memory addressing a whole lot easier ... and because it's what the hardware was most-easily capable of.

The 64-bit implementations did away with that. "Kernel space" and "user space" are distinct, entirely-separate address spaces. (And 64-bit processors have some nifty instructions and features which are specifically designed with that in mind.)

One day, of course, even that "unbelievably large" decision will seem woefully limiting. Say, in the next five years or so... max.
 
Old 07-16-2009, 05:50 AM   #10
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1195
Quote:
Originally Posted by sundialsvcs View Post
The 64-bit implementations did away with that. "Kernel space" and "user space" are distinct, entirely-separate address spaces. (And 64-bit processors have some nifty instructions and features which are specifically designed with that in mind.)
That overstates the case.

The x86_64 architecture is designed with a big hole in the address space. There is 128TB of usable virtual addresses at one end of the address space and 128TB of usable virtual addresses at the other end, with a 16776960TB hole (unusable addresses) in between.

The intent was that user mode get one 128TB area and kernel mode get the other, and actual 64 bit OS's work that way. But it is one address space and nothing in the architecture nor instruction set enforces the reservation of one area for user and the other for kernel.

So the hole in the middle of the address space is the only qualitative difference from the 32 bit address space.

Quote:
Originally Posted by sundialsvcs View Post
One day, of course, even that "unbelievably large" decision will seem woefully limiting.
The design goes to some lengths to prepare for the next generation of the architecture, in which that 128TB would grow to at least 65536TB. With that architecture change there should be no application changes (not even a recompile), and the OS kernel changes should be limited to one module. It would be a similar, but slightly smaller, change than going from a 32 bit kernel to 32 bit PAE, even though PAE changed only physical limits, not virtual, and this next x86_64 architecture would likely change both.

Last edited by johnsfine; 07-16-2009 at 06:00 AM.
 
Old 08-10-2009, 06:23 PM   #11
darsunt
Member
 
Registered: Jan 2007
Location: Huntington Beach, CA
Distribution: Red Hat
Posts: 51

Original Poster
Rep: Reputation: 15
Thanks for all the comments.

I am leaning towards getting Red Hat. We're looking at huge amounts of memory, 48GB minimum, plus as huge a swap space as feasible. A professor mentioned 1 terabyte of space, but I don't know if that is possible. Maybe swap space could be distributed over a couple of disks.
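On the multiple-disks question: Linux can indeed use several swap areas at once, and `swapon` priorities control how they are used (equal-priority areas on different disks are striped round-robin, which helps throughput). A Python sketch that just lists the active swap areas, assuming a Linux-style /proc/swaps:

```python
# /proc/swaps lists every active swap area with its size, usage and priority.
with open("/proc/swaps") as f:
    swap_lines = f.read().splitlines()

print(swap_lines[0])          # header: Filename Type Size Used Priority
for area in swap_lines[1:]:   # one line per swap partition or file
    print(area)
```

Adding a second swap disk is the usual `mkswap`/`swapon -p` dance done as root; the priorities then show up in the last column above.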

They are concerned about VERY large numbers. I wonder how slowly the computer will run crunching numbers that are multiple times larger than double precision.

My concern would be if the operating system had some built-in limitation, like the working-set maximum in Windows NT that set limits on process size to promote multiprogramming. But from your comments Red Hat doesn't have this.

Also I might try Centos. This is all very experimental, and I'd hate to put in the serious bucks if these ideas don't work out.
 
Old 08-10-2009, 08:10 PM   #12
karamarisan
Member
 
Registered: Jul 2009
Location: Illinois, US
Distribution: Fedora 11
Posts: 374

Rep: Reputation: 55
If you're going to have 48 GB of memory, great, knock yourself out and write software to use it. However, maxing out swap like that is a terrible idea. Swap is not a substitute for more memory. The kernel does its best to be intelligent about what it swaps out, but two great ways to make it work overtime are to give it many times more swap than the 1:2 rule suggests and to write your software on the assumption that you can use way more memory than you really have. This will break your program's kneecaps, so to speak. Since you know the details of your software, you can do a much better job deciding what goes to disk and when. Configure some minimal Linux distribution and your program will have 47.5 GB to work with. You should look into buying time on a supercomputer if you need more memory than that to do a single iteration.
 
Old 08-10-2009, 08:41 PM   #13
darsunt
Member
 
Registered: Jan 2007
Location: Huntington Beach, CA
Distribution: Red Hat
Posts: 51

Original Poster
Rep: Reputation: 15
Thanks for the comment. I am not experienced with using swap space; all I know is that it can hold a considerable part of the process if there is not enough memory to hold the entire process image. I was unaware of possible problems.

The issue is very large numbers. I was told terabyte storage might be needed, so the swap would mostly hold those numbers. Since the program would only be accessing data from the disk, and only at the beginning or end of calculations, would that lessen the problem?

I am not writing the software. I have no idea how such giant numbers are handled even with supercomputers. Numbers exponentially greater than double precision!

The professor who is running this project does rent supercomputer space, but she and her students have expressed considerable unhappiness with how that is going.
 
Old 08-10-2009, 10:59 PM   #14
karamarisan
Member
 
Registered: Jul 2009
Location: Illinois, US
Distribution: Fedora 11
Posts: 374

Rep: Reputation: 55
The issue with swap is that it's on disk. In a certain sense, yes, adding swap does increase the amount of memory you can work with - the problem is that disk is orders of magnitude slower than real memory, and your process is basically twiddling its thumbs while its pages are swapped back in. You should explain that to said professor and see if you can get her to think about how big an atomic part of her project really is.

The idea that she needs a terabyte to store a single number is kinda funny, though - the magnitude of (2^3)^(2^40) is far, far, far beyond human comprehension.

Edit: wait, no, brain fart. (2^8)^(2^40), since a byte holds 2^8 possible values, not 2^3. The difference between the two is... well, it makes the first one look small.
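For scale, the arithmetic behind those figures, plus an illustration that arbitrary-precision integers go far beyond double precision (Python's built-in ints here, just as a sketch; serious numerical codes would use a library such as GMP):

```python
import sys

# A terabyte is 2**40 bytes of 8 bits each, so the largest unsigned
# integer it can hold is (2**8)**(2**40) - 1 = 2**(8 * 2**40) - 1.
bits_in_a_terabyte = 8 * 2 ** 40
assert bits_in_a_terabyte == 2 ** 43

# Doubles top out near 1.8e308; Python ints have no such ceiling.
big = 2 ** 2000
print(big.bit_length())    # 2001 bits, far past any double
print(sys.float_info.max)  # largest finite double, ~1.8e308
```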

Last edited by karamarisan; 08-10-2009 at 11:00 PM.
 
Old 08-10-2009, 11:25 PM   #15
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Centos 7.7 (?), Centos 8.1
Posts: 17,904

Rep: Reputation: 2614
There's definitely a difference between storing a 'large number', which would only take a few bytes (even for a 64 bit long long), and storing a lot(!) of large numbers, which could take a significant amount of space.
Certainly as soon as you start swapping significantly, your process is going to slow down a lot.
You really need to understand exactly what sort of processing/storage is reqd.
Calculations are typically cpu bound and require some RAM. Data sets may require extensive disk space.
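One way to see whether a run has started swapping heavily is to watch the kernel's swap counters. A Python sketch reading /proc/vmstat (Linux only; `vmstat 1` at a shell shows the same numbers live):

```python
# pswpin/pswpout count pages swapped in and out since boot;
# large, fast-growing values mean the workload is thrashing.
counters = {}
with open("/proc/vmstat") as f:
    for line in f:
        name, value = line.split()
        if name in ("pswpin", "pswpout"):
            counters[name] = int(value)

print(counters)
```

Sampling these twice a few seconds apart and diffing gives a swap rate.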
Here's a useful page: http://www.redhat.com/rhel/compare/ which would also apply to e.g. Centos.
Alternatively, see Scientific Linux, a RHEL clone from CERN/Fermilab: https://www.scientificlinux.org/.
 
  


