Programming: This forum is for all programming questions.
The question does not have to be directly related to Linux and any language is fair game.
I seem to be limited by the maximum size of a 4-dimensional array. What is the maximum number of elements for an integer array, and is there any way of extending it?
As far as I know (which may not be very far) array limits are imposed by the amount of memory you can allocate. Try increasing your swap partition size to see if it helps.
Note 1: The above is a general statement. The specific programming language you're using may impose other restrictions. (Which language is it?)
Note 2: Do you anticipate that all your array cells will be filled? If not, look at the sparse array libraries available for your use.
Note 3: Are you planning to use the array for numeric calculations? If so, consider using the R system or Octave to prototype your problem. You may be able to use the prototype for your production system as a callable subroutine.
If not, consider linked lists or a database system (e.g., sqlite or mysql) as an alternative approach.
Silly me; writing in C, but using some of the C++ libraries, hence compiling with g++.
It's for numerical calculations where I require [100][100][100][200] for a lookup table.
Assuming this uses a char requiring just one byte of storage, and there are no other alignment issues, that's:
100
* 100 = 10,000
* 100 = 1,000,000
* 200 = 200,000,000 bytes of storage required, MINIMUM. (Multiply by the sizeof whatever type you really need.)
Are you sure you're taking the right approach to your problem? Will the data really use 200 million entries? Perhaps some kind of database would be a better fit (but probably slower).
I'm also curious to hear the exact problem you're trying to solve. It seems like once you get beyond a 2d array, you're probably making things more difficult than they need to be. (On the other hand, I've never developed scientific apps.)
I will try to elucidate the problem a little more. The problem is well known: in a particulate system, the particles are indexed to make finding the relevant ones easier, and for that purpose one uses a cell index method. So the volume is divided into cells, and that's what the structure is for. Each cell is [x][y][z][N], with N the maximum number of particles that can ever appear in that cell. Now, for ease and speed, having a large number of cells is ideal; hence I need to increase the cell numbers. But when I do this beyond certain limits I get a segmentation fault.
I don't understand why you need a 4th dimension; it's a volume, hence three dimensions, or have I missed something? (Like, is it to allow a reference to each particle? If that is the case, then remove it and make the array reference a list of particles; the list can be dynamic, and thus you save yourself memory.)
Any ideas how to boost this?
As asked in a post above, are you merely indexing each point in 3-D space to count the number of particles at that point? (You only need 3 dimensions to do that) Or do you need a 4th dimension to track some other quality/attribute about each particle?
Are you going to have 200 million particles in 3D space? (Probably not? If you are representing this graphically at some point a 1600x1200 display only has 1.92 million pixels.) Instead of making an array representing 3-D space, why not try it from the other side and keep a list of structures representing each particle, each of which knows its own 3-D coordinates? Granted, there's more code involved to keep the list organized, so it can easily find all the particles in a given 3-D coordinate, but it should be considerably less memory intensive.
Thanks for the replies, guys. I will try to answer your questions. ...
The particle positions are not being stored. I am referencing the particles and their properties with a second lookup table. However, I am slicing the volume into cubes, hence the x, y, and z as noted above. Hence the 4th dimension. There is a well-known reason why I use this structure... but to talk about the detail is probably not going to elucidate things.
So my question: how can I extend the range N when I have the following array of integers: [N][N][N][N]? This is just what I need to do.
But when I do this beyond certain limits I get a segmentation fault.
Have you used a debugger to see why, and with which values, you get that segfault? Is your particle count by any chance larger than the size you're trying to store it in?
Cheers,
Tink
Last edited by Tinkster; 02-04-2007 at 05:16 PM.
Reason: spotted the segfault bit after all
However, I am slicing the volume into cubes, hence the x, y, and z as noted above. Hence the 4th dimension. There is a well-known reason why I use this structure... but to talk about the detail is probably not going to elucidate things.
Whilst the reason may be well known, I don't know about it, so it is hard to give good advice; but I still believe that your basic structure is wrong.
You have a 100x100x100x200 array of pointers (assuming a 32-bit pointer); that's 800 MB of memory. That is before you start to use up memory for the actual data that is being referenced.
You need to look at that and decide if there is a reason for having all of it in memory at the same time. I believe that you have divided the volume into 200 (or N) slices; maybe you could put each slice into a file and read it in when, and only when, you require that slice. Whatever you do, you either need to be more flexible with your array or invest in much more memory.
There really is no way around this. I have plenty of memory, 4 GB, so physical memory should not be a problem. Anyhow, surely the memory can be swapped to disk if physical memory is not enough.