C++ containers
Currently I have:
Code:
char charr[40][80];
I am wondering if a container would better suit this data. From what I gather, a map looks like it may work better than a vector, although I have not used maps. How would I make the key based on two values (X and Y)? Would this be done with a 2D map? Furthermore, I know vectors can be made 2D, but is this good programming practice, or is there a better way to represent multi-dimensional data? |
Well, I don't know much about practices, but if your application is such that you can easily supply indices to access the data, then an array should be faster than a map. You won't need a vector unless you want to change the size at runtime. |
Quote:
Code:
std::vector<std::vector<char> > charr;
If you were to implement it as a map, it could be done in two ways: with a single map keyed on both values combined, or with a map of maps (the "2D map" you mention).
There are other container types too; deciding which to use depends a lot on how the data is to be used. One of the nice features of the STL containers is that they have similar interfaces, so changing from one to another is often simple. |
Quote:
If it is just the contents that will be changing, the C array you already have is the best structure for storing that data. If the 40 and/or the 80 are subject to change, then you may be better off with something based on std::vector (such as the obvious solution, a vector of vectors).
Quote:
If the 40 by 80 were much bigger and the use were sparse, then a map might be better. Otherwise, a map adds needless complexity and overhead.
Quote:
The more general/professional approach (assuming a map is a good idea at all) is to define an object to contain the two parts of the key, define a comparison operator for that object, and use that as the key to the map.
Quote:
A vector of vectors makes most sense when the data is fairly dense, both dimensions vary (can't be predicted at initial allocation time), and the dimensions are "ragged", meaning the second dimension is not consistent across the first dimension. |
As johnsfine said, a vector of vectors is good for dense data and ragged dimensions. For strictly rectangular dimensions, there's Boost MultiArray. |
Quote:
2. Many of the values in each specific index will be uninitialized or unimportant.
3. Values will be quickly changing (initialized, uninitialized, changing values), and will be scattered throughout.
4. The "important" data would start out small, and gradually increase over time.
If this databank were very large, it seems there would be a point where the "important" data would be better represented by a different container: a memory vs. speed tradeoff. For example: the data starts out small and sparse across the array, but as time goes by the data size increases (while still staying sparse). While it is small, a map would work well enough, but as it gets larger a vector would be quicker. Is this correct? And how would I know where this line would be drawn? |
If it is much sparser than that, the choice of map vs. dense storage might be an interesting time/space tradeoff. But at 5% filled or more there isn't even a time/space tradeoff; a map is just worse.
If you know the size at compile time, a C array is simplest and best. If you know the size at allocation time, a C array has some performance advantage over a vector, but maybe not enough to care about. If you only find out the size later, a vector is a lot simpler than a C array.
Many people I work with routinely use STL containers whenever they work, without thought to whether the use is appropriate. That often causes performance problems and sometimes even makes the code less readable. One co-worker asked me for performance tuning help on some problem code. I immediately noticed a vector of vectors of vectors, but only one of the three dimensions was unknown at compile time. Fixing just that detail fixed the performance problem, but it also reduced the number of lines of code and the length of many lines, making it possible to really see what the code did. |