Quote:
Do you think that if I remove all the 'l's from my formatting strings, I can hope that this code will not break again for this reason? |
Quote:
Code:
printf("some_int_var=%lld\n", (long long)some_int_var); |
Quote:
Code:
bool8 SysString::get(int16& val_a) const
{
    // declare local variable //
    int32 tmp_val = 0;

    // use the 8-bit character conversion //
    if (sscanf((char*)(byte8*)(*this), (char*)DEF_FMT_LONG_8BIT, &tmp_val) != 1)
    {
        return false;
    }

    // set the output //
    val_a = tmp_val;

    // exit gracefully //
    return true;
}
Generally I want to specify my variables to use a certain number of bits, so that these variables are exactly the same on all machines. For the formats we have:
Code:
const char SysString::DEF_FMT_LONG_8BIT[] = "%ld";
Now if I change the format to, for example, this:
Code:
const char SysString::DEF_FMT_LONG_8BIT[] = "%d";
can I hope to get the same result on, for example, 128-bit machines too? I mean, is it likely that the "%d" definition will change again? |
Quote:
In your design, you want val_a to be an explicit size independent of architecture. That makes sense. But you are being too strict in deciding that tmp_val is also an explicit size independent of architecture. tmp_val exists only to interface with sscanf, and sscanf does not work with explicit sizes independent of architecture. So you should change your objective to just making sure tmp_val is at least as big as val_a. There are lots of ways of doing that while declaring tmp_val as some architecture-specific size that is compatible with sscanf.

Mainly, your problem is wrapped up in your choice of using sscanf at all. This is C++ code; sscanf is a lame holdover from C. If you were using some kind of stringstream as the text-side source and using operator>> instead of sscanf, the operator overloading of streams would fit the format to the destination automatically, rather than requiring all this work on your part.

I have no idea what a 128-bit architecture would look like. Lots of different things are different sizes in each architecture, but the virtual address size has been the primary driver of the size naming of architectures. You might think that the exponential growth from 16-bit virtual addresses to 32-bit virtual addresses to 64-bit virtual addresses would logically continue to 128 bits. But it won't: 16-bit virtual addresses were already too small when 16-bit x86 was introduced and were horribly too small by the time 32-bit x86 was introduced. 32-bit virtual addresses were plenty large when introduced and were still mostly large enough when 64-bit was introduced. 32 bits were closer to adequate when 64 bits were introduced than 16 bits were when 16-bit x86 itself was introduced. In that sense, the available addressing doubled twice while the required addressing only really doubled once. Exponential growth in problem size just needs exponential growth in memory, which is only linear growth in address size.

So the jump from 32 to 64 bits was, in a way, twice the jump from 16 to 32, and 64-bit addressing should be plenty for at least four times longer than 32-bit addressing was. So I think you're trying too hard to guess distant future portability issues. |
Quote:
Anyway, thanks a lot. All of you, especially john, were very helpful. |
Quote:
Secondly, you need fixed sizes only if/when you deal with HW-generated data, e.g. if/when you deal with, say, an Ethernet packet. I don't see a case where one needs "bool8" (taken from http://www.linuxquestions.org/questi...ml#post4046144 ), i.e. I would let the compiler choose the width of the 'bool' type. Anyway, if you want to complicate things, you can still use constructs like
Code:
if (sizeof(my_int_var) == sizeof(int))
This can be scripted (i.e. the C++ code can be generated by a script) and can probably be implemented through templates. Still, rethink the whole issue of imposed-size variables. |
Edit - Misread (can't delete?)
|