Originally Posted by amir1981
thanks but I think here the situation is a bit different
Only because you are enforcing your own rules beyond where they should apply.
In your design, you want val_a to be an explicit size independent of architecture. That makes sense.
But you are being too strict in deciding tmp_val must also be an explicit size independent of architecture. tmp_val exists only to interface with sscanf, and sscanf does not work with explicit sizes independent of architecture.
So you should change your objective to just making sure tmp_val is at least as big as val_a. There are lots of ways of doing that while declaring tmp_val as some architecture-specific size that is compatible with sscanf.
Mainly your problem is wrapped up in your choice of using sscanf at all. This is C++ code; sscanf is a lame holdover from C. If you used some kind of stringstream as the text-side source and operator>> instead of sscanf, the operator overloading of streams would fit the format to the destination automatically, rather than requiring all this work on your part to do so.
I have no idea what a 128 bit architecture would look like. Lots of different things are different sizes in each architecture. But the virtual address size has been the primary driver of the size naming of architectures.
You might think that the exponential growth from 16 bit virtual addresses to 32 bit virtual addresses to 64 bit virtual addresses would logically continue to 128 bit. But it won't:
16 bit virtual addresses were already too small when 16 bit x86 was introduced and were horribly too small by the time 32 bit x86 was introduced.
32 bit virtual addresses were plenty large enough when introduced and were still mostly large enough when 64 bit was introduced.
32 bits were closer to adequate when 64 bits were introduced than 16 bits were when 16 bit x86 itself was introduced. In that sense the available addressing doubled twice while the required addressing only really doubled once.
Then exponential growth in problem size just needs exponential growth in memory, which is only linear growth in address size. So the jump from 32 to 64 was, in another way, twice the jump from 16 to 32. So 64 bit addressing should be plenty for at least four times longer than 32 bit addressing was plenty.
So I think you're trying too hard to guess distant future portability issues.