24 bytes of data manipulation
I have 5 bytes of data: rop (2 bytes) and por (3 bytes). The first 2 bytes I declared as uint16_t rop; the remaining 3 bytes, i.e. por, I cannot declare as uint24_t (it doesn't exist). How should I declare it? I am sure a 24-byte datatype is not there. |
There is no 24-bit data type available. But you can use the "uint8_t [3]" data type, or the "uint32_t" data type and ignore the upper eight bits, or, if this is in a struct declaration, use bit fields:
Code:
typedef struct { |
You appear to have your bits and bytes muddled up.
What programming language are you using? Would arrays of bytes work for your specific needs? |
Actually I have a header that consists of 13 bytes.
Code:
struct header
IMPORTANT NOTE: the header should be 13 bytes on the receiving side also (client side). So, as per your reply, if I do uint32_t por : 24; will it take 3 bytes only? On the receiving end, will it take 13 bytes or 14 bytes? Because POR also requires a lot of parsing. |
Since you are content to use a character array for five bytes, why not use a character array for three?
In fact, you might want to use a character array for all 13 bytes. |
Sending and receiving bytes is the safe method
Just because your program's internal representation is in bytes, uint16_t values, and 24-bit values does not mean that the values should be sent across the network in those size pieces. I would insert a function layer between the data structures and the network layer functions to convert all the data to bytes on transmit and convert back to the other data types on receive. That way you know with certainty which bytes are assembled into which bits in the larger data fields. Network order is defined for some larger data types as well as for bytes, but there is no 24-bit data type universally available on all systems.
I recommend against abusing unions for converting data fields from one form to another. K & R explicitly states that a member of a union may only be read back as the same type that was last written. The ordering of the data in the union is explicitly implementation-dependent. That is, if you write a uint16_t to a union and read it out as two uint8_t values, there is no guarantee about the mapping of the bytes relative to the bits in the uint16_t. This applies similarly to bit fields, because some systems assign bits starting with the most significant bit and others assign them starting with the least significant bit.

Since it is possible for one implementation to order the bits or bytes in the opposite order from another implementation on a different system, it will happen sooner or later. This will embed a bug into the code that may not appear for many months or years and will therefore be very difficult to locate and correct. It may not even be detected until after a lot of data has been corrupted. This can lead to a lot of #ifdef on system types, which makes the code very difficult to read and maintain. This convoluted sort of attempt to fix such a bad idea is frustrating and ugly, and best prevented rather than repaired.

If you cannot guess why I rant about this, send me email and I will send you sanitized excerpts of code where this was done. On the other hand, converting bytes to larger data types and using flag words to explicitly assign bits in larger words will always work the same way on every system. |