Looking for C/C++ Unicode-related programming tutorial
The reason I suggested Qt in your earlier thread was that it hides much of the complexity of Unicode. From personal experience I would avoid doing it in C/C++ and use a library.
For all things Unicode you can go to the official Unicode site.
I cannot open my previous post, so I am starting another one. Sorry for any inconvenience. :-)
I am not programming a GUI, so I cannot use Qt. The official Unicode web site is very informative, but I am looking for dedicated tutorials on Unicode programming in C/C++; the web site only covers the general concepts of Unicode.
I am not programming a GUI, so I cannot use Qt.
You can still use Qt; it has a switch that lets you leave out the GUI libraries.
That way you can use the QString class, which supports Unicode, without having to bundle all the GUI libraries with your application.
The difficulty with Unicode programming is that C++ supports wide characters, but these tend to be in a different format from how Unicode is stored on disk. So the typical process is: read it in, convert it, do something (such as display it on the terminal), convert it back, and write it to file. It really is quite messy, so I strongly urge you to look for a wrapper class that manages all of that for you.
As I said, Qt is one solution (and it works across multiple operating systems), but there are probably others out there.
I will consider Qt; my only concern is that it will add to the footprint of my program.
Regarding the C++ Unicode support, I am not quite sure what you mean by "but these tend to be in a different format to how UNICODE is stored on disk" and the conversion operations you mentioned afterwards. Could you show me an example, please?
As I remember, C++ stores Unicode internally (that is, in variables) as a wide character type, wchar_t, typically two or four bytes depending on the platform. However, when it is stored on disk (commonly as UTF-8) each character can take anywhere from one to four bytes. Essentially the conversion goes like this: if the character is from the original 7-bit ASCII character set, it is stored on disk as a single byte with the leftmost bit set to zero. If the leftmost bit is not zero, it is stored as a multi-byte character, with some bits set aside to indicate how many bytes are used and the rest holding the actual data.
The reason for this is that internally, for sorts and comparisons, it is easier if each character is the same length; historically ASCII is the key encoding, so when storing these files on disk significant space can be saved if those characters can be saved as single bytes.
The theory is very useful, but I am looking for some practical programming resources (tutorials) on the topics you mentioned above. Do you have any recommendations?
Quote:
Originally Posted by graemef
The reason for this is that internally, for sorts and comparisons, it is easier if each character is the same length; historically ASCII is the key encoding, so when storing these files on disk significant space can be saved if those characters can be saved as single bytes.
Why is comparison of same-length characters faster than comparison of variable-length characters? I think it depends on how the comparison algorithm is implemented. :-)
Why is disk space saved if "these can be saved as single byte characters"? I think some characters are stored as a single byte in Unicode, but there are many more characters that are stored as multiple bytes. So why is disk space saved?