[SOLVED] Some questions about C data types and type specifiers



Location: Earth? I would say I hope so but I'm not so sure about that... I could just be a figment of your imagination too.

Distribution: Currently OpenMandriva. Previously openSUSE, PCLinuxOS, CentOS, among others over the years.

Posts: 2,995

Rep:

Some questions about C data types and type specifiers

While re-reading the chapter in the book about arrays, I figured it was a good idea to re-read the chapter about "data types". While I get what int, float, _Bool and char mean, I'm not clear on exactly how a "double" is different from a "float".

So for example, under the "double" data type section it says this:

Quote:

The double type is the same as type float, only with roughly twice the precision.

then under the heading "The Extended Precision Type double" it says this:

Quote:

Type double is very similar to type float, but it is used whenever the range provided by a float variable is not sufficient. Variables declared to be of type double can store roughly twice as many significant digits as can a variable of type float. Most computers represent double values using 64 bits.

What exactly is meant by "precision"? Does this mean how many numbers that data type can hold? I'm also not sure what exactly it means by "range".

Does anyone have a clear explanation of this?

Also, under the heading "Type Specifiers: long, long long, short, unsigned, and signed" it says:

Quote:

If the specifier long is placed directly before the int declaration, the declared integer variable is of extended range on some computer systems.

I've got basically the same question as above: what exactly does it mean by "range"?

Does the "short" type specifier mean that something declared as a "short" can only hold a smaller value?

Thanks for any help.

James

Last edited by jsbjsb001; 08-31-2019 at 10:29 AM.
Reason: added name of programming language to thread title

The best thing to do here is to write a test program to print out the size of each variable type.

Yes, a double is intended to be a larger variable with the same characteristics as a float. Say a float is 32 bits; then typically a double would be 64 bits.

I've always seen this to be true with my machines and compiler environments.

But it's best to check on the computer you are building your code on, to be sure.

You know:

Code:

printf("Size of a float is: %zu\n", sizeof(float));

The point is that a floating point number can cover an enormous range of magnitudes (because the representation always includes a power exponent), but there is a limit to its precision, i.e. the number of significant digits it can hold. Now for most purposes an ordinary float is quite good enough, but that isn't so for applications like weather forecasting. These deal with "chaotic systems", where a minuscule change in the original state can cause a completely different end state. So greater accuracy is needed. Hence double precision.


Thanks guys.

Quote:

Originally Posted by hazel

The point is that a floating point number can cover an enormous range of magnitudes (because the representation always includes a power exponent), but there is a limit to its precision, i.e. the number of significant digits it can hold. Now for most purposes an ordinary float is quite good enough, but that isn't so for applications like weather forecasting. These deal with "chaotic systems", where a minuscule change in the original state can cause a completely different end state. So greater accuracy is needed. Hence double precision.

This is exactly what I am confused about. But I'm still not quite clear about what you're saying here. I'm also still not clear on exactly what the difference is between the "range" and the "precision".

Does the "range" refer to whether or not it's 16-bit, 32-bit, 64-bit, etc and the "precision" refers to the number of decimal places? I'm also not clear on what exactly you mean by "the number of filled-in decimal places".

I know NevemTeve gives an example above, but I'm still not sure I'm understanding it properly. All I can tell is that the example values he's given are longer for the 64-bit example.

The term precision usually refers to floating-point values (say, the number of representable decimal digits), while range applies to both integer and floating-point variables (the minimum and maximum representable values).

On some platforms, the range of float is roughly 1E-38..1E38 with a precision of around 9 decimal digits, while the range of double is roughly 1E-308..1E308 with a precision of around 17 decimal digits.


Quote:

Originally Posted by jsbjsb001

While re-reading the chapter in the book about arrays, I figured it was a good idea to re-read the chapter about "data types". While I get what int, float, _Bool and char mean, I'm not clear on exactly how a "double" is different from a "float".

So for example, under the "double" data type section it says this:

then under the heading "The Extended Precision Type double" it says this:

What exactly is meant by "precision"? Does this mean how many numbers that data type can hold? I'm also not sure what exactly it means by "range".

Floating point numbers consist of a mantissa and an exponent. Think "scientific notation". The number of bits allocated to the mantissa defines the precision: how many significant digits you can represent. The number of bits in the exponent defines the range: how large or small a number can be represented.

Doubles have a greater precision because of having more bits allocated to the mantissa. Doubles also allocate more bits to the exponent and have a greater range. The Wikipedia article on the IEEE 754 floating point standard has a lot more information.

There's an interesting video on computer number representation, how granular it is, and the pitfalls you can run into when dealing with floating point numbers at https://www.youtube.com/watch?v=pQs_wx8eoQ8 (one of the episodes of PBS's `Infinite Series' series on mathematics).


Thanks again guys!

The part of rnturn's reply I've quoted below seems to make the most sense;

Quote:

Originally Posted by rnturn

Floating point numbers consist of a mantissa and an exponent. Think "scientific notation". The number of bits allocated to the mantissa defines the precision: how many significant digits you can represent. The number of bits in the exponent defines the range: how large or small a number can be represented.
...

So while it's clearer to me now, and just to be as clear as I can be about it: is "precision" only relevant for floats and doubles?

I'm not clear on exactly what "significant bits" means tho - other than that it has something to do with the "sign bit" of the processor, I think?

The number of significant bits represents the number of bits which can be used to specify the value.

As noted with the mantissa and the exponent, a double uses more bits for each, which means you can carry many more digits when describing a number. This capability allows greater precision in calculations.

Say for example 1/4
That is 0.25
We needed two digits on the right of the decimal point to express that.
Now what is 1/8?
0.125
But suppose we only had room for two digits to the right of the decimal point?
Then we could never express 1/8 under that restriction.

Therefore a larger variable type gives us the same sort of benefit.

Quote:

Originally Posted by jsbjsb001

So while it's clearer to me now, and just to be as clear as I can be about it: is "precision" only relevant for floats and doubles?

Yes, because integers increase and decrease stepwise. If you only have integers, you can't be very precise about something like pi. You can only say that it's less than 4 and a bit more than 3.

Quote:

I'm not clear on exactly what "significant bits" means tho - other than that it has something to do with the "sign bit" of the processor, I think?

No, nothing to do with that. "Significant figures" is just mathspeak for the number of meaningful digits you keep, wherever the decimal point happens to fall.

I think you mentioned being a visual learner before, so take a look at FIGURE 2-6 Number Line, which shows the values that can be represented by a float with 3 bits of precision (the link goes to the caption below the picture, so you need to scroll up a bit to see the picture). Each tick is a number that can be represented; numbers between the ticks cannot be represented.


Thanks again guys!

I think I get it now, but I'll probably have to review this thread (possibly among other things) for it all to sink in. I'm in a bit of a holding pattern with the mathematics at the moment, since the member that's helping me with it privately has had some other things come up they've needed to deal with/life has gotten in the way. And I don't really want to get too far ahead of myself, then realize I misunderstood something, then have to clear up what I misunderstood, if ya's know what I mean. Plus I've found that if I give myself time to absorb one thing before moving on to the next, things seem to be clearer and easier for me later on.

In any case, the replies seem to make sense to me. So I'll mark this thread as [SOLVED] given that.
