My intention here is to explain some things that are frequently misunderstood in the field of computing. Usually, these will involve something technical, and will be of concern to programmers and others who use computers in ways that are beyond the end-user level.

A Bit About Digital

Posted 07-08-2012 at 04:45 PM by theNbomr
Updated 12-19-2012 at 10:08 AM by theNbomr


This essay aims to clarify some common misunderstandings about how digital data is created, stored, and used.

In the beginning.
By now, we've all heard that computers 'only use ones and zeros', and mostly we've come to accept the apparent contradiction without really understanding what it means. The statement is true, if somewhat over-simplified. The use of only 'ones' and 'zeros' is another way of saying that digital devices are binary in nature; their root behavior is a combination of elements that are either one or zero. These elements are what we call 'bits'. The word is a sort of contraction of the words 'binary digits'.

It would be natural to wonder why this extremely simple element became the basis for what we now know to be fairly complex and sophisticated devices. The answer lies in the very lowest level makeup of digital electronics. It so happens that a very simple circuit called a bistable multivibrator can be created using a limited number of components. The bistable multivibrator, also commonly known as a flip-flop (the name of which preceded the name of the footwear), has the property that its output can exist in one of only two states: on or off. Its other endearing property is that its state can be controlled. As well, as I mentioned earlier, it is a fairly simple circuit, and this lends it to easy replication en masse, in the factories that produce the silicon-based digital components that fuel our digital world.

The flip-flop is composed principally of two transistors, which are connected in such a way that one is always in its 'on' state, while the other is in a complementary 'off' state. I say complementary, because electronically, the state of each tends to reinforce the state of the other. So as long as the circuit is powered, the two transistors will retain their present state. I did say that the state of a flip-flop can be controlled. It is also a property of a flip-flop that an electrical pulse of the right nature, applied to the right part of a flip-flop circuit, will overcome the complementary holding condition of the two transistors, and cause each of them to switch to the other state. This accounts for the name 'flip-flop'; the circuit can flip back and forth between either of two states, but only when driven by a suitable pulse.

The ability of the flip-flop to retain its state indefinitely, as long as it is supplied with power, constitutes the basis of computer memory. Each bit of computer memory is, at its fundamental level, a flip-flop.

It is worth mentioning that there was a time when flip-flops were built up not from transistors, but from vacuum tubes. These were obviously much, much larger, slower, and consumed comparatively enormous amounts of energy. Those were the days when computers took up whole floors of large buildings.

More than a bit to it.
If computer memory and other parts of computers are composed only of bits that represent ones and zeros, then how do we use them for doing all the things which use other numbers? After all, computers have given us the ability to use numbers that are almost infinitely large or small. The answer is that we can group bits in combinations. If a single bit can represent either of two values and we have two bits, then the combination of the two bits can represent four possible values. For each possible value of one bit, the other can also have two possible values. Adding another bit to the combination doubles the number of possible states again. So a three-bit combination now allows us to represent eight possible values. We can see that adding bits to the combination quickly expands the range of values that the combination of bits can represent.

At some point, our digital world settled on a convention that we would use bits in combinations of eight. We would call the combination of eight bits a 'byte'. By extending the logic described in the previous paragraph, we can determine that all the possible combinations of ones and zeros that can be represented by eight bits total 256. So, in a mere eight-bit byte, we can store any one of 256 discrete values. Just as we did to build up bits into a byte, we can combine bytes in multiples, to get exponentially greater combinations of bits.
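
To make the doubling concrete, here is a small sketch in C (the code is purely illustrative) that prints the number of combinations available for one through sixteen bits:
Code:
/* Illustrative only: each added bit doubles the number of possible
 * combinations, so n bits can represent 2 to the power n values. */
#include <stdio.h>

int main(void)
{
    for (unsigned n = 1; n <= 16; n++) {
        printf("%2u bits -> %lu combinations\n", n, 1UL << n);
    }
    return 0;
}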

Now, we've already stated that a byte combines eight bits, so that there are 256 possible unique combinations of ones and zeros that can exist within a byte. To use this property to the greatest advantage, we must somehow assign some distinction between the individual bits. Otherwise, how would we know which combination of four zeros and four ones, for example, we are talking about? And, how would we assign some value to any particular combination?

The answer to this question starts to lead us into some arithmetic. It is a little beyond the arithmetic used to balance your checkbook, but isn't too difficult.

In order to start giving all of the various combinations of ones and zeros in a byte some actual values, we assign each bit a 'weight'. We use a notation that describes the weight of a bit, by appending an index number to the bit. This ends up with bits that are numbered 'bit-0', 'bit-1', 'bit-2', etc. Since there are eight bits, and we started at 'bit-0', the 'last' bit will be 'bit-7'. Most readers will immediately notice that we start counting our bits from number zero, rather than the more conventional counting from one. There is a good reason for that, and we will see it, as we start to develop our understanding of the weights of each bit.

When we say that bits are assigned a weight, it implies that some bits are more 'influential' than others in their contribution to the value of the combination of bits. Mathematically, this is correct. In fact, as the index number of a bit increases by one, its weight doubles with respect to the next-lower-indexed bit. How does this help us, you might wonder? Well, it leads to a formula, which we can use to get the unique value that each combination of bits in a byte represents.

For a byte with bits numbered 0 through 7, the value of the byte is a sum of weighted bits. Bit 0 can contribute 0 or 1, depending on its state. Bit 1 can contribute 0 or 2, depending on its state. Bit 2 contributes either 0 or 4, and so on for each bit in the byte. We see that the value of each bit is either 0, or it is 2 raised to the power of its bit number.

Let's see how that looks for the first two bits, bit-0 and bit-1. Bit zero contributes the value 0 or 1, and bit one contributes the value 0 or 2. Here are all four possible combinations, and the respective sums of weighted bits:
Code:
       bit    1       0
              -----------
              0       0     ==>   0   +   0   = 0
              0       1     ==>   0   +   1   = 1
              1       0     ==>   2   +   0   = 2
              1       1     ==>   2   +   1   = 3
The above table shows that two bits can uniquely represent the values 0 through 3. We can build up the table, adding a bit at a time:
Code:
       bit    2      1       0
              ------------------
              0      0       0     ==>   0   +   0   +   0   = 0
              0      0       1     ==>   0   +   0   +   1   = 1
              0      1       0     ==>   0   +   2   +   0   = 2
              0      1       1     ==>   0   +   2   +   1   = 3
              1      0       0     ==>   4   +   0   +   0   = 4
              1      0       1     ==>   4   +   0   +   1   = 5
              1      1       0     ==>   4   +   2   +   0   = 6
              1      1       1     ==>   4   +   2   +   1   = 7
As we expand the table to cover all eight bits in a byte, we finally reach the limit of 255 as the maximum value, where all bits are ones. So the values we can represent in a single byte are the numbers 0 through 255.
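
For readers who like to see such things in code, here is a minimal sketch in C of the sum-of-weighted-bits formula; byte_value() is a made-up name for illustration, not a standard routine:
Code:
/* Illustrative sketch: compute a byte's value as a sum of weighted bits. */
#include <stdio.h>

static unsigned byte_value(const int bits[8])   /* bits[0] holds bit-0, bits[7] holds bit-7 */
{
    unsigned value = 0;
    for (int i = 0; i < 8; i++) {
        if (bits[i])
            value += 1u << i;   /* a set bit contributes 2 to the power of its bit number */
    }
    return value;
}

int main(void)
{
    int example[8] = { 1, 1, 0, 0, 0, 0, 0, 0 };   /* bit-0 and bit-1 set */
    printf("%u\n", byte_value(example));           /* prints 3 */
    return 0;
}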

Although we are starting to see how bits become part of a bigger whole with some probable purpose, it still isn't obvious how all of the ones and zeroes sitting around can do us any good. We stated earlier that the flip-flop used to store a bit is the fundamental unit of computer memory. What was not stated was that besides storing bytes, computer circuits can also be used to manipulate bytes in different ways. A byte can be read from its storage place, and copied into another storage place. A byte can be used in an arithmetic operation, such as addition or subtraction. A byte can be manipulated 'logically' by performing various kinds of comparisons of it against other bytes, such as the 'and' and 'or' comparisons. These are some of the fundamental ways that the microprocessors in our computers operate on bytes and multi-byte 'words'.
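
As a rough illustration, the sketch below (in C, with purely illustrative variable names) shows a byte being copied, used in arithmetic, and combined with other values using the bitwise 'and' and 'or' operations:
Code:
/* Illustrative sketch of the fundamental byte operations mentioned above. */
#include <stdio.h>

int main(void)
{
    unsigned char a = 0x5A;           /* the bit pattern 01011010 */
    unsigned char b = a;              /* copy a byte from one storage place to another */
    unsigned char sum   = a + 3;      /* arithmetic on a byte */
    unsigned char anded = a & 0x0F;   /* bitwise AND: keep only the low four bits */
    unsigned char ored  = a | 0x80;   /* bitwise OR: force the top bit on */

    printf("copy: %02X  sum: %02X  and: %02X  or: %02X\n",
           (unsigned)b, (unsigned)sum, (unsigned)anded, (unsigned)ored);
    return 0;
}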

Not surprisingly, just as we combined bits into bytes, and bytes into words to create more and more useful building blocks, we can combine the fundamental operations that a microprocessor is able to perform into increasingly useful groups. This activity is called programming. More on that later.

At this stage, we have developed the concept of what makes up the bits and bytes in our computers, and a little bit about how they are organized. Since the point of this essay is to start clarifying some misunderstood concepts, we'll jump in with one now. You will notice that when we learned that a bit could have the distinct values 'zero' or 'one', we attached these numeric values to the bit states somewhat arbitrarily. We can, and do, also assign boolean values to the two states in which a bit can exist: true vs. false, for example. We sometimes use the terms 'on' and 'off', as well as any number of application-specific values for the state of a bit.

When we talk about computer memory, and the fact that it is composed of bits in combinations of bytes, we tend to think of the cells of memory as places where we store some value. What we often overlook is that a memory cell, be it a bit, a byte, or any other unit of memory, always has a value. This is true regardless of whether we have deliberately put some value into it or not, and is something that programmers in particular need to keep sight of.
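
A brief sketch in C of that point: the bytes of a freshly declared buffer already hold some bit pattern, so a careful program gives them a known value before relying on them. (In C, actually reading an uninitialized automatic variable is undefined behaviour, which is exactly why explicit initialization matters.)
Code:
/* Illustrative sketch: storage always holds *some* bit pattern, so give it
 * a known value before using it. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char buffer[8];            /* contents are indeterminate here      */
    memset(buffer, 0, sizeof buffer);   /* now every byte is known to hold zero */
    printf("%d\n", buffer[3]);          /* safe: prints 0                       */
    return 0;
}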

Another concept that we need to understand is that although a byte has a distinct value, its value does not have a unique meaning. The value of a byte can be used in a virtually unlimited number of ways, and only within a specific context does a byte have a particular meaning. A byte whose value is 32, for example, could mean different things to different people, different computer programs, or different bits of computer hardware. We will see how this matters to us in more detail later.
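
As a small sketch (in C, purely for illustration), the same byte value, 32, can be shown as a number, as the ASCII space character, or as a bit mask with only bit-5 set, depending entirely on how we choose to interpret it:
Code:
/* Illustrative sketch: the same byte value, interpreted three ways. */
#include <stdio.h>

int main(void)
{
    unsigned char value = 32;

    printf("as a number:    %d\n", value);               /* thirty-two         */
    printf("as a character: '%c'\n", value);             /* the ASCII space    */
    printf("as a bit mask:  0x%02X\n", (unsigned)value); /* only bit-5 is set  */
    return 0;
}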

How we represent bytes

We showed, above, that we can uniquely represent the values 0 through 255 in a single byte. In so doing, we used the familiar decimal counting system, having 10 unique digits. If we cascade two bytes together, we find that we can represent 65536 distinct values, and we can expand on this doubling of the number of bytes to increase the range of numbers available to us arbitrarily. Sometimes, we are less interested in the aggregate value of a byte than we are in the particular pattern of ones and zeros. Programmers, in particular, are sometimes interested in the bit-level value of a byte. Its content may be a representation of some state of digital hardware, such as whether a button has been pressed or not, or whether a piece of equipment is in a particular state. In order to determine the state of an individual bit within a byte, the decimal notation we use to represent its value is not very convenient. As humans, we cannot very readily tell which bits have different values by comparing the values 123 and 129, for instance.
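
A short sketch in C illustrates the problem: 123 and 129 look close in decimal, but printing them in hex, and XOR-ing them to expose the differing bits, shows that they differ in several bit positions:
Code:
/* Illustrative sketch: 123 and 129 differ in several bit positions, which is
 * much easier to see in hex than in decimal. */
#include <stdio.h>

int main(void)
{
    unsigned char x = 123;   /* 0x7B, binary 01111011 */
    unsigned char y = 129;   /* 0x81, binary 10000001 */

    printf("x = 0x%02X, y = 0x%02X\n", (unsigned)x, (unsigned)y);
    printf("bits that differ: 0x%02X\n", (unsigned)(x ^ y));   /* prints FA */
    return 0;
}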

There are other numbering systems which can be used to express numeric values. Our familiar decimal number system uses 10 unique digits. Other numbering systems use different numbers of digits with which to count. One counting system has already been touched on here: the binary counting system, in which we use only zeros and ones to count and to express values. In computing, especially for programmers, it is sometimes convenient to use other counting systems. Most commonly, these counting systems are the octal system, having 8 digits, and the hexadecimal system, having 16 digits (0-9, and the letters A-F). The table below shows how all the possible combinations of 4 bits map onto the 16 hexadecimal digits.

Code:
Binary      Hexadecimal   Decimal
---------------------------------
0 0 0 0         0            0
0 0 0 1         1            1
0 0 1 0         2            2
0 0 1 1         3            3
0 1 0 0         4            4
0 1 0 1         5            5
0 1 1 0         6            6
0 1 1 1         7            7
1 0 0 0         8            8
1 0 0 1         9            9
1 0 1 0         A           10
1 0 1 1         B           11
1 1 0 0         C           12
1 1 0 1         D           13
1 1 1 0         E           14
1 1 1 1         F           15

The use of other counting systems in digital work usually stems from a single root cause. In the hexadecimal (hex) counting system, and also the octal system, the value of each digit in the number maps exactly to a specific sequence of bits. In the decimal counting system, this consistent mapping does not happen, and so the connection between bit patterns and the corresponding decimal values is more difficult for us humans to make.

In octal counting, the eight digits map directly to sequences of three bits, and in hex the sixteen digits are exactly represented by four bits. Hex has the added advantage that the four bits of a hex digit are exactly half of a byte, or to put it another way, a byte can be expressed in exactly two hex digits. By partitioning the expression of a byte into two hex digits, we break the expression of the byte into two four-bit segments. The correspondence between the 16 hex digits and the respective pattern of bits is relatively easy to memorize, or even to calculate mentally. When the pattern of bits in a byte or word is significant, it is often more convenient to use the hexadecimal radix to express that value.
Code:
+--------------+-----+-----+-----+-----+-----+-----+-----+-----+
|   Bit Number |  7  |  6  |  5  |  4  |  3  |  2  |  1  |  0  |
+--------------+-----+-----+-----+-----+-----+-----+-----+-----+
|       Binary |  0  |  1  |  0  |  0  |  1  |  0  |  1  |  0  |
+--------------+-----+-----+-----+-----+-----+-----+-----+-----+
|        Octal |     1     |        1        |        2        |
+--------------+-----------+-----------------+-----------------+
|  Hexadecimal |           4           |           A           |
+--------------+-----------------------+-----------------------+
The table above shows how the digits in a hexadecimal or octal expression of a byte map to the bits within the byte. The digits in a decimal representation of the byte do not map consistently along bit boundaries, due to the non-power-of-two radix, 10. The byte represented by the bit pattern in the above table has the value 74 when expressed in the decimal counting system. The table should serve to reinforce the notion that all of the various representations refer equally, and concurrently, to a single unique pattern of bits.
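
The same point can be made in a few lines of C, using the printf() conversions for the different radices; the value never changes, only the way it is written out:
Code:
/* Illustrative sketch: one value, three representations. */
#include <stdio.h>

int main(void)
{
    unsigned byte = 0x4A;   /* the bit pattern 01001010 from the table above */

    printf("decimal: %u\n", byte);   /* 74  */
    printf("octal:   %o\n", byte);   /* 112 */
    printf("hex:     %X\n", byte);   /* 4A  */
    return 0;
}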

A byte is sometimes referred to as being composed of two nybbles, an upper nybble and a lower nybble, referring to bits 0-3 (low nybble) and bits 4-7 (high nybble). The two hex digits of a byte correspond directly to the high nybble and the low nybble.
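
A small sketch in C of splitting a byte into its two nybbles, each of which corresponds to one hex digit:
Code:
/* Illustrative sketch: extract the high and low nybbles of a byte. */
#include <stdio.h>

int main(void)
{
    unsigned byte = 0x4A;

    unsigned high = (byte >> 4) & 0x0F;   /* bits 4-7: the first hex digit  */
    unsigned low  =  byte       & 0x0F;   /* bits 0-3: the second hex digit */

    printf("high nybble: %X, low nybble: %X\n", high, low);   /* 4 and A */
    return 0;
}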

One place that many of us have seen the use of hexadecimal is in the HTML color representation. There, a color is expressed as three bytes, each written as two hex digits. This is convenient, since each byte maps directly to one of the three color components: red, green, and blue. We can adjust just one byte, and easily determine which of the color components we are adjusting. This makes it easier to predict what a particular color value might look like, and especially to predict how we might want to change it to get a desired result.
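
As an illustrative sketch in C (the colour value is just an example), the three component bytes can be pulled out of an HTML-style colour with simple shifts and masks:
Code:
/* Illustrative sketch: split an HTML-style colour value into its three
 * component bytes. The colour #FF8040 is just an example. */
#include <stdio.h>

int main(void)
{
    unsigned long colour = 0xFF8040UL;

    unsigned red   = (colour >> 16) & 0xFF;   /* FF */
    unsigned green = (colour >>  8) & 0xFF;   /* 80 */
    unsigned blue  =  colour        & 0xFF;   /* 40 */

    printf("R=%02X G=%02X B=%02X\n", red, green, blue);
    return 0;
}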

Notice the number of times that I used the word 'express' in the above paragraph. This is deliberate, as I want to make the point that the value of a byte does not vary merely by the way we express that value. The bit pattern '10100101' can be expressed in any radix we choose. The value is still the same, although we do need to know the radix used to express the value, in order to know exactly what value is being expressed. This leads us to the expression 'there are 10 kinds of people in the world: those who know binary and those who do not'. Without knowing that the number '10' was expressed in binary, our temptation would be to assume it was an expression in the decimal radix. Often, we see the notation where the radix of a number is given as a subscript to the number, like the number 12AF₁₆, which removes ambiguity about the intended interpretation. The example 12AF is probably unambiguous already, since its alpha digits are a giveaway that the radix is 16. A number 123 could be interpreted as any of decimal, octal, or hexadecimal, and in some computing-related contexts, should be subscripted with a radix to avoid the ambiguity.
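
A small C sketch makes the point about radix: the standard strtol() routine takes the radix as an argument, and the same string of digits yields a different value for each radix we ask for:
Code:
/* Illustrative sketch: the radix passed to strtol() decides what value the
 * same string of digits represents. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    printf("\"123\"  as decimal: %ld\n", strtol("123",  NULL, 10));   /* 123  */
    printf("\"123\"  as octal:   %ld\n", strtol("123",  NULL, 8));    /* 83   */
    printf("\"123\"  as hex:     %ld\n", strtol("123",  NULL, 16));   /* 291  */
    printf("\"12AF\" as hex:     %ld\n", strtol("12AF", NULL, 16));   /* 4783 */
    return 0;
}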

Just as it is convenient for humans to make a mental conversion between bit patterns and hex notation, it is similarly simple for a computer program to convert bytes or words to human-readable ASCII form. The exact mapping of a 4-bit half byte, called a nybble in some contexts, makes the conversion simple using a small 16-byte lookup table. The value of this is probably lost on most programmers, but there was a day when every cycle and byte counted. Even today, however, embedded systems programmers will write conversion routines using hexadecimal representation quite frequently.
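
Here is a minimal sketch in C of that lookup-table conversion; byte_to_hex() is an illustrative name, not a standard routine:
Code:
/* Illustrative sketch: each nybble indexes a 16-character table to produce
 * the corresponding ASCII hex digit. */
#include <stdio.h>

static const char hex_digits[] = "0123456789ABCDEF";   /* the 16-entry lookup table */

static void byte_to_hex(unsigned char byte, char out[3])
{
    out[0] = hex_digits[(byte >> 4) & 0x0F];   /* high nybble */
    out[1] = hex_digits[ byte       & 0x0F];   /* low nybble  */
    out[2] = '\0';
}

int main(void)
{
    char text[3];
    byte_to_hex(0x4A, text);
    printf("%s\n", text);   /* prints 4A */
    return 0;
}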

For more on this subject, see Part II, elsewhere in this blog.

Comments

  1.
    Not trying to nit pick but I felt like mentioning that if you write a thoughtful heading for this blog, Google will lead the people searching about the same info to your blog.
    That'll be helpful to the community in general.

    "A Bit About Digital" is *not* a good title.
    People won't usually search Google with the keywords - a bit about ...

    Pardon if this offended you.
    Posted 07-27-2012 at 02:59 AM by Aquarius_Girl
  2.
    Quote:
    Originally Posted by Anisha Kaul
    "A Bit About Digital" is *not* a good title.
    Don't you get the touch of humour in it? A "bit" in the computer sense as well as meaning "a piece"?
    Like if I wrote something about security: "Tighten up your Slack".
    Posted 07-27-2012 at 03:59 AM by brianL
  3.
    But that's not going to help the community in general.

    My point was that since Nbomr has bothered to write such a detailed explanation
    about this topic, then will it not be better if people actually come here and read
    it, understand it, and then ask follow up questions?

    And all that will happen if the title is written according to keywords people may
    actually use to search Google.
    IMO, people do not use humorous keywords to search Google when they are serious
    about learning a technical topic.
    TIA
    Posted 07-27-2012 at 04:06 AM by Aquarius_Girl
    Updated 10-28-2012 at 10:22 PM by Aquarius_Girl
  4.
    Thanks for your comment. I wrote the essay in large part because it is a subject that no one seems interested in searching for. I intended to use it as a detailed reply to questions posed in these forums. As more links to the essay appear, Google will probably rank it higher. What keywords do you think would make a more compelling title?
    Posted 07-27-2012 at 09:37 AM by theNbomr
 

  


