
What is a Quadbit?

John Lister

A quadbit is a unit of computer memory containing four bits. When a system known as packed decimal format is used to store extremely long numbers on a computer, each digit of the number occupies one quadbit.

To understand how a quadbit is useful, you must first remember that the smallest unit of computer memory is a bit. A bit is a single binary digit, which can only be a zero or a one. This is the very basis of computer operation, and some very old machines represented individual digits physically, with components such as cylinders of gas that were either empty or full depending on whether the relevant digit was a zero or a one.


A quadbit contains four bits. Because each bit can be one of two possibilities -- a one or a zero -- the number of possible combinations of data in a quadbit is 2 x 2 x 2 x 2, which equals 16. This neatly coincides with the hexadecimal numbering system, which has 16 units, as compared with the 10 we use in the more common decimal system. These 16 units are usually represented by the numbers 0 to 9 and the letters A to F.
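The correspondence described above can be checked directly: a short sketch in Python (the language is our choice for illustration) lists all 16 values a quadbit can hold, showing each one in binary next to its single hexadecimal digit.

```python
# A quadbit holds 4 bits, so it can represent 2**4 = 16 distinct values.
# Each value maps to exactly one hexadecimal digit (0-9, A-F).
for value in range(2 ** 4):
    print(f"{value:04b} -> {value:X}")
```

Running this prints sixteen lines, from `0000 -> 0` up to `1111 -> F`, which is why one hex digit is a convenient shorthand for one quadbit.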

Hexadecimal is commonly used in computing because everything in computing derives from the two possible values of the binary system, so each "level" of data storage usually doubles, creating the series 1, 2, 4, 8, 16, 32 and so on. One example of this is that a computer might have 256MB, 512MB or 1,024MB of RAM, the latter figure being equivalent to 1GB. In theory, every collection of data stored in a computer can be broken down into 2 chunks, 4 chunks, 8 chunks, 16 chunks and so on. Because the hexadecimal system has 16 units, it fits into this pattern smoothly and makes calculations about data storage much easier than they would be in our traditional decimal system.
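One way to see how smoothly powers of two fit the hexadecimal pattern is to write the memory sizes mentioned above in hex: numbers that look awkward in decimal come out as round figures. A small illustrative sketch:

```python
# Powers of two look "round" in hexadecimal because each hex digit
# covers exactly four bits: 256 = 0x100, 512 = 0x200, 1024 = 0x400.
for megabytes in (256, 512, 1024):
    print(f"{megabytes} MB -> 0x{megabytes:X}")
```

The same doubling that produces the series 1, 2, 4, 8, 16 simply shifts the hex digits along, which is why storage calculations are easier in base 16.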

The most common use of the quadbit is in packed decimal format. This takes an extremely long string of numbers, such as the raw form of a computer program or other data, and rewrites it as a string of binary digits. Each digit in the original number is turned into a group of four binary digits -- in other words, a quadbit. Using packed decimal format can allow computers to process data more quickly while still making it easy to convert the data back to its original format afterward.
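The packing and unpacking steps described above can be sketched in a few lines of Python. The function names `pack_bcd` and `unpack_bcd` are our own illustrative choices, not part of any standard library; the technique is the classic binary-coded decimal layout, two decimal digits per byte, one per quadbit.

```python
def pack_bcd(digits: str) -> bytes:
    """Pack a decimal string into bytes, one digit per quadbit."""
    if len(digits) % 2:
        digits = "0" + digits  # pad on the left to a whole number of bytes
    return bytes(
        (int(digits[i]) << 4) | int(digits[i + 1])
        for i in range(0, len(digits), 2)
    )

def unpack_bcd(packed: bytes) -> str:
    """Recover the decimal string: high quadbit of each byte first."""
    return "".join(f"{b >> 4}{b & 0x0F}" for b in packed)

packed = pack_bcd("1234")
print(packed.hex())        # the hex dump reads exactly like the original digits
print(unpack_bcd(packed))  # converting back recovers "1234"
```

Because each decimal digit sits in its own quadbit, the hex dump of the packed bytes reads exactly like the original number, which is what makes the format easy to convert back.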

The quadbit is commonly referred to as a nibble. This is a form of wordplay based on the fact that the next largest unit of storage in computing is known as a byte. Because a byte consists of 8 bits, a quadbit is half of a byte. The joke comes from the fact that in the English language, a nibble is a word meaning a small bite.
