A check bit is an extra binary digit, a one or a zero, appended to data to indicate an error condition within that data. In situations where data may develop more than one error at the same time, a single check bit is not a completely reliable way to detect errors. Check bits are often used with data that is transmitted serially, or to verify that computer memory is functioning correctly. Depending on the type of parity in use, either a one or a zero may be the bit that gets appended.
Parity is the scheme by which check bits are used to detect errors, and a check bit is frequently called a parity bit. If the total number of ones in a correct binary sequence, including the check bit, is odd, the scheme is called odd parity. In even parity, the ones in the data and the check bit together add up to an even number.
For example, suppose a person wanted to transmit the seven-bit binary sequence 1100101. There are four ones in this number, so to create odd parity she would add a 1 to the end of the series, producing 11001011. If she decided to use even parity instead, she would add a 0 rather than a 1, producing 11001010. The choice of odd or even parity is typically a matter of established standards or designer preference. This is the simplest way to use a check bit.
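The procedure above can be sketched in a few lines of Python. The function name `add_parity_bit` and its interface are illustrative, not from any particular library; the logic simply counts the ones and appends whichever bit produces the requested parity.

```python
def add_parity_bit(bits: str, parity: str = "odd") -> str:
    """Append a check bit so the total count of ones matches the parity.

    `bits` is a string of '0'/'1' characters; `parity` is "odd" or "even".
    (Hypothetical helper for illustration, not a standard library function.)
    """
    ones = bits.count("1")
    if parity == "odd":
        # Append a 1 when the count of ones is even, making the total odd.
        check = "1" if ones % 2 == 0 else "0"
    else:
        # Even parity: append a 1 only when the count is currently odd.
        check = "1" if ones % 2 == 1 else "0"
    return bits + check

print(add_parity_bit("1100101", "odd"))   # → 11001011
print(add_parity_bit("1100101", "even"))  # → 11001010
```

Running this on the article's example reproduces both results: four ones in 1100101 means odd parity appends a 1 and even parity appends a 0.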
Imagine what happens when the odd parity string 11001011 is transmitted. If a one or zero is changed before the string reaches its destination, the total number of ones will add up to an even amount, indicating that there is an error in the data. Conversely, if an error is introduced into an even parity sequence, an odd number of ones will result. If two bits are changed in transit, however, the error may be impossible to detect with a single check bit, because the second change restores the original parity; more generally, a single check bit cannot catch any even number of bit errors.
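The receiver's side of this check can be sketched the same way. The function name `parity_ok` is again an assumption for illustration; it returns whether a received string, data plus check bit, still has the expected parity.

```python
def parity_ok(received: str, parity: str = "odd") -> bool:
    """Return True if the received bits (data + check bit) match the parity.

    (Hypothetical helper for illustration.)
    """
    ones = received.count("1")
    return (ones % 2 == 1) if parity == "odd" else (ones % 2 == 0)

print(parity_ok("11001011", "odd"))  # intact string → True
print(parity_ok("11001001", "odd"))  # one bit flipped → False, error detected
print(parity_ok("11000001", "odd"))  # two bits flipped → True, error missed
```

The third call shows the limitation described above: flipping two bits of 11001011 leaves an odd count of ones, so the corrupted string passes the check.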
More complex methods have been developed for error detection in computing. In cases where simple error identification is all that is necessary, however, a single check bit will often suffice. The manner in which characters are encoded in the American Standard Code for Information Interchange (ASCII) or the Extended Binary-Coded Decimal Interchange Code (EBCDIC) is an example of how a single check bit is used in computer science. Check bits, parity, and other error detection schemes play a vital role in ensuring that data being manipulated by computing processes remains free from unwanted side effects caused by noise and errant conditions.