The topic of error-correcting codes was first discussed in Hamming's paper, "Error-Detecting and Error-Correcting Codes," published in 1950. The paper addressed methods of overcoming the physical limitations of large computing machines, which inevitably make errors when handling raw data. Today we can treat a computer as though it never makes mistakes only because error correction makes that assumption safe.
It is important to distinguish between error-correcting and error-detecting codes. The following n-bit code illustrates error detection. Place n - 1 bits of information at the beginning, then set the nth bit to 0 or 1 according to the parity of the sum of the previous n - 1 bits. A single error changes the parity and is therefore detected, but its location cannot be determined, so it cannot be corrected. Moreover, an even number of errors goes undetected, because the parity remains unchanged. Error-correcting codes go further and actually pinpoint the location of an error. The Hamming code adds 3 "parity check" bits to each set of 4 information bits and can locate, and hence correct, any single error. With two or more errors, however, it fails.
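The two schemes above can be sketched in a few lines of Python. The parity functions use the even-parity convention (the check bit makes the total number of 1s even), and the Hamming functions implement the standard (7,4) layout with parity bits at positions 1, 2, and 4; the function names and the bit ordering are illustrative choices, not taken from Hamming's paper.

```python
def add_parity(bits):
    # Error detection only: append a bit so the total number of 1s is even.
    return bits + [sum(bits) % 2]

def parity_ok(word):
    # True if parity is consistent: either no errors or an even number of them.
    return sum(word) % 2 == 0

def hamming74_encode(d):
    # d = [d1, d2, d3, d4] -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4].
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # checks positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # checks positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # checks positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    # Recompute each parity check; read the three results as a binary
    # number (the "syndrome"), which is the 1-based position of a
    # single-bit error, or 0 if no error is seen.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    c = list(c)
    if pos:
        c[pos - 1] ^= 1        # correct the flipped bit
    return [c[2], c[4], c[5], c[6]], pos
```

For example, encoding [1, 0, 1, 1], flipping one bit of the codeword, and decoding recovers the original data and reports the error position, whereas flipping two bits produces a nonzero syndrome that points at the wrong position, which is exactly the failure mode described above.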
More sophisticated codes can be devised that perform error correction more efficiently than the Hamming code. Devising such codes is one example of how information theorists use their ingenuity to approach the limits that Shannon established for reliable information transfer.