BCD vs Binary - muneeb-mbytes/FPGABoard_edgeSpartan6 GitHub Wiki
BCD vs Binary
- Both binary-coded decimal (BCD) and binary numbers are used in many digital applications.
- Both have their advantages and disadvantages.
- BCD is commonly used when decimal numbers must be represented in hardware, as each 4-bit BCD digit maps directly to one decimal digit.
- Binary is more efficient for arithmetic, memory storage, and transmitting information, but is less human-readable.
When should you use one or the other? This is entirely application dependent. If you want to display a decimal number on a seven-segment display, a BCD counter is easier, because you do not need to convert from binary to BCD before passing each digit to your seven-segment display controller. Numerical calculations in Verilog use binary numbers. Sometimes a system must convert between BCD and binary; this can be accomplished with a look-up table, in software, or with a conversion algorithm.
BCD Counter
Each BCD digit ranges from 0 to 9. For a two-digit counter, we split the number into a ones digit and a tens digit: the ones digit counts from 0 to 9, and when it reaches 9 it rolls over to 0 and the tens digit is incremented. This is the logic of a BCD counter.
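The rollover logic above can be sketched in Verilog as a two-digit BCD counter. The module and signal names here are illustrative, not taken from this repository:

```verilog
// Hypothetical two-digit BCD counter sketch.
module bcd_counter (
  input  wire       clk,
  input  wire       rst,
  output reg  [3:0] ones,   // BCD ones digit, 0-9
  output reg  [3:0] tens    // BCD tens digit, 0-9
);
  always @(posedge clk) begin
    if (rst) begin
      ones <= 4'd0;
      tens <= 4'd0;
    end else if (ones == 4'd9) begin
      ones <= 4'd0;                                   // ones digit rolls over
      tens <= (tens == 4'd9) ? 4'd0 : tens + 4'd1;    // carry into tens digit
    end else begin
      ones <= ones + 4'd1;
    end
  end
endmodule
```

Because each digit is already in BCD, `ones` and `tens` can feed a seven-segment display controller directly, with no binary-to-BCD conversion.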
Binary Counter
Block Diagram:
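For comparison, a plain binary counter simply increments an n-bit register and wraps around modulo 2^n. A minimal Verilog sketch (parameter and signal names are illustrative):

```verilog
// Minimal free-running binary counter sketch.
module binary_counter #(parameter WIDTH = 8) (
  input  wire             clk,
  input  wire             rst,
  output reg [WIDTH-1:0]  count
);
  always @(posedge clk) begin
    if (rst)
      count <= {WIDTH{1'b0}};
    else
      count <= count + 1'b1;   // wraps naturally at 2**WIDTH - 1
  end
endmodule
```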
Double Dabble Example
- The “double dabble” algorithm is commonly used to convert a binary number to BCD.
- The binary number is left-shifted once for each of its bits, with the bit shifted out of the binary number's MSB entering the LSB of the accumulating BCD value. Before each shift, all BCD digits are examined, and 3 is added to any BCD digit that is currently 5 or greater.
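The steps above can be sketched as a combinational Verilog converter for an 8-bit binary input, producing three BCD digits (the module and signal names are illustrative, not from this repository):

```verilog
// Double-dabble sketch: 8-bit binary (0-255) to three BCD digits.
module bin_to_bcd (
  input  wire [7:0] bin,
  output reg  [3:0] hundreds,
  output reg  [3:0] tens,
  output reg  [3:0] ones
);
  integer i;
  reg [19:0] shift;                 // layout: {hundreds, tens, ones, bin}
  always @* begin
    shift = {12'd0, bin};           // start with BCD digits cleared
    for (i = 0; i < 8; i = i + 1) begin
      // Add 3 to any BCD digit that is 5 or greater...
      if (shift[11:8]  >= 4'd5) shift[11:8]  = shift[11:8]  + 4'd3;
      if (shift[15:12] >= 4'd5) shift[15:12] = shift[15:12] + 4'd3;
      if (shift[19:16] >= 4'd5) shift[19:16] = shift[19:16] + 4'd3;
      shift = shift << 1;           // ...then shift left by one bit
    end
    {hundreds, tens, ones} = shift[19:8];
  end
endmodule
```

In synthesis the `for` loop unrolls into eight add-3/shift stages of combinational logic; for high clock rates or wider inputs, a registered version would pipeline these stages or perform one shift per clock cycle instead.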