enter a duodecimal (base-12) number (the digits a and b represent 10 and 11) and output as required:
(1) the first line outputs the decimal value of each digit
(2) the second line outputs the decimal value of the whole duodecimal number
(3) the third line outputs that decimal value's binary representation as stored in memory (four bytes in total, with a space between bytes)
for example: enter: a2
output: 10 2
122
00000000 00000000 00000000 01111010
I would like to ask how the decimal value 122 is converted into this binary string. Can you explain it in detail?
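To get from 122 to the binary string, repeatedly divide by 2 and collect the remainders: 122/2 = 61 r 0, 61/2 = 30 r 1, 30/2 = 15 r 0, 15/2 = 7 r 1, 7/2 = 3 r 1, 3/2 = 1 r 1, 1/2 = 0 r 1. Reading the remainders from last to first gives 1111010, which is then padded with leading zeros to fill four bytes (32 bits): 00000000 00000000 00000000 01111010. Below is a minimal sketch in C that performs all three steps; the language, variable names, and I/O format are my assumptions, since the original exercise does not specify them.

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char s[32];
    if (scanf("%31s", s) != 1)   /* read the base-12 string, e.g. "a2" */
        return 1;

    unsigned int value = 0;      /* assumes the result fits in 32 bits */
    size_t len = strlen(s);

    /* (1) print the decimal value of each base-12 digit */
    for (size_t i = 0; i < len; i++) {
        int d = (s[i] >= '0' && s[i] <= '9') ? s[i] - '0'
                                             : s[i] - 'a' + 10; /* a=10, b=11 */
        printf(i == 0 ? "%d" : " %d", d);
        value = value * 12 + d;  /* accumulate the decimal value */
    }
    printf("\n");

    /* (2) print the decimal value of the whole number */
    printf("%u\n", value);

    /* (3) print the 32-bit binary representation, byte by byte */
    for (int bit = 31; bit >= 0; bit--) {
        putchar(((value >> bit) & 1u) ? '1' : '0');
        if (bit % 8 == 0 && bit != 0)
            putchar(' ');        /* space between the four bytes */
    }
    putchar('\n');
    return 0;
}
```

With input a2 this prints the three lines from the example: 10 2, then 122, then the 32-bit binary string. The byte grouping in step (3) is purely cosmetic, inserting a space after every eight bits of the same 32-bit value.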