In JS we often run into floating-point precision problems (e.g., with 0.1 and 0.2). We all know the cause: when such decimals are converted to binary they become infinitely repeating fractions, and the mantissa of a JS number (an IEEE 754 double) can only hold 52 bits, so the extra bits are cut off and an error is introduced. But now let's dig deeper:
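For reference, here is a minimal sketch that makes the rounding visible simply by asking for more significant digits than the default display uses (only the standard `Number.prototype.toPrecision`; the digits in the comments are approximate):

```js
// The classic symptom: the two rounding errors don't cancel out.
console.log(0.1 + 0.2)                    // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3)            // false

// Printing more significant digits exposes the values actually stored
// in the 64-bit doubles (the 52-bit mantissa forces rounding):
console.log((0.1).toPrecision(21))        // ≈ 0.100000000000000005551
console.log((0.2).toPrecision(21))        // ≈ 0.200000000000000011102
console.log((0.1 + 0.2).toPrecision(21))  // ≈ 0.300000000000000044409
```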
```js
const s = 0.1
console.log(s)        // 0.1
console.log(0.1 + 5)  // 5.1
```
In the code above, s prints as exactly 0.1, with no visible inaccuracy. According to what I've read, when a value would need more than 16 significant digits it is displayed via something like toPrecision(16), which is where the stored 0.100000000000000005551... gets cut back to 0.1. So why doesn't 0.1 + 0.2 get the same toPrecision treatment?
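To make the question concrete, here is a small sketch of what applying toPrecision(16) to 0.1 + 0.2 would produce, compared with what console.log actually prints (the "16" is just the rule I read about, not something I have confirmed from the spec):

```js
const sum = 0.1 + 0.2

// What a "round to 16 significant digits" rule would give:
console.log(sum.toPrecision(16))          // "0.3000000000000000"
console.log(Number(sum.toPrecision(16)))  // 0.3

// What the default display actually prints:
console.log(sum)                          // 0.30000000000000004
```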
For the second case, my guess is that when an integer is added to a decimal, JS applies something like toPrecision(m), where m comes from the decimal itself (e.g., m = 1 for 0.1, since 0.1 is 10 to the power of -m).
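A small sketch to poke at that guess (again only standard toPrecision; the digits in the comments are approximate): the stored result of 0.1 + 5 is not exact either, yet it still displays as 5.1.

```js
const t = 0.1 + 5

console.log(t)                  // 5.1
console.log(t === 5.1)          // true: the sum rounds to the same double as the literal 5.1
console.log(t.toPrecision(20))  // ≈ 5.0999999999999996447 (the double actually stored)
```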
These are the two problems I ran into while studying precision in JS, and none of the blog posts I can find explain either of them. I hope someone well versed in JS's precision-handling mechanism can help answer. Thank you!
Addendum: when exactly is this toPrecision(m)-style rounding applied, and, more importantly, how is the value of m determined?