Clive Dixon
I came across the following code:
double d = 004166666667;
(i.e. no decimal point), yet the literal is parsed as 0.04166666667, with a
decimal point. How so? The literal 004166666667 doesn't even look like a
valid real literal to me according to ECMA-334 section 9.4.4.3, so I would
have thought it would be treated as an integer literal, parsed as such, and
the value then converted straight from integer to double.
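
For what it's worth, this is the minimal check I would write to confirm the
behaviour I expected (a sketch only; since 4166666667 exceeds int.MaxValue
but fits in uint, my reading of the spec is that an unsuffixed literal
should take uint as its type):

using System;

class LiteralCheck
{
    static void Main()
    {
        // Leading zeros are legal on a C# decimal integer literal, so I
        // would expect this to parse as the integer 4166666667 and then
        // convert implicitly to double.
        double d = 004166666667;
        Console.WriteLine(d); // expected: 4166666667, not 0.04166666667

        // Type of the bare literal, for confirmation.
        Console.WriteLine((004166666667).GetType()); // expected: System.UInt32
    }
}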