One question I have is about the choice of "decimal by default". See http://cobra-language.com/docs/python/
I don't quite understand this. A friend and I were debating the other day about what the "decimal" type is really for: its extra accuracy, or its decimal-ness. I think in most cases (general calculations, statistics, number crunching, drawing Mandelbrot sets on the screen, etc.) users want fast binary floating point. You should use decimal when the base-10 "decimal exactness" of the numbers matters, and that's really only financial calculations, where yes, you should be using decimal. Everywhere else the base-10-ness doesn't matter.
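For example, the classic 0.1 + 0.2 surprise only goes away when you ask for base-10 semantics (a quick sketch using Python's decimal module to illustrate the distinction):

Code:
>>> 0.1 + 0.2 == 0.3                                     # binary floats: neither 0.1 nor 0.2 has an exact base-2 form
False
>>> from decimal import Decimal
>>> Decimal('0.1') + Decimal('0.2') == Decimal('0.3')    # base-10 values stay exact
True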
Decimals have accuracy issues too; it's just that the default precision is usually higher (Python's Decimal context defaults to 28 significant digits), whereas a binary float gives you roughly 16 significant digits (more precisely, 53 bits of precision).
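A quick interactive check of those numbers (assuming Python 2.7 or later, so the short float repr applies):

Code:
>>> from decimal import Decimal, getcontext
>>> getcontext().prec                  # Decimal's default context: 28 significant digits
28
>>> Decimal(1) / Decimal(3)
Decimal('0.3333333333333333333333333333')
>>> 1.0 / 3.0                          # a 53-bit double: roughly 16-17 significant digits
0.3333333333333333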
Am I totally off base here?
Relatedly, Python 2.7 has improved the repr() of floats so that this example is no longer valid:
Code:
>>> from __future__ import division
>>> 4 / 5 # normal division
0.80000000000000004 # <-- there is an extra '4' at the end
That now returns just "0.8" on Python 2.7.
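As I understand it, only the display changed; the stored value is still the nearest binary double. For instance (Python 2.7, where Decimal() accepts a float directly):

Code:
>>> 4 / 5                              # with the __future__ import from the example above
0.8
>>> from decimal import Decimal
>>> Decimal(0.8)                       # the exact value the double actually holds
Decimal('0.8000000000000000444089209850062616169452667236328125')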