This type defaults to "decimal" but can be reassigned via a compiler flag or directive. Current choices are { decimal, float64, float32 }.
A choice of "BigDecimal" or "Rational" with unlimited size is likely to be added to that set in the future.
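A minimal sketch of how the override might look; the command-line flag and directive spellings here are assumptions, not confirmed syntax:

```cobra
# command line (assumed spelling):
#     cobra -number:float64 myprogram.cobra
#
# or as a per-file directive (assumed spelling):
@number float64

x = 2.1             # a fractional literal; its type follows the setting
n as number = 5.0   # 'number' resolves to float64 at compile time
```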
number vs. real poll
16 posts
• Page 1 of 2 • 1, 2
Re: number vs. real poll
I vote for "real"; it's less "vague" for the newbie than "number".
- jonathandavid
- Posts: 159
Re: number vs. real poll
I'm bumping this poll back to the top. I'm interested in seeing more responses. Cast your vote!
- Charles
- Posts: 2515
- Location: Los Angeles, CA
Re: number vs. real poll
Personally I never liked "real" as the name of a type in computer programs. Real numbers are, to me anyway, a mathematical idea. They have infinite precision. Computers don't deal with them. The numbers we manipulate in our programs are at best a subset of the reals... or an approximation of the reals. My feeling is that "number" is a less loaded word and conveys a more generic feeling.
- pchapin
- Posts: 46
- Location: Vermont, USA
Re: number vs. real poll
Just a quick follow-up to a post on a different thread regarding 'int.'
It is true that many programming languages use 'int' to refer to a subset of the mathematical set of integers. However, not all programming languages do that. Haskell and I believe also Python use arbitrary precision integers by default so the language does indeed allow any possible integer to be stored in an integer variable. Of course the machine might run out of memory if one tries to store a gigantic integer, but that's not exactly the programming language's fault.
However, the reals are different. Since some reals have an infinitely large representation there is no way, even in principle, that you could create a type on a computer system that could hold them all. Integers are different in this way. No single integer has an infinitely large representation.
- pchapin
- Posts: 46
- Location: Vermont, USA
Re: number vs. real poll
pchapin wrote: Real numbers are, to me anyway, a mathematical idea.
I've already posted my opinions on this subject on another thread. There you can see too why I think using "number" is not a good idea.
Regards,
Manuel
- jonathandavid
- Posts: 159
Re: number vs. real poll
Yes, I saw your other posts. You make some good arguments. I'm not entirely convinced, however, and I still like 'number' better. Yet I can see why you feel differently. In any case if the decision is made to use 'real', I can certainly live with that.
- pchapin
- Posts: 46
- Location: Vermont, USA
Re: number vs. real poll
I prefer 'number'; pchapin hit the high points as to why. 'real' is no more or less clear than 'number' in this context (as in 'real number').
Perhaps, since this is about overriding the default (decimal), it would be both more expansive and clearer if the compiler switch ('-number') also supported mapping to the various sized int types. You could then code generically with 'number' and choose the specific numeric type at compile time depending on the desired use/target, from both the various sized real types (including decimal) and the various sized ints.
- hopscc
- Posts: 632
- Location: New Plymouth, Taranaki, New Zealand
Re: number vs. real poll
I think including the ints as choices for "number" could be confusing and also not particularly useful. Right now setting "number" to "float" affects literals like 5.0 and 2.1, but if you were to set it to "int", it would not affect those, but presumably would affect literals like 100 and 10_000. I find that confusing.
Also, given that both .NET and JVM use 32-bit signed ints for most things, the need for customization doesn't seem to come up that much. Whereas, fractional numbers like 2.1 have more pressing issues regarding accurate representation of the fractional part, speed and memory. Plus libraries in .NET seem fairly inconsistent on using float32 vs. float64. "ints" have some of these issues to an extent; they just don't seem to affect day-to-day programming very much.
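A sketch of the asymmetry described above (flag spellings assumed):

```cobra
# with -number:float32, fractional literals track the setting:
a = 5.0    # float32
b = 2.1    # float32

# with a hypothetical -number:int32, fractional literals could not follow,
# so presumably integer literals would track it instead:
c = 100       # int32?
d = 10_000    # int32?
e = 2.1       # ...still a float of some kind -- hence the confusion
```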
- Charles
- Posts: 2515
- Location: Los Angeles, CA
Re: number vs. real poll
I don't see it as any more confusing than the existing situation, where you can set the variety of floating point type; it's just extended to all 'number' types rather than a subset.
Re the effect on assignment of int literals: why would this act any differently than the existing situation of defaulting from decimal? Conceptually it's just a way of changing the default numeric type (decimal to int or whatever); type inference of an otherwise untyped variable should just fall through from that setting at compile time. Presumably
    d as Decimal = 100
does the correct thing, as does
    d as Number = 100  # compile time can set number to float
Why should
    d as Number = 100  # compile time can set number to int
be any different?
Re customisation: if you're interacting with libraries and other platform-dependent things, you're going to have to wire any arg types to the library-specified types anyway (or not use Number); and if you do use Number for other values, those interface points would still need to be cast or otherwise forced to type.
I'd tend to see Number used for genericising application code (rather than external-interface code), whether that's the current floating-point-only subset or any/all 'number' types.
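That split between generic application code and wired-up interface code might look like the following sketch (the library name and its methods are hypothetical; the cast spelling is an assumption):

```cobra
# application code stays generic: 'number' resolves at compile time
def average(values as List<of number>) as number
    sum as number = 0
    for v in values
        sum += v
    return sum / values.count

# at an external-interface point, force the concrete type the library expects
def plot(chartLib as ChartLib, values as List<of number>)
    for v in values
        chartLib.addPoint(v to float64)  # wire to the library-specified type
```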
- hopscc
- Posts: 632
- Location: New Plymouth, Taranaki, New Zealand