A finitist computer scientist only accepts as real those numbers that can be expressed exactly in finite base-two floating point?
0.1 is just as non-representable in binary floating point as pi is, or as 100^100 is in a 32-bit integer.
Terminating dyadic rationals (up to limits based on float size) are the representable values.
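Both points are easy to check in Python, whose `float` is an IEEE 754 binary64 value; `fractions.Fraction` exposes the exact dyadic rational a float actually stores:

```python
from fractions import Fraction

# 0.1 has no finite base-two expansion, so the stored double is a
# nearby dyadic rational, not one tenth.
print(Fraction(0.1) == Fraction(1, 10))   # False
print(0.1 + 0.2 == 0.3)                   # False, thanks to rounding

# A terminating dyadic rational like 3/8 (binary 0.011) is exact.
print(Fraction(0.375) == Fraction(3, 8))  # True
```

Printing `Fraction(0.1)` directly shows the exact value the hardware holds, a ratio with denominator 2^55.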