
This is probably better known here under the name of implicit type conversions. Strictly speaking, the rational number 1 is not the same as the integer 1; we just have a conversion map that preserves all the relevant properties. It's all fun and games until there are 1000 types and 10000 conversion maps involved and the relevant diagrams no longer commute (i.e., the result depends on which order you go).
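A toy example of a square that fails to commute, as a Python sketch (my own illustration, not from the article): convert an int straight to a string, or via float first.

    n = 1
    path_a = str(n)          # int -> str:          '1'
    path_b = str(float(n))   # int -> float -> str: '1.0'
    print(path_a == path_b)  # False: the two conversion paths disagree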



> (i.e., the result depends on which order you go)

At the risk of utterly derailing this with irrelevant discussion: path-dependent systems are particularly tricky for some people IMHO. I think in a more state-based way, and my first rigorous dive into path-dependent calculation was during my chemical engineering degree -- I learned to be extremely vigilant about memorizing what was path-dependent and triple-checking if that affected the situation I was calculating.

I do wish there were more rigorous exposure to them at lower levels of education and younger ages. Because while I'm perfectly capable of handling path-dependent systems with proper focus and effort, my brain doesn't feel "native" when deriving solutions in those spaces - it feels similar to being "fluent enough" in another language. I feel this way about a lot of things -- I really feel I'd have been happier and more fulfilled if I'd been immersed in super rigorous first-principles education beginning around age 8-9. I didn't do well with things like "memorize this procedure for doing long division" and did much better with conceptual derivations of physics/math/science/historical arcs, etc.


The problem is that no one thinks of type conversions as taking any explicit paths! It's one thing to view the actual process of long division as path-dependent (something everyone who learns about Gröbner bases is familiar with, as at the right level of generality even the result is path-dependent); it's another thing to apply the same intuition to the way the inputs are parsed. (You said divide 3 by 2 with remainder? Sure, the quotient is 3/2 and the remainder is 0. Problem?)
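The joke checks out in code, too; a quick Python sketch (toy illustration, using the standard fractions module):

    from fractions import Fraction

    print(divmod(3, 2))            # (1, 1): in Z, quotient 1 and remainder 1
    q = Fraction(3) / Fraction(2)  # in Q, division is exact
    print(q, Fraction(3) - q * 2)  # 3/2 0: quotient 3/2, remainder 0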


> The problem is that no one thinks of type conversions as taking any explicit paths!

Indeed. This is super surprising to me and I’m adding the topic to my “study” list. I had no idea until today - I could easily imagine it’s possible if some of the type conversions are “lossy” (e.g. maps to lists), but I have a strong feeling that simplified lossy conversions are not what is being referenced.


Most of Buzzard's examples are indeed lossless (even bijective) conversions, but the notation mathematicians use (and programming languages often support) makes it look like they are completely absent, which comes back to bite you. In mathematics, it bites you when you want to actually compute your function and realize you don't remember how all those canonical arrows go. In programming, it bites you when the default conversions the compiler inserts aren't the ones you meant.


Isn’t that the intuition behind homotopy type theory — that equivalence is a path between types?

(And related, eg, cubical type theory.)


Or when your formulas on Google Sheets start failing because you do 1-1 and the result is not 0, and then you spend 30 minutes creating new sheets, searching the web and double-checking everything, until you realize that the value in the cell wasn’t 1, but 1.

This happened to me yesterday


Can you elaborate?


Now I noticed that the period at the end of the paragraph is a bit unfortunate

The value was "1.", which I guess is shorthand for 1.0, and it’s technically not the same value as 1

On top of that, sometimes values like 0.961727 will be shown as 1, so sometimes you think that the value in the cell you are referring to is a 1 but instead it’s something close to it

In particular I was making a list of array positions from 1 to 32 and calculating x,y coordinates from the position using the formulas (x = (i-1) % width, y = (i-1)/width)

Some of the coordinates were wrong, and it was because the i values were not integers, which I couldn’t tell just by looking at the sheet, and only realized when I double-clicked on the cells
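Roughly what went wrong, as a Python sketch (hypothetical width of 8; the real sheet used formulas, with a floor for y):

    width = 8

    def coords(i):
        # x = (i - 1) % width, y = floor((i - 1) / width)
        return ((i - 1) % width, (i - 1) // width)

    print(coords(9))         # (0, 1): correct for the integer position 9
    print(coords(8.961727))  # (7.96..., 0.0): a cell displayed as "9" silently breaks it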


It's frustrating that there isn't an easy way to see the "canonical" value and format of a cell (in Sheets or Excel).

Just looking at a cell, it's not trivial to see if it's the number 1 or the string "1" (you can enter text by using a leading apostrophe, but that's not the only way to get text into a cell!). Numbers and strings have different alignments by default, but that can be overridden. The numeric value of the string "1" is 0 if you're using the SUM formula, but it's 1 if you use +. In other words, =A1+A2 does not necessarily equal =SUM(A1:A2).

Then you can format numbers however you like. For example, dates are stored in spreadsheets as days since an epoch (not the Unix epoch). So you can have the number 2 in a spreadsheet cell, then format it as a date, but just the day of the month, and it can appear as 1.

There's rounding, which bit you. 0.95 can appear as 1 if you display fewer decimal places.

Finally, there's the fact that the calculation is done in IEEE 754 floating point. Programmers are used to floating point numbers and the fact that properties like associativity don't apply, but that's not obvious to everyone.
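For anyone who hasn't been bitten yet, the standard demonstration (Python here, but any IEEE 754 language behaves the same):

    print((0.1 + 0.2) + 0.3)  # 0.6000000000000001
    print(0.1 + (0.2 + 0.3))  # 0.6: addition of floats is not associative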


You definitely know a lot about mangling data with spreadsheets!

> Programmers are used to floating point numbers and the fact that properties like associativity don't apply, but that's not obvious to everyone

One of my first programming assignments in college was to simulate a pool table. I had to go ask the TA what the hell was wrong with my code, the balls would never bounce because their distances to anything were never 0… fun times
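The usual fix, for anyone hitting the same wall: compare against a tolerance instead of exact zero. A minimal sketch (Python; `touching` is a hypothetical helper, not my original assignment code):

    import math

    EPS = 1e-9  # tolerance: exact zeros almost never come out of float arithmetic

    def touching(a, b, radius):
        # hypothetical collision test for two balls centered at points a and b
        dist = math.hypot(a[0] - b[0], a[1] - b[1])
        return dist - 2 * radius <= EPS  # not: dist - 2 * radius == 0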


Thank you!


1 vs 1.0 ???


I assume you mean any random enterprise codebase? Because that is any random enterprise codebase.


In other areas, one might compare civil code jurisdictions, where presumably all the legislative clauses share a common vocabulary (and probably even conceptual framework), with common law jurisdictions, where the legislative clauses occur in different centuries and thus one adds "picking a parallel transport" between the legal languages of the times (and jurisdictions) to the difficulties of adjudication.


Not if you use a language that bans implicit type conversions (like Go).


I was thinking in much more abstract terms. You have information from a webflow, at some point converted to an XML message, converted to a database entry, again to a JSON document, combined with an Azure work item, then sent to Salesforce.

At any point the data can interact with other systems, traveling through codebases that each see it through their own types, legacy APIs, and the same-ish new-ish RPC API, and in each service the data is represented in its own quirky internal way.

Sure, throw Go into that mix, hell at this point the more the merrier! :)


This doesn’t sound right to me. The rational numbers are a superset of the integers. We know that there’s only one 1 in the rational numbers, so it must be the same 1 object as the 1 in the integers.

The statement “3/3 is in Z” is true. There’s no conversion happening: 3/3 is a notation for 1, just like 0.999… is a notation for 1. Many notations, but only one 1 object.

The case of R x R^2 = R^3 is different because the Cartesian product is defined to produce a set of ordered pairs. So it cannot give rise to a set of triples any more than a dog can give birth to a cat. So either x is not a Cartesian product, or = denotes isomorphism, not equality.


Your statements such as

> We know that there’s only one 1 in the rational numbers, so it must be the same 1 object as the 1 in the integers.

> The statement “3/3 is in Z” is true.

make it sound very trivial, while in reality it is not. I do not quite understand your example with R^3, but the same point applies equally to your statements. There are many ways to define and think about the objects you mentioned -- there is not one single truth. Unless you are a devoted platonist, in which case, it's still, like, your opinion, man.


Q is (usually) a set of equivalence classes of Zx(Z\{0}), so obviously it's not a superset of Z, since they're not even the same types of things.

There is however a canonical embedding of Z into Q, sending n to the class of (n,1).
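Python's Fraction type happens to model this distinction nicely; a small sketch:

    from fractions import Fraction

    n = 7
    q = Fraction(n, 1)         # the canonical embedding: n goes to the class of (n, 1)
    print(q == n)              # True: equality is defined across the embedding
    print(type(q) is type(n))  # False: they remain different kinds of object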


Most people define it as the smallest set extending (the smallest set extending Z\{0} where * is invertible) where + is invertible


How do you "extend" Z (what are you adding to it)? How do you prove there is a field containing Z? Assuming you are working in ZFC, what is an example of an element of Q? What is an example of an element of Z?

One way to do things is to define N as von Neumann ordinals:

https://en.wikipedia.org/wiki/Set-theoretic_definition_of_na...

Then you define Z as the set of equivalence classes of NxN under the relation (a,b) ~ (c,d) iff a+d=c+b. This means each integer is itself an infinite set of pairs of natural numbers.

Then you define Q as the set of equivalence classes of Zx(Z\{0}) under (a,b) ~ (c,d) iff ad = cb. Again, each rational is now an infinite set of pairs of integers.

The point of the OP post is that we want to define things by their properties, but then we are defining what a set of rational numbers is, not what the set of rational numbers is, and we need to make sure we do all proofs in terms of the properties we picked (e.g. that Q is a field, there is a unique injective ring homomorphism i_ZQ: Z->Q, and if F is a field such that there is an injective ring homomorphism i_ZF: Z->F, then there is a unique field homomorphism i_QF: Q->F such that i_ZF = i_QF after i_ZQ) rather than relying on some specific encoding and handwaving that the proof translates to other encodings too. This might be easier or harder to do depending on which properties we use to characterize the thing, and the OP paper gives adding inverses to a ring as one of its examples in section 5 ("localization" is a generalization of the process of constructing Q from Z. For example, you could just add inverses for powers of 2 without adding inverses of other primes), proposing a different set of properties that they assert is easier to work with in Lean.
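To make the two encodings concrete, a toy Python sketch of the equivalence tests (illustration only, not a usable number system):

    def int_eq(ab, cd):
        # integers as pairs of naturals: (a, b) encodes a - b
        (a, b), (c, d) = ab, cd
        return a + d == c + b

    def rat_eq(ab, cd):
        # rationals as pairs of integers, second entry nonzero: (a, b) encodes a/b
        (a, b), (c, d) = ab, cd
        return a * d == c * b

    print(int_eq((5, 3), (4, 2)))  # True: both pairs encode the integer 2
    print(rat_eq((1, 2), (3, 6)))  # True: both pairs encode the rational 1/2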


But such a set is not unique...


0.999… always reminds me of my favorite definition of the real numbers:

“A real number is a quantity x that has a decimal expansion

x = n + 0.d₁d₂d₃…, (1)

where n is an integer, each dᵢ is a digit between 0 and 9, and the sequence of digits doesn't end with infinitely many 9s. The representation (1) means that

n + d₁/10 + d₂/100 + ⋯ + dₖ/10^k ≤ x < n + d₁/10 + d₂/100 + ⋯ + dₖ/10^k + 1/10^k

for all positive integers k.”[1]

Defining the reals in terms of binary expansion is left as an exercise for the reader[2].

[1] Knuth, The Art of Computer Programming, Volume 1, Third Edition, p. 21.

[2] Ibid., p. 25, exercise 5.
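Incidentally, the definition translates almost directly into code if you use exact rationals; a toy Python sketch (truncating downward, so no expansion ends in all 9s):

    import math
    from fractions import Fraction

    def expansion(x, k):
        # integer part and first k decimal digits of the Fraction x, per the definition
        n = math.floor(x)
        frac, digits = x - n, []
        for _ in range(k):
            frac *= 10
            d = math.floor(frac)
            digits.append(d)
            frac -= d
        return n, digits

    print(expansion(Fraction(1, 3), 5))  # (0, [3, 3, 3, 3, 3])
    print(expansion(Fraction(1, 1), 5))  # (1, [0, 0, 0, 0, 0]): 1, not 0.99999...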


You can easily define a Cartesian product that works on more than two sets at a time. See https://en.wikipedia.org/wiki/Cartesian_product#Cartesian_pr...


In the formalization of mathematics, things are built up structurally. Zero is the empty set. Natural numbers are sets that contain all smaller natural numbers. Integers are equivalence classes of pairs of natural numbers under the relation (a,b) ~ (c,d) iff a + d = c + b. Rationals are equivalence classes of pairs of integers (second entry nonzero) under the analogous relation, but with a * d = c * b. Reals are bounded, downward-closed sets of rationals (e.g. √2 is the set of all rationals q with q ≤ 0 or q * q < 2). Complex numbers are pairs of reals, with addition and multiplication defined so that i^2 = -1. Functions are really just sets of ordered pairs, where (x,y) in the set f means f(x) = y. Etc.
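The first rung of that ladder fits in a few lines; a toy Python sketch using frozensets for the von Neumann naturals:

    zero = frozenset()  # 0 is the empty set

    def succ(n):
        return n | {n}  # n + 1 is n together with {n}

    one, two = succ(zero), succ(succ(zero))
    print(len(two))    # 2: the set {0, 1} has two elements
    print(one in two)  # True: "1 < 2" is literally set membership
    print(one < two)   # True: < on sets is proper subset, which agrees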


I'm not sure I agree. That sort of type conversion involves inclusions, while the examples Buzzard talked about involved isomorphisms.


Inclusions are a form of isomorphism (onto the image). In the more common developments of elementary arithmetic, Q is constructed from Z, and in particular the subset of integral rational numbers is technically a different ring from Z. The conversion Z -> Q in particular is an isomorphism from Z to that subset.

Type conversions in everyday programming languages, though, sometimes not only fail to be surjective, they can also fail to be injective, for example int32 -> float32.

Type conversions and "implicit isomorphisms" differ because the former need not be invertible, but they agree in being implicit maps, often performed without thought by the user. So I think the type-conversion analogy is pretty good: it captures the idea that implicit conversions, when composed in different ways from A to B, can arrive at different values, even if the choice at each stage along the way seemed natural.
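The int32 -> float32 example is easy to check; a quick sketch with NumPy:

    import numpy as np

    # float32 has a 24-bit significand, so distinct integers above 2**24 collide
    a, b = np.int32(2**24), np.int32(2**24 + 1)
    print(np.float32(a) == np.float32(b))  # True: 16777216 and 16777217 map to the same float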


> Strictly speaking, the rational number 1 is not the same as the integer 1

So does that mean you can have different zeros?


Yes. 0/1, 0/2, 0/3 are all different in some sense, but they all belong to the same equivalence class.


Even there, it would help not to apply the zero-numerator equivalence too early.

I.e. 2/3, 2/5, 2/7 are the numbers that divide 2 to get 3, 5 and 7 respectively.

Likewise, with full consistency, 0/1, 0/2, 0/3 cannot be reduced using common factors (the core equivalence for ratios), so have different ratio normal forms, and are the numbers that when they divide 0 produce 1, 2, and 3 respectively. All consistently.

The advantage of not applying the zero-numerator equivalence too early is that you keep reversibility, associativity and commutativity consistent in intermediate calculations, even when zeros appear in ratios.

You can still apply zero numerator ratio equivalence to final values, but avoid a lot of reordering of intermediate calculations required to avoid errors if you had applied it earlier.

Of course, if you are coding, it doesn't help that you are unlikely to find any numerical libraries, functions or numeric data types that don't assume division by zero is an error, inf, or NaN, and that all zeros divided by non-zero numbers are equivalent to 0. So you simply can't get the benefits of holding off on that equivalence.

You have to do a lot of reasoning and reordering to ensure generally correct results, as opposed to "I am sure it will be fine" results.

I find it very surprising that this separate treatment of factor-reduction equivalence and zero-numerator (and zero-denominator) equivalence on ratios is not taught more explicitly. They are very different kinds of equivalence, with very different impacts on calculation paths.


Thanks!

I was actually toying with writing a rational number library going in the direction you sketch out. My inspiration was a trick in computational geometry for dealing with points at infinity. I think it's called homogeneous coordinates.

My point was to treat both p and q in p/q as symmetrically as possible.

Oh, I remember now: the motivating example was to write a nice implementation of an optimal player for the 'guessing game' for rational numbers.

One player, Alice, commits to a (rational) number. The other player, Bob, makes guesses, and Alice answers whether the guess was too high, too low or was correct. Bob can solve this in O(log p + log q). And the continued-fraction-based strategy Bob wants to use is generally symmetrical between p and q. So I was looking into expressing the code as symmetrically as possible, too.
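For flavor, the naive (not yet O(log p + log q)) version of Bob is a pleasingly symmetric walk down the Stern-Brocot tree; a Python sketch, with a Fraction standing in for Alice and assuming her number is positive:

    from fractions import Fraction

    def bob(answer):
        lo, hi = (0, 1), (1, 0)  # bounds as (p, q) pairs; (1, 0) plays infinity
        while True:
            p, q = lo[0] + hi[0], lo[1] + hi[1]  # the mediant: symmetric in p and q
            guess = Fraction(p, q)
            if guess == answer:
                return guess
            elif guess < answer:  # Alice says "too low"
                lo = (p, q)
            else:                 # Alice says "too high"
                hi = (p, q)

    print(bob(Fraction(3, 7)))  # 3/7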


How is the case of the zero numerator special? When you take any fraction 0/n with n>0 and reduce it by the greatest common factor, you get 0/1.


The numerators and denominators of 0/2 and 0/3 do not have a common factor to reduce.

Think of 0 as a prime number for the purpose of ratio factor reduction.

Treating 0/2 and 0/3 as the same number is an entirely different equivalence.


Why would you treat 0 as a prime number? It’s divisible by every integer.


That is circular reasoning.

If you start saying 0/x is 0 then of course 0/2 and 0/3 are zero. (Or equivalently that 0 x 2 = 0 x 3 = 0.) But that is an entirely different equivalence from factor reduction.

If you don’t start with that assumption and reduce 0/2 and 0/3 only according to shared factors, you find that all three numbers, 0, 2, and 3 can only be factored as a product of 1 and themselves.

(Thus a slightly generalized version of primes, for whole numbers instead of just natural numbers.)

So 0/2 and 0/3 are completely factor reduced.

The equivalence 0/x = 0/2 = 0/3 = 0/1 = 0 is a separate equivalence, one which is not reversible, i.e. it erases information, and may break commutativity, requiring special handling (exception generation, NaN representation, calculation reordering, etc.) if it is applied during intermediate calculations.

If you don’t apply the zero equivalences until a final value is calculated, you will always get the same answer, but will have avoided the need for special handling of intermediate calculations.

For instance, an intermediate division by zero followed by a multiplication of zero will cancel, avoiding any need to reorder calculations and yet resulting in the same final answer.


Which can be made "visible" by substituting 0 with e.g. `1 - 1`:

   0/1 = (1 - 1)/1 = 1/1 - 1/1
   0/2 = (1 - 1)/2 = 1/2 - 1/2
   ...


Note how your parent comment has specified "rational" 1 and "integer" 1. They use the symbol 1 for two similar concepts: We all "know" that 1 + 1 = 2 and 1.0 + 1.0 = 2.0. I have deliberately specified 1 for an int and 1.0 for a rational. Now we have two different representations for two very similar but not identical concepts.

At which point does 1 + a tiny amount cease to be an integer? Immediately: that tiny amount can have any magnitude, and 1 plus any nonzero non-integer amount ceases to be an integer. That is the property that defines an integer. Integers have subsequent "behaviours" that other types of numbers might lack.

You have picked zero/0. Now that is a sodding complicated concept 8) There are lots of things called zero, but none of them any more nor less zero than any other.

Zero might be defined by: 1 - 1 = 0. I have £1 in my bank account and I pay out £1 for a very small flower, my bank balance is now £0. Lovely model, all good except that interest calcs intervened and I actually have a balance of £0.00031. Blast. My pretty integer has morphed into a bloody complicated ... well is it a rational thingie or a ... what is it?

Now I want to withdraw my balance. I put a shiny £1 in, bought something and I have some change. What on earth does a 0.031p coin look like? Obviously, it doesn't exist. My lovely integer account has gone rational.

Symbols mean whatever we agree they mean, using some carefully chosen language. Mathematicians seem to think they are the ultimate aces at using spoken and written language to make formal definitions, derivations and so on. That is a bit unfair, obviously. We all believe that what we think is communicable in some way. Perhaps it is, but I suspect that it isn't always.

Have a jolly good think about what zero, nothing, 0 and so on really mean. Concepts and their description to others are a really hard problem, one that some funky symbols sort of help with.

Yes, there are loads of things called zero. If I had to guess: infinitely many things are zero! Which infinity, I could not say.


Yes


1 is an integer, a rational number, a positive integer, an odd integer, a power of 2, etc.


It's also a complex number, a Unicode character, an ASCII character, an Extended ASCII character, a glyph, the multiplicative identity element, a raster image, ...

The GP's point is correct; we implicitly convert between all these representations naturally and quickly, but there are interesting branches of mathematics that consider those conversions explicitly and find nuances (e.g., category theory).


All the examples of shadows mentioned by Douglas in the video [0]:

Traditional Shadow from Sunlight: A tree casting a shadow in bright sunlight during the summer, characterized by the typical darkening on the ground.

Snow Shadow: Occurs in winter when a tree intercepts snowflakes, resulting in an absence of snow under the tree despite being surrounded by snow. This is likened to a shadow because it mimics the visual absence typical of shadows, even though caused by a different kind of blockage.

Rain Shadow: Explained with the example of the Cascade Mountains, where the mountains block rain clouds leading to dry conditions on the leeward side, effectively creating a "shadow" where rain is absent due to the geographical barrier.

Car Shadow by a Truck: Described during highway driving, where a large truck casts a "shadow" by preventing cars behind it from passing. This "shadow" is the area in front of the truck where cars tend to stay clear due to the difficulty in passing the truck.

Shadow Cast by England Due to the Gulf Stream: This is a metaphorical shadow, where England blocks the warm Gulf Stream from reaching certain areas, resulting in colder temperatures on one side, while allowing it to pass above to northern Norway, warming it. This is referred to as the "shadow of England" on the coast of Norway, influenced by the flow of water rather than light.

These examples use the concept of "shadow" in various physical and metaphorical contexts, showing disruptions in different types of flows (light, snow, rain, traffic, and ocean currents) caused by obstructions.

[0] : https://youtu.be/LnTVzB5S8m4?t=973


Mathematics is surprisingly weakly typed.

For example, the "2" in "2π" is not the same type of "2" as in x^2 or 2x generally. Yet, physicists (to pick a random group) will blend these factors together, resulting in nonsense. As a random example, one of the Einstein field equations has "8π" in it. Eight what!? What aspect of the universe is this counting out eight of -- a weirdly large integer constant? This actually ought to be "4(2π)", where "4" is the number of spacetime dimensions, which makes a lot more sense.

Similarly, in at least one place the square of the pseudoscalar (I^2) was treated as a plain -1 integer constant and accidentally "folded" into other unrelated integer constants. This causes issues when moving from 2D to 3D to 4D.


These examples miss the mark somewhat. The "2" in "2π" can mean several things (the nonnegative integer 2, the integer 2, the rational 2, the real 2) that are all commonly identified but are different. The "2" in "x^2" usually means the nonnegative integer 2. The "2" in "2x" can usually mean the nonnegative integer or the integer 2, but also the other 2's depending on what x is. The problem is not that the meaning of 2 varies across different expressions, but that it can vary within each single expression.

The best example is perhaps the polynomial ring R[x][y], which consists of polynomials in the variable y over the ring of polynomials in the variable x over the real numbers. Any algebraist would tell you that it is obviously just the two-variable polynomial ring R[x, y] in disguise, because you can factor out all the y-powers and then the coefficients will be polynomials in x. But the rings are very much not the same at the level of implementation, and every time you use their "equality" (canonical isomorphy), you need to keep the actual conversion map (the isomorphism) in the back of your mind.
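A crude sketch of that implementation gap, with Python dicts standing in for the two rings:

    # R[x][y]: y-exponent -> (x-exponent -> coefficient); R[x, y]: (i, j) -> coefficient
    nested = {0: {2: 1.0}, 1: {0: 3.0}}  # x^2 + 3y, viewed in R[x][y]
    flat = {(2, 0): 1.0, (0, 1): 3.0}    # x^2 + 3y, viewed in R[x, y]

    def flatten(p):
        # the "obvious" isomorphism R[x][y] -> R[x, y], written out explicitly
        return {(i, j): c for j, poly in p.items() for i, c in poly.items()}

    print(flatten(nested) == flat)  # True, but only via this conversion map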


Haskell shows one way you can have numeric literals like 2 that can be used with many different types, but still be strongly statically typed.

That by itself isn't a problem. But making all the other confusions you mention is a problem.


Saying 1 is both an integer and a rational number is wildly different from saying it is both an integer and an ASCII character. Z is a subset of Q. ASCII characters don’t overlap with either.

When you construct numbers using sets under the ZFC axioms or inside the lambda calculus, what you get is a representation. But 1 is just 1.


Your keyboard has a button with ‘1’ printed on it. When you push that, you don’t always get an integer or a rational number. You can convert what you get to either. So there must be overlap with ASCII somehow?
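In code, the overlap splits into two different maps out of the same keypress; a two-line Python sketch:

    c = '1'
    print(ord(c))  # 49: the ASCII/Unicode code point
    print(int(c))  # 1: parsing the glyph as a number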


There is some overlap in some sense, somehow, but not in any literal sense, unlike the sense in which all elements of Z are elements of Q.


By any common set-theoretic construction of Q (e.g., equivalence classes of integer pairs under cross-multiplication), 1 as an element of Z is not literally an element of Q: 1 ∈ Q merely corresponds to 1 ∈ Z, and this correspondence is carefully designed to preserve the ring operations of Z within a certain subset of Q. In this case, the distinction is so negligible that eliding it is harmless, but the whole point of the article is that such elisions can become harmful in more complex constructions.

1 ∈ Z and ASCII '1' can similarly be seen as corresponding in terms of having the same glyph when displayed. But of course, there are far fewer meaningful operations common to the two.


1 is not a set; any set-theoretic construction of 1 is not literally 1.


I am not sure I understand what you mean by “literally” here. For sure, if you use Zermelo–Fraenkel set theory as the foundation of mathematics, as is commonly done, every mathematical object is a set. The first definition of 1 encountered in that setting is the singleton set {0}, where 0 is the empty set. (And 2={0,1}, 3={0,1,2} and so forth – you get the picture.) This is precisely the sort of thing this is all about: The natural numbers are uniquely described up to unique isomorphism by some variant of the Peano axioms after all.


That doesn’t make 1 a set. Its representation in ZFC is a set. But its representation in, e.g., lambda calculus is a function.

Saying that 0 belongs to 1 is false no matter what one uses to represent those numbers in any ZFC formalisation of numbers.

It’s a map-territory distinction.


Ah, that depends what the meaning of “is” is, does it not?

On a more serious note, if you are of a certain philosophical bent you may believe that the natural numbers have an existence independent of and outside of the minds of humans. If so, 1 is presumably not a set, even if we don’t fully understand what it is. I certainly don’t think of it as a set on a day to day basis!

But others may deny that the territory even exists, that all we have are the maps. So in this one map, 1 is a set containing zero, but in that other map, it is something different. The fact that all the different maps correspond one-to-one is what counts in this worldview, and is what leads to the belief – whether an illusion or not – that the terrain does indeed exist. (And even the most hard nosed formalist will usually talk about the terrain as if it exists!)

But this is perhaps taking us a bit too far afield. It is fortunate that we can do mathematics without a clear understanding of what we talk about!


If there are many different ways to represent what something 'literally is', then how do we know for sure that ASCII '1' isn't a true representation of the literal number 1, just considered under different operations? We can say that 1 + 1 + 1 ≠ 1 (in Z), and we can also say that 1 + 1 + 1 = 1 (in Z/2Z): the discrepancy comes from two different "+" operations.

For that matter, how do we know what infinite sets like Z and Q 'literally are', without appealing to a system of axioms? The naive conception of sets runs headlong into Russell's paradox.


But the integers are a subset of the rationals, which are a subset of the reals, which are a subset of the complex numbers. Looking only at the objects and not their operations 1 (integer) = 1 (rational) = 1 (real) = 1 (complex). Moreover, when we do account for the operations, we also see that 1 + 1 = 2 and 1 * 1 = 1 in every one of those systems. This isn't just a coincidence, of course; it's by design.

However, the way you arrive at 1 + 1 = 2 is not the same (though I suppose you could short-circuit the algorithm). Rational addition requires finding a common denominator, while integer addition doesn't. They achieve the same result when the inputs are integers, and again this is by design, but the process isn't the same. Ditto real addition vs. rational and complex addition vs. real.
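Spelled out as a sketch (Python, with plain pairs standing in for rationals), the "same" sum runs different machinery:

    from math import gcd

    def add_rat(ab, cd):
        # rational addition: common denominator, then reduce
        (a, b), (c, d) = ab, cd
        num, den = a * d + c * b, b * d
        g = gcd(num, den)
        return (num // g, den // g)

    print(1 + 1)                    # 2: integer addition, directly
    print(add_rat((1, 1), (1, 1)))  # (2, 1): the "same" 1 + 1 = 2, the long way around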

In higher-level mathematics, the operations on the objects become definitional. We don't look at just a set of things, we look at a set of things and the set of operations upon those things. Thus "1 with integer addition and integer multiplication" becomes the object under consideration (even if it's just contextually understood) instead of simply 1. This is why they don't satisfy higher-level notions of equivalence, even if they intentionally do satisfy simple equality as taught in grade school.

Of course, the entire point of the submitted paper is to examine this in detail.


> But the integers are a subset of the rationals, which are a subset of the reals, which are a subset of the complex numbers.

It depends on definitions, and, in some sense, the point of the common approach to mathematics is not just that one does not, but that one cannot, ask such questions. One approach is to look at natural numbers set theoretically, starting with 0 = ∅; to define integers as equivalence classes of pairs of natural numbers; to define rational numbers as equivalence classes of certain pairs of integers; and to define real numbers as equivalence classes of Cauchy sequences of rational numbers. In each of these cases there is an obvious injection which we are used to regarding as inclusion, but most of mathematics is set up to make it meaningless even to ask whether the natural number 1 is the same as the integer 1 is the same as ….

That is to say, if you're working on an application where encoding details are important, then you can and will ask such questions; but if I am writing a paper about natural numbers, I do not have to worry about the fact that, for some choice of encoding, the number 2 = {∅, {∅}} is the same as the ordered pair (0, 0) = {0, {0, 0}} = {∅, {∅}}, and in fact it is meaningless to test whether 2 "equals" (0, 0). The philosophy of studiously avoiding such meaningless questions leads some to avoid even testing for equality, as opposed to isomorphism; failing to do so used to be referred to in category-theoretic circles as "evil", although, as the nLab points out if you try to go to https://ncatlab.org/nlab/show/evil , it seems common nowadays to avoid such language.


This is not the point of the article. Even at the level of the objects themselves, 1 : integer is not 1 : rational. The latter is an ordered pair (1, 1) of two coprime positive integers, or an equivalence class of ordered pairs up to cancelling. Some ugly hackery is required to really make the integers equal to their respective rationals, and its consequences aren't great either (just imagine that some rationals are pairs while others are not -- that's what you get if you forcibly replace the rational k/1 by the integer k), and no one wants to do that.


There are multiple things that we denote using a particular symbol: 1. 1 in and of itself is not a single concept. You could replace the symbol 1 with any other, even the sound of a fart and the concepts still remain the same. Given that we somehow agree that a fart sound shall be the way that we refer to a concept/object/thing.

It's a largely useful conceit to use 1 for all of those objects and more besides. It makes talking about them easier but we do have to be careful to use the correct rule-set for their manipulation.

I would personally prefer to see 1.0 for "rational 1" or perhaps 1. but that would require a convoluted sentence to avoid 1. being at the end of the sentence, unless we allow for: 1.! Well, that would work but what about 1.? Oh for ffs, I mean: 1..

One notes that one's own ones may not be the same as one's other ones.

If only I could spin "won", "own" and perhaps "wan" into that last sentence! ... Right, I've wedged in own. Needs some work 8)


Well, you give 1.0 for the rational one. But what about the real or complex one? Or what about the Gaussian integer one? What about the 2-adic one, or the 3-adic one, or any other p-adic one? What about the different kinds of floating point numbers?

I don't think the rational one is special enough that we need a different notation just for her and her alone. (Though that specific distinction can make sense in some contexts. Just not universally.)



