
The lat/lon floating point delusion - eaguyhn
https://www.datafix.com.au/BASHing/2019-08-09.html
======
jjoonathan
Yes, why don't people spend their time doing the extra work to figure out the
exact number of meaningful digits in their measurements when the defaults work
just fine for their non-scientific, non-metrological purposes?

A mystery for the ages.

~~~
falcolas
Because the value, as presented, is wrong. "[but it's being used for] non-
scientific, non-metrological purposes" doesn't change that it's the wrong
value.

Imagine, for a moment, that someone uses these values to put up a fence. Is
the fence on their property? Those crap values beyond the actual significant
digits of the measurement may tell you that you are, when you're not. That
will cost you time and money down the road.

~~~
rumanator
> Because the value, as presented, is wrong.

That's not true, nor is it the problem at all. The thesis was that the
values, albeit correct, have too many significant digits, which in turn
reflect differences that lie somewhere between having no practical use and
being absurd.

> Imagine, for a moment, that someone uses these values to put up a fence.

That example is very poor. Any engineer can tell you that a measurement is
meaningless without tolerances/margins of error, and the tolerance in effect
when putting up a fence is not expressed at a microscopic scale.

~~~
ramshorns
> a measurement is meaningless without tolerances/margin of error

Reporting a value with too many significant digits is the same as reporting an
incorrect margin of error, which (arguably) makes the value wrong.

~~~
seandougall
That's certainly true in a scientific paper, where the concept of significant
digits is relevant and widely recognized. In a "please remove dead animal" web
request, the value is only wrong if it points to the wrong location (i.e. a
location that does not contain a dead animal).

------
dfranke
People have this idea that when you take a measurement, you have so-and-so
number of significant figures that are probably correct and the rest are just
pure noise. But that's not how measurement error works. In the real world,
physical measurement errors are more-or-less normally distributed (we don't
have to argue about the "more-or-less" part because my argument here holds for
any distribution other than a uniform one). Let's say your measurement gives
you a latitude of 45.73490534578° with a standard deviation of 0.00001°
(that's about 1.1 meters). Those last few digits of your measurement are almost
certain to be wrong. But does that make them meaningless? No! Because if your
measurements are unbiased, then slightly more than half the time,
45.73490534578° is still going to be closer to the true value than
45.7349053457° is. By performing significant-figure rounding you aren't
throwing away _very much_ information, and you may not be throwing away any
information you care about, but you are nonetheless throwing away
information.
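
A quick Monte Carlo sketch of that claim (my own illustration, not from the
comment): simulate unbiased noisy latitude readings and check how often the
full-precision value beats the rounded one.

    import numpy as np

    rng = np.random.default_rng(42)
    sigma = 1e-5  # measurement std dev in degrees (~1.1 m), as above

    truth = 45.73490 + rng.uniform(0, 1e-4, 1_000_000)  # arbitrary true latitudes
    meas = truth + rng.normal(0, sigma, truth.size)     # unbiased measurements
    rounded = np.round(meas, 5)                         # keep 5 decimal places

    closer = np.abs(meas - truth) < np.abs(rounded - truth)
    print(f"full precision closer {closer.mean():.1%} of the time")  # slightly above 50%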

~~~
protonfish
The problem is that these lat/lon figures are not displayed with precision
information at all, so we can't even know the std dev. Sig figs are a simple
and clear way to communicate the precision of a measurement, but if you want
to be more statistically accurate, you can use parentheses to indicate the
standard deviation. In your example (if I mess this up please correct me, but
I think) it would be 45.73490534578(1000000)°. Which again is silly because
there is no way to know the standard deviation to that level of precision. A
more reasonable number would be 45.734905(10)°

Is it difficult to include precision when reporting measurements? No

Is it sometimes valuable? Yes

Is it really too much to ask for? No
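
A rough sketch of that parenthesis convention (my own helper; where to place
the last reported digit is a judgment call):

    import math

    def format_with_uncertainty(value: float, std_dev: float) -> str:
        """Format as value(uncertainty), in units of the last shown digit."""
        # Put the last reported digit two orders of magnitude below the std
        # dev, so the uncertainty prints as a two-digit number in parentheses.
        exponent = math.floor(math.log10(std_dev)) - 1
        digits = max(0, -exponent)
        scaled = round(std_dev / 10 ** exponent)
        return f"{value:.{digits}f}({scaled})"

    print(format_with_uncertainty(45.73490534578, 0.00001))  # 45.734905(10)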

~~~
dfranke
I've never seen the convention you're using and I don't think I understand it.
Conventions I've seen include:

* Give an error bound like 45.73490534578° (±0.00002°) and indicate in prose that this is a 2σ bound.

* Put non-significant figures in parenthesis, like 45.73490(534578)° (EDIT: possibly I've misinterpreted this one when I've seen it, see logfromblammo's reply)

* Put a bar over the last significant figure, like 45.73490̄534578 (hopefully this one renders properly when I post this... (EDIT: nope))

~~~
busyant
You can see it done here:
[https://en.wikipedia.org/wiki/Standard_atomic_weight#List_of...](https://en.wikipedia.org/wiki/Standard_atomic_weight#List_of_atomic_weights)

~~~
logfromblammo
The brackets in the table mean that different sources of the element have
different proportions of isotopes, so the mean atomic weights for specific
deposits may cover a range that differs from the overall mean for every
deposit of the element ever measured.

I.e. if you measure carbon from the upper atmosphere, it's going to have more
C-14 in it, from cosmic-ray neutrons converting N-14 into C-14. And if you
measure carbon buried for thousands of years, it's going to have less C-14,
from natural decay.

If you look at
[https://en.wikipedia.org/wiki/List_of_physical_constants](https://en.wikipedia.org/wiki/List_of_physical_constants)
you can see that the parentheses are omitted from _defined_ constants, and
included for _measured_ constants.

------
wpietri
In contrast to his 4-digits-are-fine notions, let me offer a counterexample:
[http://scissor.com/transient/destination_heatmap/](http://scissor.com/transient/destination_heatmap/)

I am working on a Twitter bot called @sfships, which monitors the comings and
goings of large ships in the San Francisco Bay:
[https://twitter.com/sfships](https://twitter.com/sfships)

As part of that, I generated the above map from AIS data. [1] It's basically
where ships stop. If you zoom in on the San Francisco waterfront, you will see
a grid of dots. That's because the AIS protocol stores lat/lon as
minutes/10000 [2], throwing away more detailed information.
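
For concreteness, the quantization looks like this (a sketch; the raw AIS
fields are signed integers in those 1/10000-minute units):

    # 1/10000 of a minute = 1/600000 of a degree
    def ais_encode(deg: float) -> int:
        return round(deg * 600000)

    def ais_decode(raw: int) -> float:
        return raw / 600000

    lat = 37.80858123                      # hypothetical full-precision latitude
    snapped = ais_decode(ais_encode(lat))  # 37.808581666...
    print(abs(lat - snapped) * 111_000)    # error in metres; at most ~0.09 m

so every reported position snaps to a grid roughly 0.19 m apart in latitude,
which is what shows up as a grid of dots once you zoom in far enough.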

This is adequate for its initial intended purpose, which is putting ships on
radaresque displays so that ships don't hit one another, etc. But it produces
all sorts of artifacts and issues when one tries to use the data more broadly.

And in case anybody is interested in playing with this, I have written a
python parser for AIS data with a bunch of Unixy command line tools, including
one that just turns the weird 90s protocol into more modern JSON:
[https://github.com/wpietri/simpleais](https://github.com/wpietri/simpleais)

[1]
[https://en.wikipedia.org/wiki/Automatic_identification_syste...](https://en.wikipedia.org/wiki/Automatic_identification_system)

[2]
[https://gpsd.gitlab.io/gpsd/AIVDM.html#_types_1_2_and_3_posi...](https://gpsd.gitlab.io/gpsd/AIVDM.html#_types_1_2_and_3_position_report_class_a)

~~~
tony_cannistra
I used to have a job where I analyzed global ship tracking data (AIS and other
less-open (i.e. more classified) data streams) for law enforcement and
environmental conservation purposes, and learned one thing: _man_ is AIS data
a bitch.

This is especially true when you're dealing with potential bad actors who can
spoof their transmissions, combined with incredibly poor vessel metadata
databases and strange satellite AIS coverage issues.

If there are any data scientists out there looking for a real head-basher,
start an AIS project.

~~~
iooi
Why do bad actors spoof transmissions? Smuggling?

~~~
jandrewrogers
Many reasons, and a significant percentage of all AIS transmissions are dodgy.
Smuggling is just one reason.

Other common reasons include illegal fishing in protected waters, covert
meetings (e.g. a couple yachts coming together in the middle of the ocean),
spy ships run by state actors trying to look like ordinary commercial
traffic, and misrepresenting the state of shipping assets for commercial
leverage in contract negotiations.

~~~
wpietri
I would love to read more about this. Any suggestions?

~~~
jandrewrogers
These are things you learn by doing in-depth analysis of AIS and related data
sources. As far as I know, no one writes about it. Physical world data sources
are full of interesting patterns and anomalies. Few people ever look because
the data sources are challenging to work with and very different from Internet
data, which is the only kind of data most people are used to.

~~~
wpietri
Well at this point I have terabytes of AIS data from the last few years, so if
anybody is looking to dig into this, I'm glad to share.

------
remus
I find it hard to get that worked up about this. The positions are accurate
at least, and if you're really so hard up for space that saving a couple of
digits is going to make all the difference, then you can think about it for 5
mins and choose a precision/storage trade-off that's appropriate for you.

~~~
maxerickson
The representation of the numbers doesn't convey any information about
accuracy, and it's at least a good practice to store data in a way that
conveys the precision it was gathered with.

~~~
yoz-y
The issue then is that there is no reasonable primitive format to store a
“real number with a small decimal precision”. Arguably, storing these as ints
would be better, but then everybody would need to agree on the denominator.
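
One such fixed-point convention, as an illustration (this mirrors the "E7"
style some systems use, but the scale here is just my example):

    SCALE = 10_000_000  # fixed denominator: 1e-7 degree units, ~1.1 cm of latitude

    def to_fixed(deg: float) -> int:
        return round(deg * SCALE)  # fits in a signed 32-bit int (max ±1.8e9)

    def to_float(raw: int) -> float:
        return raw / SCALE

    print(to_fixed(-37.80467681))            # -378046768
    print(to_float(to_fixed(-37.80467681)))  # -37.8046768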

~~~
protonfish
It is frustrating (to me, at least) that there is no common primitive data
type that stores precision. But it isn't rocket surgery to store two
floating-point numbers (a value and a precision) in a structure and format
them properly for display.
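
A minimal sketch of such a structure (my illustration, hypothetical names):

    import math
    from dataclasses import dataclass

    @dataclass
    class Measurement:
        value: float
        precision: float  # e.g. one standard deviation, in the value's units

        def __str__(self) -> str:
            # Show only the digits the stated precision supports.
            digits = max(0, -math.floor(math.log10(self.precision)))
            return f"{self.value:.{digits}f} (±{self.precision:g})"

    print(Measurement(-37.80467681234, 0.0001))  # -37.8047 (±0.0001)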

------
cafard
Not just lat/lon. I find myself amused when The Washington Post's education
columnist Jay Matthews--quite a sharp guy from all I can tell--runs his high
school "challenge index" (AP tests taken / size of graduating class) out to
six decimal places for small schools. Somebody give that man a slide rule.

~~~
sokoloff
When I see those errors, I often try to work out the exact input figures that
led to the stats reported.

At work, I recently saw a survey result from a small customer base reported as
"31.3% of respondents" rather than the more communicative "5 out of 16..."

~~~
cafard
Quite. That way I was able to figure out how many students graduated from
Washington International School that year.

For that matter, my wife taught a class one year at a local university. The
class had 29 students, or at least 29 who bothered to fill out the evaluation,
and the evaluation had a couple of places right of the decimal point. I found
myself reckoning, "hmm, 27 of 29 thought that ...".

------
janpot
It doesn't matter how precise the value is, as long as it's precise enough
for your use case and machines are handling it. If the value is displayed in
a UI and rounded to a reasonable number of decimals there, I don't see why
you would get so worked up about it.

I'm sure my thermostat's internal temperature representation is far more
precise than the 0.5-degree precision it shows on the display.

~~~
scblock
As the article discusses, much (though not all) of the issue he has is with
the display value in UIs or published data.

------
kbutler
The author also ignores the fact that the distance associated with a degree
of longitude changes dramatically as you go from the equator (69 miles/111
km) to the poles (0) - roughly as a cosine. So the number of decimal places
of longitude should shrink as you get further from the equator.

[https://www.thoughtco.com/degree-of-latitude-and-longitude-d...](https://www.thoughtco.com/degree-of-latitude-and-longitude-distance-4070616)

"At 40 degrees north or south, the distance between a degree of longitude is
53 miles (85 kilometers)."

So while it's appropriate for Melbourne (37.8S) to use (very slightly) lower
precision than Sydney (33.9S), they're both using too many digits.
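
The cosine scaling is a one-liner to check (spherical approximation):

    import math

    def meters_per_degree_lon(lat_deg: float) -> float:
        # ~111.32 km per degree of longitude at the equator, shrinking as cos(lat)
        return 111_320 * math.cos(math.radians(lat_deg))

    for city, lat in [("Sydney", -33.9), ("Melbourne", -37.8), ("Reykjavik", 64.1)]:
        print(f"{city}: {meters_per_degree_lon(lat) / 1000:.1f} km per degree")
    # Sydney: 92.4 km, Melbourne: 88.0 km, Reykjavik: 48.6 km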

~~~
crvnc
Err, no, that fact is in the article.

~~~
kbutler
Went back and re-read, still not seeing it there.

It is described in the linked wikipedia article, though:
[https://en.wikipedia.org/wiki/Decimal_degrees](https://en.wikipedia.org/wiki/Decimal_degrees),
so it isn't completely ignored by the author.

------
sccxy
I usually go with 6 decimal places.

The GeoJSON format advises it as well:

[https://tools.ietf.org/html/rfc7946#section-11.2](https://tools.ietf.org/html/rfc7946#section-11.2)

~~~
nixpulvis
> For geographic coordinates with units of degrees, 6 decimal places (a
> default common in, e.g., sprintf) amounts to about 10 centimeters, a
> precision well within that of current GPS systems. Implementations should
> consider the cost of using a greater precision than necessary.

If you actually read the RFC, it gives 6 decimal places as a common default
(an artifact of sprintf) and explicitly states that implementations should
consider how much precision they need...

------
not_a_cop75
Agreed on the decimal places. But honestly, why would you commit to using GPS
coordinates as the end means of locating things in Australia when you'll be
noticeably off in 20 years and outright inaccurate in 100?

[https://www.nationalgeographic.com/news/2016/09/australia-mo...](https://www.nationalgeographic.com/news/2016/09/australia-moves-gps-coordinates-adjusted-continental-drift/)

~~~
phs2501
Using GPS/WGS84 to locate is fine, you just need to convert to a plate-local
datum before long-term storage of those coordinates.
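
For example, with pyproj (a sketch under my assumptions: EPSG:7844 is
GDA2020, Australia's plate-fixed datum; a real workflow would also need an
epoch-aware transformation rather than this simple conversion):

    from pyproj import Transformer

    # WGS84 (EPSG:4326) -> GDA2020 (EPSG:7844)
    to_gda2020 = Transformer.from_crs("EPSG:4326", "EPSG:7844", always_xy=True)
    lon, lat = 144.9659498, -37.80467681   # a WGS84 fix from GPS
    print(to_gda2020.transform(lon, lat))  # store the plate-local pair instead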

------
jandrewrogers
Professional geospatial systems frequently use fixed point internally but
export floating point for convenience. The Internet treats all geospatial as
floating point but positional measurement systems and mapping base layers
often come from fixed point data models.

Many of the underlying fixed point systems are designed for meter precision,
with some newer systems going down to a centimeter. As a practical matter,
repeatable positioning on the Earth's surface becomes difficult below 10
centimeters of precision, so centimeter precision is widely viewed as the
physics floor.

~~~
jhayward
There are large networks of GPS monitoring sites that regularly report their
positions, including movement, down to well below a centimeter.

~~~
jandrewrogers
The point is about repeatability, not precision. Nominally fixed points on the
Earth's surface are constantly in motion relative to GPS and relative to other
fixed points. Many of the effects that cause this lack of repeatability can
induce centimeters of measured deflection from the mean position over
relatively short periods of time. There are models that can be employed to
correct some of these effects (e.g. fluctuations in the gravity field, which
is measured by satellites) but not all of them. Consequently, the measurement
may be precise but there is never a reference frame in which the points are
actually "fixed" over time, which is indistinguishable from reduced precision.
Local changes in surface geometry are also easily detected with LIDAR.

For surveys that require maximum repeatable precision, they will often use the
mean position as measured over e.g. 12 hours. The magnitude of the variance
varies quite a bit depending on where you are on Earth.

~~~
jhayward
I've done a fair amount of work with GPS and precise positioning, and I'm
aware of all the dynamics of earth, orbit, and electron path that are
involved. I've never before seen a claim of 1cm being some kind of physical
limit on position.

There are both short and long-term motions of the earth. All are sufficiently
measured and modeled such that repeatable measurements are very much
achievable well below 1 cm.

In fact, it is GPS that is used to develop the models of earth movement. So I
don't quite get what you are saying.

~~~
jandrewrogers
The centimeter limit is an engineering heuristic used by people combining
high-precision GPS, LIDAR, remote sensing, and sometimes other sensors to
build high-precision models of the world e.g. maps for autonomous systems.
LIDAR alone can routinely detect changes in relative local surface geometry in
excess of a centimeter across a day in practice. Autonomous driving typically
uses an error bound of around 10 cm on absolute positioning even though the
sensors used to determine that positioning often have sub-centimeter
precision, for similar reasons. Real-time localization is an interesting
problem because measurement with orthogonal sensor modalities can produce
conflicting results with differences greater than the nominal precision of
those positioning algorithms. Getting the last few centimeters of functional
precision out of contradictory measurements is one of the things you need AI
for.

I agree that parts of the world have repeatable measurements below 1cm if you
can account for the myriad effects that cause displacement, but others do not.
The models are also approximate, since some effects require real-time
measurement of things we do not have real-time measurements for, or which are
not practically available in contexts where local position needs to be
determined.

------
krapht
Did the author think about the implementation? If we assign semantic meaning
to the number of decimal places, then we also need to store the number of
significant digits alongside the actual number. We shouldn't round off
internally, because we might use the value as an intermediate result in a
calculation.

~~~
vonmoltke
> We shouldn't round off internally, because we might use the value as an
> intermediate result in a calculation.

Only if the precision is meaningful. The author's point is that it isn't for
geographical coordinates.

~~~
whatshisface
Error tends to grow as a calculation proceeds. "Insignificant" rounding at
the 6th decimal, applied over and over, can grow into an arbitrarily large
error.

~~~
Majromax
What numerically unstable algorithms are ever performed with geolocations?

Moreover, given the author's point that real measurement errors exceed the
false precision of published data, if such a calculation were performed and
did provide "arbitrarily large" error, it would indicate that the result
should in fact be nonsense.

~~~
whatshisface
I'm thinking about the case where a calculation extends across many roundings.

~~~
Majromax
That doesn't necessarily make the algorithm unstable. See for example:

      >>> import numpy as np
      >>> x0 = np.random.rand(1)
      >>> x = x0
      >>> x_rnd = np.round(100*x)/100.0
      >>> for i in range(1000):
      ...     x = np.mod(x + np.pi, 1)
      ...     x_rnd = np.mod(x_rnd + np.pi, 1)
      ...     x_rnd = np.round(100*x)/100
      ...
      >>> x_real = np.mod(x0 + 1000*np.pi, 1)
      >>> print(x_real, x, x_rnd)
      (array([0.51013166]), array([0.51013166]), array([0.51]))


Note there is no loss of accuracy from all of the intermediate roundings. The
accuracy is preserved because there's no mechanism here to amplify the error.

That's why I ask about the kind of algorithms typically applied to geolocated
data. Off the top of my head, I can't think of anything that would be both
useful and error-amplifying.

------
moron4hire
I recently ran into a weird error concerning Lat/Lngs and floating point
numbers. Google has a service for retrieving imagery from Maps, and you can
even request that imagery have certain paths drawn on it. You encode those
paths as part of your request URI, but if you have a lot of points, you could
end up exceeding their maximum URI length restriction. So they also define an
encoding scheme for compressing those points into a single value, which is
defined here:
[https://developers.google.com/maps/documentation/utilities/p...](https://developers.google.com/maps/documentation/utilities/polylinealgorithm)

One of the things they don't mention is that the rounding step does not work
as _they_ expect for single-precision floats. You have to use double-precision
floats to get the same results that they are demonstrating in the example.

They are asking for 5 digits to the right of the decimal point which, with a
maximum of 3 digits to the left of the decimal point, means a total of 8
significant digits. Single-precision floats are only good for about 7
significant decimal digits, so the rounding step comes out off by one when
using singles.

The solution was to cast singles to doubles before performing the rounding.
Which seems absurd.

Single-precision floats should be enough precision for lat/lngs anywhere on
the Earth, for everything other than some applications in commercial-grade
surveying. And if that is the job you're doing, you shouldn't be using Google
Maps.
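
The precision gap is easy to demonstrate (my own sketch of the polyline
rounding step, not Google's code):

    import numpy as np

    rng = np.random.default_rng(0)
    lats = rng.uniform(-90, 90, 100_000)

    # The algorithm's rounding step: scale by 1e5, round to the nearest integer.
    r64 = np.round(lats * 1e5)
    r32 = np.round(lats.astype(np.float32) * np.float32(1e5))

    print(f"{int((r64 != r32).sum())} of {lats.size} values round differently")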

------
thelittlenag
These kinds of issues are what we've solved at QALocate.

There are a number of different issues here, which I think are only partly
explored in the article.

Let's take as given that you need to direct a person some place. In the
article, they are directing someone to a restaurant. But this gets
complicated fast. Is this person a patron rather than, say, a delivery driver
there to pick up an order, or a plumber there to fix an appliance, or an
inspector there to observe the kitchens, or ...?

By using a lat/long, or any geo-coordinate, we lose the human value of
context. Each of the folks I listed above has a very different place they
potentially need to navigate to. And even if their destination is the same,
the routes they take may NOT be. Where they park, or are dropped off by ride
share, and which doors they use are also influenced by their role.

Using a geo-coordinate drops the rich meaning that humans in all their roles
require. A better solution is to use a real identity and then, when and where
coordinates are needed, derive them based on the person's role, whether it be
patron, plumber, or paramedic. As roles change, or as the structure itself
changes, then directions change to meet them. I find this a much richer
solution than just blindly telling my maps app to direct me to some GPS
location.

Another issue hinted at but not deeply explored is that often what we want to
specify is a region or volume, not a coordinate. Things in the real world
consume volume. Coordinates are idealized points and do not. This may seem a
trite observation, yet the majority of our tools think in terms of points.
With the rise of autonomous vehicles, especially drone delivery, we need to
move towards representations of regions and/or volumes, depending on needs.
I'm not convinced that geopolys are quite right for this task and instead we
need something that is comfortable with the fuzzy and complicated boundaries
humans have to deal with.

[1] [http://www.qalocate.com](http://www.qalocate.com)

~~~
grouseway
Is the Joe Pesci picture a joke or has your website been hacked?

[https://www.qalocate.com/company/](https://www.qalocate.com/company/)

~~~
thelittlenag
HAHAHAHA! Having not met Joe I don't actually know, but I presume that is a
legit picture.

------
kazinator
I think the issue is that people don't have a good intuition for how many
digits past the decimal point are relevant in latitude and longitude. "42
degrees north" is obviously imprecise. 42.123456 is precise down to a
millionth of a degree; but if you wake me up in the middle of the night and
ask me how long is one millionth of a degree of latitude at sea level, I
wouldn't just be able to blurt out the answer.

Turns out it's about 11 cm.

A millionth of a degree is about 1.75E-8 radians. For small angles, sin(x) ≈
x, so we can just multiply that figure directly by the radius: ~6,400,000 m ×
1.75E-8 ≈ 0.112 m.
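
Or, checking with the standard library:

    import math

    # arc length = radius x angle: one millionth of a degree of latitude
    print(6_400_000 * math.radians(1e-6))  # ~0.112 m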

------
mangecoeur
On the flip side - who cares? It's just a question of shunting data around -
ok someone could add a format specifier somewhere and it would be neater. But
there's no particular harm in it either.

~~~
nixpulvis
Sending more data than you need is bad for two reasons.

1\. It implies more precision, which in many cases is a flat-out lie. Savvy
users will know to truncate or round, but others will be led astray.

2\. It takes more space to represent. If the engineers storing
double-precision floating points had asked themselves whether they needed to
be sending these out at such a high resolution, they might have been able to
cut the storage of these things down by 50%.

Though yes, there are probably bigger issues to worry about. Doesn't mean we
should outright dismiss these points.

------
alkonaut
The presentation is just lazy and javascript defaults to double. No mystery.

You can use digits to indicate accuracy but if you don’t know whether the
measurement stored as a double is accurate to 2, 4 or 12 decimals then you
can’t really do much in presentation. The reader has to interpret the number.
Usually the reader doesn’t care much about accuracy and simply copies the
numbers to another system. The coordinates at full double precision are then
basically a machine-readable format.

------
ellisv
> Popping those long coordinates into Google Earth, it looks like the
> georeferencing puts the point somewhere on ILI's back wall

Perhaps the Google Earth image is "wrong". Stitching aerial and satellite
images together and aligning every pixel to the exact geo-location isn't as
simple as it sounds -- especially given that the ground shifts over time.

~~~
ISL
And that the aerial images have perspective and optical distortion to
deconvolve.

------
rurban
I can explain this. Internally, all those coordinates are stored as double
(not long double). The precision of a double is at most 15-17 digits. Usually
it's far less precise than that, but it can't be too imprecise, because tiny
errors can lead to very costly mistakes. My biggest mistake cost ~40,000:
wrong after the third digit, about 30 cm. You might remember the financial
stories on micro-differences with improper arithmetic. That's why one of the
most important lessons I taught my students was precision and its limits,
especially when zooming in.

CAD/GIS systems round to the default vsprintf %f precision, which is 6. They
could use a zoom-dependent number of digits, but nobody does that; they would
laugh at you. It's not a delusion, it's industry practice. If you have better
precision, you stick to it.

------
parliament32
lat/lon kinda sucks because of the different notations (degrees with
decimals? degrees and minutes with decimals? degrees, minutes, and seconds
with decimals?) and because 0.1 degrees covers a different amount of distance
depending on where you are on the earth.

UTM is better in every way, and already used on most modern hiking and topo
maps.

[https://en.wikipedia.org/wiki/Universal_Transverse_Mercator_...](https://en.wikipedia.org/wiki/Universal_Transverse_Mercator_coordinate_system)
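
For instance, with pyproj (my choice of tooling; Melbourne falls in UTM zone
55 south, EPSG:32755):

    from pyproj import Transformer

    to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32755", always_xy=True)
    easting, northing = to_utm.transform(144.9659498, -37.80467681)
    print(f"{easting:.1f} E, {northing:.1f} N")  # metres everywhere in the zone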

~~~
emergie
It looks like a regular grid, but they made a special irregular cell (32V) to
cover the land part of Norway. For me this places UTM in "not reasonable"
territory.

------
donarb
I've been working with AWS Textract, which scans image files and returns the
coordinates of the scanned text. The coordinates are percentages of the
width/height of the image, where one edge is 0.0 and the opposite edge is
1.0. The software returns numbers like "left = 0.3140530586242676", which
means that the text's left coordinate is 31% from the edge. This is sub-pixel
precision for 150 dpi documents.

------
danmg
Users don't usually care about raw lat/lon. If you want to display the values
approximately, so you can tell them apart in some kind of logging output, do
something like "(%+07.4f,%+07.4f)".
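
In Python's printf-style formatting, for example, that specifier gives:

    lat, lon = -37.80467681, 144.9659498
    # %+07.4f: force a sign, at least 7 characters wide, 4 decimal places
    print("(%+07.4f,%+07.4f)" % (lat, lon))  # (-37.8047,+144.9659)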

------
bytematic
What about future GIS programs that can handle these digits? Should we not
futureproof anything that doesn't exactly utilize them today?

~~~
thelittlenag
What would future proofing even look like?

For me I think we need to move away from thinking first/primarily in terms of
coordinates. Yes, I know that sounds silly but hear me out.

Say someone is going out to meet a friend at a restaurant they haven't been to
before, or perhaps they don't exactly recall the directions they took the last
time -- how do they find directions? The first thing they most definitely do
not do is say "hey maps app, tell me how to get to -37.80467681 144.9659498".

There is a level of indirection we go through via the names of businesses,
places, etc. So we ask, "tell me how to get to I Love Istanbul". And we hop
through a number of steps mapping that unstructured name to some coordinate.

But what if we used structured names instead of unstructured "names"? Then
those names could map to a coordinate, or a street address, or, well,
anything. What's more, we could attach metadata to the name.

This is what DNS does. Humans know the names. They are structured according to
some simple to follow rules. And computers take care of mapping those names to
IPs, as well as other metadata.

Why not do exactly the same, but with locations? Re-use the same system but
map to a coordinate? Or a region? Or whatever you need for your use case.

If you did this you'd now have an L-NS instead of a D-NS. That's exactly the
insight we've had at the startup I'm working at.

[1] [https://www.qalocate.com/resources/](https://www.qalocate.com/resources/)

------
ivanhoe
Does anyone here know the actual precision we can get with the available
equipment?

~~~
urschrei
Survey-grade GPS (with CORS and WAAS corrections) using RTK will give you
accuracy to ~10 cm within a few seconds. You can get that down to several
millimetres if you can tolerate post-processing several hours' worth of
static data.

~~~
ISL
Is accuracy at that level actually possible with GPS? Precision seems likely,
but it is substantially more difficult to provide absolute accuracy at that
level.

~~~
scblock
Yes, but not on its own, as far as I'm aware. High-accuracy GPS is based on a
combination of the GPS satellite data itself with real-time or post-processed
positional correction using reference stations, which themselves are placed
in known, surveyed locations.

In the US, 60 cm accuracy is often possible with nothing more than WAAS
corrections. 10 or even 1 cm accuracy is often possible in real time with
access to an RTK network.

------
itroot
Ugh. It's better to use geohashes almost everywhere. They solve this and many
other problems (lat/lon ordering, etc.).
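
For reference, the standard geohash algorithm is short enough to sketch from
scratch (alternately bisect longitude and latitude, emitting 5 bits per
base-32 character):

    BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"

    def geohash(lat: float, lon: float, length: int = 9) -> str:
        lat_lo, lat_hi = -90.0, 90.0
        lon_lo, lon_hi = -180.0, 180.0
        code, even, nbits, ch = [], True, 0, 0
        while len(code) < length:
            if even:  # even bits refine longitude
                mid = (lon_lo + lon_hi) / 2
                ch = (ch << 1) | (lon >= mid)
                lon_lo, lon_hi = (mid, lon_hi) if lon >= mid else (lon_lo, mid)
            else:     # odd bits refine latitude
                mid = (lat_lo + lat_hi) / 2
                ch = (ch << 1) | (lat >= mid)
                lat_lo, lat_hi = (mid, lat_hi) if lat >= mid else (lat_lo, mid)
            even = not even
            nbits += 1
            if nbits == 5:  # 5 bits per base-32 character
                code.append(BASE32[ch])
                nbits, ch = 0, 0
        return "".join(code)

    print(geohash(-37.80467681, 144.9659498))  # "r1r...", a cell a few metres wide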

------
cojxd
When you completely run out of things to complain about.

~~~
thedanbob
That's pretty much what I was thinking as well. It's funny and an enjoyable
read, but at the same time it's a complete non-issue. Unless you have some
application that chokes on that many decimal places AND can't do any sort of
input sanitization, why worry about it?

------
michael_fine
Edit: This is what I get for only skimming the article. Ignore

~~~
kbutler
While obxkcd is usually good for a few upvotes, you're probably losing karma
because the article included this link.

------
knolax
As far as I know, sig figs aren't a primitive type in most languages, so the
author's point is moot.

