Would it be possible to extend geohash to define a polygon in a compact manner? e.g. ['9bc1db2', '9bc1db6', '9bc1db1', '9bc1db0', '9bc1db4', '9bc1db9', '9bc1db8', '9bc1dbd', '9bc1db3'] has lots of repetition. Might be nice to have a geopolyhash or something that can compress this...
e.g. maybe have a cbor structure that encodes this? Then maybe encode as base32 or base64 or whatever. If we can get a spec going then maybe we could also get a CBOR semantic tag if it doesn't exist already.
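As a back-of-the-envelope sketch of how much the repetition buys you: most of the win is just factoring out the shared prefix. Rough Python (the helper names are mine, not from any spec):

```python
import os

def compress_geohashes(hashes):
    """Factor the longest shared prefix out of a list of geohash cells."""
    prefix = os.path.commonprefix(hashes)
    return prefix, [h[len(prefix):] for h in hashes]

def decompress_geohashes(prefix, suffixes):
    """Reconstruct the original cell list."""
    return [prefix + s for s in suffixes]

cells = ['9bc1db2', '9bc1db6', '9bc1db1', '9bc1db0', '9bc1db4',
         '9bc1db9', '9bc1db8', '9bc1dbd', '9bc1db3']
prefix, suffixes = compress_geohashes(cells)
# prefix is '9bc1db'; each suffix is a single character, so the
# 63-character list shrinks to a 6-char prefix plus 9 chars.
assert decompress_geohashes(prefix, suffixes) == cells
```

A real spec would presumably handle cells at mixed precisions and pick the prefix per-group, but even this naive version shows the shape of the savings.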
H3 hexagons by Uber solve a similar problem with better characteristics: when people talk about locations, they typically also want distances between locations. Because H3 cells are hexagonal (pretty much), all immediate neighbors are the same distance away, which makes traversal and distance calculations easier. This is not the case with squares.
> In some geographical information systems and Big Data spatial databases, a Hilbert curve based indexation can be used as an alternative to Z-order curve, like in the S2 Geometry library.[12]
Interesting to note that this S2 library is by Eric Veach, the guy who basically invented modern Monte Carlo path tracing, besides AdSense and AdWords: https://en.wikipedia.org/wiki/Eric_Veach
> [Geohash] subdivides space into buckets of grid shape, which is one of the many applications of what is known as a [...] space-filling curves.
> Geohashes offer properties like arbitrary precision and the possibility of gradually removing characters from the end of the code to reduce its size (and gradually lose precision).
This is really cool and obvious reading it, but I somehow never thought about it that way. Are space-filling curves used in compression?
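The truncation property quoted above falls straight out of how a geohash is built: longitude and latitude bits are interleaved, then packed five at a time into base32 characters, so dropping trailing characters just widens the bounding box. A minimal encoder sketch to make that concrete:

```python
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash's base32 alphabet

def geohash_encode(lat, lon, length=12):
    """Encode lat/lon by bisecting the ranges, interleaving bits
    (longitude first), and packing 5 bits per base32 character."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    bits = []
    even = True  # even bit positions carry longitude
    while len(bits) < length * 5:
        if even:
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                bits.append(1); lon_lo = mid
            else:
                bits.append(0); lon_hi = mid
        else:
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                bits.append(1); lat_lo = mid
            else:
                bits.append(0); lat_hi = mid
        even = not even
    return "".join(BASE32[int("".join(map(str, bits[i:i + 5])), 2)]
                   for i in range(0, len(bits), 5))

# The classic example point near Jutland, Denmark:
full = geohash_encode(57.64911, 10.40744, 11)   # 'u4pruydqqvj'
# Encoding at lower precision just truncates -- no re-encoding needed:
assert geohash_encode(57.64911, 10.40744, 6) == full[:6]
```

Not used for compression directly, as the sibling comment says, but the prefix-equals-containment property is what makes geohashes so handy for range queries in plain B-tree indexes.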
Any regular space decomposition can be described in terms of a space-filling curve. However, there is little benefit in doing so in practice so you rarely see it expressed that way. They were briefly used by major database vendors 30+ years ago for indexing on the theory that it could mitigate head seek latency on spinning disk, hence the use of Hilbert curves, but were quickly abandoned without much fanfare because they have a lot of issues for this use case. If you were going to use them today, not that I would recommend it, Morton curves are generally optimal.
While space-filling curves are not useful for compression per se, they are commonly used in some data compression algorithms to convert multi-dimensional data structures (e.g. rasters) into a 1-dimensional byte stream for the compressor.
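For what it's worth, the Morton (Z-order) mapping mentioned above is nothing more than bit interleaving, which is also exactly what geohash does internally. A quick sketch:

```python
def morton_interleave(x, y, bits=16):
    """Map (x, y) to a Z-order index by interleaving the low `bits`
    of each coordinate: x bits land in even positions, y bits in odd."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

assert morton_interleave(3, 0) == 0b000101  # x bits in even slots
assert morton_interleave(0, 3) == 0b001010  # y bits in odd slots
assert morton_interleave(3, 3) == 0b001111  # interleaved together
```

Cells that are close in 2-D tend to share high-order bits of the index, which is the whole point of linearizing rasters this way before handing them to a compressor.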
Yeah geohashing is cool. It takes something like 12 steps to go from the world node to the location you’re standing on. You can cache the city and county nodes for a performance bump. The hash tells you your position in the partition too.
Anyone interested in this sort of thing should check out google's S2 projection, which is a space filling curve over the surface of a cube, where the orientation of the cube was chosen such to put the most problematic points out over the ocean. This has some nice advantages over just using a mapping from lat/long to a single square curve.
What3Words' bullshit pissed me off enough that I fooled around with an open system based on the EFF's word lists some years back. If you combine the lists you can address any place on earth with 2-meter precision using 4 words, or 40-meter precision with 3 words.
I was just curious about how the numbers worked out so I never coded up anything meaningful to share, but I think it's an interesting idea. I also don't wanna get into any IP nonsense with people as obviously scummy and connected as W3W.
Compared to geohash the word granularity makes addressing rectangles vs points much more "chunky" but I think that's a tolerable tradeoff for a lot of applications.
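The word counts claimed above do check out. A back-of-the-envelope verification using the EFF large wordlist (7,776 entries, i.e. 6^5 diceware rolls) against the Earth's surface area:

```python
import math

EFF_LARGE_WORDS = 7776                       # 6**5 entries in the EFF large list
earth_area_m2 = 4 * math.pi * 6_371_000**2   # ~5.1e14 m^2 (mean-radius sphere)

cells_2m = earth_area_m2 / (2 * 2)      # ~1.3e14 cells of 2 m x 2 m
cells_40m = earth_area_m2 / (40 * 40)   # ~3.2e11 cells of 40 m x 40 m

assert EFF_LARGE_WORDS ** 4 > cells_2m   # 4 words: ~3.7e15 addresses, plenty
assert EFF_LARGE_WORDS ** 3 < cells_2m   # 3 words can't cover 2 m cells...
assert EFF_LARGE_WORDS ** 3 > cells_40m  # ...but do cover 40 m cells
```

In other words there's even a bit of headroom at 4 words, which a real scheme could spend on checksumming or on avoiding confusable word pairs.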
I love the idea of geohashing systems, but brevity is the wrong goal IMO. Systems like what3words.com (while they have their own problems with openness) follow the CorrectHorseBatteryStaple [0] model—they're longer, but more memorable because they tap into your imagination. A string like "u4uyegiu" is maybe more appealing than two floating point numbers, but lacks just as much in memorability and ease of communication.
What3Words is garbage from craven rent-seeking scammers. You can design a very similar system using the S2 projection and EFF word lists to address any point on earth with 2 m resolution using 4 words.
W3W's entire business model is to make the mapping opaque so that you have to pay them for lookups. Even worse, it means they intentionally destroyed any hierarchical locality in support of this. That's absurd.
This idea could be an RFC anyone can implement for free. Even more craven: they successfully suckered a number of emergency services into paying for W3W.
Same. I used to actually do this back in 2009, and I actually encountered other people doing it in the wild. Closest thing to Pokemon Go from that era.
{
  "origin": "9bc1db2",
  "vertices": [
    {"deltaLat":  0.0001, "deltaLon": -0.0002},
    {"deltaLat":  0.0003, "deltaLon": -0.0001},
    {"deltaLat": -0.0002, "deltaLon":  0.0004},
    {"deltaLat": -0.0006, "deltaLon":  0.0003},
    {"deltaLat": -0.0003, "deltaLon": -0.0001},
    {"deltaLat": -0.0004, "deltaLon":  0.0002},
    {"deltaLat":  0.0002, "deltaLon":  0.0001},
    {"deltaLat":  0.0004, "deltaLon": -0.0004},
    {"deltaLat": -0.0001, "deltaLon":  0.0003}
  ]
}
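Assuming each delta in a structure like that is relative to the previous vertex (the structure doesn't actually say; that's my reading), reconstruction is just a running sum. A sketch, with made-up origin coordinates:

```python
def decode_vertices(origin_lat, origin_lon, deltas):
    """Rebuild absolute vertex coordinates from a chain of
    (deltaLat, deltaLon) offsets, each relative to the previous vertex."""
    lat, lon = origin_lat, origin_lon
    vertices = []
    for d in deltas:
        lat = round(lat + d["deltaLat"], 7)  # round to dodge float drift
        lon = round(lon + d["deltaLon"], 7)
        vertices.append((lat, lon))
    return vertices

verts = decode_vertices(37.77, -122.41,
                        [{"deltaLat": 0.0001, "deltaLon": -0.0002},
                         {"deltaLat": 0.0003, "deltaLon": -0.0001}])
assert verts == [(37.7701, -122.4102), (37.7704, -122.4103)]
```

For the actual wire format you'd serialize the dict with a CBOR library (e.g. the third-party cbor2 package) and then base32/base64 the bytes, as the parent suggests; fixed-point integer deltas would compress better than floats in CBOR.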