I highly recommend the book The Tyranny of Metrics by Jerry Muller for an in-depth look at these kinds of issues (in addition to Taleb's work). Mistaking the map for the terrain (or any similar metaphor), whereby you think you know how things work solely through the lens of easily quantifiable (and easily gamed) metrics, seems to be the mistake of this era. The book really opened my eyes to this problem everywhere (including software), and it only seems to be getting worse.
Financial markets have no biological reality to tie them down
After years of reading various things trying to use past financial data to predict the next depression, I read a thing where a guy said that people get lazy and self-indulgent during good times and then work harder during bad times, and that explained the ups and downs. I'm sure it's more complicated than that, but I stopped trying to find a model that used past financial ups and downs to predict the future. It's nonsense.
There can be real world bits that are useful, like the Peak Oil model which is based on something real and has a real world proven track record. But lots of financial models are in the territory of a con game.
I have a certificate in GIS, which involves literally studying maps. Maps have huge inherent issues if only because land is 3D and part of an imperfect globe and maps are 2D -- a flat drawing trying to unfold the surface of a ball and say something useful about it.
Making good literal maps can be quite hard. I have a longstanding interest in award-winning graphics of various sorts because graphics are information-dense, and when they get it right, it's incredible. But maps often say more about the mind that created them than the physical landscape per se, and it's a huge mistake to fail to recognize this fact.
Rather than lazy/work-hard, it might be more speculate/save. In a bull market you are trying to throw more money in; in a bear you are trying to save. It is self-reinforcing, too.
An interesting, related thing I’ve noticed is in many learn to program tutorials and exercises there’s a failure to be explicit about when we are creating a model, and that modeling is a skill, and that skilled model-builders first and foremost create a model to solve a problem. I’ve seen too many cases where students are left on their own to flounder about deciding if cars should be composed of four wheels and an engine, and what about the doors? etc.
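A concrete illustration of that point, as a hypothetical sketch (the toll-booth problem, the `Vehicle` class, and the fee numbers are all invented here, not from any particular tutorial): a skilled model-builder starts from the problem, so the model only carries the attributes the problem actually needs, and questions about doors and engines never arise.

```python
from dataclasses import dataclass

# Hypothetical example: modeling a vehicle for a toll calculator.
# The problem only cares about axle count and weight, so the model
# deliberately omits doors, engines, and wheels.

@dataclass
class Vehicle:
    axles: int
    weight_kg: float

def toll(v: Vehicle, per_axle: float = 2.50, heavy_surcharge: float = 5.00) -> float:
    """Toll is a flat rate per axle, plus a surcharge for heavy vehicles."""
    fee = v.axles * per_axle
    if v.weight_kg > 3500:
        fee += heavy_surcharge
    return fee

print(toll(Vehicle(axles=2, weight_kg=1200)))  # prints 5.0
print(toll(Vehicle(axles=3, weight_kg=8000)))  # prints 12.5
```

If the problem later changes (say, tolling by emissions class), you extend the model then, rather than guessing up front which parts of a "real" car matter.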
I've been trying to find the name of this film for ages, but it's a scientist explaining taxonomy by lining up a bunch of pencils sharpened to different heights in order of height and then arbitrarily dividing them into sections.
Very much "the divisions are important but also decided on by some random guy, so keep that in mind."
OOP can be a minefield in this sense, yeah. It needs to be taught correctly, and even then, it brings the problem domain into the solution, which makes it difficult to reuse.
Very surprised not to see Borges mentioned once; he addressed this question in the short story On Exactitude in Science by asking what would happen if the map were as large as the territory (and likely borrowed that concept from Lewis Carroll). In fact, it's short enough to fit in an HN comment.
…In that Empire, the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province. In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it. The following Generations, who were not so fond of the Study of Cartography as their Forebears had been, saw that that vast Map was Useless, and not without some Pitilessness was it, that they delivered it up to the Inclemencies of Sun and Winters. In the Deserts of the West, still today, there are Tattered Ruins of that Map, inhabited by Animals and Beggars; in all the Land there is no other Relic of the Disciplines of Geography.
I like Chapman's piece on the subject -- "Maps, the territory, and meta-rationality" -- which is a deeper examination of what this classic saying really means:
I think this is an important idea, not new but probably not widespread or well understood enough.
One specific map that I think can lead to some reasoning errors is the way we abstract numbers.
Our most common and widely used abstraction of numbers has infinity at both ends, and this is a perfectly valid, practical, and coherent way to represent real numbers, as long as we stay in the realm of the abstraction.
But real things like particles or planets might not be perfectly abstracted by this model.
Reasoning about real things with this model in mind (especially infinity) can lead to weird conclusions, like the 100% likelihood that we live in a simulation.
The extension of 'the map is not the terrain' is 'the simulation is not the universe'.
I'm not sure if that's an exact example of what you're saying, but, for me, if any conclusion is that we're living in a simulation, then it's simulations all the way down.
Or, we're living in a simulation because we understand and process the universe and our experiences through the filter of our brains, and our brains are, to some degree, an abstraction engine. Our physical bodies exist in the terrain, but the entirety of our conscious lives is lived through the map.
>(Another thing holding the company back was simply its base odds: Can you name a retailer of great significance that has lost its position in the world and come back?)
This seems significant. If you look at what's happened to most anchor department store retail at malls in the time since, it isn't pretty. You can look at the individual stores (Sears, Macy's, etc.) and find individual culprits to blame. But there's a reasonable point to be made that no one was in a position to turn JC Penney around, even if they could have eked out a bit more money for shareholders and debt holders.
Since you can sell a retailer brand to an entirely new company, it should be possible for anything to come back as long as you don’t count it actually being a different thing.
Target, Kmart, Woolworths are all totally different and healthy companies in Australia.
> If a map were to represent the territory with perfect fidelity, it would no longer be a reduction and thus would no longer be useful to us.
if a map perfectly represented the territory it would be very useful to everyone mentioned in this article. with a perfect representation of the territory, you can just simulate different strategies and deploy the best one. no need for risk management: your perfect map allows you to eliminate all risk.
a perfect map might not be possible if you’re an embedded actor, but that doesn’t mean one shouldn’t pursue the best possible map. the rest of the article is about recognizing flaws in your map. and guess what: when you identify a shortcoming in your map — which the author does and recommends others do — that’s identical in an information sense to just building a more detailed map.
> improbable and consequential events seem to happen far more often than they should based on naive statistics.
the author has quantified some thing (“consequential events”) and then stated that this thing occurs within some data set more frequently than would be consistent with that very dataset. i get what he’s trying to say, but when he phrases it this way it’s just a simple contradiction with an easy way out: build better maps.
so, yes: the map is not the territory. if you build a map without complete knowledge of the territory (which is the majority of maps), then it has unknowable error bars. but maps are unavoidable: you can either explicitly follow a map, or implicitly follow one. Warren Buffett uses a map when making sense of the world. is it good, or bad, that the map he follows is accessible to only a single mind and has not been digitized and shared more widely? the biggest case for ditching digitized/formalized maps is that they let you retain more hidden information, which is the basis for gaining an edge in financial markets. but the author didn't really argue the futility of maps based on embedded actors; it was mostly an argument that too many people are engaged in map-making without first understanding the boundaries of the territory. and that's no argument that informal maps are intrinsically superior to explicit maps.
> with a perfect representation of the territory, you can just simulate different strategies and deploy the best one. no need for risk management: your perfect map allows you to eliminate all risk
Such a perfect map is impossible because it would need to have infinite accuracy, and its consequences would not be computable.
> if you build a map without complete knowledge of the territory (which is the majority of maps)
Which is all maps. Complete knowledge of the territory is impossible.
Even if the map is infinitely perfect, your understanding of it is imperfect. Your mental model of the map is the actual map you follow, so even if the map itself is perfect, the map you follow is imperfect, because you are.
all maps of a physical territory, yes. that’s a consequence of ħ, at the very least. but once you’ve encoded this best of all possible maps, the uncertainties in the map are quantified (like probabilities). you no longer have “unknown unknowns” so you can just carry through all the PDFs and create a true risk assessment. and it’s these unknown unknowns which were, AFAICT, the heart of this article.
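To make that concrete, here's a toy Monte Carlo sketch (the normal-returns model, the function name, and all the numbers are assumptions for illustration, not a claim about real markets): once the map's uncertainties are encoded as distributions, you can push them through a strategy and read off percentile-based risk figures, with no unknown unknowns left inside the model.

```python
import random

# Toy sketch: propagate a quantified uncertainty (yearly returns drawn
# from an assumed normal distribution) through a buy-and-hold strategy,
# then read a value-at-risk-style figure off the resulting distribution.

random.seed(0)

def simulate_final_value(n_trials=20_000, start=100.0, years=10,
                         mean=0.05, stdev=0.15):
    """Return a list of final portfolio values, one per simulated path."""
    finals = []
    for _ in range(n_trials):
        value = start
        for _ in range(years):
            value *= 1 + random.gauss(mean, stdev)
        finals.append(value)
    return finals

finals = sorted(simulate_final_value())
var_95 = finals[int(0.05 * len(finals))]  # 5th percentile of outcomes
print(f"median outcome: {finals[len(finals) // 2]:.1f}")
print(f"5% worst-case threshold: {var_95:.1f}")
```

The catch, as the replies point out, is that the chosen distribution is itself part of the map: the risk numbers are only as good as the assumption that returns actually look like that.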
> all maps of a physical territory, yes. that’s a consequence of ħ, at the very least.
Not really. If it's a consequence of anything, it's a consequence of relativity and the finite speed of light: you literally cannot know all of the information you would need to construct the "perfect map" you describe, because you only have access to information from a limited portion of the universe.
Quantum uncertainty just makes it worse, since even in the finite portion of the universe you have access to, you cannot know the exact state.
> once you’ve encoded this best of all possible maps
There is no such thing. Even leaving aside the relativity issue I raised above, non-commuting quantum observables are incompatible, so there is no single "best possible map" taking quantum uncertainty into account.
> you no longer have “unknown unknowns”
This is impossible. First, information outside your past light cone is always "unknown unknowns". Second, even if we limit attention to the data in our past light cone, someone can always measure a quantum observable that doesn't commute with the ones you have data on and invalidate your current model.
You can’t perfectly calculate the future of any situation ahead of time if you’re part of it, because it becomes chaotic.
This is part of the "computers are going to become AGI and enslave us" scam: they show people proofs that AIXI is an optimal planner and don't mention that it excludes itself from its plans in order to make them computable.
If it was possible you could beat the stock market.
You're aware that the AI folks aren't trying to build AIXI, right? I find it hard to believe you'd trash a theoretical infinite-compute chess model this hard for being silly; it's a useful model as a boundary case.
There are some natural laws in this area. You can't win infinite money forever off one piece of good information; it's going to get priced in, or the other traders are going to get wiped out.
This essay mentions the work of Alfred Korzybski. The book containing his life's work is called Science and Sanity. I've read about half of it, but I found it to be very challenging to read. For anyone interested in what he has to say, I would recommend the following:
I actually read the book they produced about mental models. It was interesting, but I wish there were games or exercises that would let me actually practice the models it taught.
Territory? The map is not the land. Or the map is not the real world. Just as any model is just that: a model, not the real thing. And as the saying goes, all models are wrong, but some are useful. And maps are certainly useful!