Suppose I wanted to kill a lot of pilots (historyofyesterday.com)
374 points by stanrivers on July 3, 2021 | 169 comments



I first heard about this in the context of software project estimation.

The author (whose name I've forgotten) made the point that nobody has permission to think about disasters when estimating project completion time.

So why not explicitly ask them to? "So now we have our estimate, what would cause us to miss it?"

And in the course of discussing giant monster attacks, you give people permission to talk about the specific scenarios that are actually very likely to derail the schedule.

It was an interesting observation, versus the "just multiply the everything-goes-right estimate by X" method.

If everyone else is being positive, no one wants to be the lone (realistic) downer.


Spot on. They used to call me 'Dr. No' because that is exactly what I would do, but it saved us tons of money so it was tolerated. We also actually delivered on time and within the budget but I'm sure it cost us contracts to more optimistic competitors.


This. Most engineers delude themselves and plan for the happy path.

I was famous, too, for "being negative." Once, during the planning of a complex project, a rather dull engineer, right after I had pointed out a major potential problem, asked, "Why are you so negative?"

"I'm not negative, I'm planning for success."


I've seen a lot of engineers be negative in a destructive way. They tear down ideas, but fail to offer solutions. Usually this is about their ego rather than a desire to help the team.


I agree that's a dark pattern and something I hope I have never been (I realize you weren't accusing me of that).

I try to ask questions like, "What happens when X goes down? What happens when network latency goes up?"

I quit a job over this one. I had designed an API for a consumer IP/IR control type thing. You say, "I want to tell that thing over there to turn on." The API does the thing, and then exits. The API does retain state -- but only insofar as it wakes up when it gets a packet from a device, parses it, and makes any internal state changes.

Well, management decided they wanted to demo it running continuously for days at CES. Now, if you've never done a demo at CES -- it is the worst environment possible. Networks go up and down, and there is so much radio traffic that WiFi, BT, anything wireless is unreliable.

I told them I would need to harden the API, that it wasn't designed for that scenario, and that most of our tests didn't last more than a few seconds. Keep in mind that at this time this was a skunkworks sort of thing that had not yet been productized. Also keep in mind there was an aggressive, aggressive development schedule.

They predictably lost their minds. I know, it's crazy, right? Test for the exact scenario you plan to show to customers? They forbade me from doing any sort of test like that and charged ahead with the demo. A month later a manager dressed me down in front of my entire team -- about the bug they had forbidden me to fix.

I walked out and never went back.

EDIT: That turned out to be more of a story about ruthless management and constructive dismissal.


My approach was to allocate 2 weeks for even the most basic things because unknowns always creep in and a tighter schedule tends to make those things creep in even more. Also, many of the requests had no particular urgency to begin with.


On the flip side, I'm sure you retained more old business. And employees: trudging through pre-doomed projects was a major root cause of talent and motivation drain at most B2B places I've worked.


I do exactly the same thing - it's all part of the project and system analysis and it helps you neatly sidestep potential pitfalls.

It reminds me of the whole "hero developer" or "hero team" myth; teams who do not do proper analysis, build a turd, but work insane hours stopping the turd from falling to pieces.

The people who do that come out looking like heroes, when they could have totally avoided the drama in the first place.


My inner nihilist thinks the more "optimistic" competitors might be making more money.


It can depend on accountability mechanisms. I would like to see more contracts that give bonuses to companies that come in under budget and under schedule and penalize the overly optimistic ones that never seem to hit their target. This is becoming more common in some domains.


What domains is it becoming more common in, in your experience?


My contract manufacturing company does this. Our standard contract has a 5% bonus for being less than 10% late on the delivery date (and penalties start around 25% late). After suppliers see the contract they will often revise their originally quoted schedule.


The predominant domain to use this type of contract is infrastructure construction. I haven't personally seen it used in software development outside of control systems, but I can't immediately think of reasons why it couldn't be extended to other domains as well.


As Goldratt says in “Beyond the Goal”: Any project management methodology that doesn’t account for Murphy’s Law is not a realistic methodology.


You need a distinct pre-mortem where everybody brings a list of the ways a project could fail.

https://en.wikipedia.org/wiki/Pre-mortem


I just deployed something worldwide that took about six weeks to develop when our initial guess was about two weeks, so this is fresh in my mind. I was able to lay out a list of work items at the beginning that stayed something like 80% the same from beginning to end, but I didn't account for the possibility that each one of these could be push-button, or could lead to a blocking problem that needed a day or two of research to resolve. Based on that, I'm thinking the way to approach the next one of these is to lay out the roadmap, but assume that something like half of the trivial steps are going to turn into complex problems that stop the train for a day or two, or lead to a change in the design.


The best framing I've heard for this problem is: minimum time for each component is bounded, maximum time is unbounded. I.e. there is effectively no answer to the question "If this component becomes a problem, what is the maximum amount of time it could take to solve it with the resources available?"

Ergo, in the worst case, any given single component's ballooning time can dominate the overall project schedule.

Which turns estimation into a game of "How certain am I that no component's time will explode?" To which the answer in any sufficiently complex system is "Not very."

I'm pushing my work to move to something more like a converging uncertainty plot, as milestones are achieved and we can definitely say "This specific component did not explode."

Our PMs aren't used to hearing an idealized minimum schedule time + an ever decreasing uncertainty percentage based on project progress, but it feels like less of a lie than lines in the sand based on guesses.

(Note: This is for legacy integration development, which probably has more uncertainties than other dev)
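
A minimal sketch of that converging-uncertainty idea (the lognormal distribution and all the numbers here are my assumptions, just to make the shape of the argument concrete):

  import random

  def estimate(n_remaining, done_days=0.0, base=2.0, sigma=1.0, trials=20000):
      # Total time = days already banked + a heavy-tailed draw per open task.
      totals = sorted(
          done_days + sum(base * random.lognormvariate(0.0, sigma)
                          for _ in range(n_remaining))
          for _ in range(trials)
      )
      return totals[trials // 2], totals[int(trials * 0.9)]  # p50, p90

  # Uncertainty converges as milestones land and stop being random
  # (pretend each finished task took roughly its base time):
  for done in (0, 5, 9):
      p50, p90 = estimate(10 - done, done_days=done * 2.0)
      print(f"{done} tasks done: p50={p50:.1f}d  p90={p90:.1f}d  spread={p90 - p50:.1f}d")

The p90/p50 gap is the honest answer to "how certain are we?", and it only narrows as components stop being random.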


One interesting thing to also account for is the correlation between activities regarding cost or schedule impacts. Meaning the uncertainty analysis should also account for systemic effects, where the ballooning of one item's schedule will also cause another item's to increase by some correlated amount.
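
One crude way to sketch that systemic effect (the shared-shock model here is my own assumption, not something the parent proposed) is to blend a project-wide "trouble" factor into every task instead of drawing each task independently:

  import random

  def project_days(n_tasks=10, base=2.0, rho=0.6):
      # rho controls how strongly every task tracks one shared shock.
      shared = random.lognormvariate(0.0, 1.0)
      return sum(
          base * (rho * shared + (1.0 - rho) * random.lognormvariate(0.0, 1.0))
          for _ in range(n_tasks)
      )

  samples = sorted(project_days() for _ in range(20000))
  print("p50:", round(samples[10000], 1), "days;  p90:", round(samples[18000], 1), "days")

With rho > 0 the p90 sits noticeably further from the p50 than with independent draws, because when one item balloons, the others tend to balloon with it.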


I do remember doing a risk assessment as part of a proposal for a client. They pushed back hard on a number of points and got the sales person to remove them.

The funny thing is the risks that were removed all occurred during the project. We should have stood our ground.


> So why not explicitly ask them to? "So now we have our estimate, what would cause us to miss it?"

I didn't use to take this seriously. Then one project I was on was delayed by several weeks when members of the team got jury duty, got injured, and had other medical emergencies in a rather short period.

We still got it done, but I got a lesson that month in planning.


> I got a lesson that month in planning

What a great perspective. Many would turn to self-pity, you've turned to learning.


> nobody has permission to think about disasters when estimating project completion time

I've been a part of a few large-scale system implementations, and we documented every significant risk to the project at the beginning, and as we went along if new ones presented themselves. This seemed to be a standard part of the PM's playbook, and I used it to save critical organizational capabilities when things collapsed:

--------------------------

I raised a risk on a must-not-fail deadline (an Oracle product); it actually failed and was the straw that broke the camel's back and brought the project down. The PMs didn't appreciate the risk. There's still probably an Oracle VP out there who really doesn't like me for using the established project protocols, which allowed me to escalate the issue to the president where I worked -- way above the VP at Oracle. On my own initiative I kept the legacy system -- supposedly partly decommissioned -- updated, just in case, so we could fail over softly. This was unexpected: when failure became obvious, I reminded the VP that I had raised the original risk and then prepared for it. I think their plan had been to box us in to meet their increased $ demands.

The failure was part of another missed deadline tied to a major milestone 25% payment on the contract, and since it was a fixed-price contract, they should have absorbed the costs.

Instead, when the failure progressed to my area, we all showed up to work one day and everything was gone. No contractors, nothing was left -- as if a cleaning crew had cleared the place out. A week of negotiations failed when they wouldn't budge from demands for extra $$millions to keep working on the project, on top of their missed 25% milestone payment.

Ultimately I lost a year's worth of work when things entered lawsuit land and the two sides sued each other for breach of contract. (We won -- or settled, at least. The fixed-price contract was clear, and the missed deadlines were partly due to demos of fake functionality that didn't actually exist, or components they knew would be EOL'ed after signing the contract and before implementation would begin. The case for outright fraud was pretty strong: we had videos of the demo.)

In case you're wondering, I don't like Oracle. Though with a different product we were still forced to use Oracle DBs. Those I actually don't mind; they were a huge step up from the ~2000 MS SQL Server and the much older DEC VMS file-based database that wasn't SQL-compliant or relational.


I’ve struggled with Out of Life support from almost every database vendor. They create something that is very hard to change, upgrade or rip out, and then require you to upgrade it every few years. Lots of hidden costs for the buyer. They’re generally in a better negotiating position than understaffed IT departments.


Out of Life support was a huge issue in the case I outlined above. I kept things ticking over on my side, but there were no more updates from the original vendor except for bespoke work we had to pay them for, to complete annual updates for federal compliance issues. Actually, we pooled together with a few organizations in the same boat to do that, until we rebooted the failed project.

To give the legacy vendor credit, though: it was a legacy product 5 years past its "final" EOL, and they kept honoring the maintenance agreement and providing ToS updates for a long time. In terms of the database itself, it probably helped that it wasn't their own: it was native to OpenVMS and hadn't substantially changed in at least a decade. Ultimately that made data migration a bit easier, since industry tools for migrating from VMS systems had reached maximum maturity by the time we got around to it.

I still have a soft spot for that old system, though: it lacked any sort of modern functionality newer than about 1995, and the underlying application has its roots in the '60s. But it was fast, and I had low-level access to do some complex things much more easily than in the upgraded system (installed about 6 years ago). You won't get much faster than a well-tuned decades-old system running on modern hardware, at least not unless you need something that can handle medium-to-big data.


I once worked on a team that did something along these lines and referred to it as a “pre-mortem.” We basically held a meeting where we imagined the project in its concluded state and discussed what “went wrong.”


PI Planning in SAFe explicitly calls for this. Risks to the plan are called out in front of everyone and each is discussed to see if it can be mitigated (and who will own that mitigation).

If anything happens due to one of those foreseen issues, everybody knew about it in advance and already discussed what, if anything, could have been done to prevent it as well as what action was actually taken.

I love the SAFe / PI Planning approach because it makes sure that everybody is on the same page and totally removes any blame game from deliverables. Far, far fewer surprises.


The tension I've seen at most places where this goes off the rails is due to mis-assigning responsibility.

PMs are responsible for keeping projects on schedule. Engineers are responsible for completing work.

Consequently, PMs are incentivized to compress schedules, and engineers are pressured to do the same.

The end result is that "the people who do the work plan the work" goes out the window, because risks aren't fundamentally understood (on a technical nuance level) by PMs, so naturally their desire for schedule wins out whenever there's a conflict.

(That said, I've worked at shops that hew closer to how it should be, and it works great. Current job just happens to be a dumpster fire of bad Agile.)


Yea, I can totally see that happening. Hopefully in a room full of people, somebody will have the gumption to vote low confidence if there's concern about this happening.


Instead of doing more "post-mortems" after projects fail, try doing "pre-mortems" before projects begin.

Imagine it's 6 months from now, and the project we're about to begin is half complete and twice over budget. What factors will the post-mortem identify as causes for the failure?

Learn the lessons before the project begins, not after it fails.


I think this is an excellent approach, and I try to do it myself. But YMMV for getting teammates to do it in a meeting - even with a generally supportive manager (not me, I'm just scrum master), there is just a psychological resistance. I don't think that they have a list in their heads and are simply afraid to share it, I'd guess that it requires some effortful imagination, and would be unpleasant, so things just get stuck. I'd love ideas for follow up prompts that might help with this.


There's a book called "How to Decide" by Annie Duke that has some good advice on how to inspire this type of thinking.

For example, one of the ideas is the "Dr. Evil" or "imposter" game.

(1) Imagine a positive goal.

(2) Imagine that Dr. Evil has control of your brain, causing you to make decisions that will guarantee failure.

(3) Any given instance of that type of decision must have a good enough rationale that it won’t be noticed by you or others examining that decision.

(4) Write down those decisions.

Someone else's notes from this book: https://wisdomfromexperts.com/why-you-must-use-negative-thin...


The reason no one does this is that the answer is always "something might fail and add an arbitrary delay."

Once you have committed to playing the estimation game, you're committed to the delusion, and reality checks don't matter. Just stop trying to estimate.


Or acknowledge that the "E" in "ETA" stands for "estimated", not "promised".


It's not only that. There seems to be a 'political' advantage to overconfidence even given the effect it ought to have on your track record. (This is not advice.)


Yes, and this surprises me each time it happens.

As a for instance, I was the operations lead a couple of years ago for a large, customer-facing financial product rollout. The timeline was insanely aggressive -- approximately 10 months ahead of my prediction -- and predicated upon several payment and banking vendors nailing their legacy integrations out of the gate with no delay (perhaps a first in the history of the world). Several of these vendors weren't committed, nor was a spec agreed upon, prior to the timeline being set. When I raised these concerns, everyone acknowledged them, but mitigations or deadline revisions were not made, as that would countermand previously set expectations with the executive team.

The project continued on for another 18 months past the deadline, with a "launch" set every three months. Inevitably something would derail it a week beforehand, unexpected to executives but known months in advance by the project team (e.g., the mainframe the payment vendor uses has a fixed column length; it will take two months to build around this to support xyz).

In the end it got rolled out. Everyone forgot about the delays, and the project team was praised, as the implementation did better than expected. The same technique is now being employed once again.

While I don't like it, I now see the estimates and re-estimates as a method for an executive team to manage a project portfolio and prioritization. It's not a good way to do it but it's easy to express priority by simply ranking deadlines.

It's much easier to avoid this in high-trust environments (typically smaller organizations).


"just" is thrown around far too much on this site

I'm sure my boss and customers will appreciate me saying "it takes as long as it takes", nothing will go wrong, it's clearly just that simple :)

Why not skip working altogether and just earn a million a year off the interest from a well-performing stock portfolio?


You know, at first glance I would say this isn't true, since SWOT analysis covers this pretty well. Then again, I didn't learn SWOT in any programming class; it was in business classes.


This sounds very much like formal failure-modes-effects-analysis (FMEA).

https://asq.org/quality-resources/fmea


I might be misunderstanding, but I thought it was SOP to have a section/activity in your estimate where you mention risks and opportunities and their "chance" of happening?


Proactive pessimism.


Before I do anything, I ask myself: 'Would an idiot do that?' And, if the answer is yes, I do not do that thing.

- Dwight Schrute


"Would an idiot eat breakfast?" Uh oh.


An idiot would eat an ordinary breakfast. I will eat an extraordinary breakfast.


Joke's on you, I do intermittent fasting -- only coffee until noon.


There are exceptions, like breathing and taking up space.


DangitBobby, this is comedy.


The basic premise of TIPS was that one could train engineers to solve problems, pretty much like martial arts trainers do: by exercising and learning tricks.

Theory of Inventive Problem Solving nicely abbreviates as TIPS. That's how it has been known in the West for decades. ТРИЗ is a Russian acronym; TRIZ just replaces the Cyrillic letters with equivalents. If the author had dug a bit deeper, they would have known this and a few funny stories, like the fact that Genrich Altshuller did not want his teaching to carry on after he died, but fan groups of engineers existed until the fall of the Soviet Union, when many of their members emigrated and continued the TIPS cult following, selling courses and software to large enterprises (Dassault, Lockheed Martin, and the likes).


For what it’s worth, theory of inventive problem solving is not a very good translation of теория решения изобретательских задач teorija rešenija izobretatelʹskih zadač: as far as I can see, it parses as [inventive [problem solving]] with inventive meaning something like “unconstrained by convention, prolific intellectually, embodying the spirit of invention”, for which the proper Russian is изобретательный izobretat-elʹ-n|yj; the original name has a different syntactic structure and uses изобретательский izobretat-elʹ-sk|ij “belonging to or characteristic of inventors”, so the proper translation would instead be something like theory of solving problems of invention, which is admittedly awkward (and doesn’t afford a snappy acronym) but at least successfully conveys the idea that the “invention” part pertains to the problems, not the solutions or the theory.


> (and doesn’t afford a snappy acronym)

TOSPOI even sounds Russian!


It still rolls off the tongue better than TANSTAAFL ("There ain't no such thing as a free lunch").


There is a good book employing the TRIZ mindset for self-improvement called "How to Be Miserable: 40 Strategies You Already Use"[0]. Although some of the outcomes the book leads you to can seem banal, if you approach them from the perspective of how to optimize misery, it can give you a new perspective. It certainly did for me in some areas.

CGP Grey also made a video based on the book, called "7 Ways to Maximize Misery"[1] that may be easier to digest.

[0]https://www.goodreads.com/en/book/show/25898044-how-to-be-mi...

[1]https://www.youtube.com/watch?v=LO1mTELoj6o


I read the book because the video got me interested. It was one of the most entertaining reads of my life, and definitely the one book that could somehow count as self-help that I'd recommend.


I have always had trouble making short term attainable goals. For example I want to restart doing yoga. But it's been a few years since I've done yoga and now I can no longer touch my toes. Whereas before I was able to do the full primary sequence of ashtanga. Every time I think of my decline it kills any motivation to do anything to stop it. Anyone have any tips?


Lots of parallels to "know your enemy", if you're willing to broaden "enemy" to include the impersonal.

Quotes[1]:

> It is said that if you know your enemies and know yourself, you will not be imperiled in a hundred battles; if you do not know your enemies but do know yourself, you will win one and lose one; if you do not know your enemies nor yourself, you will be imperiled in every single battle.

and

> Thus, what is of supreme importance in war is to attack the enemy's strategy.

The part about both knowing yourself and knowing your enemy is similar to the idea of iterating back and forth between thinking about how to fail and how to succeed. The universe is not actually out to get you, but you can ask what its "strategy" would be if it were.

---

[1] see https://en.wikiquote.org/wiki/Sun_Tzu#Chapter_III_%C2%B7_Str...


Suppose I wanted to make a program inscrutable, hard to modify, hard to test, heavily coupled, and hard to reason about:

- hide information so that it's not queryable

- force information to flow through multiple hops

- make it hard/impossible to set the true state of the system in its entirety

- allow state to be mutated silently

- give interfaces roles and force certain state to have to flow through specific paths

- multiple concurrent access

- pointers

- spray the app across as many machines as possible

Sounds familiar, maybe like a scathing criticism of OOP? Well check this out. What if I wanted to make a program as slow and bloated as possible?

- put all state in one place. Bonus points if you can bottleneck it and force all changes to be well-ordered and sequential

- all data has to be immutable and copied over in its entirety when propagated

- use larger data structures than necessary. Lots of wrapping and indirection

- no caching. Can't trust data unless it's fresh

- read/write from disk/net tons

- use scripty garbage collected languages

- spray the app over as many machines as possible

The latter kind of feels like a criticism of FP, though really it's more a criticism of the distributed monolith. What if I want to make my app as vulnerable to outage and overload as possible?

- concentrate the app in as few machines as possible

It's really interesting how all the different tradeoffs of approaches just kinda appear when you take this inverted approach. It's all too easy to get caught up on the positives of an approach.

(opinion - I still think FP comes out looking better than OOP, but it does suggest that you need a lot of techniques to safely hide optimizations to make FP more performant, which can make it harder to scrutinize all the layers)
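
To make the first list concrete, here is a tiny, hypothetical illustration (names and numbers invented) of two of those bullets -- hidden information and silently mutated state -- next to the boring, inspectable inversion:

  class OpaqueThermostat:
      def __init__(self):
          self.__target = 20.0  # name-mangled, so outsiders can't easily query it

      def observe(self, reading):
          if reading > 25.0:
              self.__target -= 0.5  # silent mutation: callers can't tell it happened

  class InspectableThermostat:
      def __init__(self):
          self.target = 20.0
          self.history = []  # every transition is recorded and queryable

      def observe(self, reading):
          if reading > 25.0:
              self.target -= 0.5
          self.history.append((reading, self.target))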


>What if I wanted to make a program as slow and bloated as possible?

>put all state in one place.

>no caching.

Just as a counterpoint, both as a user and as a developer, I've found the main cause of performance issues and UI bugs to be a combination of distributed state and caching.

State being stored as a private member on the widget object itself is arguably more likely to cause a state desync bug or accidental O(N^2) behaviour than everything being global. IMGUI solves this but has only really seen a warm reception amongst the gaming industry (where, incidentally, UIs are always lightning fast despite being traditionally limited by older console/handheld hardware.)

Having a cache at the wrong level might reduce some function's execution time from 100us per frame to 10us, but makes it much more likely that the entire app will become unresponsive for 10s while some dumb bit of code rebuilds the whole cache 25,000 times in a row.

In a similar vein, I've found issues in multi-threaded apps where some function is sped up from 10ms to 2.5ms by making use of all four cores, but occasionally spins on a mutex, blocking the UI for 250ms if some slow path (usually network/disk IO) is taken.

Genuinely, I think the simplest way to make sure a program will have bad performance just requires one step:

- Make it difficult for a human programmer to reason intuitively about the performance of the code they write.
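
A toy version of the "cache at the wrong level" failure described a couple of paragraphs up (the names and the cost function are invented for illustration):

  def resolve_style(style):
      # Stand-in for genuinely expensive work (style cascade, font metrics, ...)
      return len(style) + sum(i % 7 for i in range(200_000))

  def layout_row(style, cache):
      if style not in cache:
          cache[style] = resolve_style(style)
      return cache[style]

  rows = ["header"] + ["body"] * 999

  def render_frame_bad():
      for style in rows:
          layout_row(style, cache={})  # fresh cache per row: 1000 rebuilds per frame

  def render_frame_good():
      cache = {}  # hoisted one level up: at most one rebuild per style
      for style in rows:
          layout_row(style, cache)

The cache logic is identical in both; only its placement differs, which is exactly why this bug is easy to write and hard to spot.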


> Just as a counterpoint, both as a user and a developer I've found the main cause of performance issues and UI bugs come from a combination of distributed state and caching.

Isn't that, like, the main cause of most software headaches? Insert joke about cache invalidation and naming things.

It sounds like you're saying bad caching can be worse than no caching. I can absolutely see that being the case.

> - Make it difficult for a human programmer to reason intuitively about the performance of the code they write.

Totally agree.


>It sounds like you're saying bad caching can be worse than no caching. I can absolutely see that being the case.

More specifically, I'm saying that caches (implicit or explicit) can and do go wrong, and the simplest way to avoid invalidation errors (and the performance hiccups when higher-level developers start manually invalidating the cache to work around them) is to just implement them in as few places as possible.


> What if I want to make my app as vulnerable to outage and overload as possible?

> - concentrate the app in as few machines as possible

Well, sorta. Distributing an application across multiple machines only reduces downtime if the machines are independent of each other.

I've seen bad architectures where wider distribution leads to more outages, not fewer. I even consulted for a company who had their services in two data centers for redundancy, but they put different services in each one, so a failure in either data center would effectively bring down their entire stack.


It was mostly a quip that the first two anti-optimization lists both end in distributing the application across tons of machines. Since that is the basis of the entirety of distributed computing, I thought it was only fair to include an instance where distributed computing patterns win out (fault tolerance and parallelization).


Absolutely, it was a very solid post overall, including the nits I picked.


For the most part I agree, but if you ask "suppose I want [a conjunction of deprecated outcomes]" then it is possible that something tending towards one of these deprecations may protect against another. For example, hiding information so that it is not queryable may make testing and certain modifications more difficult, but encourage heavy coupling, which in turn makes a program hard to reason about and therefore also inscrutable - and thus, one further step removed, harder to test and modify.


> The latter kind of feels like a criticism of FP

It may feel like that for somebody with little experience with FP, but half of those points are design decisions that aren't changed by the language paradigm, and most of the rest is a non-sequitur where the stated problems aren't caused by the characteristic that precedes them.

Immutability makes caching easier, not harder, and the same goes for propagating by reference. Besides, there's nothing in FP that asks for more IO operations or for distributing your software. The item about larger data structures is on point, although the indirection is very often optimized away in practice, while imperative programmers often have to create their own indirection by hand, which compilers have a harder time optimizing away.

Anyway, most of both lists apply perfectly to the modern "mandated microservices" architecture some places use, which I think was your point.


Because it's not meant as a criticism of FP, or a list of FP things, in fact I actually really like FP. It's just that if you anti-optimize for bloat and slowness, you coincidentally end up with a lot of features that FP has.

Centralized state, immutability, wrapping, indirection, and nested data structures with garbage collection are very FP things. They are also almost always slower than mutate-everything imperative style, and require a lot more under the hood to make them performant. You basically need a clever compiler to really reap the benefits. Contrast with C with a dumb compiler, easy to get a fast, albeit buggy, program.

IO, bad caching, etc are very much not in the spirit of FP. The other points are just other bad things that I've seen a lot of apps do.

> Anyway, most of both lists apply perfectly to the modern "mandated microservices" architecture some place use, that I think was your point.

That was exactly my main point.


Your FP example reminds me of the JS trend that picked up about 5 years ago. Not sure if they're still doing it, but you described the dogma near perfectly.


Because imperative programming in Javascript, even with OOP principles and patterns, generally leads to even more complexity and messiness.

FP is now almost the norm in frontend engineering. Certainly in the React world. Immutability and side-effect-awareness is highly valued.


The irony being that JavaScript doesn't add anything that Smalltalk didn't already provide, yet we have all this OOP backlash and claims that FP is somehow better.


Well, Javascript is the language that is able to run everywhere, so that's what frontend engineers have to target. There is some more flexibility now with Webassembly...

If you are interested, look up Fable, Bolero, Elm, PureScript, ClojureScript, Grain...


None of them wash away JavaScript semantics just because the source language is something else.


That's actually not true, at least if your definition of "Javascript semantics" isn't extremely broad.

This was the case with CoffeeScript. But the examples I gave either work with Webassembly (Grain, Bolero), providing their own runtime, or do some work to hide Javascript's data model to a large degree (F#, Purescript, ClojureScript).

What stays the same is that you are dealing with the DOM, User events occurring there and a metric sh*t ton of asynchronous APIs.


Yes, that's exactly what I was getting at. It was less a criticism of FP (which is why I say it feels like it might be - it's not) and more a criticism of the bevy of techniques common in modern full-stack development.

Even then, I'm not saying it's bad to do things that way. They absolutely have their place. OOP has its place. FP has its place. Distributed computing has its place, and vertical scaling has its place. The point is, dogmatically latching onto "X bad! Y good!" blinds you to potential solutions.


Do you mean Redux? If yes, it's still quite popular


> Sounds familiar, maybe like a scathing criticism of OOP?

No, not really. For example:

> - make it hard/impossible to set the true state of the system in its entirety

Strangely enough, environments like Smalltalk allow you to do exactly that, but other environments not so much.


The problem is, semantically and linguistically, there are two OOPs. There's Smalltalk OOP and Java OOP. I haven't worked with Smalltalk but from everything I've heard, it "does object-oriented right". Unfortunately, Smalltalk just isn't popular (not even in the top 50 on the Tiobe index, though it fares slightly better on Redmonk).

For better or worse, Java is massively popular, and thus the Java conceptualization of OOP, which is just frankly bad, is what most people think of when they think OOP.

OOP encapsulation works when objects can't have their invariants violated, which matters because you can't cover the combinatorial space with tests. The problem is, Java-style setters and getters are an almost guaranteed way to get invariants violated. That's why it's better to have a small number of coarse-grained state stores that you can interrogate easily (REST, Reactors, databases, and the Kubernetes data model all exhibit this). "Class Employee inherits Person" doesn't. Too fine-grained, too easy to mutate yourself into an absolute mess.
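
A small sketch of that invariant point (my example, in Python rather than Java, with invented names): setter-style objects make the invalid state representable at every call site, while validate-at-construction types check it once.

  from dataclasses import dataclass

  class BookingSetters:
      def set_start(self, day): self.start = day
      def set_end(self, day): self.end = day
      # Between (or after) these calls, start > end is representable,
      # and every caller shares the burden of keeping the pair sane.

  @dataclass(frozen=True)
  class Booking:
      start: int
      end: int

      def __post_init__(self):
          if self.start > self.end:
              raise ValueError("start must not be after end")
      # Checked once, at construction; frozen, so it can't silently rot later.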


In which camp does Simula (arguably the first object-oriented language) fall?

In which camp does CLOS (the Common Lisp Object System) fall?


> What if I wanted to make a program as slow and bloated as possible?

That sounds like a very exact description of React/Redux.


If you are at war with your technology, your codebase will look like a battlefield. React and Redux require a functional, reactive mental model. If you approach it with an imperative mindset, you'll get a mess of a codebase.

Unfortunately the popularity of React means it is used by people who know neither Javascript nor FP very well. And the popularity of Redux is even worse.

In my opinion people should stay away from Redux until they know and understand intuitively why it is necessary. Until then, use the useState and useReducer hooks, then maybe something like "Zustand". When you start using Redux, use Redux Toolkit.


> If you are at war with your technology, your codebase will look like a battlefield.

That almost sounds like a criticism of people who developed React and Redux and went to war against their browser APIs.


Not really. Redux doesn't even touch Browser APIs as far as I know.

React uses a virtual DOM to avoid unnecessary DOM manipulation, which is slow and error prone. DOM manipulation could be seen as a weak point of browsers, and by avoiding it, React is actually a very good ally.

Javascript is a dynamically typed language and offers quite a bit of functional programming functionality itself. React and Redux use that to their advantage, rather than insisting everything be modeled by classes.


> React uses a virtual DOM to avoid DOM unnecessary manipulation, which is slow and error prone.

While it may be the case that the DOM is far from perfect, React is hardly the only way to avoid it. Even if avoiding DOM manipulation is necessary for your use case (which I strongly suspect is not true for like 95% of the people who use React as the framework du jour), there seem to be significantly better-thought-out approaches, such as Svelte, if you absolutely have to go down the "let's turn our browser workflow into a C-like one" road and churn out an opaque blob of code as a result. That also avoids unnecessary DOM manipulation, but unlike unnecessarily duplicating the browser's data structures, it is at least somewhat elegant, just as compilers are considered elegant compared to interpreters.

> React and Redux use that to their advantage, rather than insisting everything be modeled by classes.

Sure, but Javascript is not even based on classes. It traces its heritage back to Self which doesn't even have classes.


Avoiding direct DOM manipulation is a benefit in almost any case. A virtual DOM is now at the root of most popular UI frameworks and libraries, including Vue.js and Angular, as well as less popular WASM ones like Blazor (C#) or Percy (Rust).

I do remember writing complex JQuery components. React felt like a liberation for me...


But just because jQuery was bad doesn't mean that React was the answer. That would be a false dichotomy.


Interpreters are considered elegant, too. For example, Lua is a great little language and it runs everywhere precisely because it's interpreted.


Sure, there are many elegant interpreters. I'm not sure that patching the DOM from the changes in a redundant data structure is one of them. Even Blink's idea to instead pull the browser's own DOM into Javascript definitely looks saner to me.


It's not a redundant data structure if you need it to figure out the necessary changes.

You are free to choose other approaches for your frontend projects. Just don't expect to get hired into larger teams easily.


> It's not a redundant data structure if you need it to figure out the necessary changes.

And why exactly can't you "figure out the necessary changes" without it?

> Just don't expect to get hired into larger teams easily.

Honestly, I see that as a win-win.


Seeing how much you like trolling, that is not a surprise.


Sincerely held opinions are by definition not "trolling". I just fail to see that as relevant, just as I fail to see the number of Big Macs being sold globally as being relevant to food choice criteria.


We've built a React/Redux application[1] that people keep telling us is very snappy, and we definitely haven't optimized as much as is possible, so from my experience React/Redux is not inherently slow and bloated.

[1] https://my.supernotes.app


FWIW it takes 7-8 seconds to load on my computer as well, on the latest Firefox, with a good CPU.

This has finally convinced me to not waste my time learning React, at least for now.


Yep, first load isn't very quick, as there are a lot of things that need to be loaded which are never going to be very small. However, first load only happens once. After that the assets should be cached in your browser and loads should be much faster.

There is definitely something to be said for faster first loads, but unlike many other sites on the web, ours is of course optimized for consistent/repeated use, so in the scheme of things first load is negligible compared to making sure it runs fast while actually using it 100s of subsequent times, which (I hope) it does.

Definitely wouldn't let that discourage you from learning React. If you want a smaller bundle size, you can use Preact[1], which is nearly a drop-in replacement for React's runtime but much smaller.

[1] https://preactjs.com/


It seems pretty snappy aside from the first page load (and refreshes, ...). Not perfect (some actions take a few frames sometimes), but not anything I'd spend more dev time on.

The first page load is nightmarishly slow. I tend to avoid services that pull in that much data because they're nearly unusable in low bandwidth scenarios (e.g., lots of stores or other commercial buildings made from steel, if you wanted to check a note while travelling, if your customers are among the 5-10% of the USA without home access to top speeds of >1MBps, ...).

As something of an aside, you're hijacking keyboard shortcuts that you don't actually use (at least, they don't appear in the help menu and don't seem to have any effect).

Also, it might be worth considering the privacy of the friend finder. When I add a username you instantly tell me their name if they're on your platform, even before they accept my request. On the one hand that isn't much different from twitter showing your name and handle together, but on the other hand that seems like surprising behavior for a note taking app, even one with collaborative features.


Thanks for the feedback! After first load the assets should actually all be cached in your browser, so subsequent loads will be much faster (including full page refresh). But yes the initial bundle size is one of those things we could probably spend more time optimizing for.

We recently released desktop apps (and will hopefully release mobile apps soon) where this is of course a non-issue.

Could you tell me which keyboard shortcuts you are having problems with? Our intent is definitely not to hijack anything we don't use.

Thanks for the note on the friend finder. Unlike many other platforms, we don't actually require users to have a surname, so we felt that if privacy was a concern with regard to name the best solution is for a user to only include their first name. But I can see how that still isn't perfect from a privacy perspective. We are working on improving the way friends work on the platform and will try to improve that as part of it.

Thanks again for all the feedback, very helpful.


I appreciate the response! Sorry to only be pointing out problems by the way. Plenty of other components do seem well done. I just know that in your shoes I'd want to know where users struggle.

> Initial load

I just poked around a bit, and I think the other commenters on mobile or Firefox might mostly just be noticing main.[key].js being slow. The initial load is structured as a couple of sequential requests, followed by a lot of JS (device dependent, 0.5s-7s+), and then a lot more requests fired off mostly in parallel.

> Refresh bandwidth, caching

Your heaviest assets are set with max-age=0, and the server often responds to an if-none-match header by regurgitating those assets. Refreshing after doing stuff in the app (or just waiting) for 3min reliably generates nearly as much latency as a cold load.

If you expect most mobile users to prefer your mobile app it might not matter, but a workflow that looks like doing something on your site, navigating to another site or app for a few minutes, and then coming back can often trigger a mobile browser to refresh the page -- especially on lower end devices where the cold load is most expensive.
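
For what it's worth, the usual shape of the fix (a generic sketch, not your actual stack) is to give content-hashed bundles a long immutable lifetime and keep everything else on revalidation:

  import re
  from http.server import HTTPServer, SimpleHTTPRequestHandler

  HASHED = re.compile(r"\.[0-9a-f]{8,}\.(js|css|woff2)$")

  class CachingHandler(SimpleHTTPRequestHandler):
      def end_headers(self):
          if HASHED.search(self.path):
              # A new build produces a new hash (new URL), so "forever" is safe.
              self.send_header("Cache-Control", "public, max-age=31536000, immutable")
          else:
              # HTML and friends: revalidate before reuse instead of re-downloading.
              self.send_header("Cache-Control", "no-cache")
          super().end_headers()

  if __name__ == "__main__":
      HTTPServer(("", 8000), CachingHandler).serve_forever()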

> Native apps

Awesome!

> Shortcut hijacking

Mostly a misunderstanding on my part. It looks like the following things are happening:

(1) Keyboard hooks apply across the whole app, even in screens where they do nothing (like using CTRL+SHIFT+I in the settings) or when the elements they would apply to don't exist yet (like navigating between cards).

(2) I falsely assumed the cheatsheet of commands was comprehensive.

(3) Nearly every navigation keystroke (space, shift+space, arrows, ...) is actually used by something in the app, mostly stuff that doesn't work till you have cards and friends and whatnot.

The net effect for me was that it looked like all my favorite shortcuts were being swallowed and not doing anything useful in return. I don't have much useful advice here, but easy discoverability of all commands might be nice.

> Privacy

That makes sense. Reflecting back on exactly why I found that off-putting in the first place, I don't necessarily think the problem is with the search feature (some people probably disagree), but with the fact that it wasn't clear the information would be public when I was first asked for it. That might lend itself to a simpler solution.


The main page is not snappy at all; on my phone, when I click on a link in the top bar it takes at least 2 seconds before the page changes.


I'm all for bashing unnecessary usage of Javascript, but I work with React a lot and I think there is room to make it a lot slower and more bloated; i.e., for what it is, React is reasonably lean and performant.


Sounds like a serverless app on AWS


I've been doing this my entire working career. I didn't realize it wasn't something everyone does!

When I worked in security, the first question I always asked was, "How would I defeat this system?" When I worked in reliability, my first question was always, "How can I break this?" And now that I'm a founder, I ask myself, "How will my business fail?"

I know that the executives at a lot of large public companies have an annual exercise where they spend a couple of days just thinking about how the company could fail, what competitors could do that would be disastrous, and then what they can do to mitigate those things.

I also teach my kids these skills. When they build something, I ask how it might break, or when they are about to do something risky, I ask them how it might go wrong and what they will do to make sure it doesn't.


Yeah, I think this way by default. It really does seem a natural for security!


HN loves this idea of thinking in reverse. Is anyone here actually doing it? What problem did you solve?


On a small scale I find this way of thinking useful in software development.

Say that you are implementing a role-based access control system.

And then ask yourself: how can my RBAC system fail?

Well one way it could fail is if someone without the necessary role can operate on something that they ought not have access to.

So then create roles A, B, and C; create a resource; and say that role A can read/write the resource and role B can read it. Then create a user and give that user role C. Try to read the resource as that user; if it succeeds, fail the test. Then write another test where you try to write to the resource as such a user; if the resource was modified, fail the test.

This is as opposed to only thinking about what you want to happen, in which case you might be writing tests that only ensure that those that should be able to read/write can do so. And of course you want to test that too. But ensuring that those that should not be able to access a resource cannot is the more important thing to be sure of, and also the type of thing that might slip by unnoticed.

A system where those that should be able to access resources cannot will be detected in normal operation anyway, by way of users performing their usual tasks. But accessing resources you should not is the most critical failure, and it could go unnoticed for a long time.

And that is why the tests that are written in the backwards thinking fashion are the most important ones.
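
A runnable sketch of those backwards tests (pytest-style; the role layout follows the example above, and the function names are mine):

  PERMISSIONS = {"A": {"read", "write"}, "B": {"read"}, "C": set()}

  def can(role, action):
      return action in PERMISSIONS.get(role, set())

  # The backwards tests: assert what must NOT be possible.
  def test_role_c_cannot_read():
      assert not can("C", "read")

  def test_role_c_cannot_write():
      assert not can("C", "write")

  def test_role_b_cannot_write():
      assert not can("B", "write")

  # Happy-path coverage still matters, but its failures surface on their
  # own in normal use; the negative cases are the ones that slip by.
  def test_role_a_can_write():
      assert can("A", "write")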


I'm not sure "negative test cases" and "thinking about the problem in reverse" are the same thing.

Thinking about larger systems is much more amenable to this approach. Taking your example if we extend RBAC to an auth/authz service we can say the following.

I want my service to be unreliable. In what ways?

I want it to be occasionally inaccessible or non-responsive, and to produce results that are non-deterministic and inaccurate.

Taking one of those, how would I produce non-deterministic results? I'd tie every operation to a PRNG, or make it a function of something external to the system that I don't control. Or maybe it could look non-deterministic if I made my logic dependent on time.

How would I flesh out an unreliable function based on time?

I'd propose functionality that ties RBAC to the user's local clock. That way I'd have to deal with timezones, differences in user localities, relativity, leap years/seconds, etc. Functions built on that would likely give the impression of being non-deterministic across many users (even assuming correct implementation).

Ok what about being inaccessible vs non-responsive?

The easiest way to maximize this objective is of course just to shut the system down, but we want the worst possible system, so that's one that may or may not be there and may or may not respond. I could set up my load balancer to include nodes that don't exist, or I could run jobs on the service boxes at regular intervals that consume 100% CPU, preventing their responses. I could ...

...

And you can go on and on trying to find all the design and config choices that would make for a truly maddening service. Then you say OK from the product feature level down how can I avoid doing any of those things.
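
And once the list exists, you can flip it back into a test fixture. A minimal sketch (all names are mine) of deliberately injecting "may or may not be there, may or may not respond" around a call, so the client's handling can be exercised:

  import random
  import time

  def chaotic(fn, p_fail=0.2, p_slow=0.2, max_delay=2.0):
      # Deliberately unreliable wrapper: sometimes slow, sometimes gone.
      def wrapper(*args, **kwargs):
          if random.random() < p_slow:
              time.sleep(random.uniform(0.0, max_delay))  # non-responsive
          if random.random() < p_fail:
              raise ConnectionError("injected outage")    # inaccessible
          return fn(*args, **kwargs)
      return wrapper

  @chaotic
  def check_access(user, resource):
      return True  # stand-in for the real authz call

  for _ in range(10):
      try:
          check_access("alice", "doc1")
      except ConnectionError:
          pass  # the client retry/backoff logic under test goes here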


I'm in QA, and this is something like what I've been doing with running pre-mortems.

Basically, at the project planning stage, I get everyone together and ask how this shit is going to blow up in our faces. What are all the scenarios that are complete failures?

Then a few weeks before a release, I do the same thing. It's amazing the issues this catches from the cross-team approach. For instance, something the product manager was worried about, they never brought up because they assumed engineering knew it was covered -- but in reality, engineering had no idea it was important.

Or with two engineering teams, or DevOps etc. Normally we have more than a dozen action items out of these meetings.


Not really, but kind of: an algorithm we developed to trade futures was so terrible that we inverted it, and it became the basis of our most profitable strategy.


Constantly. As the article states, some problems can't be solved, but they can be incrementally made less problematic. I'll use game networking as an example: there is no one way to solve networking between clients and servers. There are, however, a lot of ways to ruin the networking. So think about it in reverse: if I wanted the client to have a very poor experience, I'd put them on an unstable network (cell), I'd have random packets dropped, and I'd send at a rate higher than the client's download speed. It turns out these are really common problems with many options for addressing them.

I think of it less as reversing the problem and more as changing my frame of reference or perspective on the problem.


In security the forward problem isn't as useful - adding security features/crypto is easy but probably not going to be helpful on its own. We use threat modeling and the security mindset. What is the easiest way for me to break the system/program? Where can I manipulate the inputs to the system? Where were the developers most likely to have made a mistake and what kinds of mistakes are commonly made there?

Also similar in nature is safety engineering. You think about how to make the system fail easily and unexpectedly, and then you avoid those designs.

You also try to prioritize the highest risk/lowest cost issues in these. These all involve risk management, which is also critical in investing and is a major reason why Munger has been so successful - he only goes for low risk/high reward plays and bets big on the few opportunities that he gets.


I did this recently for a vacation. I asked what would make me miserable on this trip? And solved for those things. It's amazing how good of a time you can have by just avoiding the biggest and most obvious pitfalls.


Maybe not quite the same thing, but I recently won an innovation challenge for improving public bathrooms.

I started with the actions people take that make the bathroom as disgusting as possible (pee everywhere, vomit, shoving food in the walls, etc.) and then determined defensive mechanisms from there.


Could you please share more about what you have actually come up with?



Yeah, minimalism is great for reducing dirt/dust collection points.

By the way, I think toilets which are attached to the wall instead of to the floor are great because you can easily mop under them.


I like the stealth treatment you apply to the loo to reduce edges. Without wishing to imply that my own bathroom needs a clean, this would actually be handy in people's homes too (even though people are generally less wayward at home than they are in public loos).


Testing is a good example of a field that is ineffective if you don’t adopt this sort of adversarial approach. When I’m writing unit tests for something, as well as having some regular test cases of expected inputs and outputs, I’m always trying to find ways of breaking the code under test. This regularly surfaces problems that might otherwise have been overlooked and caused trouble (most commonly logic errors that can be a nightmare to diagnose if you don’t have a time-travelling debugger) later on.


Security architecture: Under what circumstances does this entire product or solution bankrupt the company and ruin customer lives?


All the time. When a politician, member of the media, or a client says "X causes Y" I'll usually follow up with one of the following:

- Does X ALWAYS cause Y?

- Can Y happen WITHOUT X?

- Could X cause OTHER things that cause Y?


I've worked with engineers who have used a premortem -- once they have the bare sketch of a project to solve a problem, they imagine that it failed and think about the most likely causes, then adjust their project to mitigate those risks.

The process described here happens a step earlier, when you're deciding which projects to tackle in the first place, which is an interesting angle.


There's a saying in my neck of the woods that captures this: If you can't optimize, pessimize!


Suppose you wanted to become as fat as possible, as fast as possible? How would you pull off that trick? You might stop to think about your answer before continuing.

One way that happens all over nature is to gorge on fruit, just like bears and other animals do every fall before they hibernate.[0][1]

An old article from Ken Hutchins entitled "So, Your Ambition Is to Become a Circus Fat Lady?", is essentially a TRIZ failure design for fat loss.[2]

[0] https://peterattiamd.com/rickjohnson/

[1] https://www.amazon.com/Fat-Switch-Richard-J-Johnson-ebook/dp...

[2] https://nebula.wsimg.com/e2e6c217edf4c5dd64bb0486df430804?Ac...


This is a common pattern, also found in proofs by contradiction and optimization techniques. If you know how to find the minimum, you may be able to use that knowledge to find the maximum.

The problem with using this pattern in real life is that it rarely applies. Unlike one-dimensional toy examples, in real life there are many more ways to fail than there are to succeed. Eliminating or even enumerating all possible ways to fail may be prohibitively expensive. And even if you do that, you've only found a way to maintain the status quo, not necessarily to improve.

This can be said about the rest of TRIZ/TIPS. I learned about it a long time ago, but can't remember a single time I solved a practical problem by applying it. I may be able to find examples that can be classified under one of TRIZ rules, but only in hindsight. It's not like you can look at a new problem, apply TRIZ approaches one by one, and find a solution faster than you normally would.


Sounds a lot like threat modelling. It's great, if you are creative enough to imagine all the threat scenarios.


Or at minimum, the likely ones.


I did not like this article because it was superficial, clickbaity, and did not explain the main idea with a worked out example.

> Charlie inverted the problem in a similar way to the TRIZ practitioners — if he wanted to kill pilots, he could get them into icy conditions whereby they couldn’t continue flying, or put them in situations where they would run out of fuel and fall into the ocean. So he drew more applicable maps and better predicted the weather factors that were relevant by keeping in mind the best ways to do the exact opposite of bringing his pilots home.

This tells me nothing about how Charlie Munger avoided planes crashing. There has to be more to it than this trite summarization.

Can someone shed some light on this and explain why it is insightful?


I thought the article was well written. Can I suggest trying again?

The point was not to explain HOW to avoid planes crashing; it was to explain how he saw his job not as providing super-clear weather reports, but as focusing the reports on the factors that would cause problems for pilots.


Imagine the worst possible outcome and do the opposite. That will focus your attention on what matters.

I think that's the essence of the article.


> thinking about how to do the exact opposite of your goal is sometimes the best way to ensure you achieve it.

It also makes for a great clickbaity headline.


If you were a bit disappointed that the article, even though it was quite excellent, wasn't more about actually killing pilots, this might help [1].

[1] https://www.cracked.com/article_18839_7-planes-perfectly-des...


Is there a software invention/systems equivalent of the TRIZ matrix?

Some of the dimensions/parameters we wish to analyze that I can think of may include:

  - latency  
  - throughput  
  - modifiability  
  - binary/data size  
  - memory size  
  - cpu cycles  
  - interoperability/compatibility  
  - accuracy  
  - precision  
  - staleness  
  - bandwidth
  - availability
  - etc
As a simple example, perhaps we want to reduce bandwidth use while still delivering the same data. One of the resolutions may include compression (lossless/lossy).
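
A toy sketch of what such a matrix could look like as a lookup table (the entries are illustrative, not an established catalogue):

  MATRIX = {
      ("bandwidth", "data size"): ["lossless compression", "lossy compression", "delta encoding"],
      ("latency", "accuracy"): ["caching", "precomputation", "edge placement"],
      ("memory size", "throughput"): ["streaming", "columnar layout", "object pooling"],
  }

  def resolutions(improve, preserve):
      # Look up candidate techniques for improving one parameter
      # without degrading another.
      return MATRIX.get((improve, preserve), ["no catalogued resolution yet"])

  print(resolutions("bandwidth", "data size"))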


> rather than solving for success, solve for failure. If you want to design a dust filter that works at 600 Celsius within a steel mill, instead design all the ways you could develop a filter that would fail miserably in those conditions.

That's a strange example. There are infinite ways of not accomplishing a task, and as a non-expert in filter design I have no trouble listing them: make the filter out of butter, make it just a big hole, pay someone to pluck dust out of the air with tweezers... how exactly is this helping?


I interpreted it as brainstorming ways to actively sabotage success. "Solutions" that appear superficially plausible, but undermine the outcome in real-world situations.

So for the dust filter, maybe selecting a material that's only rated for high temperatures in short bursts. Or one that needs frequent, labor-intensive replacements in that dangerous environment. Or test that your filter survives at high temperatures, and test the filtration efficacy, but forget to test the efficacy at high temperatures.


The point is to make a filter that works in lower temperatures but breaks around 600C.

Using the other example: of course, to kill the most pilots you could put them in a big meat grinder or just blow up their planes. But the point is to design planes so that they both fly and kill pilots.


Hahaha I love this response; yes, you can always change the rules of the game, so to speak, and make things worse; however, if you keep within the rules of reality then it might be a bit more useful.


I do this all the time.

At work, I'd consider what it would take to wipe all the servers, and all the backups (while creating enough of a distraction on the network to delay people from pulling the power in the datacenters). We had way too many people with global root access who could have done something like that, so the point of the exercise was to motivate engineering work to restrict access (this was before Kubernetes took over the world).

Walking down the street yesterday I saw someone had a little bucket full of dog treats for anyone to take, and my first thought was the risk of someone spraying a bit of poison on them. I don't necessarily "fear" things like that, but it's just obvious to me that you're implicitly trusting the thousands of people who have walked past that box not to be pathological.

Probably good that not everyone in the world thinks the way I do.


> Probably good that not everyone in the world thinks the way I do.

It's funny because HN (in contrast to the world at large) seems to be frequented by a lot of people who are gifted with the forward-looking contingency mindset. I'll bet that's why you got upvoted, and it's probably why the article was posted.

One reason why a community like this would exist online is because it's effectively a saf...err I mean friendly place for people who are concerned about future events, but in a positive, problem-solving sort of way. The rest of the world doesn't always reward this or want to hear about it.

IT, operations, and even software development need people who can think this way (particularly the opportunistic side of software dev, as it is more COTS-components oriented and thus less NIH and thus theoretically more battle-tested/hardened against future events).

Sadly, one of the things I've noticed about those with this mindset is that if they are not able to act on their perceptions constructively (for example, if they were raised in an environment where it was considered a crazy way to think about things, with focus kept only on the past or present), it can turn into a more subjective, fear-based practice. In those cases you seem to get outcomes like a stance where even just going outside feels more like gambling with one's life. Just observationally speaking...


I think the fear response comes from people who don't assign probabilities to things, so they just see that everything is a risk (leading to disorders like OCD in the extreme).

And at work you can see people focusing on mitigating the wrong security issues. There's a sort of Maslow's hierarchy of security that needs to happen, and some people noodle way too hard on memory bit-flipping attacks and not on just making sure you can't ssh into something with a guessable root password. Of course it's more intellectually challenging to think about all the ways that the NSA could break into Google, but if you don't work at Google it's all the boring shit you need to make sure you have in order.
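
The boring baseline is often just configuration. For example (standard sshd_config directives, shown as a minimal sketch):

  # /etc/ssh/sshd_config: close the guessable-root-password hole first
  PermitRootLogin no
  PasswordAuthentication no   # keys only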


It's an occupational hazard for us computer guys. It's the failure mode you didn't think of that brings the system down, so of course you learn to think of the failure modes.


As I was clicking this link I was thinking "Oh great, now I'm on watch list". Turns out, it's about thinking and problem solving and I was pleasantly surprised.


There’s a really interesting parallel here with evolution: exaptation. https://en.wikipedia.org/wiki/Exaptation?wprov=sfti1

This occurs when a trait that evolved to solve one problem also happens to solve (or almost solve) another problem — for example, perhaps feathers originally evolved for the purpose of keeping warm but also opened up an evolutionary path to flight.


At a previous job many years ago, after a project would fail my coworker and I would habitually remark, “we’ve learned another way not to do things!”

There’s just such an abundance of creative ways to screw things up! But that’s kind of the point of the mental model — our brains are pretty good at spitting out tons of ways that things will fail. So hijack that process by making that list and then invert it.


Is this another way to help define the problem you're trying to solve? It sounds like "Keep pilots alive" wasn't detailed enough. By thinking about all the failure conditions, you're building a more robust description of the true problem you're trying to solve. I also wonder where you draw the line and stop thinking of failure conditions.


Suppose I want to build a program that trades in the markets to make money.

So to do the opposite I would try to lose money as fast as I could, according to this?

But the way to do that is just to churn my book a lot and pay costs and spreads.

It's not clear how that illuminates how to make money.


You now have your first data point. Next, come up with a second strategy that loses money while simultaneously avoiding those steps.

Repeat until you run out of ways of losing money.
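
As a rough Python sketch of that loop (find_losing_strategy is a hypothetical search you would supply via brainstorming, backtesting, or simulation; it is stubbed here so the sketch runs):

  def find_losing_strategy(excluded):
      # Stub: in reality this is the hard part (backtests, brainstorming...).
      known = ["churn the book", "ignore position sizing", "chase any spread"]
      return next((s for s in known if s not in excluded), None)

  excluded = set()
  while (s := find_losing_strategy(excluded)) is not None:
      excluded.add(s)          # each failure mode becomes a design constraint
  print("constraints:", excluded)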


It actually illuminates more than you might think. The market is mostly a zero sum game. When someone loses money, someone makes money, and vice versa.

Just think about it. I think the best way to lose a lot of money is to fall for scams, and avoiding scams is an important part of investing. It also tells you that scams can be profitable. So the next question is: how do you run a scam and still lose money? Answering that will hopefully put you back on the right track... or make you a really good scammer. Back to honest investing: losing a lot of money is not that trivial. It is easy to take risks, but you can still get the occasional big payoff, and thinking about ways of not getting that payoff is the same as thinking about how to get it, just from a different angle.

That kind of reverse thinking goes best when you pair it with regular thinking. That's a "meet in the middle" algorithm, kind of like solving a maze by going both forwards from the entrance and backwards from the exit.
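
For the curious, here's a minimal meet-in-the-middle sketch in Python (a bidirectional breadth-first search; the neighbors callback is whatever move rule your maze has):

  def meet_in_the_middle(neighbors, start, goal):
      # BFS from both ends at once; stop when the frontiers touch.
      if start == goal:
          return 0
      dist_a, dist_b = {start: 0}, {goal: 0}
      front_a, front_b = [start], [goal]
      while front_a and front_b:
          if len(front_a) > len(front_b):          # always grow the smaller side
              front_a, front_b = front_b, front_a
              dist_a, dist_b = dist_b, dist_a
          nxt, best = [], None
          for node in front_a:
              for n in neighbors(node):
                  if n in dist_b:                  # the searches met in the middle
                      d = dist_a[node] + 1 + dist_b[n]
                      best = d if best is None else min(best, d)
                  elif n not in dist_a:
                      dist_a[n] = dist_a[node] + 1
                      nxt.append(n)
          if best is not None:
              return best
          front_a = nxt
      return None                                  # no path at all

  # Toy check on an integer line: the shortest walk from 0 to 10 takes 10 steps.
  print(meet_in_the_middle(lambda n: (n - 1, n + 1), 0, 10))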


I know nothing about trading, so forgive me in advance - but I assumed the point of the article is to break it down a bit more specifically. What causes you to lose money quickly, what's the program doing - how do you stop it from doing those things?

If you start to remove the roads to failure you're kind of forced onto the road to success, or at least to a place where you're not falling into the traps you planned around.


In the case of trading, removing the known roads to failure is no guarantee that you are now on the road to success. There are many MANY ways to fail in finance and more are being invented every day.

I suspect (but cannot prove) that this is because financial markets are "PvP" instead of "PvE": you trade against intelligent humans who can and do adapt their strategies to what they observe to be the strategy of other players in the market. If there is any form of rock-paper-scissors dynamic, this means that there will never be a stable strategy that keeps winning, and so the method from the article will not work.

In most engineering problems OTOH, you are basically fighting against the environment. This can still be very difficult (see rocket science for example) but at least the laws of physics don't change from day to day. So, any progress in solving the problem you made yesterday will remain.

Successful application of the TRIZ method requires that the problem remains relatively stable, so that the options you "chip away" remain poor options forever. The markets are not like this and neither, I think, is career design.


Forgive me, but I think you've illuminated exactly the advice for being a great investor that Buffett repeats ad nauseam. Don't buy and sell repeatedly; you lose on the transaction fees. Act like you have a punch card with 10 holes for your entire life and you use one every time you buy a company. That way you'll really consider your decisions and won't lose money on transaction fees from repeated buying and selling.

Buy great companies cheap or at a good price. Only sell if you think they are technologically threatened (a horse-and-cart vs car situation) or have grown to take up too much of your portfolio (say 50%+ of it).

This is basically how Buffett got mega rich. Amex, Coca-Cola, Apple, GEICO, etc. These few big bets made him a very rich man and he still holds them today.

Losing 1% per year to transaction/currency/management fees really adds up over a lifetime. And it doubly matters when you are rich enough to pass the capital gains threshold: 20% lost every time you sell at a profit in the UK, and that compounds too!
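
The compounding of that drag is easy to underestimate; a quick back-of-the-envelope check in Python:

  # A flat 1%/year fee over a 40-year investing life:
  print((1 - 0.01) ** 40)   # ~0.669, i.e. roughly a third of the final pot is gone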

Or, if you can't be bothered with the effort of trying to beat the market, you've also illuminated another bit of Buffett's advice: invest in and hold an S&P ETF through your whole life.

In fact, Munger is one of the biggest proponents of "invert, always invert".


> It's not clear how that illuminates how to make money.

I mean you did already start to think of some interesting starting points:

> But the way to do that is just to churn my book a lot and pay costs and spreads.

Don't churn your book, reduce costs, reduce spreads.


> the way to do that is just to churn my book a lot and pay costs and spreads

Iterate the question. Assuming a limited number of trades per quarter and a cap on spreads and commissions, how would one maximally lose money?


> But the way to do that is just to churn my book a lot and pay costs and spreads. It's not clear how that illuminates how to make money.

Well, it suggests investing long-term?


Strawman. Suppose you are a market maker and never have to pay a dime in fees. Moreover, anything lost on the spread is immediately refunded.

If you have a strategy that very quickly loses money even under these harsh conditions, you have a winner.


> anything lost on spread is immediately refunded

This is unrealistic and produces fantasy strategies. Capping spreads over a unit of time is more realistic. That said, this way of thinking, iterated with discipline, does lead to a decent portfolio strategy. (Though not necessarily an outperforming one.)


Think of it like the weather map that was mentioned. Your job is to fill in all of the parts of the map that will kill the pilot. Once you’ve exhaustively done that, the clear path will be evident.


How will you know you have indeed exhaustively done so, though? This seems like a strategy that will only work for extremely well understood fields, and things like the "career development" the article talks about are definitely not well enough understood that you can exhaustively enumerate all the ways they can go wrong.


Using the weather map analogy again: you'll only have accurate data for some parts of the map, and you'll need to rely on second-hand information or predictive modeling to fill in the other parts; after all of that, sometimes you just have to go look for yourself.


The interesting thing about the markets is that if you had a strategy that consistently lost money, oftentimes you can use the “opposite“ of that strategy, so to speak, to make money. This is one reason why it’s difficult to find these losing strategies.


> oftentimes you can use the “opposite“ of that strategy, so to speak, to make money

The opposite of Robinhooding might be buying a smattering of stocks and holding until infinity. It’s better than constantly trading options, a consistently losing strategy. But I wouldn’t call it a winner.


If buying OTM options is a consistently losing strategy, the opposite would be selling OTM options. And that's a reasonable way to make money.


> the opposite would be selling OTM options. And that's a reasonable way to make money.

It's the original "vacuuming up nickels in front of a steamroller" trade. It looks like it works well for a while until it doesn't.

Option pricing has many degrees of freedom. Most result in value frittering away. As a result, most options trade participants lose money. (By design. It's a hedging tool.) The alpha bleeds into the underlying market through market makers' hedging.


First, a tangent. I've noticed recently that a number of websites aren't letting me do old-school copy-to-clipboard. They're hooking highlight events and only allowing copy according to their own method (e.g., Twitter or whatever). ScriptSafe causes them to hide the article completely. I've found that the "Absolute Enable Right Click & Copy" Firefox extension works with the article for this post, but there may be other extensions that unbreak copy-to-clipboard.

That said, from the article:

> Prioritize near-term income over long-term value.

In terms of one's career, I'm not sure what the author has in mind. I've found that, in the long run, work experience is work experience, and your skill in interviewing plays an outsized role in your future career prospects. If you're getting paid a lot of money to do something, that generally means there's going to be competition for your job, and it's noteworthy that you're the one who got it!

I'm sure there are some exceptions (and in my experience people on social media Internet forums like this one love to point them out), but I'd argue those exceptions are mostly outliers.

In addition, high income early in your career means you can start accruing compound interest earlier. Disproportionate quantities of money in an index ETF in your early 20's are a very, very powerful force for your entire life. Play your cards right, and you can FIRE by your early-to-mid-30's, at which point you can pursue any work you want without the stress of needing to care about performance reviews, layoffs, toxic work environments, etc., since you can always walk away. Even if you don't end up walking away, simply knowing you can easily walk away can have a significant impact on your overall quality of life.
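
To put rough numbers on it (hypothetical figures, assuming a 7% real annual return):

  principal, rate, years = 60_000, 0.07, 40      # invested at 25, left until 65
  print(round(principal * (1 + rate) ** years))  # ~898,000 in today's dollars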

Of course this can be easier said than done. I spent most of my 20's hardly paying any attention at all to what I was paid, focusing my efforts on the type of work I was doing and my education, and the company I worked for (a faceless Big Tech entity) was more than happy to take advantage of me. I had to spend my 30's catching up on savings. If I could rewind the clock and do it again, I'd aggressively hone my interview skills and spend a lot more time negotiating comp with a lot more companies once I had a year or two of professional experience on my resume.

I guess what I'm trying to say is that if you are fiscally responsible and invest wisely, short-term income can be a form of long-term value.


This is almost more of a testing methodology for engineered solutions to find failure scenarios, but I can see how it would be perversely useful while searching problem spaces.


Isn’t this basically just a failure mode and effects analysis?


In some sense, it is taking "fail fast" and moving the failure closer in time by just imagining the failure rather than waiting for the real failure. This allows for a quicker feedback loop. It also allows for having the feedback without actually having failure.

I suspect the second feature is more important than the first. Feedback from real failure remains more valuable than feedback from imagined scenarios. Thing is, real failure can also be quite expensive.


Not exactly inversion as described in the article, unless an attempt was made to actually fail and then you study that.


“Happy families are all alike; every unhappy family is unhappy in its own way.”

Inverted problem solving, done naively, is an infinite task — you could kill all the pilots in infinitely many, progressively more improbable ways (e.g., use time travel to kill someone in their chain of ancestry, or their pilot instructors, or the aircraft engineers, etc.).

So inverted problem solving must identify the most probable modes of failure until their hypothetical resolution causes failure to become improbable.

This sounds a lot like unit tests, which in my experience are notoriously bad at correctly estimating probable modes of failure for complex units of functionality.

Contrast this to generative testing of simple units of functionality, which in my experience does in fact provide comprehensive coverage of failure modes.
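
A minimal sketch of what that looks like in Python with the Hypothesis library (the clamp function is just a stand-in unit under test):

  from hypothesis import given, strategies as st

  def clamp(x, lo, hi):
      # Unit under test: restrict x to the interval [lo, hi].
      return max(lo, min(x, hi))

  @given(st.integers(), st.integers(), st.integers())
  def test_clamp_stays_in_range(x, lo, hi):
      if lo > hi:
          lo, hi = hi, lo            # normalize the randomly generated interval
      assert lo <= clamp(x, lo, hi) <= hi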

Or contrast both of them to languages like Rust and Haskell, which many claim solve a set of discrete failure modes automatically by virtue of the limitations they place upon problem solving itself.

It seems to me that it is better to place limits on problem-solving that rule out approaches known to often lead to failure, rather than attempt to estimate the probabilities of an unknown infinity — i.e., identify the types of failure modes that have high failure rates across domains and construct a framework of problem-solving that precludes those types.

This is done all the time in the form of what I would call "worst practices". Cheating on your partner is almost always a bad way to increase marital happiness, and the framework of marriage excludes it (even "open marriages" require honesty as part of their framework).

Paul Graham puts cofounder disagreements at the top of his list of ways that startups fail — the most basic form of such failure being that cofounders disagree on whether to give up on their startup. Although there are infinite ways for cofounders to disagree, perhaps it is possible to proactively address the manner in which disagreements are identified and resolved before they become lethal. The only good solution I’ve seen to this problem is the advice to split equity equally — ensuring that the problem is more obvious and severe by amplifying its consequences.

This is similar to Rust — the consequence of a “poorly” written program is that it won’t run at all.

This is also why monopolies in America are sometimes broken up: a monopoly can succeed by failing, whereas a startup can only succeed by actually making something people want.

Perhaps a better framework for problem solving would be something like, Make Problems Catastrophic, or Burn Your Lifeboats, or Amplify Consequences of Failure.

It has always fascinated me that leprosy doesn't directly cause your body to fall apart — it indirectly causes failure by removing the feedback of pain. Without pain, we hurt ourselves unknowingly and profoundly.

Make Problems Painful?


I think this is the approach Boeing took with the 737 Max


I like how it references The Gulag Archipelago, a pseudo-historical book, with two links to buy it. For no reason. Except that a certain controversial person likes to talk about it a lot.

Of course 3 articles back they reference Jordan Peterson.


Oh good grief. Eat the meat and throw away the bones.


>Continue to be a rent-taker as opposed to adding value

One of the most pervasive falsehoods in existence. Take them out of the equation and play the scenario through. What do you end up with?


Ricin, anthrax, or polonium in FAA documents.


Convince them that the world is a globe, when it's actually flat, so they get their navigation calculations wrong and their planes crash when they run out of fuel.



