Hacker News new | past | comments | ask | show | jobs | submit | jumploops's comments login

Can you read the rules and explain them to others faster than a video?

Yes, because I know the other players' strengths and experience.

This is awesome.

My wife wanted a wooden engagement ring, and so I fashioned one (well ~10) out of a Pacific madrone burl.

Great material to work with, but wouldn’t recommend wooden bands unless your actual wedding is near!


Why not?

Unless you use an epoxy of some sort, they’re quite prone to breaking over time — I only used natural beeswax.

(Plus, quite a few broke while I was iterating on my technique…)

To be clear, this is one of the reasons my then-girlfriend wanted one, to ensure a speedy engagement!


If they are made by cutting a ring shape out of wood, the grain is too weak for long term wear.

A more common method for wooden rings is to cut a long, thin rip at 1/16th”. Soak it in water for 30 minutes, wrap it around something finger-sized, put a rubber band around it, and let it dry. You can get a good imitation of a glossy epoxy finish with CA/super glue. This gives a lot more strength than a cutout.


Why not just use epoxy? It’s pretty easy to work with.

> Why not just use epoxy? It’s pretty easy to work with.

CA glue is easier for me to work with than epoxy and has done a fine job for me.


Thin CA will wick into the grain of thin veneers acting as a stabilizer. Epoxy is thicker and doesn't penetrate as deeply.

There are methods to get epoxy deeper, but they require significant equipment. Search for "stabilized wood" if you're curious.


I don't think that is true. I build and restore both wooden and fiberglass boats with epoxy, and have used it in almost every possible way. There are different thicknesses of epoxy with different properties, but the ones specially designed for penetrating deeply into wood, such as clear penetrating epoxy sealer, will indeed penetrate extremely deep; the manufacturer claims 9-16". In practice, almost any epoxy will penetrate at least 1" into wood.

If anything, epoxy often has too much penetration, and I end up doing a first coat or two that disappear fully into the wood, and another thickened one so it actually stays on the surface or joint.


Fingers change size, but wooden rings can't be stretched.

They can be sanded, just get a thick ring!

Yes, but that's generally not something you want to be doing the week before a wedding. It's _very_ easy to forget to do, and hard for the best man to run around and fix while you panic.

I had enough trouble SHINING MY SHOES. :)


Curious how these numbers correlate to the estimates of the engineers behind the PRs?

For example, the first PR is correlated with ~15 "hours of work for an expert engineer"

Looking at the PR, it was opened on Sept 18th and merged on Oct 2nd. That's two weeks, or 10 working days, later.

Between the initial code, the follow up PR feedback, and merging with upstream (8 times), I would wager that this took longer than 15 hours of work on the part of the author.

It doesn't _really_ matter, as long as the metrics are proportional, but it may be better to refer to them as isolated complexity hours, as context-switching doesn't seem to be properly accounted for.


Yeah, maybe "expert engineer" is the wrong framing and it should be "oracle engineer" instead. You're right that we're not accounting for context switching (which, to be fair, is not really productive, right?)

However ultimately the meaning isn't the absolute number but rather the relative difference (e.g. from PR to PR, or from team to team) - that's why we show industry benchmarks and make it easy to compare across teams!


What a fantastic read, thanks for posting!

Also: this is a great reminder that “history” is oft in the eye of the beholder.


Indeed, a fantastically written article!


This reminds me of a couple startups I knew running Node.js circa ~2014, where they would just restart their servers every night due to memory issues.

iirc it was mostly folks with websocket issues, but fixing the upstream was harder

10 years later and specific software has gotten better, but this type of problem is certainly still prevalent!


The author mentions 2011 as the time they switched from REST to RPC-ish APIs, and this issue was related to that migration.

Kubernetes launched in 2014, if memory serves, and it took a bit before widespread adoption, so I’m guessing this was some internal solution.

This was a great read, and harkens back to the days of managing 1000s of cores on bare metal!


Perfect for LLMs!

We use a html->markdown converter for web scraping and sites like this make it even easier/more robust.

Side note: bring back the RSS feeds?
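For illustration, here's a toy, stdlib-only sketch of the HTML -> Markdown idea. Real converters (pandoc, Turndown, etc.) handle nesting, lists, and escaping far more robustly; the point is just that the core is a tag-driven rewrite:

```python
from html.parser import HTMLParser


class MiniMarkdown(HTMLParser):
    """Toy HTML -> Markdown converter: headings, paragraphs, and links only."""

    def __init__(self):
        super().__init__()
        self.out = []
        self.href = None

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self.out.append("#" * int(tag[1]) + " ")
        elif tag == "a":
            self.href = dict(attrs).get("href", "")
            self.out.append("[")

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "h3", "p"):
            self.out.append("\n\n")
        elif tag == "a":
            self.out.append(f"]({self.href})")

    def handle_data(self, data):
        self.out.append(data)


def html_to_md(html: str) -> str:
    parser = MiniMarkdown()
    parser.feed(html)
    return "".join(parser.out).strip()
```

Anything the parser doesn't recognize falls through as plain text, which is exactly why this shape tends to be robust for LLM-bound scraping.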


Curious, can you recommend a HTML -> MD converter?


pandoc is pretty good https://pandoc.org/demos.html


D'oh, thanks — I use Pandoc plenty for MD -> HTML; I totally spaced that it can do the inverse. I was briefly looking at [Turndown](https://github.com/mixmark-io/turndown) for projects in the JS ecosystem.


Why not just use the HTML ....?


We have a “poor man’s” version of this running as a GitHub Action on our PRs[0].

It basically just takes the diff from the PR and sends it to GPT-4o for analysis, returning a severity (low/medium/high) and a description.

PRs are auto-blocked for high severity, but can be merged with medium or low.

In practice it’s mostly right, but definitely errs on the side of medium too often (which is reasonable without the additional context of the rest of the codebase).

With that said, it’s been pretty useful at uncovering simple mistakes before another dev has had a chance to review.

[0] https://magicloops.dev/loop/3f3781f3-f987-4672-8500-bacbeefc...
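The gating step in an action like this can be tiny. Here's a hypothetical sketch; the JSON shape and field names are assumptions for illustration, not the actual implementation:

```python
import json

# Hypothetical severity ladder; block only at the top rung by default.
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2}


def should_block(review_json: str, threshold: str = "high") -> bool:
    """Return True if the model-reported severity meets the block threshold."""
    review = json.loads(review_json)
    severity = str(review.get("severity", "medium")).lower()
    # Unknown or missing severities fall back to medium rather than
    # silently passing the check.
    return SEVERITY_ORDER.get(severity, 1) >= SEVERITY_ORDER[threshold]
```

A CI step would then fail the job when `should_block(...)` is true, leaving medium and low findings as non-blocking review comments.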


Looks cool!


My favorite saying: “simple is robust”

Similar in spirit to Lehman’s Law of Continuing Change[0], the idea is that the less complexity a system has, the easier it is to change.

Rather than plan for the future with extensible code, plan for the future with straightforward code.

E.g. only abstract when the situation requires it, encourage simple duplication, use monoliths up front, scale vertically before horizontally, etc.

I’ve built many 0-1 systems, and this is the common thread among all of them.

[0] https://en.m.wikipedia.org/wiki/Lehman%27s_laws_of_software_...


Sure, but when applying the "simple is robust" principle it is extremely important to also understand intrinsic complexity. Not handling edge cases, etc., does not make for robust code, no matter how much simpler it is.


This is where the advice in the article is excellent.

If you start with code that's easy to delete, it's often possible to alter your data representation or otherwise transform the problem in a way that simply eliminates the edge cases, with the result being a design that is simpler by virtue of being more robust.

If you start with code that's hard to delete, usually by the time you discover your edge and corner cases it's already too late and you're stuck solving the problem by adding epicycles.


Yes, but I definitely also see the opposite quite a bit: Somebody several layers down thought that something was an edge case, resolved it in a strange way, and now you have a chain of stuff above it dealing with the edge case because the bottom layer took a wrong turn.

The most common examples are empty collections: either disallowing them even though it would be possible to handle them, or making a strange choice like using vacuous falsity, i.e.

  all [] == False
(Just to illustrate what I mean by "vacuous falsity"; Python's all correctly returns True.)

Now, every layer above has to special-case these as well, even if they would be a completely normal case otherwise.
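A small Python illustration of the difference:

```python
# Vacuous truth: all() over an empty collection is True, so callers
# need no special case for "no items yet".
assert all(x > 0 for x in []) is True


# The "vacuous falsity" wrong turn: a bottom layer that disallows
# the empty case for no intrinsic reason.
def all_positive_broken(xs):
    if not xs:
        return False  # strange choice at the bottom layer...
    return all(x > 0 for x in xs)


# ...which forces every layer above to special-case the empty
# collection, even though it would be a completely normal case.
def report(xs):
    if not xs:
        return "ok"  # work around the layer below
    return "ok" if all_positive_broken(xs) else "bad"
```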


Your example perfectly illustrates oversimplification: an attempt to stuff a categorical variable into another of lower order. If a language has absence of value available as an expressible concept (nullability), then a list is at least a 3-way categorical variable: absence of value, empty list, non-empty list. Any attempt to stuff that into a binary truthy value will eventually leak one way or another.
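A minimal sketch of the three cases, and how a bare truthiness check conflates them:

```python
from typing import Optional


def describe(xs: Optional[list]) -> str:
    """Distinguish the three cases a bare `if xs:` check conflates."""
    if xs is None:
        return "absent"      # no value was ever provided
    if not xs:
        return "empty"       # a value was provided, but it has no items
    return "non-empty"

# A plain `if xs:` collapses "absent" and "empty" into one branch,
# which is exactly the kind of leak described above.
```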


Failing to account for this gives you Wayland (which at this time is more complex than X11)


Is it actually more complex?

I find it more understandable, it’s just that DEs need to write their own compositors.


X11 has plenty of warts, but Wayland has more.

Example: screenshot. X11: "please tell me the pixels in the root window". Wayland: "please tell me the extension number of the portals extension so I can open a portal to pipewire so I can get the pipewire connection string so I can connect to the pipewire server so I can ..."

Example: get window position on screen.

Example: set window title.

X11 is a protocol about managing windows on a screen. Wayland is a protocol about sending pixel buffers to an unspecified destination. All the screen-related stuff, which is integral to X11, is hard to do in Wayland with a pile of optional extensions and external protocol servers which do not interact.

X11 is also more standardized, de facto, because there are fewer server implementations, which is not just an accident but is by design.


X11 is far more inclined towards the idea of clean separation of policy and mechanism, which I think is becoming more and more evidently correct across the board of programming. When you start talking about libraries and layers, a policy/mechanism split is part of how to write layered code correctly:

base mechanisms that interpret the raw problem correctly (e.g. pixels on a screen, mouse position) -> some policy that is in some ways mechanism with slightly more abstraction (e.g. drawing shapes) -> some more policy-mechanism abstraction (e.g. windows) ...

until you get to your desired layer of abstraction to work at. This goes along with modularity, composition, code reuse. X11 itself has many design flaws, but Wayland's design is untenable.


X11's separation of policy and mechanism was a mistake. Maybe it made sense at the time - I don't know. GUIs were new at the time. Now that we know how they're supposed to work, the flag should really be called "I am a window manager" rather than "root window substructure redirect", and "I am a special window" (e.g. combobox drop-down) rather than "ignore substructure redirect" for example. (Even better, define some kind of window class flag so the window manager CAN see it and knows it's a combo box drop-down).


> and "I am a special window" (e.g. combobox drop-down) rather than "ignore substructure redirect" for example. (Even better, define some kind of window class flag so the window manager CAN see it and knows it's a combo box drop-down).

I think X11 has had that for a very long time. In the late 2000s when Beryl was still separate from Compiz, it was almost trivial to target things like dropdowns by a descriptive name and give them different effects than regular windows. Mine had an accordion effect while windows would burn up.


My point is that X is in the right direction more than Wayland is, in the spirit of its design, and major pain points of X are largely due to its specific design/implementation. Perhaps an outgrowth of having a lot of tiny policy-mechanism components is lack of standardization, which did strike X, but I think that's an orthogonal concern and not better served by overly large, inflexible components.


There will always be edge cases, and yes they will make the code more complicated, but what really helps is automatic testing to make sure those edge cases don't break when making changes.


Setting up automatic testing alone tends to add its own layer of complexity. At least it's worth it.


It doesn't have to be difficult. For example, when developing user interfaces I have a secret key combo for triggering the latest test, and another for running all tests. I make mock functions that trigger user interaction automatically, and I inline the tests so they sit next to the code being tested. I also place them inside comments so I can regexp-remove the tests for the release, because I don't want my program to be more than two MB; if you don't care about size you could just leave the tests in so they can be triggered by users in a prod environment as well. The problem with modern development is that the frameworks make everything more complicated. Ditch the leaky abstractions and increase your productivity 100x.


> encourage simple duplication

A rule I like to follow:

- first time: write it

- second time: copy it

- third time: maybe refactor it


All such rules seem designed for a person not engaging their brain.

Is this "the same" thing? If so - extract and reference. Or is it "a different" thing which is superficially similar? Then don't.

Knowing when two things are one thing or one thing is two things is most of our job, right?


DRY is a terrible, terrible principle because it's correct but requires programmers to make this decision, which they won't, because DRY has taught them that all duplication is bad. The flip side is what you're saying: there are simply things it wouldn't make sense to duplicate. I'm a strong advocate against basically every Clean Code principle, really anything which isn't YAGNI. That doesn't mean I think you should create datetime services every time you need them, or that you shouldn't make a "base" audit mixin/abstract when you want to add "created_at"… to your data model in your API.

I think a better way to look at it than “third time - consider refactor” is to follow this article and ask “will this ever need to be extended?”. If the answer is yes, then you should duplicate it.

This way you won’t get a flying dog in your OOP hellscape but you also won’t have to change your holiday service 9 million places when your shitty government decides to remove one of them (thanks Denmark). Between the two, I would personally prefer working on the one where I have to do the 9 million changes, but I would obviously prefer neither.


> Knowing when two things are one thing or one thing is two things is most of our job, right?

Yes, but often we don't know the domain well enough, and "this feature must be available yesterday". So add tests, copy, release. When you have to do it again, or have to update the copy and its original, you should know more and be able to refactor and give good names to everything.


Everything in balance. While I agree with this philosophy, I've also seen lots of duplicate bugs because it wasn't realized there was two copies of the same bug.


Agreed! I'll usually go one step further for early projects and lean towards 3rd time copy, 4th time refactor.

Example: So much early code is boilerplate CRUD, that it's tempting to abstract it. 9 times out of 10, you'll create a quasi-ORM that starts inheriting business logic and quickly grows omni-functions.

Eventually you may actually need this layer, assuming your system miraculously scales to needing multiple services, datastores, and regions.

However this doesn't just apply to the obvious, and you may find omni-logic that made a feature more simple once and is currently blocking N new features.

Code is cheap, especially today. Complexity necessarily constrains, for better or worse.


Hence I look at whether two pieces of code change together, as opposed to just looking the same.

If I need to introduce the same feature in multiple places in roughly the same way, that's a decent indication code wants to be the same and wants to change together. That's something to consider extracting.

Fixing the same bug in several places is a similar, but weaker indication. It's weaker, because a bug might also occur from using a framework or a library wrong and you do that in several places. Fixing the same business logic error in several places could mean to centralize some things.


It’s so easy to accidentally write an ORM or a database. I constantly stop and think; is this piece of code secretly a database?


change it, fix it, upgrade it.


+1, but I'm not sure if the "simple is robust" saying is straightforward enough? It opens up to discussion about what "simple" means and how it applies to the system (which apparently is a complex enough question to warrant the attention of the brilliant Rich Hickey).

Maybe "dumb is robust" or "straightforward is robust" capture the sentiment better?


Copy/paste is robust?

As a biomedical engineer who primarily writes software, it’s fun to consider analogies with evolution.

Copy/pasting and tweaking boilerplate is like protein-coding DNA that was copied and mutated in our evolutionary history.

Dealing with messy edge cases at a higher level is like alternative splicing of mRNA.


The usual metric is complexity, but that can be hard to measure in every instance.

Used within a team setting, what is simple is entirely subjective to that set of experiences.

Example: Redis is dead simple, but it's also an additional service. Depending on the team, the problem, and the scale, it might be best to use your existing RDBMS. A different set of circumstances may make Redis the best choice.

Note: I love "dumb is robust," as it ties simplicity and straightforwardness together, but I'm concerned it may carry an unnecessarily negative connotation to both the problems and the team.

Simple isn't necessarily dumb.


Dull?


Indeed, simple is not a good word to qualify something technical. I have a colleague and if he comes up with something new and simple it usually takes me down a rabbit hole of mind bending and head shaking. A matter of personal perspective?


Is my code simple if all it does is call one function (that's 50k lines long) hidden away in a dependency?

You can keep twisting this question until you realize that without the behemoths of complexity that are modern operating systems (let alone CPUs), we wouldn't be able to afford the privilege to write "simple" code. And that no code is ever "simple", and if it is it just means that you're sitting on an adequate abstraction layer.

So we're back at square one. Abstraction is how you simplify things. Programming languages themselves are abstractions. Everything in this discipline is an abstraction over binary logic. If you end up with a mess of spaghetti, you simply chose the wrong abstractions, which led to counter-productive usage patterns.

My goal as someone who writes library code is to produce a framework that's simple to use for the end user (another developer). That means I'm hiding TONS of complexity within the walls of the infrastructure. But the result is simple-looking code.

Think about DI in C#, it's all done via reflection. Is that simple? It depends on who you ask, is it the user or the library maintainer who needs to parametrize an untyped generic with 5 different type arguments?

Obviously, when all one does is write business logic, these considerations fall short. There's no point in writing elegant, modular, simple code if there's no one downstream to use it. Might as well just focus on ease of readability and maintainability at that point, while you wait for the project to become legacy and die. But that's just one particular case where you're essentially an end user from the perspective of everyone who wrote the code you're depending on.


Can’t upvote enough. Too much dogshit in software is caused by solving imaginary problems. Just write the damn code to do the thing. Stop making up imaginary scaling problems. Stop coming up with clever abstractions to show how smart you are. Write the code as a monolith. Put it on a VM. You are ready to go to production. Then when you have problems, you can start to solve them, hopefully once you are cash positive.

Why is your “AirBnb for dogs” startup with zero users worrying about C100K? Did AWS convince you to pay for serverless shit because they have your interests in mind, or to extract money from you?


I am not sure about that. But I am certain the article Amazon published, on cutting an AWS bill by 90% by simplifying juvenile microservices into a dead simple monolith, was deleted by accident.


You can't wish the complexity of business logic away. If it is vast and interconnected, then so is the code.


Idea: select a data center by default (e.g., us-east-1) to make it more clear.

Bonus: select the nearest data center based on the user’s IP :)
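A hedged sketch of the bonus idea: however you obtain per-region latency estimates (GeoIP lookup, probe requests, etc.), the selection itself is just a minimum over candidates with a default fallback. Region names here are illustrative:

```python
def nearest_region(latencies_ms: dict[str, float],
                   default: str = "us-east-1") -> str:
    """Pick the region with the lowest measured latency.

    Falls back to a sensible default when no measurements are available,
    which doubles as the "select a data center by default" behavior.
    """
    if not latencies_ms:
        return default
    return min(latencies_ms, key=latencies_ms.__getitem__)
```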


Nitpick detail: us-east-1 (and every other region) is not a single datacenter by definition. A region spans several availability zones, and an availability zone can itself span several datacenters.

