Hacker News | gmfawcett's comments

Sure, it's basically JSON with semicolons... that's used to define a graph of nested lambdas which are evaluated recursively until a fixpoint representing the stable configuration is reached. Also, you can add comments, which makes it nicer than JSON!
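The "nested lambdas evaluated to a fixpoint" idea can be sketched in a few lines. This is a hypothetical Python model of the evaluation strategy, not the actual implementation: each attribute is a thunk that may refer to the final, fully-evaluated configuration, and `fix` ties the knot.

```python
# Hypothetical sketch of fixpoint-style config evaluation (Nix-like).
# f takes the *final* config ("self") and returns a dict of thunks;
# evaluation terminates because attribute references are lazy.
def fix(f):
    class Self:
        def __getattr__(self, name):
            return attrs[name]()  # force the thunk on demand
    self = Self()
    attrs = f(self)
    return {k: v() for k, v in attrs.items()}

config = fix(lambda self: {
    "host": lambda: "example.org",
    "port": lambda: 8080,
    "url":  lambda: f"http://{self.host}:{self.port}",
})
# config["url"] == "http://example.org:8080"
```

The point of the lazy thunks is that `url` can mention `self.host` even though `host` is defined "alongside" it; the fixpoint makes the whole attribute set available to each member.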


Remember: if you're not certain that your matrix meets the prerequisites for applying the Schur decomposition, you can always apply the Unschur decomposition instead.


Found the dad :)


The undad


Bravo


Whenever someone mentions building a billing system, I feel compelled to share Mark Dominus' delightful article, "Moonpig: a billing system that doesn't suck":

> Sometimes I see other people fuck up a project over and over, and I say “I could do that better”, and then I get a chance to try, and I discover it was a lot harder than I thought, I realize that those people who tried before are not as stupid as I believed. That did not happen this time. Moonpig is a really good billing system. It is not that hard to get right. Those other guys really were as stupid as I thought they were.

https://blog.plover.com/prog/Moonpig.html


> does not require the strict discipline required in a monolith

How so? If your microservices are in a monorepo, one dev can spread joy and disaster across the whole ecosystem. On the other hand, if your monolith is broken into libraries, each one in its own repo, a developer can only influence their part of the larger solution. Arguably, system modularity has little to do with the architecture, and much to do with access controls on the repositories and pipelines.


> Arguably, system modularity has little to do with the architecture, and much to do with access controls on the repositories and pipelines.

Monoliths tend to be in large monolithic repos. Microservices tend to get their own repo. Microservices force an API layer (defined module interface) due to imposing a network boundary. Library boundaries do not, and can generally be subverted.

I agree that modularity has nothing to do with the architecture, intrinsically; it's simply that people are pushed towards modularity when using microservices.


People make this argument as though it's super easy to access stuff marked "private" in a code base--maybe this is kind of true in Python but it really isn't in JVM languages or Go--and as though it's impossible to write tightly coupled microservices.

The problem generally isn't reaching into internal workings or coupling; the problem is that fixing it requires you to consider the dozens of microservices that depend on your old interface that you have no authority to update or facility to even discover. In a monolith you run a coverage tool. In microservices you hope you're doing your trace IDs/logging right, that all services using your service used it in the window you were checking, and you start having a bunch of meetings with the teams that control those services to coordinate the update.

That's not what I think of when I think of modularity, and in practice what happens is your team forks a new version and hopes the old one eventually dies or that no one cares how many legacy microservices are running.
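The "kind of true in Python" aside is easy to demonstrate: Python's double-underscore attributes get name-mangled, which is a convention rather than an access control, so a library boundary can be subverted from outside. The class below is a made-up example, not from any real code base.

```python
# Name mangling is a deterrent, not a barrier: __rate becomes
# _Billing__rate, which any caller can still reach.
class Billing:
    def __init__(self):
        self.__rate = 0.07  # "private" by convention only

b = Billing()
print(b._Billing__rate)  # the mangled name is reachable from outside
```

By contrast, subverting `private` on the JVM requires reflection plus module/access-opens configuration, and Go simply won't compile a reference to an unexported identifier from another package.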


I was there, and I definitely could do those things.


Yes! Tatham's collection is wonderful. The puzzle that he calls Signpost is one of my absolute favorites. Impressive from a programming point of view, too:

> All of them run natively on Unix (GTK) and on Windows. They can also be played on the web, as Java or Javascript applets.

I use the Android version: https://play.google.com/store/apps/details?id=name.boyle.chr...


> Become an expert in something hard. Fix bugs nobody else can.

This is risky advice. :) Context matters. It's going to sound very appealing to a self-motivated, mobile IC, and could work very well in orgs that value individual effort. But in many orgs, solo experts playing non-fungible roles are very big red flags, and can draw negative management attention. Maybe I could build a team around you... but I might not have the resources, or the problem might not warrant it. So I just might end up making your work life harder -- mitigating the risk by devaluing the service you're working on, just moving it to SaaS, or maybe keeping it but making you train up all your colleagues on the risk area (turning you into an unhappy tutor), etc.


It might be better to phrase it as an architectural expert. Being in a position where you understand how everything works together better than anyone else is valuable. You can't necessarily train any other person to have such diverse knowledge, nor outsource it. This might lead to you being able to solve otherwise unfixable bugs, but that's more of a side effect.


Maybe. I get that being the best source of information might be a job-security feature. But it doesn't necessarily translate into growth opportunities. From a manager's POV, I might view you as a hoarder of information, a silo builder, a poor communicator, or a poor team player. You might get job security out of this, because your knowledge will cement you in place. Why would we promote the linchpin out of the system they are holding together? Or it might just draw scrutiny to the larger business risks of your system, as I mentioned earlier.

If you could leverage your knowledge in a business-strategic way -- e.g., empower a team to work better by clarifying, simplifying, and standardizing the architecture -- you might draw attention as a future leader. (If you have a selfish boss who will steal credit, a little more political work might be needed alongside this.)


> if a computer could perform operations on a linked list as quickly and with as little memory as it could for an array, nobody would ever choose to use arrays

You can start from a false premise, and draw any conclusion you like. If a computer could perform operations on a linked list as quickly and with as little memory as it could for an array, purple unicorns would be grazing on my lawn right now. :)


My assertion is that linked lists map BETTER to the human and WORSE to the computer. The thought experiment is simply a way to prove that fact, and is not a premise, let alone a false one.

If arrays and linked lists had the same performance characteristics, we'd pick linked lists every time because they map better to how humans think about problems while arrays map better to how computers think about problems.

Thus we can conclude the exact opposite of the assertion I was responding to. People naturally gravitate toward linked lists, then get forcibly re-educated to use arrays because that's what makes the computer happy.


> My assertion is that linked lists map BETTER to the human

That's your assertion, but you didn't back it up with anything other than saying you can insert into the middle of a list (which is rare to need and can be done with a sorted map).

> Thus we can conclude the exact opposite of the assertion I was responding to

You can't conclude anything from someone making the same assertion with no evidence over and over.

Pragmatically what people use over and over are arrays and hash maps. There's a reason perl, python, lua and javascript all thrived by having arrays and hash maps built in to the core of the language.

If you want to loop through something or index by an offset you use an array. If you want to access by key, you use a hash map. In the rare scenario you need to insert into the middle of a specific order, use a sorted map. Modern programming has no place for basic linked lists of granular data and they aren't missed in any sort of real scenario.
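The three idioms above can be shown with nothing but the Python standard library; since Python has no built-in sorted map, `bisect.insort` on a list stands in for the "rare" sorted-insert case.

```python
import bisect

xs = [10, 20, 40]           # array: loop through, or index by offset
assert xs[1] == 20

m = {"alice": 1, "bob": 2}  # hash map: access by key
assert m["bob"] == 2

bisect.insort(xs, 30)       # keep a specific order without a linked list
assert xs == [10, 20, 30, 40]
```

Note that `insort` still pays an O(n) shift on the underlying array, but for the sizes where this pattern comes up in practice, the cache-friendly shift usually beats chasing pointers through a linked list anyway.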

If you can back up what you're saying with any sort of evidence or a specific scenario, go ahead.


> Does the CL spec require the implementation to use linked lists or merely to behave as if it were a linked list?

"list: a chain of conses in which the car of each cons is an element of the list, and the cdr of each cons is either the next link in the chain or a terminating atom." -- https://www.lispworks.com/documentation/HyperSpec/Body/26_gl...

Interpret that as you will. But what "world of difference" optimizations are you suggesting here? Give an example of a significant optimization at the implementation level that preserves cons semantics.
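For readers following along, the glossary definition can be modeled directly: a chain of conses where each car holds an element and each cdr holds either the next cons or a terminating atom. This is a hypothetical Python sketch of the semantics, not any Lisp implementation's actual memory layout.

```python
# Minimal model of the HyperSpec glossary definition of "list".
class Cons:
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

NIL = None  # the terminating atom of a proper list

def to_pylist(chain):
    # Walk the cdr links until the terminating atom.
    out = []
    while chain is not NIL:
        out.append(chain.car)
        chain = chain.cdr
    return out

lst = Cons(1, Cons(2, Cons(3, NIL)))
# to_pylist(lst) == [1, 2, 3]
```

Any optimization claiming to preserve cons semantics has to keep every one of those individual cons cells observable and independently mutable (via rplaca/rplacd), which is exactly what makes the question in the parent comment a hard one.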


So sidju says to themselves, hey, let's add syntax highlighting to ed. Kids, this is how you break the eternal seals on the hellmouth, and is also why we can't have nice things.


The greatest crime is adding a `help` command and printing errors by default. Fools who stumble into `ed` unprepared should find no solace, it is a crime against the world itself to offer them aid and escape.


So say we all.

(And in case there were any doubts, I applaud your hellmouth seal-breaking invention. Scratch those itches, eternity will take care of itself!)


That’s the same as bat:[1] one of the features is syntax highlighting. Kind of unexpected to find a concatenation program… which also does that.

[1] https://github.com/sharkdp/bat


You wouldn't believe how often I run `bat` to quickly read some source code. It is nearly constantly open on my computer.

It is just so easy to run, easy to use, can open multiple files and is so readable. Why would I open an editor when I'm already in a terminal and it's right there?


I believe you. :) That’s, I guess, what most of us do: quickly look at a single file. Not really concatenating multiple files.

So the feature makes perfect pragmatic sense.


I wrote a little script for that, and I am surprised now at how often I use it instead of opening an editor:

https://drive.google.com/file/d/1l13UdgRJy2XvEzI4bYYFU09MW20...


I use `cat $(which ...)`. Makes sense to have a dedicated script for it. Cheers
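A dedicated script in that spirit might look like the following. This is a hypothetical sketch (the function name is made up), wrapping the `cat $(which ...)` idea with an error message and falling back from `bat` to `cat` when `bat` isn't installed.

```shell
# Hypothetical helper in the spirit of `cat $(which ...)`:
# print the source of a command found on PATH.
sourcecat() {
    cmd=$(command -v -- "$1") || { echo "not found: $1" >&2; return 1; }
    if command -v bat >/dev/null 2>&1; then
        bat -- "$cmd"    # syntax-highlighted if bat is available
    else
        cat -- "$cmd"    # plain fallback
    fi
}
```

Using `command -v` instead of `which` keeps it POSIX and avoids `which`'s inconsistent exit codes across systems.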

