
You may be interested in a paper which appeared at SIGMOD: https://dl.acm.org/doi/10.1145/3639257

As of a few years ago (not sure about now), the backtrace frame info for anonymous functions was far worse than for ones defined via the function keyword with a name.

(As the article claims) even with computed goto, register assignment of the most frequently used variables is fragile because the CFG of the function is so complicated.

Register assignment is much less fragile when each function is small and takes the most important variables as arguments.


It's also fragile, in a different way, if you're threading state through tail calls.

In my experience writing computed goto interpreters, this isn't a problem unless you have more state than what can be stored in registers. But then you'd also have that same problem with tail calls.


Fallback paths most definitely have more state than what can be stored in registers. Fallback paths will do things like allocate memory, initialize new objects, perform complicated fallback logic, etc. These fallback paths will inevitably spill the core interpreter state.

The goal is for fast paths to avoid spilling core interpreter state. But the compiler empirically has a difficult time doing this when the CFG is too connected. If you give the compiler an op at a time, each in its own function, it generally does much better.
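
A minimal sketch of what I mean (hypothetical names; [[clang::musttail]] is the clang spelling, and gcc would rely on sibling-call optimization at -O2 instead):

```c++
#include <cstdint>

struct Inst;
using Handler = int64_t (*)(const Inst* pc, int64_t* sp, int64_t acc);

struct Inst {
    Handler handler;   // handler for this op, resolved up front
    int64_t operand;
};

// Each handler keeps pc/sp/acc in argument registers and hands off to the
// next op with a guaranteed tail call, so nothing is spilled between ops.
#define DISPATCH(pc, sp, acc) \
    [[clang::musttail]] return (pc)->handler((pc), (sp), (acc))

int64_t op_push(const Inst* pc, int64_t* sp, int64_t acc) {
    *sp++ = acc;           // push the old accumulator onto the value stack
    acc = pc->operand;     // load the immediate
    DISPATCH(pc + 1, sp, acc);
}

int64_t op_add(const Inst* pc, int64_t* sp, int64_t acc) {
    acc += *--sp;          // pop and add
    DISPATCH(pc + 1, sp, acc);
}

int64_t op_halt(const Inst*, int64_t*, int64_t acc) { return acc; }

int64_t run() {
    const Inst prog[] = {{op_push, 2}, {op_push, 3}, {op_add, 0}, {op_halt, 0}};
    int64_t stack[16];
    return prog[0].handler(prog, stack, 0);  // evaluates 2 + 3
}
```

Because every handler has the same signature and the dispatch is a tail call, the compiler can keep pc/sp/acc pinned to the same argument registers across every op.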


I get that and that’s also been my experience, just not for interpreters.

In interpreters, my experience is that fallback paths are well behaved if you just make them noinline and then ensure that the amount of interpreter state is small enough to fit in callee save regs.
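
As a rough illustration (a toy, not any particular interpreter): the hot state is a couple of locals, and the rare path is marked noinline so its spills stay out of the dispatch loop.

```c++
#include <cstdint>
#include <cstdio>

// Rare, complex work forced out of line so its register pressure and spills
// stay in this function instead of leaking into the dispatch loop.
__attribute__((noinline))
static int64_t add_slow_path(int64_t a, int64_t b) {
    // Imagine boxing, allocation, error reporting, etc. here.
    std::fprintf(stderr, "overflow: %lld + %lld\n", (long long)a, (long long)b);
    return 0;
}

int64_t run(const uint8_t* code, int64_t acc, int64_t x) {
    static void* labels[] = {&&do_add, &&do_halt};  // opcode 0 = add, 1 = halt
    int64_t sum;
    goto *labels[*code++];

do_add:
    // Fast path: acc and x stay in registers; the only call is the cold one.
    if (__builtin_add_overflow(acc, x, &sum)) {
        acc = add_slow_path(acc, x);   // rare, noinline
    } else {
        acc = sum;
    }
    goto *labels[*code++];

do_halt:
    return acc;
}
```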


Mike Pall makes an argument that interpreters are especially susceptible to this problem, and I find it convincing, since it matches my experience: https://web.archive.org/web/20180331141631/http://article.gm...


There are a bunch of arguments in there that don't match my experience, which includes the JSC interpreter. JSC had an interpreter written in C++ and one written in assembly, and the main reason for using the assembly one was not raw perf - it was so the interpreter knows the JIT's ABI for fast JIT<->interpreter calls.

Mike's argument about control flow diamonds being bad for optimization is especially questionable. It's only bad if one branch of the diamond uses a lot more registers than the other, which as I said, can be fixed by using noinline.


Exactly. Computed goto helps with branch prediction, but does not help with register allocation and other compiler optimizations.


As I mentioned in another part of the thread - the way you get that under control in a computed goto interpreter (or a switch loop interpreter) is careful use of noinline.

Also, it probably depends a lot on what you’re interpreting. I’ve written, and been tasked with maintaining, computed goto interpreters for quite a few systems and the top problem was always the branches and never the register pressure. My guess is it’s because all of those systems had good noinline discipline, but it could also just be how things fell out for other reasons.


There is also a difference between suggesting a price to an uninformed, individual participant vs. highly informed participants that control large segments of the market.


Re: the last paragraph, C++ has temporary materialization — space for temporary objects is not actually “allocated” until the object needs storage (commonly when it binds to a reference). The problem is that push_back takes by rvalue reference, which forces a materialization of the temporary. I don’t see a way around forcing materialization because taking a reference requires some uniform representation for the reference — the temporary could have been constructed by any one of its possibly many constructors. Forwarding arguments makes the parameters explicit, and the language already has support for this, so I don’t see a huge need for adding more magic.
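
A toy example of the difference, with a made-up Widget type:

```c++
#include <string>
#include <utility>
#include <vector>

struct Widget {
    std::string name;
    int id;
    Widget(std::string n, int i) : name(std::move(n)), id(i) {}
};

int main() {
    std::vector<Widget> v;
    // push_back(Widget&&) binds a reference, so the prvalue must be
    // materialized into a temporary first, then move-constructed into place.
    v.push_back(Widget{"a", 1});
    // emplace_back forwards the constructor arguments; the element is
    // constructed directly in the vector's storage, with no temporary Widget.
    v.emplace_back("b", 2);
}
```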


Is this true with respect to the ABI? I get how a temporary is blown away at the end of a statement with a `;` if it wasn't bound to a reference that extends its lifetime, but wouldn't the code need to allocate a return slot regardless?

What I'm thinking of would be

```c++
struct MyBigClass { /* lots of members */ };

MyBigClass makeABigOne();

auto main() -> int {
    // We still need to construct a return slot here, even though we don't use the value, right?
    makeABigOne();
}
```

Perhaps this is tangential, but I'm wondering if maybe I'm missing a subtlety based on what you mean by "allocated" (in the abstract machine, or in the ABI).


Apparently temporary materialization also occurs “when a prvalue appears as a discarded-value expression.” https://en.cppreference.com/w/cpp/language/implicit_conversi...

So yes, storage for the (to-be discarded) object must be allocated, and the object is constructed into that storage.

I don’t know enough about the ABI to comment about the last point.


Generally no. They could refund their donations and ask their donors to donate to the new campaign. Or they could donate to a super PAC.


As the problems become harder, you can’t just Google for solutions. Really great engineers often build things that nobody has ever built before — or at least not documented how they built it publicly. If you don’t have fluency in the fundamentals, you won’t be able to piece together the parts that you need to build novel systems.

Second, part of hiring junior engineers is evaluating their growth prospects — e.g. new grads are often completely unproductive for up to a year, and firms make large investments when hiring them (maybe up to $200,000 in mentorship and wages). People with the attitude “I don’t need to learn/understand things, I can just Google them” are unlikely (IMO) to reach that level of seniority.


In my experience, it's very rare that you're in a job that requires you to come up with a solution to a problem no one has ever dealt with before. Custom solutions are often a sign the engineers in question didn't do the appropriate research to find the standard solution for the problem.

I've been a software developer for 10 years, and I've never worked on a problem that someone else hadn't come up with a solution for somewhere. And if they haven't, alarm bells go off as to why I'm the first to do this, and where down the pipeline I deviated so horrifically from the norm.


I strongly agree with this. I worked on low level algorithms in bioinformatics circa 2010. Writing mapping algorithms and variant detection in C/C++. Most/all of what we did was adapt known compression and database algorithms. The "best" aligner is still BWA (Burrows-Wheeler Aligner), which uses the Burrows-Wheeler Transform, popular in a lot of compression utilities.


Could you please give a firsthand account of an instance when a great engineer built a novel solution? I feel NIH syndrome is a far more common cause of building things from the ground up.


I've seen it at least ~10ish times in my pretty short career. I think you're maybe imagining someone building, like, "Linux from scratch". Novel solutions don't have to be that big; they just have to be novel.

Someone I worked with once went off on their own and implemented a test framework that solved a lot of the problems we'd been having. They could have just written tests the normal way; they did it a different way; it was great. Someone else made a debugging tool that hooked into Python's introspection abilities and made flame graphs of the stack traces. Not exactly groundbreaking science, but it was entirely "innovative" in the sense that no one expected or wanted that tool, yet it solved a real issue. Someone else made a JS library that abstracted out the problem of organizing these dynamic workflows on an internal-facing tool. Small, but novel, and it organized the ideas in a way that made it possible to build on the abstractions later. For my part, we had this chunk of business logic that was a pain to debug, and I had the thought to factor it out into a standalone library that was completely testable at the interface. Not groundbreaking, but no one had thought to do it, and it immediately obsoleted the earlier issues. Etc.

If your job is anything more complicated than "feature implementation", there are chances for innovation left and right, and good engineers see and pursue them.


An engineer on the Search team at Google designed some novel way to serialize parts of the search index so that it could be updated more easily.


> As the problems become harder ...

What percentage of engineers are working on truly hard technical problems?

I can only speak from experience but the vast majority of us are doing the same shit with a different name signing the checks.

The world doesn't need millions of brilliant engineers. It needs construction workers that can assemble materials.

I am fatigued by every tech bro in the industry that thinks they need to find the next genius with their ridiculous hiring process.


I’ve come up with some of the core solutions in my org to solve massive big data problems and had to depend on intuition and theory instead of the web. I still failed a merge sort whiteboard challenge in an interview. Some people just can’t deal with these inane questions in an artificial environment.


https://www.youtube.com/watch?v=zqmOSMAtadc

Related video from Practical Engineering on rail thermal expansion.


Was just thinking of this.

Here's a video on thermite welding to go with it: https://youtu.be/5uxsFglz2ig


There is a big difference between one VCS handling 1,000 QPS and 1,000,000 VCS instances each handling 0.001 QPS. A system built for one is not necessarily suitable for the other.


See my response to bananapub below.


Data in Piper (actually, even in SrcFS…?) is stored forever. So the cost of a “small” amount of data like a few gigabytes can add up over time (storage, replication, transmission, etc.).

