The words "virtual machine" and "interpreter" are mostly interchangeable; they both refer to a mechanism to run a computer program not by compiling it to machine code, but to some intermediate "virtual" machine code which will then get run. The terminology is new, but the idea is older, "P-code" was the term we used to use before it fell out of favor.
Sun popularized the term "virtual machine" when marketing Java instead of using "interpreter" or "P-code", partly for marketing reasons (VMware had just come on the scene and was making tech headlines), but also to get away from the perception that classic interpreters are slower than native code, since Java had a JIT compiler. Just-in-time compilers that compile to the host's machine code at runtime were well known in research circles at the time, but were much less popular than the dominant execution models of the day: the "AST interpreter" and the "bytecode interpreter".
There might be some gatekeepers who insist that "interpreter" means AST interpreter (not true of the Python interpreter, for instance), or that a VM is always JIT-compiled (not true of Ruby, which calls its bytecode-based MRI "RubyVM" in a few places), but you can ignore them.
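To make the "bytecode interpreter" model above concrete, here's a minimal sketch in C. The opcode set, names, and the tiny expression are invented for illustration and aren't taken from any real VM; the point is just that the program is data and a dispatch loop executes it.

```c
/* Minimal bytecode interpreter sketch: a hypothetical instruction set,
 * a stack, and a dispatch loop. Real VMs (CPython, the JVM, ...) are
 * far richer, but the shape is the same. */
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

static void run(const int *code) {
    int stack[64];
    int sp = 0;                      /* stack pointer */
    for (int pc = 0; ; ) {           /* program counter */
        switch (code[pc++]) {
        case OP_PUSH:  stack[sp++] = code[pc++]; break;
        case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
        case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp - 1]); break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* "Compiled" form of the expression (2 + 3) * 4. */
    int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
                      OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT };
    run(program);                    /* prints 20 */
    return 0;
}
```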
You could contribute to a goal that they care about, or give them something they need, or help find somebody who will. Or you can promise to do so in the future.
Using "value" as a medium is problematic because we increasingly don't value the same things. It worked ok back when food took so much effort to grow that securing it represented a significant portion of our mindshare. Then money was reliably a proxy for our shared agreement that food was good.
But now that it's so much easier to make the necessities we agree on, we spend more time pursuing contradictory outcomes. Which is more valuable: that my endeavor succeeds and yours fails, or vice versa? Whose agenda am I furthering when I decide to value a dollar? It's hard to participate in, because I can never figure out whether I'm hurting or helping.
Better would be to let people be explicit about what they want, so that we can find the things that have consensus and work towards those. As it is, we're letting the ownership of scarce abstractions determine what gets done, which is just bonkers. That was once the best we could do, with only the laws of physics at hand to enforce the rules (re: the scarcity of gold), but now we can do better.
I'm much more inclined to believe these guys have something to actually deliver but that something will be a lot less exciting to the average HN reader than the vague marketing implies.
Basis also drops quality on the floor by targeting subsets (ETC1S and UASTC) in hopes of having a "common transcodeable subset". In practice, a number of runtime transcoders seem to just decode to RGBA and then fully re-encode anyway, but on the CPU.
LeetCode seems to agree with your definition [0]. The meme isn't that inverting a binary tree is particularly difficult - anyone familiar with coding challenges and trees could trivially produce a solution. The meme is pointing out how ludicrous it is that senior/staff/principal interviews can hinge on these types of problems, even when the engineer has already proven their proficiency by doing something like running DOOM in TypeScript types or writing Homebrew [1].
I think those challenges (especially leetcode) are heavily misused.
When my team conducts technical interviews, we do ask for a couple of simple programming solutions - but we ask because we want to hear the candidate talk through them and see what their problem-solving process is like.
If you aren't evaluating based on conditions like those, I don't really see the value of coding questions.
I agree with this. I got to experience both sides when I interviewed at FB/Meta. I practiced the leetcode and Cracking the Coding Interview stuff of course, and one of my interviewers asked something like that. I guess it was insulting and pointless but whatever - I just did it.
Another interviewer asked a much more interesting question: you are writing an IM client. How do you build the client-server communication?
That was a great conversation I enjoyed on its own, without regard for the interview. I started by asking questions: do we have online/offline status? (yes) What are the target devices? (mobile)
IIRC I said I'd want to optimize for bandwidth and latency. Cellular networks can be spotty at times and stall out in really annoying ways. I'd design the protocol to use an efficient encoding with a pre-shared dictionary (the list of friends doesn't change that much, after all, and lots of the same words/emoji are used frequently). I also said I'd make a flexible format that would let things like online/offline status, or "you have a new message from X", ride along with an actual message in the current conversation, and that I'd explore options like QUIC or other UDP-based eventually-consistent approaches, given how a cellular dead band can put you in TCP retransmit jail for minutes at a time.
For closure I was offered a position but went to a different company.
> If you aren't evaluating based on conditions like those, I don't really see the value of coding questions.
The way I think about it, you're really trying to evaluate a candidate on about 10 different metrics all at once. Metrics like programming skill (writing & debugging), communication skills (listening and explaining), capacity to learn, domain knowledge (e.g. if you're hiring a React dev, do they know HTML & React?), likeability, and so on.
A good interview gives the candidate the chance to show their worth in all of those different areas. But time is limited - so you want some different challenges which will show many capabilities at once.
Asking a candidate to talk through how they'd solve a "leetcode problem" sort of does that - you can see their CS knowledge and their communication skills. But if that's all you ask, you end up overemphasising the candidate's CS knowledge. Most people aren't very good at thinking and talking at the same time. And you don't learn about other stuff. How good are they at debugging? At reading code? Do they have domain knowledge? Can they talk to clients? Are they good at design? It's also quite easy for the interviewer to be distracted by the question of whether or not the candidate solved the problem you gave them - which isn't really what anyone is there for.
As part of a larger interview, and especially for systems engineering roles, I think they're still fine questions to ask. But if that's the entire job interview, it's a bad interview - because it won't let you evaluate a candidate properly. Especially in product roles where CS knowledge isn't very relevant anyway.
Our technical questions typically stay within the realm of the position we are hiring for, so technical usually revolves around “would you use X or Y in this scenario? Why?”
Understanding how someone thinks is more core to evaluating candidates, so questions like “let’s say you own a window washing company and you’ve been hired to wash every window on every skyscraper in New York City - how do you do it?” provide a much better insight into how someone goes about approaching a challenge.
A coworker has a simple diagram they use outlining a tech stack: backend, cache, frontend, and they give a brief overview of the “application.” Then they explain that there’s a big report that “customer says X isn’t working - how would you approach fixing this?” It’s less about technical details and more, again, about how they would approach finding the issue.
This is absolutely the way. My interviews are conversations with someone that I want to work closely with, and while leet code might be an interesting lunch conversation it’s not going to be part of any of our day to day work (c/c++/swift/obj-c)
Maybe the catch was in saying that the left/right labels are arbitrary - they could just as well be called node1 and node2 - so inverting is not necessary per se: just visit in node2, node1 order if the tree needs to be flipped, i.e. no physical rearrangement is necessary.
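A quick C sketch of that distinction (the node layout and function names are my own, purely for illustration): one version physically swaps the children, the other leaves the tree alone and just walks the children in the opposite order.

```c
#include <stdio.h>

struct node {
    int value;
    struct node *left;   /* could just as well be called node1 */
    struct node *right;  /* ...and node2 */
};

/* Physical inversion: recursively swap the two child pointers. */
void invert(struct node *n) {
    if (!n) return;
    struct node *tmp = n->left;
    n->left = n->right;
    n->right = tmp;
    invert(n->left);
    invert(n->right);
}

/* "Logical" inversion: no rearrangement, just a mirrored traversal. */
void print_mirrored(const struct node *n) {
    if (!n) return;
    print_mirrored(n->right);  /* visit node2 first... */
    printf("%d ", n->value);
    print_mirrored(n->left);   /* ...then node1 */
}

int main(void) {
    struct node a = { 1, NULL, NULL }, c = { 3, NULL, NULL };
    struct node root = { 2, &a, &c };
    print_mirrored(&root);     /* prints: 3 2 1 */
    printf("\n");
    invert(&root);             /* now an ordinary in-order walk gives 3 2 1 too */
    return 0;
}
```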
Also the best answer to "how do you reverse an array". You don't. You just read it in the opposite order. Especially in any iterator-based language it should be trivial.
In a pure ASCII world, this doubles as "how do you reverse a string". In a Unicode world, the answer to "how do you reverse a string" is "you should never want to do that".
I can't remember the last time I reversed an array. Honestly it's at least a code smell. Why can't your code just read it in the other direction? Or have it in the correct order in the first place (e.g., sort it correctly instead of sorting it backwards and then reversing it). It's not that hard, even in a C-style for loop language.
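To the "even in a C-style for loop language" point, a minimal sketch (the array contents and names are just made up for illustration): no reversal needed, the index simply runs from the end to the start.

```c
#include <stdio.h>

int main(void) {
    int items[] = { 10, 20, 30, 40 };
    size_t n = sizeof items / sizeof items[0];

    /* Read the array back to front; the i-- > 0 form is safe for unsigned i. */
    for (size_t i = n; i-- > 0; )
        printf("%d\n", items[i]);

    return 0;
}
```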
Because usually the code to handle it is not mine and it expects to go through the list in forward order. Nobody makes APIs that take a list of, idk, files to delete and lets you pass in a parameter that is “please actually run your for loop over this in reverse”. They expect you to flip your array before you hand it in.
That's what it means, and people use it as an example not because it's like, some sort of super difficult unreasonable challenge, but because it's completely unrelated to the work you'd be doing on the job like 99.99% of the time. It's like interviewing for a line cook and asking them to make a spatula.
Depends on how you look at it, I guess. A binary tree has a couple of properties. Usually there is some ordering on the element type, something like: the element on the left is smaller than the element on the right (e_L < e_R). Inverting would then be reversing the ordering, so that e_L > e_R. I guess this is the "exchange the left and right branches" answer.
If you see a binary tree T over elements e as some kind of function, it can test whether an element exists, a typical set operation. So f : e -> {0,1}, where (e,1) means the element is in the binary tree and (e,0) means it is not. All those (e,0) pairs describe some sort of complement tree, which might also be seen as inverting it.
What would be really weird is seeing it as a directed acyclic graph and inverting the direction of every edge.
metal-tt/metal-nt require you to specify the exact architecture, and that's not forward-compatible unless you update your application for every new device release. Even minor SKU revisions like applegpu_g13p/applegpu_g13g/applegpu_g13s/applegpu_g13c are different compilation targets, and that doesn't help you when Apple releases applegpu_g14.
Microcode was only used for the RSP, which was a modified MIPS coprocessor and could only realistically be used for T&L. The RSP then sends triangles to the RDP for rasterization, the pixel pipeline, and blending, all of which are fixed-function, admittedly with some somewhat flexible color combiner stuff.
I appreciate the correction. Still, programmable T&L was kind of a big deal. PC GPUs didn’t get hardware T&L until the DX7 era, and it didn’t really become programmable until DX9/Shader Model 2.0.
Ubershader actually has three different, conflicting meanings, unfortunately.
The classic usage is a single source shader which is specialized using #defines and compiled down to hundreds of shaders. This is what Christer uses in that blog post above (and Aras does as well in his ubershader blog post).
Dolphin used it to mean a single source shader that used runtime branches to cover all the bases as a fallback while a specialized shader was compiled behind the scenes.
The more modern usage is a single source shader that only uses runtime branches to cover all the features, with no specialization behind the scenes, and that's what Dario means here.
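A CPU-side C analogy of the two mechanisms involved (the feature names and functions are made up; real ubershaders do this in GLSL/HLSL/Metal, with a uniform instead of a function argument): compile-time specialization bakes the decision in per build, while the runtime-branch version ships one binary and decides per call.

```c
#include <stdio.h>

/* Classic "ubershader": one source, specialized at compile time.
 * Build once per variant, e.g.  cc -DUSE_FOG=1 ...  vs  cc -DUSE_FOG=0 ... */
#ifndef USE_FOG
#define USE_FOG 1
#endif

float shade_specialized(float color) {
#if USE_FOG
    color *= 0.5f;               /* fog path baked in at compile time */
#endif
    return color;
}

/* Dolphin-style / modern usage: one compiled artifact, the decision is a
 * runtime branch driven by data. */
float shade_uber(float color, int use_fog) {
    if (use_fog)
        color *= 0.5f;           /* same work, chosen at runtime */
    return color;
}

int main(void) {
    printf("%f\n", shade_specialized(1.0f));
    printf("%f\n", shade_uber(1.0f, 0));
    return 0;
}
```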
Ah, then my correction probably does not stand, and I'll need to look deeper into it. Thanks for the explanation! This jargon really gets out of hand at times. :P But I don't mind being wrong if I learn from it.
Thank you. This is my favorite kind of comment. There are lots of "technical" terms which manage to acquire similar but distinct uses (today I was contending with "agent" and "prompt"). Keeping them straight in your own head, and recognizing when others don't, is as valuable as it is unappreciated.
It happened all the time. I've played plenty of D3D11 games where clicking to shoot my gun would stutter the first time it happened as it was compiling shaders for the bullet fire particle effects.
Driver caches mean that after everything gets "prewarmed", it won't happen again.