Hacker News new | past | comments | ask | show | jobs | submit | caesil's comments login

Would be a very difficult transition for the first generation to live under the new normal where an A is now a C.

Would employers accept that and consider it fairly when judging against their grade-inflated predecessors? I doubt it.


I've been through a change in grading scale; others have too. Some countries moved from letters to numbers or to percentages to break with the previous system. It's not a difficult transition, really, not compared to almost everything else happening at the college level.

And if the employers actually care about the grades, they'll learn about the change too. (But that's a minority)


> Would be a very difficult transition for the first generation to live under the new normal where an A is now a C.

In 2019 Germany, tens of thousands of pupils protested against what they thought were hard math tests on the "Abitur" – the last exam in school before university. Bad grades there would worsen their chances of securing a spot at a prestigious university, or in a desirable subject of study, against pupils who got good grades under the old regime. (Or, if the next exam was easier, even against the next age cohort!)


None of my employers have ever cared about my college GPA, just the degree.


Mine have. I've not gotten selective jobs (that I really wanted) and they were blunt about that being the reason.


Was about to post the same point. As an example, a notation change of this kind was done recently for England's GCSE qualifications.

https://en.wikipedia.org/wiki/GCSE#Grades_and_tiering


Tailwind is CSS though? Like, it maps much more strongly to individual styles than most other popular style frameworks and systems do.


This is the core fallacy though, imho: CSS ITSELF is already the tool for "mapping individual styles" to DOM elements.

Oldschool "CSS frameworks" are libraries intended to bootstrap UI creation by providing abstractions of those mappings for a set of universally required UI patterns.

Tailwind pioneered the idea of "functional CSS" as a means to fix the "one-off classes" style bloat for anything that is NOT a specific component pattern (think "this specific archive view should be presented as a 3-column grid").

But when you end up aliasing all of CSS with such utilities, and then writing your styles into markup attributes directly ... you aren't using a "framework" anymore. You have simply wrapped most of CSS itself in a bunch of lookup tables and littered it all over your markup, resulting in a highly illegible and unstructured style system.


My son, it is time for you to embrace the good news of our lord and savior web scraping.


You reminded me of this "meme" about APIs vs scraping: https://i.imgur.com/koPo3M0.png


"his business is profitable" is a little too close to home.


The dot-com bubble was still very much a bubble, even though the value creation potential of the internet was incomprehensibly immense.


Sure, but the Nasdaq is still more than 4x higher now than at the dot-com peak.

But we’re not at peak AI yet, in my opinion. Companies aren’t IPOing at dizzying valuations with just an idea and a few HTML developers.

The vast majority of the AI boom is centered around companies that were successful before the boom. Apple. Nvidia. Microsoft. TSMC. Google.


Don’t necessarily take IPOs as a signal; the early IPO is not as attractive as it once was, for a wide variety of reasons. Companies with no product taking hundreds of millions in private funding should arguably be seen as the modern equivalent.


If those companies fail, will it matter? The NASDAQ crash, however, had a huge impact.


But retail investors generally don't get hurt from those companies - only VCs.


Disagree.

There is a question of what is even a bubble. John Cochrane did a study that showed that the value of Amazon alone justified tech stock index valuation even in 1999.

What even is a "bubble"? For me it would be a self-driven cycle of upward valuation in which fundamental value never justifies the market cap. An industry whose price, just a few years later, rises to meet and then far exceed the previous valuation need not have been a bubble. A bubble is not just a high valuation that decreases in price sometime in the future.


I think one of the core features of a bubble is that the median market participant loses money. I think that definitely occurred in 1999. I don't think it has occurred in 2024.

I'm not convinced it will happen either. Venture capital operates on a business model that is really hard for human intuition to deal with. 1/1000 success rates require so few breakout successes that, yes, they can fund an entire sector that will, with extremely high probability, fail, and still be profitable.


There have been plenty of these retroactive reviews of the 1990s tech boom that question whether it was a bubble, by changing the definitions everyone else uses for valuation ratios and bubbles. I've heard it argued that the dotcom bubble wasn't a bubble because the valuations were justified in the moment by future expectations, until they weren't. That just sounds like the definition many others use for a bubble.

As to Cochrane, even if the value of Amazon is argued to have justified the tech sector valuation, the fact is that the valuation was not all, or even mostly, concentrated in Amazon, which is why it was a bubble.


That is a definition of a bubble, one that I have never heard anyone use ever.

For each Amazon that was undervalued there were 100s of companies with stupid ideas and no revenue.

You're free to redefine words, but then a lot of people will disagree with you.


Why is this a link to a tweet by a guy whose entire public persona is "AI bad! AI bad! AI bad!" rather than the actual article?


Probably because theinformation.com is hardwalled and thus, unfortunately, off topic for HN. If there's a more substantive article that people can actually read, we can switch the URL to that.


Here's a non-paywalled article that was posted recently: https://news.ycombinator.com/item?id=41063097


Thanks!


By the way this article is complete BS. OpenAI partners with MSFT, AAPL - impossible to run out.


There will be consequences, no? Billion dollar lawsuits that cause real and severe damage?


> There will be consequences, no? Billion dollar lawsuits that cause real and severe damage?

Those lawsuits will be directed at the corporation - which well insulates those who dictate corporate action.

The author seemingly wants to extract the coders from the company structure and have them bear the full weight of the consequences, instead. This would insulate not only the decision makers but the corporation as well.

If there were a gauge like this, Evil <---> Ethical, the author's suggestion would land well to the left.


Don't mean to be rude, but does the job posting list a competitive salary?

Now that they're required to list salaries in coastal states I filter for that while searching and don't even consider jobs paying under $[threshold]k.


I don't find this rude at all. You get what you pay for, to an extent, and the going rate for a strong senior engineer is quite high in some markets.


Very competitive. In the top 10% of developer salaries. This is in Poland.

Btw your question was fair. Thank you for asking to clarify the context.


This explains it! I've been doing the same: if job postings aren't in the range I'm looking for, I'll skip over the posting altogether. This is fantastic; it saves time for both sides.

I wonder if it goes both ways. Low numbers repel the competent engineers but will certainly attract others for whom the salary represents an upgrade.


>According to all the interviews I’ve failed over the years (I don’t think I’ve ever passed an actual “coding interview” anywhere?), the entire goal of tech hiring is just finding people in the 100 to 115 midwit block then outright rejecting everybody else as too much of an unknown risk.

As a (now) senior/staff-level engineer back out on the job market for the first time in a while, I'm begrudgingly coming to accept that coding interviews might not actually be all that bad. Mostly because I find myself passing them due to having picked up skills in the past few years rather than spending a ton of time studying, which suggests they might actually be picking up some signal. I once thought they were purely hazing with zero relevance to day to day work, but as I get more senior I drift further away from that opinion.


There's coding interviews and coding interviews.

Asking basic questions that will be directly applicable to the job? Sure.

Filtering for basic knowledge to make sure the candidate isn't lying about their experience? Sure.

Examining my thought process, with producing working code as a nice-to-have? Sure.

Asking me to solve an extremely esoteric problem that has zero relevance to my day-to-day, where if the solution I come up with on the spot under time pressure is incorrect, or even just not the most efficient, I'm rejected? At that point you're just filtering for starry-eyed recent grads you can underpay.


I run coding interviews. I would never give an esoteric algorithms question, or even really an algorithms question.

I have prompts that test very basic concepts and nearly everyone fails. Resume fraud is rampant.


> Resume fraud is rampant.

So is interview fraud. The remote-interviewee-answers-questions-while-her-face-reflects-windows-popping-up-on-her-screen is tiring at this point. So, I decided to find a way to inform me if someone was being fed answers in a tech interview.

Behold, the low-tech whiteboard. Also known as a piece of paper and a pencil. With the candidates I've run into that do not pass the "smell" test -- where I think they are being fed answers -- I ask them to draw some things, on paper. It's not a true validation, but it gives me something of a clue.

I ask for a simple diagram. Different services in a network, for example. Or a mini-architecture. For their level, I'll ask for something that should be drop-dead easy.

I ask them to show me their drawing.

The responses I've received run the gamut of "I don't know" (after 5 seconds of deliberation) to "I don't understand the purpose" (after 5 minutes of silence) to "I need to shut off my screen for a while" (while refusing to explain why) to "it depends if your cloud is AWS" (not in any way remotely related to the question.) I did have a candidate follow-up with a series of questions about the drawing, which were feasibly legitimate.

This hand-written diagram is not an absolute filter (I've only used it maybe four times), but rather it can confirm some suspicions. I think I can generally gauge honesty from questions/tasks like this. And that's really what I'm after -- are you being honest with me?

It's imperfect, but it has been helpful.


Maybe easier is to just ask that they show their hands while you ask a short question, until they give the answer. You could even be up front about it and say you suspect they're looking up answers, since it's not like you care much if they get upset at a false suspicion; or just say "to avoid looking up answers, our standard procedure involves this".

The drawing approach also sounds like a good idea, though software is bound to evolve to draw answers graphically for the candidate to copy down. By having them unable to input anything into the machine, the only remaining option is someone listening in and feeding the answer on screen. Plausible, but that's a level of preparation to cheat at which the helper could also prepare to draw things out. Or they type with their feet, but that's also a scenario where I'd be happy to have them come in for a final interview and demonstrate this amazing ability!


A while ago I ran across some team members so bad, I could virtually guarantee they would not have passed even the fizzbuzz phone screens we use before the stricter interview gauntlet. It made me wonder if they got a friend or paid a stand-in to do the interviews. When you think about it, who will check that it's the same person? The only person who might see the candidate in different contexts is the hiring manager, who doesn't do the actual interview.


All the places I've interviewed for, you talked either to the person who was going to be your boss (or teamlead or whatever the word is), or at least someone who would be a direct colleague on a daily basis. If a sibling or cousin could do my voice and mannerisms reasonably (as well as the job I want to get), perhaps that could pass, but otherwise I don't really see this happening.

Hm. Unless the employees don't want to ask because it would be so awkward if they're wrong about the candidate being a different person from who shows up for the job?


ChatGPT can ace any pre-interview screen, sadly. You really need the person on video, with back-and-forth questions, to detect whether they’re copying and pasting from AI.

All of this could be mitigated with in-person interviews, but I’m forced to hire abroad for cost.


If an interviewer asked me to "show me your hands", I'd laugh in their face and immediately disconnect.


Can I interview at your company? :P

I wish interviews were like this. Instead, most I've found either involve trying to read the interviewer's mind on how to approach a vague situation and answer the way they want, or require reimplementing a full library in 30 minutes without any resources available, for something you'd normally look up, solve in minutes, and move on from.

I wish more took your path and literally just tested for actual industry experience: general architecture, asking questions when the situation is unclear and explaining unexpected/interesting findings from a previous project. And anyway, if they end up actually being a fraud, get rid of them after the initial probation time is up.


Glad to hear it. Whiteboards remain the ultimate interview tool, even remotely.


So I feel I fall strongly into the poor-performer category any time coding problems come up in an interview. How would I convince you that I do not have a fraudulent resume?

I have studied for hours every day for many years now. I know many complex systems; however, studying algorithms bores me to tears.

I've built HPC clusters, k8s clusters, Custom DL method, custom high performance file system, low level complex image analysis algorithms, firmware, UIs, custom OS work.

I've done a lot of stuff because I can't help wanting to learn it. But I fail even basic leetcode questions.

Am I a bad engineer?

There seems to be no way for me to show my abilities to companies other than passing leetcode, but at the same time, stopping learning DL methods to grind leetcode feels painful. I only want to learn the systems that create the most value for a company.

I imagine if you interviewed me you would think I wrote a fraudulent resume. Not sure how I am supposed to convince someone otherwise though. Perhaps I've been dumb in not working on code that can be seen outside of a company.


There are people who literally say this and then you hire them -- they turn out to be complete duds. I'm genuinely curious because I'm hiring right now: by what mechanism would I discover that you have these skillsets and are good at what you do?


I was somehow reminded of this guy who once wowed me in an interview by coding up a small graphics demo rather quickly. Turns out later that that was the exact one program he could code without being hand-held EVERY SINGLE MOMENT. I laugh whenever I remember that incident (from early on in my career, in my defense).

You have to build a repertoire of questions that defeat rote memorization, prove real experience, and show genuine ability to solve unseen problems...


I remember TA'ing African exchange students in Haskell.

They could remember the exact type signature of standard library functions.

They could define a Monad instance from memory at the speed they could type.

But you couldn't ask them a single question outside of what was presented at lectures.

They couldn't solve a single assignment. They were stuck on rote learning.

I'd blame their educational system, because it was quite consistent (sample size = 3).

The classical example: Teacher says X, the whole class repeats in choir X.


There's an interesting bit in "Surely You're Joking, Mr. Feynman!" where he is amazed at how primary school kids are learning physics. Only, as it turns out, nearly no one in the entire country has any actual understanding of physics, because from elementary school upwards all they do is memorize the material.

https://v.cx/2010/04/feynman-brazil-education


I've had a lot of success asking about fuckups, the worst thing they've had to debug, and general "fuzzy" questions that specifically do not have a single answer, or where the answer is specific to a person's experience. Then you have to watch them as they answer.


What's so bad about getting a feel for someone and hiring them on a probationary basis? See how they go on discrete - but real - tasks for a week or two, and then make the call on going forward or not. It's how most hiring has always worked (and continues to work) in almost all other fields.

Also, re technical questions, I don't think anyone is saying that you can't ask any technical questions whatsoever, I think the concern is about giving people abstract, theoretical CS problems that will never actually come up on the job, on the very iffy assumption that their performance while being asked to dance for a committee in high-pressure job interview situation is going to be reflective of their actual skills. (And more broadly, that a good programmer must be quick on their feet with schoolboy-style CS puzzles that are basically irrelevant to most roles.)


I have no idea if this is accurate or not, but there is this general stigma from both sides that "no decent programmer worth hiring fulltime in hotter markets/companies would/should agree to a contract/temp/trial job".

I think the de-facto full-time jobs I've seen end up kind of like that: if you don't work out in X months, nobody will shed a tear about kicking you out. But this is still perceived as a more expensive operation that should be avoided by having a "proper" interview process. On which nobody can ever agree, but that's the sentiment.


A trial period is part of every permanent employment contract around here, no matter the job.

Both parties can terminate the employment agreement with a week's notice during (usually) the first three months of employment.

No stigma involved.


I think it’s fair for both parties; they both need to evaluate each other.


IMO this is where a comprehensive system design interview weeds out the hopeless. Minimum 2 hours; don't just use a "design facebook/twitter/insta" scenario, because anyone can memorise those. Dig into the weeds as and when it feels appropriate, and keep the candidate talking through trade-offs, thought process, etc., so you can really peek inside their head. The catch is that you also need to be very competent and know the design and all its permutations inside and out.

Leetcode et al. are just testing rote memory; there's no need to have candidates actually type out solutions, it's a waste of time. So long as they can articulate what solution they would use, why, and what other solutions they considered, that's all you really need to be concerned with.


So to preface: I'm not looking for a job (I'm trying to build my own company).

When I do interviews (probably limited compared to you but some) I do it like I wish someone would interview me.

I focus purely on curiosity: how many disparate things are they interested in? The things that overlap with my knowledge, I probe deep. I believe in Einstein's quote:

"I have no special talents. I am only passionately curious."

If someone knows how RDMA and GPU MIGs work, they are probably pretty damn interested in how HPC clusters function. More importantly, can they compress this information and explain it so that a non-technical person could understand it?

There is an endless number of questions I could ask someone to probe their knowledge of a technical field; it kind of upsets me that most of the time people ask only the most shallow of questions.

I believe this is because most people actually study their fields a very limited amount, because most people are honestly not truly interested in what they do.

The biggest implication of this is that I may be able to tell if someone has this trait, but I understand that the majority of people could not, as they literally don't know the things they could ask.

Asking me system design questions, even if you aren't knowledgeable in the field, would probably be the easiest way to see the complexity of the systems I can build.


It depends on the role. In web development, for example, there are lots of skills beyond raw coding that people can be good or great at: CSS, HTML, accessibility, semantics, and the like. Lots of JavaScript people have no idea how to leverage CSS and over-engineer things, or find it impossible to implement certain features; or if they do, it's thousands of lines of code when it's really just a few lines of CSS.


Take home project limited to 2 hours.


You can pay them to demonstrate it with a take-home problem. I'm not saying this is a great solution, I'm just saying at least you'll figure out what's wrong with your interview process once the indignity of paying an incompetent person sets in.


ChatGPT can solve any take-home problem. You need to be on video, giving interactive feedback and requests, to see that someone can actually write code and understands programming even a little bit.


Hey, what are you hiring for exactly? May I email you? My email is in my profile.


Are you self-taught? How did you write the on-the-job code? Do you do well with take-home exercises?

Do you struggle with work projects that feel boring or abstract?


Self-taught; I come from an EE background.

I was originally building firmware and hardware for a previous company while testing DS on their systems. They liked my work, so I switched to DS and ended up a team lead.

Honestly, I have never had a take-home exercise, but I would love one. I basically make my own work: if I don't get work, I will build other projects and try to sell them to the company.

I normally make good-value projects and can sell them. It's how I went from hardware to DS lead of a team in a few years.


You sound pretty cool and useful to companies that get it. My suggestion is to find ways to present on some of the things you’ve done. Then you can point to the presentation video/slides, plus you’ll probably receive inbound inquiries from companies that could actually use creative thinking and building.

The take home exercise/work sample companies are out there. Here’s a list I helped contribute to. https://github.com/poteto/hiring-without-whiteboards


Yeah, I'm from an EE and CompE background and have been in software for 20 years. It gets old when the 26-year-old "senior" dev interviewing you rolls his eyes like you are some big liar for not whizzing through his CS quiz questions (oh, and for front-end positions, lol). Studying EE was a much harder and more well-rounded engineering program compared to the CS degree at the time I was in school.


Why are you doing all those things and what jobs are you applying for? Can you solve fizz buzz in an interview setting?


I'm not bad at getting jobs; I just feel that when I am looking, people think I am lying about my project experience.

I do these things because I see a place in the company to grow value and do it.

I can write almost all basic coding and my true skill set is in custom complex DS pipelines at scale.


Fizz buzz is certainly not a replacement for expertise, which is what the above commenter seems to be emphasizing.

Naturally it's a "low bar" but it's an awfully low bar for a job that isn't entry level.
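For context, the "fizz buzz" screen this thread keeps referencing is roughly the following (a minimal sketch in Python; the exact wording varies by company):

```python
def fizzbuzz(n: int) -> str:
    """Return "Fizz" for multiples of 3, "Buzz" for multiples of 5,
    "FizzBuzz" for multiples of both, and the number itself otherwise."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

# The classic screen asks for 1..100; 1..15 is enough to show the pattern.
for i in range(1, 16):
    print(fizzbuzz(i))
```

The whole point of the screen is that it takes a couple of minutes for anyone who codes regularly.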


That's why it's so concerning when you have people who confidently say they're great engineers, experts in firmware, UIs, operating systems, and networking, yet when you ask them to write a 3sum or remove a node from a binary tree, they freeze up and blame it on an unfair interview structure.

I'm a bit reminded of a chef who claims he can cook pasta, sushi, and pastries, yet when asked to fry an egg says that it's not fair to expect him to do so on the spot. A chef who can't cook when needed is about as useful as an engineer who can't code when needed.
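For reference, "3sum" as mentioned above is usually stated as: find all unique triplets in an array that sum to zero. A sketch of the standard sort-plus-two-pointers approach (one common formulation, not the only one):

```python
def three_sum(nums: list[int]) -> list[tuple[int, int, int]]:
    """Return all unique triplets (a, b, c) from nums with a + b + c == 0.
    Sort, then for each anchor element scan the remainder with two pointers."""
    nums = sorted(nums)
    result = []
    for i in range(len(nums) - 2):
        if i > 0 and nums[i] == nums[i - 1]:
            continue  # skip duplicate anchor values
        lo, hi = i + 1, len(nums) - 1
        while lo < hi:
            s = nums[i] + nums[lo] + nums[hi]
            if s < 0:
                lo += 1
            elif s > 0:
                hi -= 1
            else:
                result.append((nums[i], nums[lo], nums[hi]))
                # advance past duplicates on both sides
                while lo < hi and nums[lo] == nums[lo + 1]:
                    lo += 1
                while lo < hi and nums[hi] == nums[hi - 1]:
                    hi -= 1
                lo += 1
                hi -= 1
    return result
```

Sorting costs O(n log n) and the scan is O(n²); the interview "trick" is mostly the duplicate handling.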


There's a common double standard here, though: when I've asked backend-type devs to do a UI sample/question (these are often "full stack" roles), the leetcode geniuses dismiss my concerns as not mattering if they flop it.


Would you consider inverting a binary tree a basic question? Some may, but many developers have never inverted a binary tree in decades (because it’s something that doesn’t pop up in a normal job).

Just because it’s such a classic topic in CS, that doesn’t mean I need to remember it after decades of seeing it in uni.


Can't you expect a halfway decent coder to derive how to invert a binary tree from first principles? It's literally just swapping the left and the right field in each node...
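The parent's point made concrete: a first-principles sketch (mine, not from the thread) of the swap-and-recurse idea:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    value: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def invert(root: Optional[Node]) -> Optional[Node]:
    """Mirror the tree in place: swap left and right at every node."""
    if root is None:
        return None
    root.left, root.right = invert(root.right), invert(root.left)
    return root

# Example: a root with two leaves; after inverting, the children swap.
root = invert(Node(1, left=Node(2), right=Node(3)))
```

No memorized algorithm involved; it really is just "swap the children, recurse".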


The thing is when I’m the interviewer I’m not looking for coders. I’m looking for people who can understand the business, find a solution that is business oriented and produce (if needed) good enough code (we are not Google). So, if you cannot invert a binary tree from scratch but are good at the other skills I’ve mentioned above, I want to work with you.

What good is someone who can code the best algorithm but cannot understand the business? Unless you are working in the top 1% of the companies out there (where you may have the luxury to invent new ways of doing things), for the rest of us our main skill is: to solve business problems with zero or minimal good enough code. We (99% of the tech companies) don’t need a Messi, just an average Joe.


This is the reality that most engineers that spend their lives online don’t seem to get.

We don’t need you to be the best engineer using the most cutting edge tools that a YouTuber told you about. We need you to be a good colleague who people can trust, can communicate with, and isn’t going to cause a ton of stress for others through maverick behaviour or major levels of incompetence.


Then don't ask this question if it's not relevant for your position? Presumably those who would ask it don't want to hire engineers that have never heard of recursion. It's a basic level CS concept, not rocket science.


Out of 100 people I interview, maybe 1 would know what recursion even is.


Does the role you're hiring for require knowledge of recursion?


You need to talk to (or complain about) your pipeline; it sounds like the 20-something non-technical "recruiters" are wasting your time.


What a team I worked on did was remind the interviewee what a binary tree is. It's a pretty simple concept, much simpler than most businesses, so it makes a good test of basic general logical-thinking skills.

We were hiring for a C++ role, though. I can imagine people used to higher level languages might find the very need to deal with pointers confusing.


Pointer vs reference: What's the difference? Practically, not much. All "higher level" languages (that I know ... er... except Perl) use references instead of pointers.

> I can imagine people used to higher level languages might find the very need to deal with pointers confusing

I'm confused by this remark. What do you mean? Is this meant to be condescending? What do you use pointers for in your C++ project that references cannot do in a higher level language?


My point is that in higher-level languages you might spend your entire career not thinking about how your data is laid out and what points to what, while in C++ you are exposed to this reality daily even if you are working on trivial problems.

That is my hypothesis on why some people might consider inverting a binary tree an unrealistically complex problem. If you are not able to solve this problem without thinking too much, you are probably not able to write working C++ at all, but you might still be able to solve real business issues with, say, JavaScript.


Never worked in tech so I find this mindset really alien. Why no specialisation and separation of duties? Why not have someone really good on the business side and someone else who is really good at coding?


Finding good solutions often requires simultaneously considering both the business side and the programming side.

The business-only guy won't see the limitations and possibilities of code, while the programming-only guy won't see where the business side is flexible or where it is rigid, thus the optimal solutions can take a long time to find.

The business side might also be prone to specifying X-Y problems, which can be very difficult to spot if you're not into the business side.

At least in my experience, having significant domain experience in our dev department has been a super power of sorts, allowing us to punch well above our weight.

edit: I didn't have any domain knowledge when I started, but I was willing to learn.


That's already the starting point. People who really understand the business, and developers who work with them. The point is that the developer also needs to understand the business well enough. You cannot separate that from coding if you want good results. A good technical business analyst that sits in between the business and the developers can really help, but that is no substitute for the developer having good knowledge of the business and what it is actually trying to accomplish.

Sometimes the business only has a high level goal in mind and it is up to the developer to invent a solution, keeping in mind how the end user will interact with the system in their day to day work, and foresee potential issues in how the changes would impact other systems and business rules and so on. The developer has low level visibility of systems that are not readily apparent to business people. It is necessary for us developers to hide some of that complexity from the business. Some backends are very complex webs of business rules.

The developer needs to have some bigger understanding of it all, otherwise they are reduced to a code monkey who implements spoon-fed acceptance criteria from Jira tickets, makes stupid mistakes that break other functionality and systems, and won't recognize various issues that were not apparent to the business person who originally wrote the AC's. Basically we need developers to collaborate with the business to design and develop solutions properly.


Why not both in the same person? It's cheaper to pay one really good person 50% more than paying two people. It is quite common in domain-heavy tech areas (financial markets, etc.).


Exactly this


What if they can't do it from first principles but you tell them the trick, and they bang out the code in 5 minutes? Does that count? They've shown they can code, right?

Why is the former so much more important than the latter, for 99.9% of programmer interviews?

(protip: it isn't, but I guarantee you will get a "no hire" far more often if you need a hint. It's just nerd in-group hazing, coupled with a big helping of cargo culting Google.)


I think it translates to solving day-to-day problems. I don't think I could trust someone who couldn't figure out the binary tree thing cold, to figure out that some Spring bean is not request scoped by looking at the object reference, or that openapi code generator is looking in the wrong folder, or a dozen other things that require some mental effort and can't be googled easily.


> I don't think I could trust someone who couldn't figure out the binary tree thing cold

You can't trust them to figure it out under pressure of securing a job, but that doesn't mean they couldn't do it when their job is secure.


What about the pressure of a launch deadline or a system outage?


If I’m responsible for it then I should have some experience with it and understand the code already. I can call upon other resources outside my own brain.

It’s certainly not being forced to recall the implementation details of a trie within 30 minutes, when I haven’t seen one in 5 years, unable to reference any docs or knowledge base, or use Google, knowing that if I fail I will remain unemployed.


The context of this thread is a very basic question that any good programmer should be able to bang out in 15 minutes. It's much closer to something you "should have some experience with" than "implementation details of a trie"


I would love to have an interview like that! I think the "context" here is unrealistic for most?


If by "most" you mean people in this very thread who are claiming that binary trees is some crazy obscure thing that nobody can be expected to be familiar with, then sure.


Maybe. But you don't ask someone to "figure out a binary tree cold"...you ask them to do it while talking and coding on a video call (sigh).

Go solve a hard puzzle while talking on the phone. Does it make you better or worse? I'm just saying...if someone needs a hint for the trick, but then, post-reveal, bangs out the code under the same circumstances, isn't that telling you something interesting?


Just wait until the OP freezes during an "easy programming problem" as an interviewee. I have been interviewed more times in my life than anyone I know. I have failed sooooooo many different ways -- all unique. Some days, you get lucky and can solve a brutally hard algo problem. The next day, you fail trying to reverse a string, or something equally embarrassing. Tech interviewing is a numbers game; that's it. I do it enough until I have a very good day and someone gives me an offer that I like.


It is a prep game. You train on all the common interview questions, do mock interviews with friends, prepare answers for common culture-fit questions, research the company's interview process online, tailor your cover letter, and ask people from previous jobs for recommendations.

What people do instead is spray-and-pray CVs and hope that a hiring manager uncovers their brilliance under surface-level incompetence.


If the winning candidate has to "train all the common interview questions" (of which there are thousands), then what are you actually learning by asking the questions to the winning candidates?

(hint: it rhymes with "mesmerization")


I think the main issue is that people would much rather spend their free time learning or building something useful, rather than the "prep game." A more skilled, knowledgeable workforce would benefit the corporate overlords as well, but we can't have that due to the broken interview system.


You change jobs every few years, spend the majority of your waking hours at work, and your finances depend directly on your job. Why would you not spend time on prep?

I have no sympathy for people posting "jobless for months" and "I reject coding-test interviews" in the same sentence.


what is "the trick" for inverting a binary tree?


Leaving the interview to skip the question.


Dealing with binary trees is covered in introductory data structures courses in the sophomore year of college. One would hope that the average senior developer is more capable at coding than a college sophomore.


So are red-black trees, but almost no one has the rotation/rebalance algos memorized. Hell, even look at binary search: There are so many ways to go wrong. There is a famous blog post from Josh Bloch about it.

What will you say when you fail an interview over an "introductory data structures" problem?

The lack of humility makes me wonder...


Someday I want to interview someone who really believes in this stuff, and ask them to regurgitate AVL tree node deletion from memory.


And, make sure they write unit tests for all of the tricky edge cases!


So much this. I recently applied to a tech company that gave me 2 questions to complete within an hour.

I knew the solution to the first was the min-cut algorithm, but there's no way I could code it straight up and down, let alone in ~30 min. Plus these leetcode-esque questions always come with millions of test cases, so you need to be 100% exact and double-check the constraints.

I even had my data structures/algo book nearby on the shelf that I could've used to cheat this but that would've been a new low for me, especially considering this was for a "mid"-level position working on APIs/JS/SQL.

I can understand if you want me to be an algorithm specialist, but for web development on what I'm assuming would be a run-of-the-mill SaaS application... this is absurd.


Sorry about your interview experience. It is definitely a buyer's market in US tech right now, so companies can wait as long as necessary to find their unicorn to write CRUD apps for less than 100K USD!

The real question: If the tables were turned, could the interviewers pass their own tests? Probably not.


Yeah, you're making exactly the point of the parent comments: remembering sophomore-year CS courses != being a good developer.

You might as well be quizzing people on Calculus. That's another thing you study in a freshman-year CS course, so you must remember it, right?

(I'll say this: I've had far more practical occasion to use Calculus than binary tree manipulation in the 20+ years of my professional coding career. Particularly with AI.)


To give you a vignette in my job - I am right now debugging a piece of open source code that had some auth changes at sometime in the last several months with no changes in the docs to suggest a config change is needed. I have to figure out how 3-4 different frameworks are resulting in this not working.

If my boss messaged me right now and handed me something that required a similar data structure change, I'd stumble on it for about an hour or two until I understood that actual problem.

I'm sure I could "invert" a binary tree if I had to but it'd probably take a far more than reasonable time as that just isn't something I'm dealing with nor most rank and file devs.


Not necessarily. There are a lot of really good, self-taught engineers out there who have never had to do that before and likely don't even know what "invert a binary tree" means.

In my experience, it's much better to give them a problem to solve and have them walk you through the thought process as they try to figure it out. Even if they fail to solve it, the actual process gives you insight into how they think and how they'll perform on the actual job.

Unless your job regularly involves inverting binary trees, that's not a great question because it's basically asking if you've encountered something before. If you have encountered it, you likely know a half dozen ways to solve it and everyone will give you roughly the same answer. If you've never encountered it, then you just fail even if you are a good fit for the job. It doesn't reveal anything except that you know one specific thing. In most cases, engineering jobs are problem solving for unique instances that arise day-to-day; a good interview should reflect that day-to-day reality more.


Oh no, that bit's easy.

But you don't ask them "swap the left and right field in each node". You ask them to "invert a binary tree".

Why. The Fuck. Does "invert a binary tree" mean "swapping the left and right fields"?

To me, it means inverting the relationship so child points to parent, which would end up badly-defined, there would be no way to uniquely specify a root. But if you don't know that knowledge, of what the jargon means, you're screwed. So that's what you're testing for.
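(The easy bit, for reference: in the LeetCode sense, "invert" does mean the left/right swap, applied recursively. A minimal sketch:)

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def invert(node):
    # Mirror the tree in place: swap every node's children, recursively.
    if node is not None:
        node.left, node.right = invert(node.right), invert(node.left)
    return node
```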


Are you imagining a scenario where the interviewer writes "invert a binary tree" on the whiteboard, refuses to elaborate and leaves the room for half an hour?

In real life they would either give you an example so you can better understand the requirements or expect you to come up with an example yourself and ask clarifying questions until you're on the same page as to what "invert" means.


Sure, in that case it may be a bit unfair. I was looking at the leetcode thingy where they just give you an existing data structure, some sample in- and outputs, and just ask you to write the inversion algorithm. I assume it would be like that in most job interviews.


That's a high bar - in my experience (New Zealand) 1/3 of candidates can't even write any form of code to retrieve a random element from an array. 1/3 have to be nudged along and asked some leading questions, and 1/3 are "what the heck are you asking me that for, I can do that in my sleep, that better not be indicative of the quality of coders... "

There is just so much misrepresentation out there it's insane.


I hear these complaints a lot - and feel like the unspoken problem is your HR or recruiters are non-technical and shovel garbage candidates at devs to sort out. Whenever I've had (technical) engineering managers involved with reaching out and reviewing candidates, we rarely have misrepresentation or fakers.


What is the purpose of this knowledge?


Graphs and recursion are fundamental CS concepts, a candidate that doesn't understand them would be a pretty bad fit for a lot of software engineering jobs.


They would be a bad fit for a few software engineering jobs. I don’t deny that those concepts are fundamental, but the vast majority of software engineering jobs out there are about: designing apis, calling apis, writing sql queries, fixing perf. issues, writing yaml config files, etc. You can spend years working for companies in which you would never touch the topic of graphs, for example. That doesn’t mean one doesn’t need to be aware of the concept of graphs in case you need to deal with them. Another story is whether you need to know from first principles (or from memory) how to traverse a graph in different ways.

I think the most underrated skill is to know what you don’t know. To know your limits. I know that graphs exist and that they are useful in certain circumstances. I don’t know how to implement the associated algorithms. The same goes for almost any other important topic like virtual memory management, security, performance, etc.


Do those majority of software engineer jobs never need to traverse a directory or process a complicated multi-layered JSON object?
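E.g., pulling every value of a given key out of arbitrarily nested JSON is exactly a recursive tree traversal. A sketch (hypothetical helper, just to illustrate the pattern):

```python
def find_key(obj, key):
    """Recursively collect all values stored under `key` in nested dicts/lists."""
    found = []
    if isinstance(obj, dict):
        for k, v in obj.items():
            if k == key:
                found.append(v)
            found.extend(find_key(v, key))  # recurse into the value
    elif isinstance(obj, list):
        for item in obj:
            found.extend(find_key(item, key))
    return found

doc = {"id": 1, "children": [{"id": 2, "children": []}, {"id": 3}]}
assert find_key(doc, "id") == [1, 2, 3]
```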


I actually did that last week (parse a json object from an http response). But I used a well known library for the language we use around here. I mean, I also rely on automatic GCs and compilers… and while I saw the theoretical side of such topics years ago in the university, I cannot implement a single algorithm related to them (because I don’t do that on daily basis)


Citation needed.

We don't even know what defines a "good engineer" come yearly performance review time, but claiming that it's the knowledge of graphs and recursion seems rather suspect.

Inb4 "obviously there's so much more to being a great engineer," but then why are we not testing for that iceberg, instead just scratching the surface of "CS fundamentals?" And in fact, how many times does a great engineer need to prove that they understand fundamentals? Forcing engineers to go through that every time they want to switch jobs is inefficient and, frankly, disrespectful.


    > X and Y are fundamental CS concepts
In 2024, the list of X and Y is fucking HUGE. I would like to see OP sweat in an interview on some "fundamental CS concepts" they have not used in the last 5-10 years. The lack of humility in some of these comments is simply stunning.

I worked with a guy who was absolutely a first class engineer. Very well paid; I guess about 300K USD. He had almost no experience in C++ for more than 20 years (mostly Java and C# for last 10 years). During a discussion, I mentioned that the original C++ std::map used a red-black tree. He was genuinely surprised that lookup was not O(1); instead: O(log2(n)). (My point: He knows about red-black trees, but was surprised the original C++ foundation library did not include a hash map with O(1) lookup!) Really: He would have failed an interview from this person based upon "fundamental CS concepts". Any software engineer, no matter how smart or experienced, has some weak spots in their fundamentals.


This is a strawman. Implementation details of a specific language library are not "fundamental CS concepts", unlike the specific topics I was talking about.

If you're hiring a C++ expert, then yes, not knowing the difference between map and unordered_map would likely be a disqualifying condition. We are not talking about C++ expert interview though.


> why are we not testing for that iceberg, instead just scratching the surface

Typical interview loop includes more than 1 question.

> how many times does a great engineer need to prove that they understand fundamentals?

This is a logical fallacy. The interviewer doesn't know if the interviewee is a great engineer or not, that's the whole point of doing an interview in the first place.


    > Typical interview loop includes more than 1 question.
In my interview experiences, programming problems are "fail fast". If you fail one, you fail the whole interview. There is no "round 2".


Nope, too difficult, and if you know the solution off hand that makes you look better than someone who doesn’t. For a question like this I would explicitly allow Google and inclusion of libraries, but I wouldn’t ask a question like this.

Less than 5% of our work involves understanding computer science and math, and for those 5% I only need a couple people to really, really get it. For everything else, it’s responding to business feature development, which means asking questions when things are vague, spotting inconsistencies in business asks, and breaking up a big problem into small problems that can be tested and built independently.


>I have prompts that test very basic concepts and nearly everyone fails. Resume fraud is rampant.

It is crazy how many people will fail a question that boils down to 'write a for loop' despite going to college for 4 years in CS.


I'm a little bit more demanding. I want people to write a loop with a loop in it. I've had too many candidates that can write a single for loop, but get so beyond mixed up when there's a loop inside a loop.

I do a fairly simple encode/decode problem (run-length encoding). I describe the basic encoding concept, provide a sample input that should have byte savings with any reasonable encoding, and have the candidate come up with what the output should be. There's lots of ways to do the encoding, mostly anything works (and I'm clear with the candidate about that)... I allocate about 15 minutes for this stage; I've got lots of hinting strategies to keep candidates from getting stuck here... but if it's not clicking, I'll give them a simple format to move on.

Then the candidate writes the decoder; decoding is easier; some candidates get really stuck on the encoder and I'd rather have a code sample than a stuck candidate. Some of my worst candidates have already forgotten the encoded format that they just designed, and they write a decoder that might work on some other encoded input, I guess. Hopefully this takes 10 minutes or less, but if you can't write a loop in a loop, you might get pretty stuck. I don't care about the language used, it doesn't even need to be a real language, it just needs to be self-consistent and reasonable; I/O comes in whatever form is easiest for the candidate.

If we've got 15-20 minutes left, the candidate can work on the encoder. The encoder trips up a lot more people than the decoder; so I stopped having people work on that first.

There's plenty of options for discussion at any point. Could you make the format better for some cases, could you make it work in a resource constrained system.

The specific problem isn't really day-to-day work, but it's approachable and doesn't require much in the way of data structures or organization or prior domain knowledge. Some candidates express that they had fun doing my problem, and especially for junior candidates, if they've never done compression exercises, I hope it is an opportunity to see that compression isn't always magic; simple compression is approachable.


I'm genuinely curious about what this challenge looks like exactly. I know you probably can't share it but are there any analogous code challenges/problems out there that you can? It sounds interesting.


It's very similar to this https://www.geeksforgeeks.org/problems/run-length-encoding/1 Although I present it a little differently, and don't provide a format / don't require the value, count format (although it's fine enough)

I'm happy to answer more questions via email in the profile.

Example input would be something like

    55 55 55 55 20 32 56 3F
    3F 3F 3F 42 42 61 61 61
Values chosen "randomly" by me, not intended to have any meaning. Pattern is more intentional, although this isn't my best pattern; I haven't run my interview in a long time, I had a nicer pattern I think, but can't remember it. All of my input happens to be bytes less than or equal to 7F (although in the interview setting, I don't mention that unless asked...) and not equal to 00. If candidates don't understand hex bytes, that's fine, even if it makes me a bit sad inside. When they get to coding, I'll give them in() that returns the next value from the input, and out(...) that outputs the next value. Etc.

To set people up, I tell them we're going to do some compression today with an encoding called run length encoding; in run length encoding we encode the length of a run and its value. (If they're familiar and ask questions, I tell them today, we're focused on bytes... you can do it with bits or larger than bytes, whatever, but keeping it focused on bytes makes the problem fit into a 45-minute interview slot)

There's lots of ways to make an RLE format, if you go with the basics like (count, value), or (value, count) and we have time to explore, I'm going to ask about the sequence 20 32 56; it doubles in size with that format, so what can you do to make that better while still doing a good job overall on our example stream. Lots of options, all of them are fine as long as you can decode it correctly. The basic one is fine too, if we don't have time to explore.

When using this problem, I feel like I give more thumbs up than others on my teams; but I feel really confident in my thumbs downs, and I feel I gave everyone a fair chance. I've had a few people do really poorly that just seemed nervous, and I tried to pivot towards more of a buddy problem / build their confidence / try to get them less nervous so they can do a bit better for the rest of their day.
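For concreteness, here is a minimal sketch of the basic (count, value) format described above (any self-consistent format is fine per the interviewer; values are small ints standing in for bytes):

```python
def rle_encode(data):
    """Encode a list of byte values as (count, value) pairs."""
    out = []
    i = 0
    while i < len(data):
        run = 1
        # Extend the run while the next value repeats the current one.
        while i + run < len(data) and data[i + run] == data[i]:
            run += 1
        out.extend([run, data[i]])
        i += run
    return out

def rle_decode(encoded):
    """Decode (count, value) pairs back to the original values."""
    out = []
    for count, value in zip(encoded[::2], encoded[1::2]):
        out.extend([value] * count)
    return out

# The sample input from the comment above: 16 bytes encode to 14 values.
sample = [0x55, 0x55, 0x55, 0x55, 0x20, 0x32, 0x56, 0x3F,
          0x3F, 0x3F, 0x3F, 0x42, 0x42, 0x61, 0x61, 0x61]
encoded = rle_encode(sample)
assert rle_decode(encoded) == sample
```

Note the weakness the interviewer probes at: the run `20 32 56` doubles in size under this format, which motivates the follow-up discussion about better encodings.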


Thanks, I'm going to work through this just because I'm curious about it.


Sounds good. If you get stuck for more than a few minutes, take a break and come back or send me an email. There's lots of small hints available to get you unstuck. (Benefit of using the same problem for a long time)


I have a pet problem that is also solved with a pair of nested loops. I had one candidate use infinite loops and a bunch of convoluted logic to manually move the index around (a single index).


This problem sounds interesting. Is the implementation language C? Given the constraints that’s my first guess.


Whatever language you want. I don't care as long as the syntax makes sense, or you can explain it enough to get me through.


Why does the implementation language matter?


If a CS grad can't write a 'for' loop you should write to the CS department chair or dean and the career services office at the department and let them know that you're disappointed in the quality of graduate that they are turning out and that you'll be thinking twice before hiring their graduates in the future.

Colleges and universities are lowering standards and accepting anyone with a pulse to get their tuition dollars.


Speaking about Germany: this is because many CS degrees do not include sufficient practical projects. If your degree includes a concluding practical project, you can count yourself lucky. Most of the real practice CS students get comes "off the job", in side projects, or on the first job they somehow manage to get.


Netherlands also, although it depends a bit on which master's they did.

If you want people able to do stuff off the bat, hire those who did MBO or HBO (in DE, HBO=Fachhochschule, but DE doesn't have an MBO equivalent I think: that would be Ausbildung level afaict, except MBO doesn't require you to have a job at the same time). In English, my HBO translated their name to "University of Applied Sciences"; my MBO did not give an English translation of the degree.


There should be a 4 year Computer Science degree and a 2 year programming trade school, both equally difficult but in different directions, and companies should be aware enough to know which graduate they need.

Of course the best hires would be those willing to do both, if they're capable of a 6 year commitment.

Apprenticeships and on-the-job training is also an option, but nobody seems to be willing to do that anymore.


There are a (very) few coding boot camps that offer something resembling the practical model.

Check out Turing School. 40 hours/ week of classroom instruction, including 6 weeks of just programming before you get introduced to a framework.


I wonder how much the context of being in an interview screws this up.

They might be able to solve it perfectly if encountered in the wild, but if you see it in an interview part of you is thinking “there’s got to be a trick to this that I’m not seeing”


Yeah, the first time I interviewed after working at the same place for over ten years I was totally thrown off, because I was expecting to have a discussion about my experience and not have to whiteboard a method to list all files in a directory.


What’s the question?


FizzBuzz?


We found that doing both worked very well.

Overall interview is "write code to solve this puzzle." But first, do this very basic thing that is needed to solve the puzzle.

80% of candidates get hung up on the basic part of the interview and never even get to the point of looking at the rest of the problem. But of those that did, we got some great people.


I usually ask candidates to do example questions related to everyday stuff like log parsing. They won’t need anything fancier than a hash map. Many people are stuck after writing 4 lines of boilerplate. Some don’t even know the syntax of the language of their choice.


> Some don’t even know the syntax of the language of their choice.

I still struggle with this. I don’t find it a blocker, though. The bottleneck is usually to understand and parse business requirements. If you know about good code practices as well, then the least of your problems is to know whether you can use ‘in’ or ‘[]’, ‘var’ or ‘let’, ‘foreach’ or ‘for’, ‘def’ or ‘fun’, etc.


Could you give me a concrete example of what that looks like?


Sure.

Here's a log file of page accesses on our server. It's a CSV. The first column is the user, the second column is the page, and the third column is the load time for that page in milliseconds. We want to know what is the most common three page path access pattern on our site. By that I mean, if the user goes to pages A -> B -> C -> A -> B -> C the most common three page path for that user is "A -> B -> C".

    user, page, load time
    A, B, 500
    A, C, 100
    A, D, 50
    B, C, 100
    A, E, 200
    B, A, 450

    etc.
So for this first question you should give an answer in the form of "A -> B -> C with a count of N".

We would have two files, one simple one that is possible to read through and calculate by hand, and one too long for that. The longer file has a "gotchya" where there's actually two paths that are tied for the highest frequency. I'd point out that they'd given an incomplete answer if they don't give all paths with the highest frequency.

The second part would be to calculate the slowest three page path using the load times.

In my opinion it's a pretty good way to filter out people that can't code at all. It's more or less a fancy fizzbuzz.
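For concreteness, a compact sketch of one reference solution: slide a window of three pages over each user's sequence and count the windows (the tied-paths "gotchya" falls out naturally). The inline log data is made up for illustration:

```python
import csv
import io
from collections import Counter, defaultdict

log = """user,page,load_time
A,B,500
A,C,100
A,D,50
B,C,100
A,E,200
B,A,450
"""

# Group the pages each user visited, in order (the log is assumed chronological).
pages_by_user = defaultdict(list)
for row in csv.DictReader(io.StringIO(log)):
    pages_by_user[row["user"].strip()].append(row["page"].strip())

# Slide a window of three pages over each user's sequence and count every window.
counts = Counter()
for pages in pages_by_user.values():
    for i in range(len(pages) - 2):
        counts[tuple(pages[i:i + 3])] += 1

# Report every path tied for the highest count, per the problem statement.
if counts:
    best = max(counts.values())
    for path, n in counts.items():
        if n == best:
            print(" -> ".join(path), f"with a count of {n}")
```

The slowest-path follow-up is the same loop with load times summed per window instead of a counter.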


Is there a point in the log where there is a time cutoff for a viewer of a page? By that I mean: in your sample user A goes B > C > D, then there is a view by a different user, and then we are back to user A. What if the time difference between user A going to page E is like 10 minutes...is that a new pattern?

I feel like this is a fun thought experiment, but instead of thinking about "gotchas" I would be more open to having a discussion about edge cases, etc... The connotation of gotchas just seems to be like a trap where if you hit one, you've failed the interview.


The “gotchya” isn’t a way to fail the interview. But for candidates that ask about that edge case right away they get extra points.


On a quick glance I don't understand your example. Are you sure there is no mistake in it? I would ask which user has shown ABC page path, because I don't see any. Perhaps you made up the lines on the fly while writing it here, and the actual example is clearer? Already a bit dumbfounded by this. Such things can easily throw people off for the rest of the interview. Keep in mind the stress situation you put people into. Examples need to be 100% clear.


Yeah. I BS’d the example. I don’t have the materials for the question on hand.


Ok, I'll bite... without having googled it, is there some trick to solving this besides enumerating every three-page path and sorting them? This reads like some one-off variant of the traveling salesman problem.


This seems to be nothing like tsp. You'd partition the table into a single table per user, extract the page columns, map that sequence to the asked three-page-sequences (ABABA would get mapped to ABA, BAB, ABA), and count them.

That's probably doable in like 5 lines of pandas/numpy; a straightforward O(n) task really. The hard part is getting it right without googling and debugging, but a good interviewer would help you out and listen to the idea.

Maybe using Pandas is cheating since it gives you all the tools you'd want but I'd argue it's the right tool for the task and you could then go on how you'd profile and improve the code if performance were a concern.


> probably doable in like 5 lines of pandas/numpy

Yeah, that's what bugs me about this type of question... he might be looking for that specifically, or something that can scale to exabytes of data (so some sort of map/reduce thing). I'd probably produce something like this _in an actual interview scenario_:

    users = {}
    
    count = 0
    
    for line in open('input.txt'):
      count += 1
      if count == 1:
        continue
      (user,page,load_time) = line.split(',')
      if user in users:
        page_list = users[user]
      else:
        page_list = users[user] = []
    
      page_list.append(page.strip())
    
    count = {}
    max_count = 0
    max_seq = None
    
    for page_list in users.values():
      if len(page_list) > 2:
        for i in range(len(page_list) - 2):
          seq = ''.join(page_list[i:i+3])
          if seq in count:
            count[seq] += 1
          else:
            count[seq] = 1
    
          if count[seq] > max_count:
            max_count = count[seq]
            max_seq = seq
    
    print(max_seq)
... and it would really depend on whether the interviewer just liked me personally whether he'd say, "yeah, that's reasonable" or rip it apart for using too much memory, taking too much time, etc...


I agree with your sentiment. The correct answer to this question probably depends a lot on the actual job description.

Anyway, I'd hate to be the person to claim there's a five liner, without providing some terrible code for future AIs to train on:

n = 3 # length of path

for user in (df := pd.read_csv(io.StringIO(input)))["user"].unique():

    counter = Counter([seq for seq in zip(*[df[df["user"] == user]["page"][k:] for k in range(n)])])

    equal_lengths = sum([1 for i in counter if counter[i] == counter.most_common(1)[0][1]])

    print(f"most common path for user {user}: {counter.most_common(equal_lengths) if len(counter) > 0 else 'input too short'}")


I really gotta learn how to use Pandas lol.

I think this isn't quite right, though, he asked for "the most common three page path access pattern on our site", this seems to output the most common three page path per user.

(for the future AI to mull over)


Nothing so complicated. It’s supposed to be a level or two above fizzbuzz


Are these records assumed to be in order?


Yes. That would of course be included in the problem statement


That’s not obvious. If you are including “gotchas” this may be another one.


It's only a gotcha to anyone who has never looked through a log file.


I have seen a lot of log files, never one in CSV format or without timestamps.


Since there are no timestamps, it being in order is a requirement, because otherwise it's unanswerable. Since chronological order is indeed virtually universal for any sort of log file, it's also a fairly safe assumption, but sure, if you want to double-check assumptions then it's a valid question to ask. I do think it was obvious enough, though, and the question that came to my mind was rather about scale, like: can I assume the number of users and unique paths will both fit in RAM?

Btw, if you want CSV log files, look no further, and not all my data logs have timestamps either! :D The particular timestampless case I'm thinking of, I wanted to log pageload times for a particular service so it logs the URI (anonymized) and the loading time, though I think that's not csv but just space separated, one entry per line


Or citing the previous “gotcha” this is a trick question and I am meant to describe a change to the system in which useful logs can be captured.


Candidates that handle this in a streaming fashion get extra points, but it’s not required.


Okay I tried it. I got interrupted twice for like ~12 minutes total, making the time I spent coding *checks terminal history* also 12 minutes. I made the assumption (would have asked if live) that if a user visits "A-B-C-D-E-F", then the program should identify "B-C-D" (etc.) as a visited path as well, and not only "A-B-C" and "D-E-F", which I felt made it quite a bit trickier than perhaps intended (but this seems like the only correct solution to me). The code I came up with for the first question, where you "cat" (without UUOC! Heh) the log file data into the program:

    import sys
    unfinishedPaths = {}  # [user] = [path1, path2, ...] = [[page1, page2], [page1]]
    finishedPaths = {}  # [path] = count
    for line in sys.stdin:
        user = line.split(',')[0].strip()
        page = line.split(',')[1].strip()
        if user not in unfinishedPaths:
            unfinishedPaths[user] = []
        deleteIndex = []
        for pathindex, path in enumerate(unfinishedPaths[user]):
            path.append(page)
            if len(path) == 3:
                deleteIndex.append(pathindex)
        for pathindex in deleteIndex:
            serializedPath = ' -> '.join(unfinishedPaths[user][pathindex])
            if serializedPath in finishedPaths:
                finishedPaths[serializedPath] += 1
            else:
                finishedPaths[serializedPath] = 1
            del unfinishedPaths[user][pathindex]
        unfinishedPaths[user].append([page])
    
    for k in sorted(finishedPaths, key=lambda x: finishedPaths[x], reverse=True):
        print(str(k) + ' with a count of ' + str(finishedPaths[k]))
Not tested properly because no expected output is given, but from concatenating your sample data a few times and introducing a third person, the output looks plausible. And I just noticed I failed because it says top 3, not just print all in order (guess I expect the user to use "| head -3" since it's a command-line program).

I needed to look up the parameter/argument that turns out to be called "key" for sorted() so I didn't do it all by heart (used html docs on the local filesystem for that, no web search or LLM), and I had one bout of confusion where I thought I needed to have another for loop inside of the "for pathindex, path in ..." (thinking it was "for pathsindex, paths in", note the plural). Not sure I'd have figured that one out with interview stress.

This is definitely trickier than fizzbuzz or similar. Would budget at least 20 minutes for a great candidate having bad nerves and bad luck, which makes it fairly long given that you have follow-up questions and probably also want to get to other topics like team fit and compensation expectations at some point.

edit: wait, now I need to know: did I get hired?


At a glance it seems correct, but there's a lot of inefficiencies, which might or might not be acceptable depending on the interview level/role.

Major:

1. Sorting finishedPaths is unnecessary given it only asks for the most frequent one (not the top 3 btw)

2. Deleting from the middle of the unfinishedPaths list is slow because it needs to shift the subsequent elements

3. You're storing effectively the same information 3 times in unfinishedPaths ([A, B, C], [B, C], [C])

Minor:

1. line.split is called twice

2. Way too many repeated dict lookups that could be easily avoided (in particular the 'if key (not) in dict: do_something(dict[key])' stuff should be done using dict.get and dict.setdefault instead)

3. deleteIndex doesn't need to be a list, it's always at most 1 element


> there's a lot of inefficiencies, which might or might not be acceptable

This is exactly what irritates us about these questions. There's no possible answer that will ever be correct "enough".


Just like in real life, there's no perfect solution to most problems, only different trade-offs.


Thanks for the feedback!

I realized at least the double-calling of line.split while writing the second instance, but figured I'm in an interview (not a take-home where you polish it before handing in) and this is more about getting a working solution (fairly quickly, since there are more questions and topics and most interviews are 1h) and from there the interviewer will steer towards what issues they care about. But then, I've never had to do live coding in an interview, so perhaps I'm wrong? Or I'm overoptimizing what would take a handful of seconds to improve.

That only ever one user path will hit length==3 at a time is an insight I hadn't realized, that's from minor point #3 but I guess it also shows up in major points #2 and #3 because it means you can design the whole thing differently -- each user having a rolling buffer of 3 elements and a pointer, perhaps. (I guess this is the sort of conversation to have with the interviewer)

Defaultdict, yeah I know of it, I don't remember the API by heart so I don't use it. Not sure the advantage is worth it but yep it would look cleaner

Got curious about the performance now. Downloading 1M lines of my web server logs and formatting it so that IPaddr=user and URI=page (size is now 65MB), the code runs in 3.1 seconds. I'm not displeased with 322k lines/sec for a quick/naive solution in cpython I must say. One might argue that for an average webshop, more engineering time would just be wasted :) but of course a better solution would be better

Finally, I was going to ask what you meant with major point #1 since the task does say top 3 but then I read it one more time and...... right. I should have seen that!

As for that major point though, would you rather see a solution that does not scale to N results? Like, now it can give the top 3 paths but also the top N, whereas a faster solution that keeps a separate variable for the top entry cannot do that (or it needs to keep a list, but then there's more complexity and more O(n) operations). I'm not sure I agree that sorting is not a valid trade-off given the information at hand, that is, not having specified it needs to work realtime on a billion rows, for example. (Checking just now to quantify the time it takes: sorting is about 5% of the time on this 1M lines data sample.)

For anyone curious, the top results from my access logs are

   / -> / -> / with a count of 6120
   /robots.txt -> /robots.txt -> /robots.txt with a count of 4459
   / -> /404.html -> / with a count of 4300


> As for that major point though, would you rather see a solution that does not scale to N results? Like, now it can give the top 3 paths but also the top N, whereas a faster solution that keeps a separate variable for the top entry cannot do that (or it needs to keep a list, but then there's more complexity and more O(n) operations). I'm not sure I agree that sorting is not a valid trade-off given the information at hand, that is, not having specified it needs to work realtime on a billion rows, for example. (Checking just now to quantify the time it takes: sorting is about 5% of the time on this 1M lines data sample.)

You need the list regardless, just do `max` instead of `sort` at the end, which is O(N) rather than O(N log N). Likewise, returning top 3 elements can still be done in O(N) without sorting (with heapq.nlargest or similar), although I agree that you probably shouldn't expect most interviewees to know about this.
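For example, a quick sketch of both (made-up counts, same shape as the finishedPaths dict above):

```python
import heapq

# made-up path counts, same shape as finishedPaths
finishedPaths = {'A -> B -> C': 5, 'B -> C -> D': 9, 'C -> D -> E': 2}

# O(N): just the single most frequent path, no sorting needed
top1 = max(finishedPaths.items(), key=lambda kv: kv[1])

# O(N log k): top k entries without fully sorting the whole dict
top3 = heapq.nlargest(3, finishedPaths.items(), key=lambda kv: kv[1])
```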

As for the rest, as I've said, it depends on the candidate level. From a junior it's fine as-is, although I'd still want them to be able to fix at least some of those issues once I point them out. I'd expect a senior to be able to write a cleaner solution on their own, or at most with minimal prompting (eg "Can you optimize this?")

FYI, defaultdict and setdefault are not the same thing.

  d = defaultdict(list)
  d[key].append(value)
vs

  d = {}
  d.setdefault(key, []).append(value)
useful when you only want the "default" behavior in one piece of code but not others

  >   / -> / -> / with a count of 6120
  >   /robots.txt -> /robots.txt -> /robots.txt with a count of 4459
LOL


Your solution looks alright. I think you could use a defaultdict() to clean up a few lines of code, and I don't fully understand why you have two nested loops inside your file processing loop.

Here's my solution in TS.

    const parseLog = (input: string) => {
        const userToHistory: {[user: string]: string[] } = {}
        const pageListToFrequencyCount: { [pages: string]: number } = {}

        for (const [user, page, ] of input.trim().split("\n").map(row => row.split(", "))) {
            userToHistory[user] = (userToHistory[user] ?? []).concat(page);

            if (userToHistory[user].length >= 3) {
                const path = userToHistory[user].slice(-3).join(" -> ")

                pageListToFrequencyCount[path] = (pageListToFrequencyCount[path] ?? 0) + 1;
            }
        }

        return Object.entries(pageListToFrequencyCount).sort(([, a], [, b]) => b - a); // descending: most frequent first
    }
It could be slow on large log files because it keeps the whole log in memory. You could speed it up significantly by doing a `.shift()` at the point when you `.slice(-3)` so that you only track the last 3 pages for any user.


"everyday stuff like log parsing"

Unless you're talking about using tail / grepping some linux logs, I've never done this once, let alone on a daily basis...

But if needed frankly I would look up any needed regex / pattern for this on the job and on the fly. The topic of "log parsing" is massive anyway. What types of logs? Is this something like request logs from NGINX? Custom stderr / stdout from some cron job? Some sort of horrible XML based system dump? I could go on and on...

When you know the solution already of course its obvious.


Codebase fraud is rampant too.

The company has excellent coding and test interviews but the codebase is just shit.


> The company has excellent coding and test interviews but the codebase is just shit.

It is also possible that this is because of all the fraud developers they've hired. A "company" is not a person. It doesn't produce shit code itself. While management can also be a source of bad codebase, it is usually the developers.


Oh? Do you not blame the front office of a sports team for failing to win a championship or assembling a shitty roster?


No, because resume fraud is very rare in sports. What is expected from an athlete is very clear. If they can't run x metres in y seconds then they are out. This is not so clear cut in software development, so yes, I don't blame management when they accidentally hire an incompetent, fraud developer.

Obviously it is a different story when management hires good developers but can't maintain a good workplace culture, apply unnecessary time pressure, don't pay them enough etc.


> While management can also be a source of bad codebase, it is usually the developers.

Plot twist: the devs who are good at leet code quizzes are frequently crap product developers on teams.


Algorithm questions are overrated, but asking a real life question where a naive solution is n^2 but basic knowledge of standard tools brings it down to n log n is always a good idea.


Nice idea. Do you have a good example?


Yup, I've developed a workflow that starts with writing a brain-dead easy fizzbuzz and gradually adds features and complexity. The way I've done it, it gives you a way to judge levels as well as basic competency.

If you can't, or can just barely, complete fizzbuzz in the allowed interview time with a lot of coaching in your language of choice, then you definitely aren't ready to work as a SWE. If you breeze through all my extra sections in half the time, then you're great. Partway through, and you're probably a decent junior to senior engineer.


> fizzbuzz in the allowed interview time

Which would be how long? I haven't practiced it specifically so it would be a good test I guess


We have an hour total per interviewer. The basic FizzBuzz is intentionally incredibly simple; no reason anyone qualified to be a full time software engineer should take more than 10 minutes to build it in a pre-set-up environment in their language of choice.
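For anyone who hasn't seen it, the spec is roughly this (my sketch; details like the range vary between interviewers):

```python
def fizzbuzz(n: int) -> str:
    # multiples of both 3 and 5 (i.e. of 15) must be checked first
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

for i in range(1, 101):
    print(fizzbuzz(i))
```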


I haven't done it in a while and never practiced it specifically, 10 minutes sounded quite short for writing working code. Looking up the description again and timing myself now (starting the timer after reading the description, since I didn't know how long it'd take to find a good description whereas that's a given at the interview): yeah okay, that took 63 seconds for printing the results for the range 0-100. Good interview question, quick and easy enough (though I don't use n%x==0 a lot in real coding, it has come up and it's a basic enough task).

Thanks for the answer!


> I don't use n%x==0 a lot in real coding

I use it all the time, especially when logging the progress of a long running process at every x records.


True, I use it there as well -- and then often wish afterwards that I had done a more expensive call to get the current time to print every X seconds instead of after an arbitrary number of records. Combining the two (with a fairly low %x) is probably the right answer
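Something like this, I imagine (thresholds arbitrary; the cheap modulo gate keeps the clock call off the hot path):

```python
import time

def process_all(records, check_every=1000, report_every_s=5.0):
    """Process records, logging progress at most every report_every_s
    seconds, but consulting the clock only once per check_every records."""
    last_report = time.monotonic()
    done = 0
    for record in records:
        # ... real per-record work goes here ...
        done += 1
        if done % check_every == 0:
            now = time.monotonic()
            if now - last_report >= report_every_s:
                print(f"processed {done} records")
                last_report = now
    return done
```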


Yup, that sounds about right.

I'd also note that one of the reasons I prefer doing such things in person is (in addition to not taking up too much of the candidate's free time) that if they misinterpreted something or are going off in the wrong direction or something, I can correct them right away instead of letting them waste a bunch of time. This also lets you see how they tend to interpret instructions and take criticism and corrections.

It's also IMO a failure mode for a candidate to get a simple task that should be done in 5-20 lines of simple code and try to build some over-complex super-modular and extensible thing for it, and resisting correction on it not needing to be that over-engineered.


The one piece of help I will offer in FizzBuzz is a reminder of how you can use the modulo operator, for exactly that reason.


Exactly. Most of the medium-difficulty interview questions are just typical CS algorithms that you are supposed to know. If you are a competent software engineer, it doesn't take long to brush up and get enough practice for all of them.


If you're "supposed to know", I would assume enough use to not need to brush up. Brushing up implies not used enough to keep in the mental model.


Knowing != passing the interview. They are in different layers. Any CS student would know about things like hashmaps, stacks, queues, and linked lists. You encounter those data structures every day, especially when debugging source code.

That's still different compared to an interview, though, because you need to explain, code, and solve a question in 20-30 minutes. It's not hard given enough time, but most people would need to practice some questions beforehand so that they can solve them under the time limit.


Can you give me a couple of examples? I'd like to see where I stand with my knowledge.


One of the first questions I ask is "create a dictionary with three elements in Python and assign it to a variable"

The amount of insane answers I've seen to that one alone...

Then if they pass, I test proficiency by having them loop over the dict and update each value in-place.
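For the curious, the kind of answer I'm looking for (keys and values arbitrary):

```python
# create a dictionary with three elements and assign it to a variable
prices = {"apple": 1.0, "banana": 0.5, "cherry": 3.0}

# loop over the dict and update each value in place
# (mutating values while iterating keys is fine; adding/removing keys is not)
for key in prices:
    prices[key] *= 2
```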


I wonder if people get spooked by the simplicity and think it’s a trick question.


I give them every opportunity to ask questions and even use search/LLM, as long as they acknowledge it. Most candidates just fundamentally aren't practiced enough.


I’m divided. I can do what you ask, but not without googling it. I can produce performant and robust code, but not without double checking on google. I’m unable to deliver code that compiles in any language without checking the documentation. Pseudocode, yeah sure.

So, I wouldn’t pass these kind of interviews. In over a decade I’m never being asked these kind of questions though (I have done take home assignments and leetcode, but always with google opened)


Reality check: if you say on your resume that you know Python, then you should be able to make a dictionary with three items and assign it to a variable without googling anything.


Fair point. I don’t like resumes in which people state that they know X or Y. I prefer the ones focused on what problems were resolved using what technologies.

I have used Python to solve average business problems, yet I cannot produce non trivial code without looking at the documentation. Same for the other dozen programming languages I have used in the past.


>yet I cannot produce non trivial code without looking at the documentation

    hello = {1:1, 2:2, 3:3}
is about as trivial an ask as someone can make.


I know enough python to read Calibre's code and understand it, but I keep forgetting syntax details and the actual name of methods and properties, because I'm influenced by whatever language I've been writing in lately. I know what to do, but it will be pseudo-python-code.

That can usually be solved by a quick read of the reference documentation (2-5 mn?).


That's fair. After you know more than a few languages, it's easy to know what you want to express in it and the limitations it has, while the particular name they happened to give those concepts is pretty arbitrary and quickly peeked at if you haven't used it in a while.

For my part, I've written enough python that I doubt the literal syntax will ever be far from my fingers.


One of the things that can be tricky about this happens when you’ve legit worked in a few languages and the semantics are perfectly clear in your head but the syntax for any language you haven’t used recently is crowded out by those you have.

I needed a small perl script recently (perl 5’s feature set & stability plus availability in the environment made it the right fit) and realized after 15+ years of no perl much of the specific syntax was fuzzy to outright gone from my head even though I’d contributed to large perl projects for years.

Python work is much more recent, but I’d bet I would accidentally mix in some JS or even PHP syntax doing the dictionary assignment, at least w/o a cursory lookup. I’d like to think it’d come through that I know what a dict is and what it means to set one up and operate over it, but who knows, I might be interviewing with someone who is evaluating skill on the basis of immediacy of syntactic recall.


dict_variable = {1: 2, 3: None}


And you work full time as a software engineer, or some other role? Honestly blown away if you work as a programmer that this sort of request would require looking at documentation.


I think it gets harder to remember exact syntax details the more experience you have and the more you have worked with different (but very similar) programming languages. I get what OP means: if you have worked with Ruby, JS, Python, Go, PHP, Kotlin, etc., you can easily misremember things like the order of parameters for a given function, whether if conditions require parentheses, whether to use {} or [] for maps, etc.

If you have just started your career and are fully invested in 1 or 2 programming languages, sure, this may sound alien to you.


I get it. I've done a ton of languages too. But, like, that's so ridiculously easy to handle in an interview, right? "I think it's like this [show example], but maybe the hash rocket style is Ruby and it's actually colons. Either way, you get the idea."

If your interviewer finds that problematic, well, that's on them.


Not who you asked, but I work full-time as a Ruby dev. Off the top of my head, I don't remember the order of arguments of the #reduce method block (it's the opposite of what Elixir uses), the exact syntax of the throw/catch mechanism (in Ruby this isn't exception handling), the methods and parameters for writing into a file, bitwise operators, I always ask a LLM about packing/unpacking bytes between arrays and binary strings and many other things. I also mix up #include? and #includes? because these differ between Ruby and Crystal, and there's also #includes in Rails (AR).

So, the equivalent of creating a dictionary, yeah, sure. But there's loads and loads of stuff that I only use maybe once a week (and someone else maybe uses daily) and that I'd have to awkwardly Google (I use Kagi btw) even during an interview.


Same reply as above, you'd easily be able to speak to this in an interview and not hit the "fraud" alarm. "I think it's accumulator, element here on reduce, but I may have them transposed."

Your interviewer is probably also questioning if it's (a, e) or (e, a), but you passed the fraud filter.


An interview question I got (for a security role): "You type www.$company.com into the address bar and press enter. What happens?" After jokingly clarifying they were not interested in the membrane keyboard interactions, they were more than satisfied with an answer explaining recursive DNS resolution, TCP and TLS handshakes, the HTTP request itself, and I think from there we got sidetracked. They also asked about document file upload risks because that was a particular concern in their application. I didn't think of the specific answer they wanted to hear, but after giving me the keyword XXE, I could explain it in detail which was also sufficiently satisfactory so far as I could tell. Fun interview overall.

In interviews I've done, we only looked for culture fit because the technical part was a coding assignment they had already done. Honestly too big an assignment since it's uncompensated (not my decision), but to my surprise nobody turned it down -- and everyone got it wrong. Only n=3 or n=4 iirc but those applying for a coding position could not loop through a JSON-lines file too big to fit in RAM (each line was a ~1kb JSON object, but there's lots of lines) and sum some column from each JSON object into a total value. The solutions all worked to some degree, but they all chopped up the file, loaded the first chunk into RAM, and gave an answer for that partial dataset only.
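For reference, the streaming shape we were hoping to see, sketched from memory (the field name here is made up):

```python
import json

def sum_column(path, column="amount"):
    """Sum one numeric field across a JSON-lines file while only
    ever holding a single line in memory at a time."""
    total = 0
    with open(path) as f:
        for line in f:  # file objects iterate lazily, line by line
            total += json.loads(line)[column]
    return total
```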


What were you expecting them to do instead: use mmap instead?


You see this sort of interview incompetence at every large company. There's simply no way to force software engineers to be fungible, but that's what the processes of many large companies expect.


I have prompts but I give the solution away. It's basic shit like factorial or fibonacci. People still fail. Resume fraud is rampant.

EDIT: Another thing: about 80% of the candidates I interview wouldn't be able to pass our Product Manager SQL interview. It's basic shit, but not as basic as the stuff I ask. All the PMs in my current job have better SQL skills than 90% of the backend engineers I interviewed in the last two years. Resume fraud is rampant.


> our Product Manager SQL interview

Your what now? (Well, unless you're a database company.)

In all my time as a PM I've never had to know SQL, let alone prove competence in it.

In all my time as a PM, the vast majority of my "SQL" usage is crafting slightly more advanced filters and queries in Jira or similar.

There are analytics platforms, telemetry, metrics, you name it.

I've worked as a PM on fairly complex products (integrating and manipulating healthcare data, managed platforms atop Kubernetes, etc.) and never needed SQL knowledge.

As a general rule in terms of how I allocate my PMs and their efforts now? "PMs have far more valuable things to be doing than lovingly crafting handwritten SQL for god-knows-what reasons."


If you have analytics, a PM being able to answer their own behavior/business questions by querying logs is really valuable.


Yes, I would love for my PM to do any analysis from our logs.


I know right? Sounds a bit crazy. But we are successful in hiring great PMs, so I can say it's working.

But it's super basic SQL. Select and joins, mostly.

Developers with several years of experience in their resume still fail it.


FYI I wouldn't know how to do fibonacci sequence because I don't know its definition. I could make a guess because it came up as a toy problem before, but because I never actually needed it for anything I'm not super familiar. Compound interview stress and I'd potentially get factorial wrong as well because that's also not something I'd normally implement.

I might recommend, when asking this question, to give the definition with a few example inputs and outputs. That should avoid these types of issues where people are perfectly capable of coding the requested algorithm but aren't mathematicians / toy problem experts


> FYI I wouldn't know how to do fibonacci sequence because I don't know its definition

OP here. As I said, "I give the solution away".

I basically tell the definition/formula at the beginning, and give examples and test cases for you to check your results against.

I also help people along, like I would in pair programming.

A lot of people still fail. Some with allegedly 25 years of experience. Luckily this is at the beginning, so I don't have to spend the whole interview with them.


ohh come on. Nobody ever needed Fibonacci or Factorial for anything, but if you bomb those after a few clarifying questions (like not knowing the initial values of fib or its definition) I'm not sorry.

Factorial is just a for loop and with Fibonacci you might want to talk a bit about recursion and caching. That's it.
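Roughly this, i.e. one of many acceptable shapes (lru_cache being the lazy way to get the caching):

```python
from functools import lru_cache

def factorial(n: int) -> int:
    # "just a for loop"
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    # naive recursion is O(2^n); caching brings it down to O(n)
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```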


Things like these are rote memorized solutions that fill what could otherwise be useful problem solving or software knowledge space in my brain.

They're dumb.


Then don't memorize. I'm OP and as I said I give the solution, candidate only has to code. Like I would with FizzBuzz.

People still fail.


> I wouldn't know how to do fibonacci sequence because I don't know its definition

You know you can ask the interviewer about this, right?


Yes, and I would because if I don't know then that's my only option (short of sitting there and going "no can do"). However, given their phrasing "basic shit like factorial or fibonacci" I got the vibe this is supposed to be known by the candidate and they'd judge the candidate negatively (before having written a single line) for needing to ask


I'd assume that the purpose of a simple question like that is to test basic programming skills (can you write a loop?). Testing knowledge of a specific math concept doesn't really fit into this goal, so you're unlikely to be penalized for it.


You are correct.

We do give the definition of Fibonacci, and we steer the candidate in the right direction if they make math errors.


I agree that Resume fraud is rampant. Simple questions that should be easy for “20 years of experience doing that thing” get bad answers >50% of the time


> Asking me to solve an extremely esoteric problem that has zero relevance to my day-to-day

I'm always surprised how useless something is when I don't know it, and suddenly once I do know it, I solve lots of problems with it!

I've heard programmers grumble about how useless calculus is, before I learned calc I used to grumble about that too. After I learned it there were countless problems I unlocked solutions for by applying the thinking I learned in calculus.

I've heard programmers say that you'll never need to implement your own sort for mundane tasks, but, it turns out that after really grokking topological sort I used it countless times for fairly mundane problems like creating plots.

I've heard programmers say that learning the lambda calculus is a waste of time, and nobody uses functional programming. Yet it was people who understood these things that transformed Javascript from a useless browser oddity into one of the most widely used languages. It was seeing that Javascript was essentially a Scheme that unlocked its true potential.

Over my career it's remarkable how many "esoteric problems" have led to me solving hard tasks or even shipping entirely new products. If you're only focused on what is required of your day job today, you're going to be at best a mediocre engineer.


> after really grokking topological sort I used it countless times for fairly mundane problems like creating plots.

I'm interested in learning more - in what scenario was topological sorting essential for generating plots, and what specific problem did it solve?


Essentially a funnel report where you want to know the total percent of the population that has reached a given path, but you only know the output probabilities of each step in the funnel (node). This is a fairly common situation.

As a simple example: you know after signup 20% of customers purchase, 80% don't, but what you want to trivially add in is the fact that of the users in a marketing campaign, 10% of them signed up, which means for the marketing funnel 2% purchase. Now consider that you have 20 or more such events in your funnel and you want to combine them all without doing all the math by hand. Likewise you want to be able to add a newly discovered step in the funnel at will.

Using a topological sort you can take an arbitrary collection of nodes where each node only knows that probability the next nodes are, sort them and then compute the conditional probabilities for any user reaching a specific node fairly trivially, given the assumption that your funnel does represent a DAG.

If you don't perform the topological sort then you can't know you have calculated all the conditional probabilities for the upstream nodes, which makes the computation much more complicated. Topological sort is very useful any time you have an implied DAG and you don't want to have to worry about manually connecting the nodes in that DAG.
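To make that concrete, here's a sketch using the signup numbers from the example above (Python's stdlib graphlib; node names and probabilities are made up):

```python
from graphlib import TopologicalSorter

# edges[src][dst] = P(reach dst | reached src)
edges = {
    "campaign": {"signup": 0.10},
    "signup": {"purchase": 0.20, "churn": 0.80},
    "purchase": {},
    "churn": {},
}

# graphlib wants node -> set of predecessors
preds = {node: set() for node in edges}
for src, outs in edges.items():
    for dst in outs:
        preds[dst].add(src)

# absolute probability of reaching each node from the funnel entry;
# topological order guarantees every parent is finished before its children
reach = {node: 0.0 for node in edges}
reach["campaign"] = 1.0
for node in TopologicalSorter(preds).static_order():
    for dst, p in edges[node].items():
        reach[dst] += reach[node] * p
```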


That section was less complaining about the nature of the problem and more about the harshness of judging the solution. The irrelevance to day-to-day work merely emphasizes the unfairness of the judgement.

If I'm put on the spot, under time pressure, to solve a problem I've never seen before and will likely never see again on the job, AND you reject me because my solution was slightly incorrect or naive, well it's obvious what the nature of the job is at that point. You're filtering for candidates that can and will devote dozens of hours to Cracking the Coding Interview and LeetCode. Sorry, I have a full-time engineering job and two young kids, and you clearly don't value my capabilities or experience or time, you value my willingness to spend my extremely limited free time studying to ace your half-baked engineering IQ test, for the honor of possibly working for you.

I once had a company cancel a scheduled interview when I informed them I had received an offer from another company, but was more interested in them and was wondering if I could step up the interview schedule. They told me unless I was willing to reject the existing offer and submit to their multi-week interview process we couldn't move forward. Esoteric, irrelevant algorithm questions with strict judgements are just a different version of that same arrogance.


The point is never to have a working graph clusterization algo. It's to see that you not only theoretically know the difference between deque and heapq, but also know which is the right thing for a particular task you're working on.

I've had people claiming 15 years of dev experience fail to find the 20th last comma in a file and not even being close after an hour.
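For the record, one perfectly fine answer to the comma question, sketched (it even happens to show the deque point: you want recency, not ranking, so deque rather than heapq):

```python
from collections import deque

def nth_last_comma(path, n=20):
    """Byte offset of the n-th comma counting from the end of the
    file, or None if the file contains fewer than n commas."""
    last_n = deque(maxlen=n)  # keeps only the last n offsets seen
    offset = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            start = 0
            while (i := chunk.find(b",", start)) != -1:
                last_n.append(offset + i)
                start = i + 1
            offset += len(chunk)
    return last_n[0] if len(last_n) == n else None
```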

You only start getting it once you see how bad 9 out of 10 people really are.


Is it that they're bad, or is it that your tests stumble upon a small area in the huge realm of software engineering that those particular engineers struggled with on those particular days? Or, if you're asking these people the same group of questions and seeing a 90% failure rate, maybe your chosen questions have little overlap with common engineering tasks?

Why is it so hard to believe that even after 15 years of producing useful, quality software someone might still struggle with a random problem in an interview setting? What level of arrogance leads to this holier than thou thinking?


> Asking me to solve an extremely esoteric problem that has zero relevance to my day-to-day and if the solution I come up with on the spot under time pressure is incorrect or even just not the most efficient I'm rejected? At that point you're just filtering for starry-eyed recent grads you can underpay.

Eh I think it's fine to try to find people who are interested in solving problems and will give any maths problem a go, no matter how hard it may seem.


I'll give anything a go, I just expect my solution to not be harshly judged in the context of a job interview.


Imo, there are two kinds of programmers: people who can write code to build stuff, and people who can write code to build stuff and are also conversationally fluent in the theory behind writing code. The second group is 5x more useful than the first, and coding interviews are testing which group you're in. Often the first group doesn't think the extra skill of fluency is important, which is fine, think what you want, but they're definitely wrong, and I wouldn't want to work with those people; when there are actual problems to solve I'm going to go looking for people in the second group to figure them out. A terrible situation is to end up with a team of entirely people who can code but can't theorize about code, because they'll build a mountain of crap that other people have to rebuild later.

(Now it's true that some people can't theorize quickly, or in front of someone else, or especially in a stressful interview where there's a lot on the line. Those are real issues with the format that need solving. Not to mention the "esoteric trivia" sorts of questions which are pointless.

But the basic objection that "coding tests aren't testing the skills you need in your day job" is absurd to me. They're not the skills you use everyday, they're the skills you need to be able to pull out when you need them, which backstop the work you do every day. Like your mechanic doesn't use their "theory of how engines work" every day to fix a car, but you wouldn't want a mechanic who doesn't know how an engine works working on your car for very long either...)


I think there’s another group: people who can come up with solid code by using search tools.

I code, sure, but I will never come up with a custom solution for any non-trivial problem. I know where to find appropriate solutions (the best ones) because I’m aware of what I don’t know (I read a lot of tech books). You cannot test this in the classic tech interview (because I would be googling 75% of the time).

The final result is: you want good code or not? How I come up with it should be secondary.


As the problems become harder, you can’t just Google for solutions. Really great engineers often build things that nobody has ever built before — or at least not documented how they built it publicly. If you don’t have fluency in the fundamentals, you won’t be able to piece together the parts that you need to build novel systems.

Second, part of hiring junior engineers is evaluating their growth prospects — e.g. new grads are often completely unproductive for up to a year, and firms make large investments when hiring them (maybe up to $200,000 in mentorship and wages). People with the attitude “I don’t need to learn/understand things, I can just Google them” are unlikely (IMO) to reach that level of seniority.


In my experience, it's very rare that you're in a job that requires you to come up with a solution to a problem no one has ever dealt with before. Custom solutions are often a sign the engineers in question didn't do the appropriate research to find the standard solution for the problem.

I've been a software developer for 10 years, and I've never worked on a problem that someone else hadn't come up with a solution for somewhere. And if they haven't, alarm bells go off as to why I'm the first to do this, and where down the pipeline did I deviate so horrifically from the norm.


I strongly agree with this. I worked on low level algorithms in bioinformatics circa 2010. Writing mapping algorithms and variant detection in C/C++. Most/all of what we did was adapt known compression and database algorithms. The "best" aligner is still BWA (Burrows-Wheeler Aligner), which uses the Burrows-Wheeler Transform, popular in a lot of compression utilities.
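For the curious, the transform itself is small enough to sketch naively in Python (this is just an illustration of the idea; real aligners like BWA build it via suffix arrays rather than materializing every rotation, which would be hopeless at genome scale):

```python
def bwt(s):
    # Naive Burrows-Wheeler Transform: append a sentinel that sorts
    # before every other character, sort all rotations of the string,
    # and take the last column. O(n^2 log n) as written; suffix-array
    # constructions bring this down to O(n).
    s = s + "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

print(bwt("banana"))  # -> annb$aa
```

The last column groups identical characters that share context, which is exactly what makes it compressible and searchable.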


Could you please give a firsthand account of an instance when a great engineer built a novel solution? I feel NIH syndrome is a far more common cause of building things from the ground up.


I've seen it at least ~10ish times in my pretty short career. I think you're maybe imagining someone building, like, "Linux from scratch". Novel solutions don't have to be that big; they just have to be novel.

Someone I worked with once went off on their own and implemented a test framework that solved a lot of the problems we'd been having. They could have just written tests the normal way; they did it a different way; it was great. Someone else made a debugging tool that hooked into Python's introspection abilities and made flame graphs of the stack traces. Not exactly groundbreaking science, but it was entirely "innovative" in the sense that no one expected or wanted that tool, yet it solved a real issue. Someone else made a JS library that abstracted out the problem of organizing these dynamic workflows for an internal-facing tool. Small, but novel, and it organized the ideas in a way that made it possible to build on the abstractions later. For my part, we had this chunk of business logic that was a pain to debug and I had the thought to factor it out into a standalone library that was completely testable at the interface. Not groundbreaking, but no one had thought to do it and it obsoleted the issues from before immediately. Etc.

If your job is anything more complicated than "feature implementation", there are chances for innovation left and right, and good engineers see and pursue them.


An engineer on the Search team at Google designed some novel way to serialize parts of the search index so that it could be updated more easily.


> As the problems become harder ...

What percentage of engineers are working on truly hard technical problems?

I can only speak from experience but the vast majority of us are doing the same shit with a different name signing the checks.

The world doesn't need millions of brilliant engineers. It needs construction workers that can assemble materials.

I am fatigued by every tech bro in the industry that thinks they need to find the next genius with their ridiculous hiring process.


I’ve come up with some of the core solutions in my org to solve massive big data problems and had to depend on intuition and theory instead of the web. I still failed a merge sort whiteboard challenge in an interview. Some people just can’t deal with these inane questions in an artificial environment.


yeah, that's wrong. I don't only want good code. I want a smart person who can write code and also do a bunch of other things, like make good decisions about code and mentor other people to write good code and fix problems before they happen and keep everything maintainable and clean. How you come up with your code per se is secondary, yes, but I'm testing for a bunch of other things that are not secondary as well.


Curious. What skills from the "return all elements from a matrix in a spiral order" make you a good mentor? Or say something about your skills keeping code clean?


None, but a) if you can't write that trivial code I don't want to be on a team with you anyway because I'm going to be teaching you how to basically think, and b) the part where you talk about the code, not the part where you write it, is the part where I try to detect if you're any good at communication or abstract thought.

(disclaimer: all of this is notwithstanding the fact that some people's brains shut down specifically during interviews/places where they feel under pressure, which I have nothing but sympathy for. Afaik that's an unsolved problem with coding interviews. I would always try to lower their stress but it is not a sure thing.)


I don't know what "elements from a matrix in spiral order" is supposed to mean. If it is that for the matrix

    A B 
    C D
    E F
you are supposed to return A B D F E C, then if you cannot do this, I don't care about how clean your code is.


https://leetcode.com/problems/spiral-matrix/description/

If you ask me a question like this, I'm going to judge your company pretty harshly.


Thanks for the link, but I don't see a problem with that question. If you find it difficult, I wouldn't want you anywhere near the code base of my (hypothetical) company. So I guess the question would be doing its job just fine, for both of us.


I didn't say it was difficult, but man, you sound like it with that attitude/arrogance.


Isn't the answer "sure, seems trivial, let me do that in ten minutes for you"?
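For anyone curious, the "ten minute" version is roughly this boundary-shrinking loop (a sketch in Python, walking the outer ring and shrinking the four boundaries inward):

```python
def spiral_order(matrix):
    # Traverse: top row left->right, right column top->bottom,
    # bottom row right->left, left column bottom->top; then shrink
    # the boundaries and repeat until they cross.
    result = []
    if not matrix or not matrix[0]:
        return result
    top, bottom = 0, len(matrix) - 1
    left, right = 0, len(matrix[0]) - 1
    while top <= bottom and left <= right:
        for c in range(left, right + 1):
            result.append(matrix[top][c])
        top += 1
        for r in range(top, bottom + 1):
            result.append(matrix[r][right])
        right -= 1
        if top <= bottom:  # guard against re-walking a single remaining row
            for c in range(right, left - 1, -1):
                result.append(matrix[bottom][c])
            bottom -= 1
        if left <= right:  # guard against re-walking a single remaining column
            for r in range(bottom, top - 1, -1):
                result.append(matrix[r][left])
            left += 1
    return result

print(spiral_order([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))
# -> [1, 2, 3, 6, 9, 8, 7, 4, 5]
```

The two guards in the middle are the part people trip over in interviews: without them, a non-square matrix double-counts the last row or column.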


I'm really against leet code interviews in general as a concept.

Having said this, I'd never hire a person that's not able to reason about this. Actually writing the implementation is really secondary


Could you expand on what

> conversationally fluent in the theory behind writing code

means?

It might be my insufficient command of the English language, or I might be outing myself as being outside said group, but I'm unsure what that means. Is this just referring to a vocabulary for discussing the structure and creation of software, or is there a deeper mystery I have not yet grasped?


I mean that if someone asks you questions about code, you can respond intelligently and "think on the fly" about the subject in question. For instance you haven't just memorized something like e.g. the big-O time to access a hash table, but you have reasoning behind it: you know how it works in a few cases, your knowledge about it comes from an understanding of the implementation, and you can extrapolate that knowledge to new cases or variations of the problem, etc. Maybe your knowledge ends at some point but you could keep going if you had to: like maybe you don't know how hash tables interact with page tables or CPU caches but if that starts to matter you would be able to understand it and keep going.

The same way of thinking applies to design patterns (single responsibility principle->but why, and when is it okay to break?) or to architectures (OOP / dependency management -> yes but why? can you make a version yourself? can you work around problems with it?) or to libraries (React components->what are they trying to do? how do you keep that contract simple?) or to languages (JS->what are the tradeoffs? what features do you need? how important is upgrading or polyfilling?) etc.

All beyond-basic intelligence takes this form: not memorization but having a working understanding of how something operates that you can use and apply to new situations and investigate and drill into and wield flexibly. I would call that "fluency". To be conversationally fluent in a subject is not necessarily to be an expert but to be able to "think" in terms of the concepts, and usually it means you could become an expert if the situation demanded it.
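To make the hash table example concrete, here is the kind of toy implementation that understanding rests on: a separate-chaining table in Python (deliberately simplified, no deletion), where the average O(1) / worst-case O(n) behavior falls straight out of the structure:

```python
class HashTable:
    """Toy separate-chaining hash table. Lookup is O(1) on average
    because keys spread across buckets; it degrades to O(n) if many
    keys collide into one bucket's chain."""

    def __init__(self, capacity=8):
        self.buckets = [[] for _ in range(capacity)]
        self.size = 0

    def _bucket(self, key):
        # hash() maps the key to a bucket index via modulo.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:  # overwrite existing key in place
                bucket[i] = (key, value)
                return
        bucket.append((key, value))
        self.size += 1
        # Resize past a 0.75 load factor to keep chains short on average.
        if self.size > 0.75 * len(self.buckets):
            self._resize()

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default

    def _resize(self):
        old = [pair for bucket in self.buckets for pair in bucket]
        self.buckets = [[] for _ in range(2 * len(self.buckets))]
        self.size = 0
        for k, v in old:  # rehash everything into the larger table
            self.put(k, v)
```

Once you've built one of these, "why is it O(1) amortized, and when isn't it?" stops being a memorized fact and becomes something you can reason about.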


This is much more basic than what I thought you meant. What you're outlining are critical thinking skills. And I agree, lacking them makes a programmer far less valuable.

But there's a whole other level of fluency around the theory of software development, and it comes from experience with different architectural patterns, and being able to see into the different futures of each architectural pathway, and being able to converse with other people who understand software at this level.

Although, calling it a level really undersells it. Multiply the potential capacity for this talent by every dimension of software building, and you start to see how people having even a little of this skill, but being able to work with others who have a bit of it in a related dimension can form a team that is more than the sum of its parts.


Yes, the ability to have critical thinking skills is the key differentiator between the two types of developers mentioned.

I think that is what a lot of these discussions seem to miss: the issue is not really the hard tech skills/knowledge. It is more about the softer critical thinking abilities or personalities that allow someone to become skilled at something or solve a problem better or more easily.


I agree, I was quantifying with some examples off the top of my head, but I do mean 'this skill, but for everything'. Architecture is certainly a big part of it.


I’ve been having quite a bit of luck at these coding assessments by simply memorising solutions to leetcode problems. This feels not very different to studying braindumps to get a vendor certification.


And this is exactly the problem. Being a software engineer is 1000 things more than just rote memorizing some toy problem that solves exactly 1 single toy use case.


Oh yes, you'll do fine with that, but you'll be a bad programmer when you're done. Better to work on the art of programming, at least once you've met your immediate needs like getting a job. It will get you a lot further in your career, plus it is morally better to be good at something and contribute meaningfully compared to doing just enough for a paycheck.


Grinding leetcode doesn't convert you into a "good programmer", and conversely memorising leetcode solutions doesn't make you a "bad programmer". The leetcode assessments done by companies are encouraged to be gamed by the companies asking for them anyway.

> It will get you a lot further in the career, plus it is morally better to be good at something and contribute meaningfully compared to doing just enough for a paycheck.

I've met virtually nobody that has said leetcode has got them further in their career except for passing a gated interview. Honing your craft and being good at something has nothing to do with leetcode.

To me leetcode is simply a means to get past gated interviews; if memorising solutions does the trick then I'll continue to do that. Honing my craft as a programmer and being a "good programmer" is something I work on in which leetcode bears no relevance.


I think the best coding interview is to test some fundamental CS knowledge.

For example: given a scanner, write a simple calculator that deals with precedence and only needs to support +-*/

It shouldn't take a huge amount of time to get a parser done, with BNF or not.
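For reference, one common shape of the answer is a recursive-descent parser with the precedence baked into the grammar; a sketch in Python (error handling and unary minus omitted for brevity):

```python
import re

def tokenize(expr):
    # Numbers (including decimals), the four operators, and parens.
    return re.findall(r"\d+\.?\d*|[+\-*/()]", expr)

def evaluate(expr):
    """Recursive-descent calculator for + - * / with precedence.
    Grammar:  expr   -> term (('+'|'-') term)*
              term   -> factor (('*'|'/') factor)*
              factor -> NUMBER | '(' expr ')'
    Precedence falls out of the nesting: term binds tighter than expr.
    Left-to-right looping gives left associativity for - and /."""
    tokens = tokenize(expr)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def next_token():
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return tok

    def factor():
        tok = next_token()
        if tok == "(":
            value = expr_rule()
            next_token()  # consume ')'
            return value
        return float(tok)

    def term():
        value = factor()
        while peek() in ("*", "/"):
            if next_token() == "*":
                value *= factor()
            else:
                value /= factor()
        return value

    def expr_rule():
        value = term()
        while peek() in ("+", "-"):
            if next_token() == "+":
                value += term()
            else:
                value -= term()
        return value

    return expr_rule()

print(evaluate("1 + 2 * 2"))  # -> 5.0
```

The interesting interview conversation is usually about why the nesting of `expr`/`term`/`factor` encodes precedence, and why the `while` loops make subtraction left-associative.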


Questions like these are hit and miss tho - I can do this because I worked in a sub-field where “write a parser for that” was a common tool to reach for. In my current field I haven’t seen a single parser in any company codebase; a dev that grew up here could be deeply skilled but have a gap around parsers.


That's okay, but it is testing what it says: facility with a particular part of CS that some people have studied and some people haven't. Can't hurt, though, and it's the sort of thing that ought to be in everyone's toolbox, although it isn't.


When starting to type this comment, I was going to write that I could not do it and I think of myself as a decent coder. While typing that, I had an idea and I started a stopwatch...

Made this test set in the first minute: "1 + 1", "1 * 2", "1 + 2 * 2", "1-1", "1/2", "1+2/2" which I think should cover the requirements generally. Then I took 9m58s to come up with 77 super ugly lines that, to my surprise, after fixing that I forgot to cast the inputs to floating points (lines 67 and 76), gave the right answer straight away.

Code and output: https://pastebin.com/C3RVqpjE

The correct answer would probably have imported ast but, while I know of the general existence of such tools, I never needed this in my life. It's not like I work on JSON parsers (a minefield) or wrote my own coding language. An old colleague did the latter for a commercial product by using str.split on operators (yes, strings were a major feature in the language), which went about as well as you expect... I know to stay away from these problems or, if tasked with it, where to look to solve it, but that doesn't mean I can do it in an interview.

While I'm pleasantly surprised to have gotten a crude version working and in a faster time than I expected...

...if you're not hiring specifically for a parsing job, please don't use this as a coding exercise. It could be an interview question if you just want to hear "something something AST, and not string-splitting on operators when the language has strings with said operators potentially in them". That could demonstrate knowledge sufficient for the coder to do the right thing if this task were to come up in their job.


You forgot to test for associativity: ((1-1)-1) vs (1-(1-1)).

Your implementation does handle it correctly though, but it requires some care when using parser generators :)


Nice work. Yeah, writing a parser without knowing parsing idioms is really hard. I can't remember the idioms anyway so mine would look like yours. There's a reason there are whole classes on this in colleges.


Agreed, but the question is how to reliably test for those skills, any freaking desperate idiot could have managed the interviews I've been through.


Most mechanics I know have long forgotten how to "connect the dots" and troubleshoot issues. Everything there has become computerized and all they do is plug in a code reader. They literally don't do that "could it be spark, could it be fuel" kind of thing anymore. Most branded garages follow company instructions, "IKEA"-style, aka "grab a 10mm socket and use it here".


Could be! Doesn't mean those are the mechanics we want to be using though...


I find the second group, more often than not, so pedantically afraid of writing even a few lines of any sort of "anti-pattern" that when they meet the messy qualities of reality they fail to build anything, or at least take 20x as long as the first group.


Eh. That's not inherent to the second group. I think that's what happens when the second group is disempowered---both by the organization and by the hellish landscape of technology they have to work with. It can be very paralyzing to try to do something right when the tools are all broken.

Or maybe a good team is a mix of the two. I dunno. But I know that not having theory gets you only so far, and then everything becomes awful.


Plenty of mechanics and other professionals get by on the "mechanic" or "engineering" know-how, versus the know-why you're describing group B as having.

Let's look at empirical evidence: old buildings - do you think the masons back then understood compression forces? Yet those buildings still stand. What they knew was simply, as a matter of probability, that doing a->b->c results in this predictable outcome, based on what they or others had done before.

Science is not engineering. Science is knowing why things work; engineering is knowing how things work or don't work.


There is also a third group, who can't do well at either task.


Oh sure but would you even want to call them "programmers" then?


I've never understood why people hate them so much. From the employer side of things it only makes sense to get a feeling for someone's abilities other than an impression based on words alone.

You can't believe the amount of shit solutions we've gotten from candidates. We just let you make a very simple kata. A tiny program that generates some console output, you have to refactor it to make it prettier and you need to add one feature. Literally half of the people fail to make it work. Many others just show zero effort for code cleanliness. That's all we ask, make it work and make it look pretty.


> From the employer side of things it only makes sense to get a feeling for someone's abilities other than an impression based on words alone.

I'd like to believe this is true, but it fails to explain why candidates for other business functions don't receive the same scrutiny.

I'm not aware of analogous evaluations to get hired to other business roles (e.g. marketer candidates aren't asked to demonstrate a working knowledge of the Google ads dashboard, accountants aren't expected to clean up a fake P&L on their own time for review by hiring managers, etc).

I could be wrong and always welcome correction, but from anecdotal experience talking to friends and work colleagues, the bar for SWE hiring is much, much higher, even controlling for compensation.


I'd look at it the other way: Other high-difficulty jobs have mandatory licenses and certifications that weed out the chaff. Lawyers have the Bar exam, engineers have the Professional Engineering exam, doctors don't have a specific test but they have all of med school, EMTs need to get an EMS license/certification. Software engineers can get their foot in the door with a javascript coding bootcamp.


This explanation works for entry-level candidates but fails to explain why senior candidates are often expected to do similar exercises _in addition to_ any work experience they have.

New lawyers, doctors, and CPAs have to demonstrate textbook mastery to pass a handful of exams once in their career. Engineers are expected to demonstrate textbook mastery for every job they apply to _for their entire career_ (and often multiple times per application!)


> New lawyers, doctors, and CPAs

everyone you mentioned here has some kind of an ongoing public tally going on - Yelp/Google reviews, customer referrals that lead to new business or lack thereof. If I'm looking at a crappy lawyer or accountant, they probably have a 2* average of public reviews and/or out of business because noone wants to refer to them. Is there an equivalent of that for a mid-career programmer?


I don't think this is true. Most of the doctors and lawyers I know work at big firms with a publicly reviewable presence, but there's no practical way to review individuals at those firms.


Accountants don't have to have a CPA. Half the accountants working under my partner (Accounting Manager at a large private university) don't even have a Bachelor's in Accounting.

> EMTs need to get an EMS license/certification

I love EMTs. I was one. I'm a paramedic. I train new EMTs. But the EMT course is 160 hours, and is designed and tested to be passable as a high school junior. Let's not use that as a comparison.

Most of these positions also have zero to minimal continuing education requirements which often, let's be real, are trivial. Quick online courses that can be busted out in a couple of hours, or "go to this hotel in a nice location, spend a couple of days, and go to the conference room off the lobby for a couple of hours in the morning".

Software engineering? You have people saying here - with a straight face - "Yeah, a 3-4 hour take home exam at every company you interview at is entirely reasonable" for the rest of your professional life.


I have a friend who is an accountant. For entry level jobs, those jobs were meant to be done by someone with a high school diploma and the bar for interviewing is literally "hey can you do basic excel" as you get closer to staff level the interviews become far more complex and nearing what you see in tech because they are testing if you _could_ pass the CPA if you had to. This kind of grilling can be skipped by simply having a CPA.

There's really some truth to the licensing thing. In some ways, I'd really like our field to adopt certifications so we can skip the BS of interviews. I got the leetcode certification, let's talk design or something relevant please.


Doctors have licensing tests, also called boards. They take a few rounds over the course of med school and residency.


I think the difference is that it's really hard to tell how difficult SWE work is and whether or not someone's doing it (since the real work is all in the brain). So it's comparatively easy for a fraudster to skate on very little knowledge/ability for a long time. When this happens with doctors or pilots we call it a major motion picture. When this happens with SWEs we call it Tuesday.


Why aren't SWEs required to document their process of problem solving? Like: "I have problem X, I intend to solve it this way, the first thing I did was google that shit, found this article, compared those libs, picked that one for these and these reasons, etc." Yeah, it can be painful and unusual the first time, but made mandatory it can become probably the best habit a professional can have.

This would help everyone, from the SWE themselves (by tracking the problem-solving process) to their manager, colleagues, and everyone who works on it afterward, and it's a ready draft for a blog article that'll say more about them than any CV.


It's not particularly easy to do, the parts that don't lead to a solution are boring, and people may make nitpick criticisms that aren't at all helpful.

I work at a place where design documents are supposed to include the alternatives that were considered and rejected, for much the same reason, and this does work to an extent, but it's not quite what you're suggesting.


At companies I've been at (mostly earlier phase startups, YMMV) there has always been an effort to do some sort of technical vetting.

Designers need to present designs / their portfolio.

Sales people need to do a demo.

Product people need to put together a mock roadmap or pitch a feature.

And so on.


> Designers need to present designs / their portfolio.

As someone married to a designer, this is soooo much easier than the hoops programmers have to go through.

Might take a little more work upfront (or just printing out work from previous jobs if allowed), but then you just flip through an existing portfolio the night before, and bring the same portfolio to every interview, no extra prep required.

Meanwhile, a programmer has to perform intense 1-8 hour tests every single time they apply anywhere, and make sure they remember the answers to gotcha questions in about 30 different subjects they could be asked about.

My wife always goes to way more interviews and talks to way more recruiters than I ever have (probably 5x more), because all she needs to do is read through her portfolio and practice some questions for 30 minutes the night before. And her interviews are usually just one or two hours long.

Meanwhile I always have to spend weeks brushing up on Leetcode before making a big new job push to make sure I don't have too many surprises, and I avoid going on interviews because it'll be long grind that I usually have to take half a day off work for.

I still had to do the stupid technical tests for a mobile app job where I could tell them to go to the app store and download a game of mine, with my name on the title screen, and they could play it, and they were really impressed with the game (the Xbox 360 version of it won a game design award in a contest hosted by Microsoft, and it looked and played identically).

Like... come on.


FWIW, I've often given interviewees one of 3 options:

1. Do an in-interview programming test. We try to make this as "real world" as possible with limited time, i.e. we give the candidate some existing code (which is similar but "slimmed down" compared to our actual code) and tell them to enhance it by adding some feature, then in some cases we had another piece of code with some bugs and asked them to fix them and write test cases.

2. Do a take home programming problem. I like to give people the option because some folks just do really poorly under the pressure of an in-interview test. When it's finished and they come back, we review it together and talk about their choices, etc.

3. If the programmer has lots of publicly reviewable code, I ask them to just share it with me so then I can review it and discuss it when they come in.

I basically just need to understand "Can this person write code?", and, related, "Can this person take a request in English and translate it to code quickly and efficiently?" And despite giving these choices, when I've posted a description of this on HN in the past I was still flooded by responses about how I shouldn't expect any of this: "I have a real life, I don't have time to do your take-home problems", or "I've been working in industry for years and coding for a job, all that code is proprietary and I can't show it."

All that may be well and good, but then my interview process is working - I don't want to hire you, and I can find people I do want to hire that are willing to do one of those 3 things, and it's not my job to make you understand that. Honestly, for all of the bitching about technical interviews, I feel a huge part of it is that:

1. People just can't accept that there are other people that are better than them that do do well on technical interviews and excel on the job.

2. Yes, there are outliers, and you might be one of them, but it's hard to craft an interview process around outliers. I also agree with Joel Spolsky's mindset of "It's better to pass on someone who might be OK, but you're not sure, than take the risk of a bad hire." I feel like every time I've made a bad hire there were definitely yellow flags during the interview that I tried to explain away, but I always ended up regretting the hire later and I've become more hardline on "if you can't prove your skills in the interview, I'm going to pass".


Have you considered that that's a function of startups and not any intrinsic necessity of those positions?


> I'd like to believe this is true, but it fails to explain why candidates for other business functions don't receive the same scrutiny.

I don’t know where this idea comes from. Other roles at most companies I’ve worked for have had plenty rigorous screenings for people, often including more reference checks, portfolio reviews, work samples, presentations, and other things.


Having worked a bunch of other jobs, SWE is an order of magnitude mentally harder than most other jobs. It's like being a translator, poet, detective, and puzzle solver all at once. And you have to do it all collaboratively with a team of other strong-willed, high IQ, low EQ teammates. With weekly deadline pressure. And management who thinks it's taking too long.

Of course my cousin, who is a lawyer at Cravath, works like 3x more hours than I do. She gets paid like 2.5x more too. They just hire tons of people and let the job weed out the bad ones. Most engineering teams can't do that because we're not trying to squeeze 100 hours a week of work out of our engineers.

Of course, plenty of teams do basic work. But plenty of teams with even basic sounding work have to handle an absolutely huge amount of complexity.


SE is different because those other professions generally aren't creating anything. If SE had a program where it just writes the code for you, then we wouldn't have to test them, just like an MBA can work off existing Excel sheets because what matters is the output of that application. Most new code and bug fixes require extremely detailed abstract knowledge that (so far) hasn't been able to be commoditized into an application. The next few years may be a game changer for that though.


I don't agree that those other professions aren't creative. If anything, the ambiguity behind what constitutes a successful brand design or convincing a client to buy your product seems to require more abstract knowledge to me (as a software engineer) than the ability to read and implement syntax.


the vast majority of people who work in offices just push papers, go to meetings and other mindless bs. the people who build brands are higher level managers/ivy-league over-achievers. sales people are hired or retained based on talent, getting a very low base salary and high commission. writing code is way more than reading and implementing syntax, it's actually making the design work or solving very tricky bugs. People with bare-minimum degrees and no demonstrable acumen are useless. take any given tech idea you want, it doesn't "just work", the devil is in the details.


> the vast majority of people who work in offices just push papers, go to meetings and other mindless bs

I know this to be true for many working in software engineering. Conversely,

> writing code is way more than reading and implementing syntax, it's actually making the design work or solving very tricky bugs.

This can be true but is not always true. I think you were right when you said

> take any given <sales|tech|branding|marketing|HR> idea you want, it doesn't "just work", the devil is in the details

:)


I don't know that it's higher, per se but it's more that being able to discuss concepts isn't enough. A programmer needs to be able to translate those concepts into actual algorithms and working code. I've interviewed people who were able to look at the coding problem we gave them and discuss it intelligently, but when it came to actually writing even pseudocode to solve it, failed miserably.


That’s true for other roles, like an MBA grad that can discuss financial principles but can’t navigate Quickbooks or use Excel.

From my admittedly limited understanding, many of those openings are filled based on resume and verbal interviews with little or no quantitative evaluation of skills.


Use Excel yes. I'd expect an MBA grad to know the accounting principles that Quickbooks is based on and maybe puzzle out how to use it but not be fluent in it to the degree I'd expect of Excel.


You said it yourself - it's a question of engineering versus business roles.

Software engineering doesn't necessarily have a higher bar than other comparable STEM.

And lest we forget many other roles have to pay their dues upfront at a much earlier stage: doctors have the MCAT, lawyers have to pass the bar, many accountants become CPAs, etc.


I've never heard of a civil engineer being asked to design a blueprint in Autodesk with a more senior engineer watching them, or an accountant asked to calculate a department's P&L given 90 minutes and a folder full of Excel files. It might happen, but I suspect it's uncommon.

You're right about exams, but that's a one time thing. New lawyers, doctors, and CPAs have to demonstrate textbook mastery to pass a handful of exams once in their career. Engineers are expected to demonstrate textbook mastery for every job they apply to _for their entire career_ (and often multiple times per application!)

It's also worth noting that engineers have standardized exams and certifications, like CompTIA or AWS Certs, but for whatever reason those credentials do not seem to carry much weight. I've never heard of those replacing technical evaluations, just used to enhance a resume.


And SWEs have to go to college or post-grad. However they're eternally in the low level hell of solving coding questions.


Yes, you can weed out 50% of incompetent applicants, but that is not the issue. The problem is that the people who will excel in these questions are the ones playing the leetcode game for months. The people with real jobs will pass your question but will do so-so compared to the leetcode gamers, and the second group will get the job. Also, doing exceedingly well in the coding questions doesn't guarantee these people are any good at the real job.


Too bad there isn't a test for "fucks given." That would weed out about 80% of applicants. I can work with just about anyone who passes that test.


While I don't advocate for it, a long take-home problem filters for that.


It also filters for "people with children", "experts who realized they aren't show dogs", and "anyone who values their time".


What about a take home test that takes 1 hour with no leetcode/trivia? At a certain point I feel like you have to pick your poison


My last "1 hour" test took me hours to complete. I asked them to implement it in front of me in an hour and they couldn't.

Programmers are terrible at estimating and programmers will choose tasks that are obvious to them because it's the exact thing they do every day, but it might not be so easy for people not in their exact niche.


>a long take-home problem filters for that.

I disagree. Maybe, and only if it's paid and paid well. Maybe $150 an hour. Not many who are good will put up with that, because they don't have to.


The old saying is "pay peanuts, get monkeys".

Let me propose a variant of that: "you'll end up with monkeys if you require people to do monkey tricks".


Your take-home exam will not get many high quality candidates. Most people who have an option will not put up with this kind of requests.


That's actually a good task though - do something that at least partially resembles what you'll do in the job.

I think these folks are more so annoyed by academic quizzes cribbed from 70s programming books that don't flex anything we're interested in, and do focus on things that are typically not very relevant to the job. Oddly, they do seem to both prioritize new grads who are willing to shovel shit, and at the same time reject experienced folks who don't have the time for said shit.


What you're doing sounds fine. We did something similar, and what we got back was either 1) the obviously correct solution, 2) trial-and-error soup, or 3) extremely complex over-engineered junk (we specifically told people not to do this, so double fail).

What most people object to is stuff that's just really time-consuming to do well. And/or stuff that gets rejected for silly reasons (typically requirements that weren't actually stated).

Or things like "please implement Conway's game of life in 30 minutes. START NOW".


The last time I interviewed and did a few LC problems, it was my experience that most of them were trivially solvable by some combination of implementing an iterator, doing a fold, and maybe adding memoization. Not every problem obviously, but those 3 steps seem to pretty generically cover most easy/mediums that will come up in a coding skills interview. When I got my first job, I didn't know what any of those things were, so I've also found coding interview problems to have become easier for me over time.

I've never used much Python in my day job, but the `yield` keyword is basically overpowered for LC problems.
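A minimal Python sketch of what that toolkit looks like in practice (the specific problem, "climbing stairs", is just an illustrative stand-in for a typical LC easy/medium):

```python
from functools import cache
from itertools import islice

# Memoized recursion: count the ways to climb n stairs taking
# 1 or 2 steps at a time -- a classic LC easy/medium.
@cache
def climb(n: int) -> int:
    if n <= 2:
        return n
    return climb(n - 1) + climb(n - 2)

# `yield` turns any traversal into a lazy iterator for free.
def fib():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

print(climb(10))               # 89
print(list(islice(fib(), 8)))  # [0, 1, 1, 2, 3, 5, 8, 13]
```

Generators make the "produce the next candidate" half of many problems a one-liner, which is presumably why the commenter calls `yield` overpowered.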


You would be surprised (or maybe you wouldn't) at how many applicants get filtered by extremely basic elements of a tech test that's specific to the employer and therefore not something that can just be memorised or drilled. It's a low bar, but it can be a very worthwhile one.

There's a second factor, too, which is that sometimes you want an easy test so that you can judge coding style. You need to be careful not to ding people who don't already use whatever your house style is (which has bitten me in the past) but you generally do want to see something that you can have a style conversation about.


This is true; however, in my experience the interviewers are usually quite dissatisfied to discover such a "one simple trick", the implicit expectation being that you grind through the problem without abstractions.

This part has been always funny to me, because the same interviewers also simultaneously expect knowledge of abstractions in their "low-level design" phase of the interview, where irrelevant abstractions are added in to satisfy some odd constraint that would never come up in the real world.


Yeah, I realized you are at a significant disadvantage by not interviewing in Python, especially when you get a problem that requires parsing some input. IMO it is worth spending a couple of weeks practicing Python before doing any technical interview.


I interviewed at Amazon and they told me I could pick any language. I chose C and managed to get the test completed in time, but after I was done the interviewer started asking me questions about how my code worked, and it quickly became evident that they didn't know C. Should have picked Python...


Yes and no! I was just rejected from a job because I used Python's heap functions and the interviewer didn't know what those were or how they worked.

It's not the first time either, once got rejected for using namedtuples!
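For what it's worth, both are small stdlib features, which is exactly what makes rejecting them odd; a quick sketch:

```python
import heapq
from collections import namedtuple

# heapq: the stdlib heap functions the interviewer didn't recognize.
nums = [9, 4, 7, 1, 8, 2]
heapq.heapify(nums)                           # O(n) in-place min-heap
two_smallest = [heapq.heappop(nums) for _ in range(2)]
print(two_smallest)                           # [1, 2]
print(heapq.nlargest(2, [9, 4, 7, 1, 8, 2]))  # [9, 8]

# namedtuple: a lightweight immutable record, also stdlib.
Point = namedtuple("Point", ["x", "y"])
p = Point(x=3, y=4)
print(p.x, p.y)                               # 3 4
```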


I'm sorry - at this point in time Python is the only language I expect every single developer to know. You don't have to be an expert, you don't have to like it, but you need to know it.


> ...but you need to know it.

Why? Firms that don't use it aren't going to use it, and there are a whole lot of firms out there that don't use it.

Plus: grammars that define scope by indentation level can all fucking die in a fire. I don't have nearly enough digits to count the number of times a customer mis-indented a deeply-nested section of a YAML file and caused absolute (very-difficult-to-diagnose) havoc in their environment. [0] IME, Python is not any better than YAML in this regard.

[0] Yes, I'm aware that there is a whitespace-insensitive syntax for YAML. However, it's not the default, and you can't use every YAML construction in it, so it is -IME- rarely used.
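The silent-dedent failure mode described above translates directly to Python; a contrived sketch (both functions parse fine, and only the indentation of the final `return None` differs):

```python
def first_even_ok(nums):
    for n in nums:
        if n % 2 == 0:
            return n
    return None          # after the whole loop: correct

def first_even_bug(nums):
    for n in nums:
        if n % 2 == 0:
            return n
        return None      # one indent too deep: bails after the first item

print(first_even_ok([1, 3, 4]))   # 4
print(first_even_bug([1, 3, 4]))  # None
```

No parser or linter flags the buggy version, because it is perfectly valid code that happens to mean something else.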


I never said every firm has to use Python, I said every developer needs to know Python basics. I'm old enough to remember a time when every developer needed to know Pascal, even though very few firms actually used it. It was simply a universally known language to assess one's skills. So it is today with Python.

WRT your rant against Python's use of indentation, most people I know aren't a fan, but editors take care of it and it's rarely an issue. It's not a problem for a whiteboard exercise.


> It was simply a universally known language to assess one's skills. So it is today with Python.

I've had... several... interviews over the years. Zero of them used Python. Might be that you're just in a corner of the industry that loves Python for some reason.

> ...most people I know aren't a fan, but editors take care of it...

I've had editors totally screw up indentation of copied and pasted Python code many, many times. Editors might get it right much of the time, but they absolutely do not (and provably cannot) get it right all of the time. On top of that, visually finding whitespace errors is far, far harder than visually finding enclosing-scope-signifier errors.

> ... and it's rarely an issue.

All sorts of things are rarely an issue until they're an issue. And then when they're an issue, they're often a big fucking deal. [0]

Don't you agree that we (as an industry) should be working to reduce the number of footguns in the world?

[0] Ferinstance, if everyone used an editor that treated YAML as a tree of nodes and used a strict schema [1] to control what nodes you could add where, then that customer havoc I mentioned wouldn't have happened. But, when Corporate Security only gives you SSH access to the restricted-access system that you're currently repairing, running such a tool is simply out of the question. So, one uses a text editor to make one's changes. In situations like this, removing every footgun possible from the work area is very, very important.

[1] Schemas? For YAML? I wish. I really, really do.


> Don't you agree that we (as an industry) should be working to reduce the number of footguns in the world?

Sure. The problem is everybody and his brother have an idea for what that looks like. There's no universally agreed-upon consensus of what a footgun actually is, which makes it rather difficult to remove them. I've been creating software for over 40 years now and the only constant truism I've discovered in that time is people will find reasons to bitch about something. Some people hate braces. Some people hate wordiness. Some people hate parenthetical statements. Some people hate math and "mathy-looking" languages. It goes on and on. And that's just syntax! We can go down several rabbit holes WRT how to handle errors.

Meanwhile, according to the latest TIOBE index, Python is the #1 language, followed by C++ and then C.

As I keep saying, you don't have to like Python, you don't have to use it, but you should be able to whiteboard it. And whiteboards don't give two fucks about whitespace.


> Sure.

Cool. I'm glad you agree. Given that we're talking about a particular entirely-avoidable footgun, there's nothing more to be said.

> TIOBE

Oh boy, there are assloads of very valid criticisms of TIOBE's popcon. You should take a look at their methodology some time:

> Since there are many questions about the way the TIOBE index is assembled, a special page is devoted to its definition. Basically the calculation comes down to counting hits for the search query +"<language> programming"

See [0] for more embarrassing details.

> And whiteboards don't give two fucks about whitespace.

Given that you're late-career, you may be unaware that very many interviews are done remotely these days. So, no, whiteboards absolutely do give many fucks about whitespace these days.

[0] <https://www.tiobe.com/tiobe-index/programminglanguages_defin...>


I'm also sorry, because that's ridiculous. There's more to tech than web programming.


web programming is probably on the 3rd or 4th rank of what python is used for nowadays

Also, you don't really need to "learn" Python. I mean, if you have been in this industry long enough, it's the kind of language you can pick up in one afternoon. That's just how basic and easy it is, and that's why it's so popular despite all its flaws. I'm sure you somehow already know Python, even if you've never used it.


There's a distance from knowing Python well enough to find your way around a project to knowing it well enough to solve coding riddles. The latter requires a level of familiarity attained only with regular use. Plenty of developers get through their daily jobs without having to write a single line of Python.


I'd actually say it's the former that requires familiarity with the language. The latter only requires you to know some basic looping/control constructs. You don't need to know anything about classes or modules, for example. No need to understand async vs threads vs multiprocessing.

Honestly, if you write pseudocode for an algorithm, there's a decent chance it'll be correct Python even if you've never seen Python.


> it's the kind of languages that you can pick up in 1 afternoon

Yeah nah. Especially not for current Python, which is quite a bit more complex and involved than it was 20 years ago.

Of course you can get some stuff done in Python on your first afternoon, but that's true for most mainstream languages. And that's nowhere near the same as actually knowing what you're doing.


I agree, but the context is to pick it up well enough to do coding interviews. Which I think is fair. People can pick up enough python for coding interviews pretty easily.


For what it's worth, I've mainly used it for utility and test scripts, including tests when I worked in firmware development. I think it's a poor fit for web development or large projects.


I've never seen Python used for web programming, actually. I know it can be done, but I've not been in a shop where it's been done.


From personal experience the coding round gets easier for senior/staff roles, even for the exact same question, because of the experience the interviewers have and the signal they are looking for (eg problem solving, communication, testing, etc.)

At junior and "SDE II" level, coding rounds are just toxic newly minted SDEs trying to make it a competition between the candidate and themselves (I've had interviewers get offended when I came up with a simpler solution than the one they had in mind).


It's true that the interview result can only be as good as the interviewer's skill and awareness of what to look for. Which will often be terrible. BUT that does point out a mis-perception of the interview process. You will do better by getting along, "figuring out", going along with the interviewers' plan - rather than trying to demonstrate your own cleverness. Not saying this is what @dixie_land personally went for in that case - but perhaps that if you notice the interviewer getting offended, you better figure out fast what you did and work to make them happy again.

If you can figure out what the interviewer is trying to get out of you, then give them that. That may or may not reflect a useful job skill, but that is an interview skill.


Why work to make the interviewer happy again? If your solution is better than the interviewer's you should expect the interviewer to acknowledge that fact, like an adult and like a team player. He is not supposed to be offended or unhappy.


I guess that depends on whether or not you want the job. This is a clear example of when soft skills can make a difference.


Agreed that it's a soft skills interview at that point for the interviewee, but I think what OP above may be pointing at is that if you've got a good solution, and your interviewer is getting mad... maybe you as the interviewee are getting culture fit signals from the interviewer?

Wanting the job might be down to you needing money. OK, use the soft skills, and make the interviewer happy. If you don't particularly need the money right now, then evaluate whether you want to work with this interviewer at all.


It's rare that you will be working for that interviewer. Much more likely this is just one of the juniors, one of the team, that you may work "with" but not "for". They still matter, as soft skill, because they will give a thumbs up or down to the boss or to the rest of the committee, and they can make up any reason for it that they want. And do you want the company to offer you the job or not?

But yeah, if you get to interview with the boss and they are a problem for you, then that does matter.

Also, you are in a better situation if you get the job offer (they want you) and then let it go because you learned about them and you don't want them anymore. Get the offer.


I agree. To be blunt, ass-kissing is a soft skill. Whether you choose to deploy that skill really depends on what you're looking for (eg big name companies, high TC, remote work, etc.)

And end of the day, interviews are also a chance for candidates to evaluate the company


This might also be because (at least back in the ZIRP days) you would get an order of magnitude or two more applications for junior roles than for senior ones.


My perspective aligns with your newer opinion. I have never studied for an interview, and cannot clearly imagine what such a process would involve; neither have I ever taken a CS course. A coding interview therefore feels like an opportunity to demonstrate my approach to problem-solving using the skills I have acquired over the years, which feels like a reasonable thing to ask of a potential future coworker.

My pet theory, after listening to people gripe about coding interviews for many years now, is that people who have gone into the workforce from a university CS program frequently mistake job interviews for classroom tests, imagining that the goal is to produce a correct answer, and that is why they believe they must study and memorize.

That is certainly not what I expect when I am interviewing someone! I want to see you work and I want to hear you communicate, so I can judge what it might be like to collaborate with you. If I can see that you are capable of breaking down a problem and digging in, asking sensible questions, and making progress toward a reasonable solution, I don't care that much whether you actually arrive there.


For FAANG and FAANG cargo cultists, the goal absolutely is to provide a correct answer, and to do it while pretending you're reasoning it out from scratch rather than recognizing the pattern from the hundreds of leetcode practice questions you've drilled on.


Naturally I can tell you only what my own expectations are as an interviewer, and I can only guess what other people might expect; but something I can share as a fact is that simply doing the work presented to me, using the skills acquired naturally through the course of my career, with no pretending or practice questions or memorization involved, got me a job at two of those tech giants.

That was many years ago; perhaps things have changed. All I know is that the picture of tech interviewing I see so commonly complained about does not match my experience.


I wish on everyone complaining about tech interviews the misfortune of working with an incompetent fraud who makes their work life miserable.

Jobs are on offer for 6 figure salaries that require nothing more metabolically taxing than typing on a keyboard, in a temperature controlled environment, where you get to use your brain to solve problems, and these complainers think it won't be rife with frauds? The whole bootcamp phenomenon was openly churning them out.

Yes, I realize there are a few self learner diamonds in the rough. Yes, some tech interview questions or styles are ridiculous. But it's the best of a lot of bad options.


FAANG-like companies don't usually interview bootcamp candidates. Most of the time, they're interviewing other ex-FAANG engineers or top university grads with a relevant CS degree.

Track record and conversational interviews are used for hiring lawyers, doctors, MBAs, and marketing professionals... why are programmers any different?

Reality is, todays tech interview questions select for those without a life.

Great professionals solve hard problems. It drains you by night, leaving just enough time for some combination of workouts, sleep, a primary hobby, parenting, and relationships. Even on a good week, you have to make compromises.

It's one thing to ask leetcode gotchas to fresh grads who've had 2 whole years to do leetcode. But conducting 10 rounds for a senior engineer with zero free time is torture.

Give take home projects. Do 2 hour long debugging sessions. Do system design. Just give me a work item. All good.

30 minute compile-or-die leetcode questions are not it. If you wanna test for IQ, make me rotate some shapes. None of this rote learned monkey business. It's not even that hard. But the prospect of giving up all my weekends for 3 months, just to get 12 days worth of time to be leetcode prim and proper..is untenable.


> Track record and conversational interviews are used for hiring lawyers, doctors, MBA and marketing professional.....why are programmers any different ?

Doctors and lawyers have a nonprofit credentialing body that makes them take an industry respected test that is the equivalent of our tech interviews. We have nothing like that in dev work. Would love to see it.

MBA and marketing people rely very heavily on their networks. Those without these networks are at a severe disadvantage, and their prior work is usually very close to the bottom line of a company. Devs could do this kind of stuff more, but don't largely for cultural and organizational reasons.

IQ test would be great if they weren't illegal.

I'm not doing take-home work unless I'm staring down the barrel of not paying my mortgage, and I don't trust that anyone showing me theirs didn't ask ChatGPT.

Dev work is sensitive and exacting. I want to see the candidate actually do it in front of my face without phoning a friend or copying from github.


Doctors and lawyers take that credential test once, immediately following their graduation from years of fulltime schooling that has been specifically designed to prepare them for that test.

Software engineers are expected to retake our tech interviews, which you describe as equivalent, over and over and over, every time they change jobs.


IQ tests are not illegal.


I think it's by design. People with no lives are usually better workers. And those who have lives, but managed to find time to leetcode, really want to work for you.


> I once thought they were purely hazing with zero relevance to day to day work, but as I get more senior I drift further away from that opinion.

A lot of it is/was. Hiring managers for a long time didn't know how to hire devs so they would have devs hire devs and, well, devs like to have lots of pissing contests and that spilt over into interviewing techniques which got cargo culted because that's another thing devs are outstanding at.


You hear about the worst cases on the internet, but you mostly see the average ones in reality.

Hazing people into inventing, on demand, on short notice, under time pressure, and in a high-stakes environment, some ingenious algorithm that all of humanity failed to find for decades (except for some lucky individual somewhere) will never be a good interview. But also, the people who do that don't keep interviewing for long.

Personally, I haven't been in an interview for a long time (as a candidate). But most of the "best practices" from the time I was are now common jokes. I have seen many of those practices applied, but even at that time there were many places that were reasonable.


I've been observing that coworkers hired through the modern formulaic leetcode/sd/behavioral loop are homogeneously competent in a specific way — if there is an agreement (aka "alignment") on what needs to be actually done, they'd do it passably fine. Corporate dysfunction is more of a product of how that alignment is achieved.


As someone not in the tech industry... The whole thing about coding tests sounds insane. Yes, I have friends in hiring positions in the tech industry, yes, I understand that many applicants cannot even complete these extremely basic questions. But shit. You're hiring so-called "engineers." The bar should be higher than fizzbuzz.

If people hiring structural engineers had to ask them "should you build bridges with steel or cardboard" and expect 80% of applicants to fail or cheat on the question, our society would be fucked.


Coding interviews were never bad. Asking candidates to pull a red black tree out of their ass was always bad interviewing (unless you were hiring for this of course).


It really just depends on the recruiting culture of the company in question, in my experience. I've interviewed at top companies and been given coding problems I could have solved in high school. And I've interviewed at 10-person startups and been given ridiculous leetcode brainteasers. And vice versa.


Not every company is FAANG. Last time, pre-COVID, I had an on-site coding interview with BigCo. The interviewer asked me to write an algorithm to find the biggest rectangle you can make from a list of lines. I immediately stood up and went to the whiteboard in the conference room where we were sitting. He was baffled, and asked me back to the desk and the computer. I told him my initial thoughts on the algorithm and started sketching it in the VS Code window open on the laptop. He quickly jumped in, removed my `for` loops, pointed at the first line of the file, `import lodash`, and told me to use it. I said I wasn't familiar with the library, but was happy to continue with the solution. We talked about it for a while, and I explained my approach at length while not being able to reach the keyboard.

After that, he started typing his own solution on my computer using functional programming and lodash. Once we agreed that the general idea was pretty much the same, I asked him about the computational complexity of his solution, pointing out that chaining a lot of map/reduce calls over lists may not be the optimal approach. He looked at me, then at the code, and it was obvious he genuinely had no clue. We finished the interview a few minutes later, and to my surprise I was moved one notch up the ladder to the next interview.

I have 25 YOE and have conducted close to a thousand interviews. My secret sauce is to ask broad questions, let the candidate draw the map of the areas where he pictures himself as competent, and then drill down with a series of questions. As an employer, team leader, or eng manager, I don't need a human compiler or a savant; I need a team player who is confident within his domain and not afraid to say he doesn't know. I've been on both sides of the fence, so I know it's not easy to say you don't know during an interview, but the bottom line is that most of the time, if you don't know something, you can easily look it up on the internet or even ask your teammates during a coffee break. What I fear the most is the guy who is afraid to say he doesn't know and tries to save face.

EDIT: part of where this comes from is my first serious day job, at a big international startup (still running) which opened a small office in my hometown. We had a guy who, for three weeks, came to the office, sat in front of his computer, looked busy, stared at the code, clattered at the keyboard, made some noise, and... after three weeks it turned out he hadn't even started his task, because he didn't know how to open a connection to the database. Not a corp, just a small six-person body shop, but he played his role perfectly.


Yeah I feel like this is sour grapes from midwits that aren't as good at programming as they think they are. Sometimes you get a dick interviewer that asks you a trick question, but most interviewers don't care if you get a problem exactly right, they just want to hear you discuss a problem intelligently and show expertise while coding.


Let me guess: you're one of the good ones.


I agree, it's sour grapes. These companies grew to be the most powerful in the world, even electing presidents, through these interview processes. The midwit memes can be summarized:

(low IQ) acting on simple instinct vs. (mid IQ) paralyzed by complex rationale vs. (high IQ) acting on simple instinct

The high IQ guys who just do the work to grind LC show enormous signal for being effective software engineers.


Professionally, I’ve never used anything most LC problems would signal for. I have libraries for that.


Leetcode seems like the epitome of the midwit meme to me. "I'd just use a library" vs "I'll write a custom optimized solution" vs "I'd just use a library".


What type of coding interview do you find more valuable for the interviewer? Algo code interview always looked like the interviewer trying to show off to me. Guess it depends on the requirements of the job, though...


I'm of the opinion that technical interviews aren't broken (at least the ones I did during the past two years or so); what is broken are the job specs. They may have turned _picky_, but they're ok-ish.


I have had the opposite evolution. As I get more senior they seem increasingly silly.


You are clearly not looking for JavaScript or fullstack jobs. The more senior you get, the less relevance you have to product delivery and the more to tool-chain nonsense. That means if you get great at product delivery, you are no longer compatible with the job market; as in, you have walked off the bell curve.


Why not exactly? The model weights cannot encode a sort of math engine? The hidden state cannot encode carryover values? Why do we assume these things can't happen at some level?


I agree. The reasoning is there, and becoming more capable every year (across the various models). It's easy to look for limitations, but what was once glaring problems are now much more subtle.


This is a pretty common thread throughout Graeber's work; incuriosity and stubbornly blinkered views of phenomena as part of a strained argument for left-anarchism. The criticisms of The Dawn of Everything are withering and along these lines.

