AI is the reason interviews are harder now (softwaredesign.ing)
73 points by prakhar897 19 days ago | 110 comments



I wish more companies would have interviewees conduct code reviews. A code review as an interview shows a number of things you wouldn't get from a typical interview: what opinions they have, what they call out vs. what they don't waste time on, how they might communicate with a teammate, and more. And if we're going to a world where AIs do much of the work and we just need to check that they implemented what we intended, those code review skills will still be highly relevant.


gpt-4 is at least as good as i am at code reviews, so i don't think this solves the problem this post is about


The objective is not finding someone good at reviews, or even someone who gets a specific review "right". It's having a chance to understand how someone thinks, better evaluate their actual knowledge, and test for some important red flags.


gpt-4 is only mediocre at many things, and i've found mistakes in its code reviews (not, to be sure, mistakes i couldn't have made), but one thing it's absolutely superhuman at is avoiding red flags


> gpt-4 is only mediocre at many things, and i've found mistakes in its code reviews

>> gpt-4 is at least as good as i am at code reviews, so i don't think this solves the problem this post is about

These two comments don't seem consistent.

Honestly, "i've found mistakes in its code reviews" vs "it's absolutely superhuman at avoiding red flags" is possibly not self-consistent. But I think you mean glaring mistakes?

If I understand you correctly, then I'm not sure how you get the first conclusion unless you are saying you're mediocre (which is fine). But really it just makes it seem like you and the machine complement one another rather than compete, which makes me not understand the original comment in context.

But I mostly agree with Copenjin. Interviews are less about on-the-spot skill checks than about learning how someone thinks and problem solves. Honestly, you could play a boardgame/videogame/cardgame with them and it would be an effective interview (asking them relevant questions during the game, and maybe even better if it's a relatively unknown game so they can zero-shot it). The reason for this is that in real-world work you are more concerned with how someone adapts to changing environments and thinks through situations: when they'll ask for help, what they might get stuck on, and how they strategize.

Your employees will always be gaining new skills. And honestly, it is easier to take a lower-skilled person who's more adaptable and driven and turn them into a great employee than it is to take someone who's got skills but will stagnate. But ymmv depending on the job and requirements. Sometimes you just need to fill a seat.


code reviews are not one of the things gpt-4 is mediocre at; it's much better at them than it is at, for example, providing reliable information or writing code. its pattern of strengths and weaknesses in code reviews is not the same as human patterns

by 'red flags' i inferred copenjin to be referring to things like getting aggressive or defensive, rather than making dumb mistakes, but i could be wrong about that. i guess there are also some mistakes that are so dumb that they'd be a red flag, and i have to admit that gpt-4 is somewhat subhuman at avoiding those

if you play a board game with someone you can assess their general intelligence and capacity for logic. all else being equal, having more general intelligence and knowing how to think logically do make you a better programmer. (if you just want to assess general intelligence, a much faster pair of tests would be reverse digit span and reaction time.) but those are far from the only things that matter, they're not enough to be a great programmer, and they're not even among the most important factors. other important factors in programming include things like knowing how to program, knowing how to listen, and being willing to ask for help (and accept it), which a board game generally will not test


> its pattern of strengths and weaknesses in code reviews is not the same as human patterns

I agree with you, which is why I say they complement each other. But your reply to Baron implies that what they were suggesting wasn't a solution. I agree with the sentiment of the post to do things in person. But what I take from Baron is that it is much harder to fake the process with GPT, because the code review part isn't so much about finding the bugs as about watching someone perform the review (presumably through screen sharing and a video chat). You could have this completely virtual, but I think you're right to imply that the same task could then be gamed.

But at the end of the day, I think the underlying issue is that we're testing the wrong things. If GPT can do a sufficient job, then what do we need the human for? Well... the actual coding, logic, and nuance. So we need to really see how a human performs in those domains. Your interviewing process should adapt with the times. It is like a calculus exam that bans calculators but also includes a lot of rote, mundane, and arduous arithmetic: that isn't testing the material the course is on and isn't making anyone a better mathematician, because any mathematician in the wild will still use a calculator (mathematicians and physicists are often far more concerned with symbolic manipulation than numerals).

> other important factors in programming include things like knowing how to program, knowing how to listen, and being willing to ask for help (and accept it), which a board game generally will not test

I agree the game won't help with the first part. But I thought I didn't need to explicitly state that you should also require a resume and ask for a GitHub if they have one. And I did explicitly say you should ask relevant questions. I'm not entirely convinced on the latter, which are difficult skills to check for in any setting. There's a large set of collaborative games that do require working and thinking as a team. I was really just throwing a game out there as a joke, more as a stand-in for an arbitrary setting.

At the end of the day, interviewing is a hard process and there are no clear cut solutions. But hey, we had this conversation literally a week ago: https://news.ycombinator.com/item?id=40291828


I'm not sure what this comment is supposed to imply?

Are you bad at code reviews? Is the code you're reviewing fairly standard?

GPT misses nuance. It can't reason, while you still can. It can certainly do some tasks better than you, but humans do better at others (specifically on the creativity side, logic, and nuanced thinking). If you're always focused on being quick (quantity over quality), then yeah, I think GPT could replace you. Otherwise, I don't know how anyone comes to this conclusion.


I think you may be discounting GPT.

I've seen it notice logic errors in proprietary, non-standard code that a human missed. It may not be able to literally "reason" through your code, but it can follow the logic pretty well. I've even been able to have it comment the "intent" (vs. function) of spaghetti code with reasonable accuracy.


i think i'm better than the average senior developer at code reviews, but of course your reference point for the average senior developer may be different from mine. if you'd like to find out, we can schedule a jitsi call where i review some of your code, and you can post your assessment in this thread afterwards

gpt-4 makes stupid logic errors a lot, and i agree that sometimes it's bad at nuance, inappropriately applying heuristics in a context where they're inapplicable. it's much more creative than people are, though, and people also have those same flaws


For now, you can easily get around this by showing the code via screen share.


You mean, the interviewer does the screen share, and the candidate should review whatever they see on the screen?

"Can you scroll back a bit? Now, can you show me the docstrings for Foo.detachBar and Bar.attachFoo?"


Yeah. Not a great experience.


can you? couldn't the candidate just use ocr to feed the code to gpt-4?


“Before I answer, let me scroll through the entire pull request to make sure all the code has been made visible.”


I just recently did an interview with Gitlab and they do this. They have a codebase that is obviously not something they monetize or use themselves for anything other than interview purposes (for the ones worried about stolen labor). They make a PR for the codebase to add a new small feature, and assign you as a reviewer.

I quite liked it, though I wasn't a fan of how bad the PR itself is, since I ended up having way too many comments, some of them being massive "wtf is this and why would you ever do it this way?" types of things that IRL I would've refused to review without a proper rewrite.


Had a job interview yesterday, and in one part of it they showed me an existing class in their project and had me tell them what I thought of it: what was good and not so good about it.


I've been doing this for more than a decade, can confirm that it gives very high quality signals.


Yup. When I was doing a lot of engineering interviews I often used to give candidates a choice. Either a standard type of interview or if they had any public code (eg on their github) we could review together. It was a pleasant experience and gave really great signal.


I would LOVE someone to roast my code in real-time!


Yeah - It's always a really interesting conversation. "Why did you do x rather than y?" etc. As an interviewer you have to be much more on your game to make sure you get the signal you need, but you're just talking one engineer to another and getting a sense for what they are like, how they deal with annoying/hard aspects of problems, what they find interesting to do etc. It's much better for those things than a standard programming interview.


The best interview I've ever had was when the guy just asked me to compare two technologies I had worked with. Obviously, the interviewer was also familiar with them. He was able to gauge my knowledge, my ability to choose the right tool for the job, my communication skills. Looking back, it really felt like one of those "we have to make a decision" meetings.


I did a code review style interview at a startup a few years ago (loved them; got an offer, but my then-current employer countered surprisingly well). I really enjoyed it.


This is great. I'm in DevOps, and when hiring, the tech interview is a round we've gone back and forth on over time. Doing code reviews instead makes a lot more sense.


I don't like code reviews. If need be I might try to BS my way into a coding job (paychecks come quicker than permission to fire), and those would be a showstopper.


It doesn't matter if AI makes interviews harder. An utterly broken system is not practically worse when it becomes more broken.

We were already well past the point of futility before AI became part of the system. The only real way to get a job is by the equivalent of mass spamming and luck (which also means low chance of fit if hired) or by human connection and familiarity (the one true way for anything in life).


>by human connection and familiarity (the one true way for anything in life)

Every job I've had since grad school was through personal connections. (And luck.)

I suspect the state of SWE (and adjacent roles) for the past decade+ leads to a lot of SWEs believing that sending out a few applications will result in a great offer in a week.

I'm not sure how much AI has to do with making spray and pray less effective or enabling various cheating in interviews--presumably leading at some point to only allowing in-person interviews after an initial screen--but it probably decreases the time spent on the initial filter and you start relying more on heuristics like where they went to school.


The problem with connections is that some people have none.

I help immigrants for a living, and the biggest underlying challenge for recent immigrants is the lack of a network. They truly start from scratch, so for a year or two they are often completely alone.

Networking is absurdly effective, since all nodes stake their reputation every time they recommend another node. However, it only helps those in the network. This leaves a lot of good candidates out. It can end up creating a sort of exclusive club.


I just landed a contract today after 9 months of not working. It was dumb luck.


Agreed, it's a total gamble, but even in a game of luck there can be factors that make your odds even worse.


Prior to the pandemic, in-person interviews were the norm at big tech, but they never really went back to them afterward.

I wonder if this could be the beginning of the end for purely remote interviewing; seems like in-person interviews would be less likely to be gamed in this way.


This exactly.

Before the pandemic the team I was in used to conduct interviews in person. We'd give people a sheet of paper with a bunch of questions that they had to answer. No leetcode type stuff. Just basics.

With that we had one person cheat with a phone.

Doing remote interviews from then on, we saw at least one person who was very probably cheating. They would get an answer, pause, then give a response, but when we asked follow-up questions they would seem not to understand their previous answer. They also had 'Audio Visual Issues' at the start of the interview. We figured they had someone else listening in on the interview and giving them answers. No AI needed.

One of the worrying things was that they did it fairly badly; had they done it well, we probably could not have detected it.

Perhaps the ideal interview now would be one in person with a computer that wasn't connected to the internet where you told a candidate exactly what tools and references they would have and then gave them a coding problem.

These jobs were pretty heavily cloud related, and the interviews also surfaced another probable problem: people are getting other people to get their certifications for them. If you have an AWS Associate-level cert, you should be able to explain what S3 is without blinking.


Fun story:

I had two interviews one after another. The first interviewer asked me a basic question, and then kept drilling "ok but how does it work in more detail?". This was extremely exhausting but really refreshed my knowledge.

Then the second interviewer asked me exactly the same question. I was able to give him a perfect textbook answer with lots of details because of the previous interview. I also have a tendency not to look into the camera/someone's eyes when talking; I only do this when I'm listening. The interviewer accused me of cheating and ended the interview there and then.


If you didn't tell the second one that you'd just had this exact question, then that's understandable.

I've been on both sides of this. Occasionally a mistake on the recruiting side means two people get scheduled to do the same facet.

I've also been asked literally the same question multiple times. If you don't let them know, then obviously you are going to come across as suspiciously over-prepared.

Perhaps the questions got leaked on Glassdoor or something. Either way, the interviewer is justified in rejecting you if something seems off and they can't get a good read on your actual thought process.


Agreed. Plus, an interviewer wants to see how you work on a team. If you just had someone else help you understand something, you should say, "That's a great question, and Sally and I just went really in-depth with it in my last interview."

Yep, I agree. I think it's coming to an end unless some good and reliable counter-cheating tools are developed.


I'm not a great programmer, but I think I'm quite good. I have contributions in quite a lot of FOSS projects, I have a CS masters, yet I have no idea how I would even start these FAANG problems. In 30 minutes? Who is this optimizing for?

I worked in one "unicorn" once, and all I did was get protobufs from one API, put them into a database, and take them back out of the database and into a different API; it was so boring. I solved my boredom by contributing to some in-house framework, after which I was told I was out of my lane and should go back to copying data between APIs.

Are people at Facebook actually solving these hard CS problems in daily life? Why are these interviews even a thing?


Many people are not. These trivia-style questions _used to_ be given to both assess problem-solving skills and identify people who were capable of doing the actual CS needed to solve the core problems these companies were dealing with. These days, these problems are mostly gate keeping IMO.


To be fair, gatekeeping is the literal point of the interview process, but I think I understand what you mean.

Is it fair to set the bar higher than it was set when you were hired into the company? Yeah, I think so.

The problem, though, is that leetcode challenges are shallow and don't measure the applicant's ability to understand complex issues or algorithms. Most real code is not solved once in 45 minutes but iterated on multiple times.

"I solved a leetcode issue, so you have to solve one too if you want to be hired on" I think is the kind of gatekeeping you're talking about.


It’s filtering on dedication and desperation.

It’s why so many in the tech industry are H1B. There’s no lack of domestic talent but domestic talent doesn’t need a visa to stay in the country. So, they’ll not go through the insane process and just take a normal job that doesn’t have as insane of a hiring bar.


To be fair, the ones who do succeed in those FAANG interviews are probably pretty smart, good under pressure, and know algorithms and data structures very well. I'd have a hard time passing these interviews even with a lot of preparation: I don't excel under pressure, nor do I have the will to prepare for those interviews for months (which is what it would take for me to reach 'FAANG' interview level, and I'm being generous here; it could take me more than months). The vast majority of candidates are like me: some combination of time constraints and lack of superior intelligence will prevent them from excelling in those interviews or even trying. I'm just not in the top 2% of devs in terms of intelligence and am not particularly great at charming my way into an offer, nor am I a member of a protected group / DEI. So knowing my chances are low to begin with, I'm pretty much fine with deciding FAANG is not very realistic for me.


> The vast majority of candidates are like me - some combination of time constraints and lack of superior intelligence will prevent them from excelling in those interviews or even trying.

Surely that's an attitude thing more than a true limitation? Just because the hill is steeper for you doesn't mean you can't still get to the top.


Could be. Or it could be I'm just being realistic about my abilities. I mean, statistically the vast majority of candidates don't make it into FAANG; you have less than a 1% chance of getting an offer. Even if you try all of them, that's not great odds: 5%. Even doing this rotation twice, still not great: 10%. Now sure, you can make the odds work better for you by preparing more than the others, but there will be at least a few dozen other candidates who know this just as well as you do and will cram for these interviews just as hard. What are the chances you're smarter / better prepared / a better interviewee than all of them? I'm sure there are many candidates who study hard, are reasonably intelligent, and just can't make the cut. That's what the statistics say, at least. Now, for me the worst part of interviewing is the nervousness; in a 5-round interview I'd have to be in amazing shape not to blow any of them up. It has never happened to me before that I could stay focused and free of nerves for 5+ hours straight during a series of interviews, and I'm talking about much easier interviews than what Google/Meta throws at you. And I've gone through at least 50 interviews with all kinds of companies over the years.

It's just very difficult. I'm not saying people shouldn't try difficult things - I did, and I do and I will. But also being realistic about your abilities and chances is important.

Now, not all FAANG is the same. For example, Microsoft (which isn't part of FAANG, apparently?) where I live isn't that hard to get into. Hard, yes, but less notorious than the others. But I'm talking in general here.


At this point I wonder if I should tell you that your statistics are incorrect. It's actually worse than you point out.

Let's say you have a 1% chance of getting hired. And getting 2 offers is still getting hired, right? So the probability of getting hired after N interviews is:

1 - (1 - P)^N.

(because you have to be in the "not hired" 99% bucket every time)

For 5 interviews with P=1% => 4.9%

For 10 interviews => 9.6%

For 20 interviews => 18.2%

For 50 interviews => 39.5%

It takes more than 68 interviews to get to a 50% chance of getting hired. And of course, the number of interviews can be infinite and you still won't have 100% odds.
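
A quick sketch to check those numbers (assuming each interview is independent with the same per-interview probability; the example values are made up):

    // Chance of at least one offer after n independent interviews,
    // each with per-interview hire probability p.
    const pHired = (p: number, n: number): number => 1 - (1 - p) ** n;

    for (const n of [5, 10, 20, 50, 69]) {
      // For p = 1% this prints 4.9, 9.6, 18.2, 39.5, 50.0
      console.log(n, (100 * pHired(0.01, n)).toFixed(1));
    }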


Lol, 4.9%. I stand corrected then! My 1% chance figure is also pulled out of thin air; I have no idea how many candidates are interviewed for a position. I'm sure there are thousands who apply and far fewer who get to the interview stage. Do Google/Meta/etc. actually make the investment of interviewing hundreds of candidates for each SWE position? I have no idea.


Exactly.

The typical day-to-day is always something like 'munge the data from this input stream so it can be processed by this analysis service and then uploaded to Snowflake' or whatever.

It's very rare that you need any DSA (other than being aware of the memory/performance implications of certain data structures, etc.), let alone DP.


Can anyone confirm the premise of the article? Are interviews actually harder?

And if so, maybe it’s because a lot of companies are “hiring” but not really. They’ll hire a senior for a junior position but otherwise just keep waiting for the “right fit”.


> Can anyone confirm the premise of the article? Are interviews actually harder?

No, interviewing is absolutely not harder (from the perspective of the hiring company). I've been interviewing candidates or acting as a hiring manager for 25+ years at this point. It's always been hard to run a good interview process or do a good interview [1], and hard to hire good candidates no matter how good the process (because good people are rare and always in demand). It doesn't appear to me to be harder to interview now than it used to be.

If anything, it's a bit easier than during the dotcom bubble, for example, but for different reasons. Now I get fewer candidates in total but more candidates who are at or close to the skill bar, so discernment is harder but the consequences of making a suboptimal choice are not quite as extreme (because a lot of candidates are pretty close to each other). Back then I would be spammed with a deluge of candidates, the vast majority of which were totally hopeless and extremely few of which were competent at all, so the temptation to lower the bar was extremely high.

[1] i.e., the task is a challenging one. It requires difficult tradeoffs, and time, skill, practice, and application on the part of the interviewer. There is no one-size-fits-all method which works for all roles, candidates, and organizations. Interview processes also tend to degrade over time, so any time you think it's going well is a local maximum; it will slide from there and you'll need to rethink and change it up in a month or two.


I was hoping I'm not alone; I had mixed signals reading the article. OP talks about how interviews are harder in 2024, says candidates are using AI to crack impossible time-bound programming tasks, and then recommends companies do the interview online. Somewhere something is disconnected.


The post seems more like marketing for UltraCode than establishing the connection between AI and interviews getting harder. Even before AI, there were instances where Leetcode Hard questions were asked in interviews that are highly unlikely to be solved in 30 minutes unless you've seen the question before.


If they interview for someone able to solve leetcode hards on the spot, that's what they will get.

I hope for their sake that they don't focus on this (j/k, we already know that some do).


I conduct a lot of interviews for front-end developers. And I've now settled on something that works well.

Step 1 is an online HackerRank test that takes about 30 minutes. Its purpose is to filter out the really bad candidates (if you've never conducted interviews, you can't imagine how bad the majority of candidates are).

Step 2 is a remote video interview where I go over some code that I write step by step, which contains all these weird bugs that they need to solve. I basically test for "A junior has this strange problem, can you figure out what it is?".

Practically, it tests knowledge of the event loop, closures, event propagation, React peculiarities, and so on.
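
As an illustration, here's a minimal sketch of the kind of bug such an exercise can probe (a classic event-loop-plus-closures gotcha, not one of my actual questions):

    // What does this log, and why?
    for (var i = 0; i < 3; i++) {
      setTimeout(() => console.log(i)); // logs 3, 3, 3, not 0, 1, 2
    }
    // The callbacks only run after the loop (and the current task)
    // finishes, and `var` is function-scoped, so all three closures
    // share the same `i`. Block-scoped `let` would log 0, 1, 2.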

We do remote video interviews, and sometimes I have candidates who are obviously cheating. It's always funny how chaotic and incomprehensible their explanations are, and in the end they don't even come to a solution.

I never liked homework tasks, because you can always cheat; for example, your spouse might be an awesome coder.

If any experienced interviewer has more tips, I'm always happy to hear them :).


When you have a calculator, what is the point of testing people on how fast they can do complex math with paper and pencil?

I think the same thing applies here. The interview process must evolve to factor in gen AI. If a candidate is effective at coding with gen AI, does it matter that they are not effective without it?

Several people have mentioned code reviews, +1 to that.


Instead of these interviews, consider this approach: Ask the applicant to provide links to their own open projects and/or to merged PRs they submitted to others' open projects. Over each submitted relevant link, run a script that computes: `log(stars) * git_diff_length`. For one's own projects, git_diff_length is relative to an empty repo. If a link is not relevant to the position, do not consider it. For each applicant, use the sum of the top-scoring three links. This assumes the stars are not fake, which is something you have to evaluate by seeing whether the starring users have unique repos of their own.
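
A rough sketch of that scoring rule (the inputs here are placeholders; in practice star counts would come from the GitHub API and diff lengths from git):

    // Score one link: log(stars) * git_diff_length, as proposed above.
    function linkScore(stars: number, gitDiffLength: number): number {
      if (stars < 1) return 0; // Math.log(0) is -Infinity; score unstarred work as 0
      return Math.log(stars) * gitDiffLength;
    }

    // Applicant score: the sum of their top three scoring links.
    function applicantScore(links: { stars: number; diffLength: number }[]): number {
      return links
        .map((l) => linkScore(l.stars, l.diffLength))
        .sort((a, b) => b - a)
        .slice(0, 3)
        .reduce((sum, s) => sum + s, 0);
    }

    // Example with made-up numbers (stars, diff length in lines):
    console.log(applicantScore([
      { stars: 1200, diffLength: 300 },
      { stars: 15, diffLength: 5000 },
      { stars: 3, diffLength: 120 },
    ]));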


I get the premise, but part of me asks: well, the candidate was able to come up with an answer, so why does it matter how they did it?

Presumably these questions are rooted in some kind of developer reality; they're asked to gauge technical expertise and suitability. What's the real difference between someone with a magic box that gives them the right answers and someone who can derive the answers themselves, if the questions suitably emulate real-life scenarios?

Why should a company care about 'natural' problem solvers, aside from the context of IP ownership and so on?

It sounds like it really just turns application questions into a Voight-Kampff test for no good reason, when the real point is to ascertain whether or not a candidate can get the job done.

I think this kind of stuff is just a gut reaction from a humanity that realizes it's not the cleverness of the interview questions that is the problem; it's that machine tools are now at near-human levels in the majority of the mundane stuff a developer does every day. The reactions are born from a panicked realization that it no longer makes sense to employ the lower end of the developer skill spectrum.


>Presumably these questions are rooted in some kind of developer reality

Have you ever taken one of these interviews? One of the primary criticisms of the practice is how disconnected they are from the realities of the job itself, requiring studying specifically for the interview process to actually pass.


After avoiding similar companies for years, a couple of years ago, out of curiosity, I took a call from Meta, and they wanted to send me a reading list of books before the first interview.


I absolutely hate any homework a firm wants to impose on me. I’m usually juggling multiple interviews and don’t have time for that (especially when they’ll just ghost you at the end).

But I get it. It’s so expensive for everyone involved if you hire the wrong person and need to correct it. Asking a candidate to put more work up front seems reasonable when the offer can easily be worth $250k or more for what is essentially a day of zoom calls.

What’s a better way? Hire as a contractor and convert to full time after 6mo of performance evals? That’s also problematic.


> I get the premise , but part of me asks : Well, the candidate was able to come up with an answer, why does it matter how they did it?

I might be able to pass the bar / medical exams with ChatGPT 4. And even if not, I will be able to with ChatGPT 5/6, etc. I have no knowledge at all of medicine or law; would you want me as your doctor/lawyer?


possibly in some cases companies think it is better to employ programmers (or managers, doctors, lawyers, accountants, etc.) than prompt engineers


If you think the machine tools are near human level, you're why technical interviews are still in use.


I've interviewed people who were employed in senior developer positions in fairly large companies who could not write code. I don't mean they wrote bad code; I mean that I could not find a way of prompting them to explain even basic programming concepts or write a single line of code.

They're not near the developers you want to hire. But they are better at what should be done than some people who have developer jobs.


Some people freeze during interviews, like complete mental shutdown. And it's a downward spiral, once you start freezing it gets worse and then even simple questions become impossible. That, or you were dealing with professional liars.


For what it's worth, I probably have given off this vibe in interviews in the past, despite being (IMO) a decent programmer.

My brain just shuts off during some interviews and I can't recall even the simplest of things. I forgot the name for ternaries in one of my interviews, despite these being things I use more or less daily.

I'm not sure what it is, really. I do fine in exams/tests. I don't have any kind of anxiety on the job or otherwise. I don't even really feel anxious during interviews either, but my brain just goes poof and I can barely form a coherent sentence anymore, out of the blue; I've never understood it.


When people shut down and struggle to answer, that is very noticeable. If that happened, I'd be asking about their wellbeing and see if there's alternative ways for us to assess.

My problem is candidates who keep talking in ways that, to a non-technical manager, would sound as if they plausibly know their stuff, but who struggle to offer any kind of detail when you dig into specifics, all while staying articulate: their answers remain superficially coherent yet descend into technobabble.

I've had candidates telling me they've gone blank. That's fine - I'll find something else to ask about, dial it back, slowly circle back to the problem from another angle, and if I'm fine with everything else we'll discuss e.g. giving them a problem to solve without me there, outside the interview setting. I've had people similarly go blank in front of a whiteboard. That's fine - I ask them to forget about that, and talk through the problem instead.

The candidates I consider to be unable to code are not the ones who freeze up or go incoherent but can pass some other assessment, but the ones that are "confidently wrong" from a fairly high level, and you drill down until they can't explain a simple if statement, while often still talking with apparent confidence.

Maybe I'm sometimes wrong and they're just too anxious to admit to finding it challenging. But if so, that's a much bigger problem to me than if they're struggling with the interview setting more generally.


Measuring soft skills and knowing how to hire the people who have them when your organization needs them is a different problem imo. Obviously there are problems with giving guys with decades of experience graph theory problems when the problems you want to solve are more abstract and organizational.


We're not talking complex problems here, because I firmly believe those kinds of problems do not belong in interviews as coding problems. Maybe spoken about as higher level problem solving. We're talking checking very basic stuff and stressing to them I couldn't care less about whether the syntax is correct. Usually we'd work our way "down" to those kinds of problems once I started getting suspicious about their abilities based on higher level answers.


How can it be that some seniors cannot write code and yet have worked at some large companies? It seems so weird, but I have heard similar stuff before.


I've had the misfortune of working with someone who wasn't even at the level of the original ChatGPT release, a decade before it came out.

And more recently some interns, also (but more forgivably) below that level.

It's certainly raised the minimum necessary standard beyond some humans.


I think you're actually pointing toward a bigger issue here, which is that ChatGPT can help poor candidates get through the interview process and into your org despite not knowing what they're doing. There's also a deeper commentary here about how the industry is completely unwilling to train eager people and how training costs are largely absorbed by the candidate before they start working.


Two thoughts on this. The first is honesty - I don’t mind if candidates google the question if they tell me about it. Information retrieval is a vital part of the software engineering job. What I take issue with is dishonesty, a trait that would not be something I’d accept in my teams.

The other thought is this: interview questions aim to collect evidence for skills needed for the job in a setting where you can't observe those skills directly. I once had a candidate for a senior engineering position who clearly used an LLM. They had a vague understanding of what latency is and no idea about SLOs, P99, or circuit breakers. The LLM then helped them parrot this knowledge to me. But I didn't want to hire them for their theoretical knowledge; they'd need to monitor systems, act on alerting, build and improve existing monitoring, and maybe join the 24x7 rotation. These are all scenarios in which it's insufficient to know where to look for information given a keyword. Work reality doesn't always provide them with something they can easily put into an LLM. Even if it did, progress would be too slow if for every incident they needed to consult their LLM first.


Need to know that the dev knows when the AI is wrong.


A more direct solution would be to curate a list of mistakes made by the AI and see if the dev can correct them.


Even better would be a mixed bag of AI mistakes and correct code with no indication of which is which.


Could even have more than one mistake per question.

As an aside, I really hate the 'art' of US essay writing. The AI has now learned to emulate this equivocating waffle that sits somewhere between not wrong and not even wrong. I do wonder now, with better tools available, whether we could instead teach people to first act as an editor for other people's content, perhaps even AI content. In effect, have the people act as the discriminator in the GAN. I think it might be a faster and more thorough way of learning that embraces the help of AI while ditching the awful equivocating waffle that is currently being taught.


I think the purpose of asking Leetcode Hard questions in interviews is to reject someone who solves them on the spot, and proceed with someone who made reasonable progress and thought the problem through well.


I did (as an interviewer) a lot of interviews last year, and I was always pretty explicit about allowing the candidate to use “whatever tools you'd use in real life, including copilot and ChatGPT”. This has two big advantages:

- if someone not using them is less performant than someone using them, I should definitely hire the latter, because mastering these tools is an important skill, like knowing your key bindings or using a proper IDE and debugger.

- it forces you to design the interview in a way that isn't trivially solved by an AI, and challenges the actual skills of the interviewees. Of course, it means the interview cannot be given by brainless hiring managers or HR, but must be handled by software team leads/managers themselves, or at least senior developers.


You can do point 2 alone, skipping the nonsense of point 1. Please respect the time of the people you are interviewing and don't push your "innovative" ideas about interviewing on them. You are interviewing, not running experiments.


> skipping the nonsense of point 1. Please respect the time of the people you are interviewing and don't push your "innovative" ideas about interviewing on them. You are interviewing, not running experiments.

I have to admit I have no idea what you're complaining about. I've never treated interviewees like experiment subjects, and I don't see where you're reading that into "point 1"…

All I'm saying in the first bullet point is that, whether you like them or not, these tools are here to stay if they improve developer productivity, and it would make no sense not to count mastery of a tool as an asset when evaluating a candidate. If someone shows they are proficient with git or a debugger during the interview, that's a good thing; if they show they are proficient with Copilot or whatever AI-based tool, that's also a valuable skill you should keep in mind when making your evaluation. That's it. There's nothing here about making people lose their time, experimenting on them, or whatever.


> because mastering the use of these tools is an important skill like

Personal opinion.

> if they show they are proficient with Copilot or whatever AI-based tool then it's also a valuable skill that you should have in mind when making your evaluation.

Again, this is your personal opinion. I don't even consider this a skill, even less so than "knowing IDE bindings". What is important during interviews is verifying that someone is at least able to deliver, to troubleshoot, and to not be a nuisance to the rest of the team (loosely speaking). Can they use AI tools effectively to speed up development while fixing its undesirable artifacts? Good, but not something that matters in evaluating whether someone should be hired. Also, if you want to evaluate how they solve problems, using AI during the interview will introduce a huge amount of noise.


> > because mastering the use of these tools is an important skill like

> Personal opinion.

If those tools improve your productivity (it's highly dependent on what kind of job you're doing, tbh), then it's not a matter of personal opinion: all things equal, your employer will always value a more productive employee over a less productive one. Refusing for personal reasons to use a particular kind of tool that would increase your productivity is fine, but then don't be shocked if employers favor other candidates!

> What is important during interviews is verifying that someone is at least able to deliver, troubleshoot

Yes, and if his use of such a tool improves his ability to do so, then it is actually an asset!

> and will not be a nuisance to the rest of the team (loosely speaking)

Indeed, and I discussed that [0].

> Can they use AI tools effectively to speed up development fixing undesirable artifacts? Good, but not something that matter to evaluate if someone should be hired or not.

It's not the use of AI itself that makes someone worth hiring, but their overall productivity: if he is more productive with the AI tool than you are without it, I'll hire him every time (assuming his non-tech skills are at least equivalent to yours, of course).

> Also, if you want to evaluate how they solve problems, using AI during interview will introduce a huge amount of noise.

In your job, you either have problems that are out of reach of current LLM tech (very likely today, as they aren't that smart), in which case exhibiting such a problem to ask about in the interview should not be hard[1], or you don't, and then you don't need employees capable of solving such problems.

Keep in mind that for the most part, your company and the job you're offering are very likely boring anyway, and you don't need a team of geniuses on board. You just need decent human beings who are able to cooperate, with good enough technical abilities to solve the problems they face. And it's actually good if an AI can lower the technical skill cap, so that you can focus on the human part (we're clearly not there yet, as IMHO the current tech mostly improves the productivity of people solid enough to catch all the bullshit it spits out every other answer, hidden among the helpful stuff).

[0] here https://news.ycombinator.com/item?id=40364095

[1] My interview was built around a fairly basic concurrent programming problem, which is something we were using all the time and, at the same time, something ChatGPT 4 really struggles with without being heavily nudged; when you make it correct its mistake, it introduces bugs that weren't there before.


This, plus: show me the tools you are using. Like, show me how you use Copilot, or even what you would search on Stack Overflow.

A surprising number of candidates would fail even the "I can use my own IDE" part of this exercise. It is still the same thing after all - show me what you have in your toolbox and how well you can use those tools.


This, exactly. Thank you for wording it much better than I did.


So in general it means giving harder and harder interviews as the AI gets better. Can you give me an example of what you ask that ChatGPT is unable to solve?


No, it has nothing to do with giving harder interviews, but more with giving more realistic ones, because as you might have noticed, ChatGPT is still quite far from being an autonomous agent able to solve realistic tasks end to end.

And if you manage to solve realistic problems using ChatGPT or equivalent, then you certainly are a good enough candidate for the job, which consists of solving real world problems no matter how you do it.

But at this point it's very clear that most people using ChatGPT cannot, so it's a good sign that the candidates themselves have the added value the company is looking for.

Then again, there are other aspects to being a good candidate besides the ability to solve problems, like how you behave in a group and how you communicate, but these are also things you can see in an interview and for which an LLM isn't going to help (and god, I wish it could help them on that too, because the number of antisocial people or sociopaths having a harmful effect on organizations is too damn high).



Fifty years ago there was a big debate about whether pocket electronic calculators should be allowed in exams, since they would enable people with poor arithmetic skills. Slide rules were allowed, since there is some skill in using a slide rule. If someone is smart enough to harness AI to cheat on an interview, they are probably worth hiring, unless there is concern about dishonesty, e.g., in a bank.


Try incruiter.com/incbot


Honestly, I miss all-day _actually on-site_ on-sites. Getting on site and meeting people definitely took the edge off of what were otherwise very hard interview rounds.


If the enshittification of the Internet by the proliferation of AI results in the destruction of the awful pop-quiz-style coding interview, then at least it will have had some value.


Mostly because it is pretty easy to use AI to cheat and if you aren’t leveraging new tools, you are falling behind.

You should be using AI for interviews, AI for cover letters, bots to mass-spam every remote job on LinkedIn (most of the jobs I have "applied" for in the past few weeks aren't even dev jobs, but an application costs nothing, so better safe than sorry), and all manner of other tools to play this game.


This is horrible, abusive behaviour. People like you are the reason we can't have nice things.


It is the reason you can’t have nice things. I have nice things. If I stay ahead of the scamming curve, I will always have nice things.

And that’s the problem. My action, your collective bill. Tragedy of the commons is very much an unsolved problem and the winning play has consistently been to destroy the commons through max exploitation.


This time round it can also directly harm you, despite your self-awareness of the most significant public issue with what you're doing, as shared blacklists for known spammers significantly predate LLMs.


Unless you become extremely prolific, blacklists work by matching some kind of resume info, like a name or, more likely, an email or phone number.

Those can be rotated. I specifically use a different email every job search wave just due to the tremendous volume so I would just need a new phone number and to use my middle name or something.

But you are right, this is a risk.


I’m guessing either you’re not American or you don’t actually do this and are just trying out an argument to see what happens.

Like sure, you might make it through the interview with your fifth rotated name and deepfaked voice, but then someone from accounting is going to ask for your social security number so they can add you to the payroll. And if your SS# isn’t assigned to the name you applied with… well, best case you’re fired, worst case you become very interesting to the FBI.


I don’t take hundreds of jobs and social security numbers aren’t submitted with a job application. There are far more companies than my capacity to quiet quit at jobs.


So are you just submitting job applications and then not taking the job offers if you succeed? Or are you taking the offer and when they ask for your social you reveal that you applied under a fake name? Or is your social associated with a different fake name in each company’s payroll? I’m not a lawyer, but it’s starting to sound like you might be committing some flavor of actual fraud (that is, if you are in fact doing the stuff you’re posting about).


And this doesn't bother you morally in any way?


What is he doing that seems wrong, or immoral?

Genuine question.


I think the morality he's speaking to is the scalability of using that strategy. If everyone does it, then the system overloads and breaks. If only a few individuals do it, then those individuals willing to arrogate more resources to themselves "win". What makes those individuals so special to society?

I'm using the word "morality" to mean what is beneficial to society, within reasonable definitions. Please allow me to just hand-wave that away for now.

It's thinking about the ramifications of your actions on others. Why should you benefit and not others? Because you thought of it first? Because you're better at using this technique? Is this the kind of behavior that society wants to promote to achieve its goals?

We tend to use "morality" as a shortcut for "not actively destructive to others"... or something like that. I know we have to agree on which goals society has, whether society actually has goals, etc.

Or can we just let individuals pursue what they think are their own goals and hope for the best? And what best are we hoping for? Are we hoping that the system stays the same enough that you personally will move toward your own goals? What if this pursuit prevents others from achieving theirs? What mechanisms do we have for changing things if we detect pathological behavior that will lead too far toward a place that everyone would consider "bad" (e.g., no food being produced)?

Dog eat dog OK with everyone? What if this behavior ends up being so destructive that it affects even those who were initially excelling? What obligation do we have to others to keep the system working for them? Do we only think about that in terms of the eventual benefit to ourselves?

Morality's a big topic. I've probably mucked it up, but I'll leave it there.


There's a lot to talk about here for sure, and I read all of your post; it was very thoughtful.

I think the tools he is using are available to everyone. So it's not like the others are inherently at a disadvantage; they are choosing to be at a disadvantage. Should there be some sort of award for not using the tools? And if so, why?

I think your example about the system collapsing is dead on. But that's proof of a bad system, not of the questionable morals of the ones contributing to the collapse.

Also, I would agree there is some socially destructive dilemma that occurs when you take shortcuts to get ahead of others. I spent my childhood growing up in low-income areas of NYC, and the saying was: as long as you get your piece, whatever you need to do to survive is OK, because at the end of the day you gotta feed yourself and your family. Think selling illegal drugs for income. Sure, you can feed your family with that money, but the damage you are doing to your community will come around to affect you or your future family at some point. Where do you draw the line of bearing a responsibility to others?

I would argue that if they feel the need to sell drugs, or are allowed to, that is a failure of the system as a whole, rather than something to blame on the person for working around the constructs of their community.


You should not be allowed to graduate from a software engineering program without taking a course on ethics.


I did take several courses in ethics. I internalized several key lessons.

1. People who stand up for what’s right get fired and blacklisted. Perhaps killed.

2. People who get caught get second chances or never even lose their ill-gotten gains. Enron fraudsters have rebuilt their wealth, becoming new executives and public speakers on ethics. This is in addition to any wealth they hid away. Boeing killed many people before the 737 MAX through similar sloppiness. It didn't matter then either.

3. Low-level fraud virtually never gets prosecuted, let alone unethical behaviour. We had plenty of cases where we studied some embezzler who got fired. The penalty for ill-gotten millions is being fired? That's just a day in the life of an employee in the world of regular layoffs.

Here’s the problem with every ethics class I have taken. Who won in the end except in the most extreme cases (Madoff)? Usually the crook.


Do you honestly think that would do any good with someone who has decided to hack the system for their personal benefit no matter the consequences to everyone else? I'd suspect this was a troll, except that it's an account that's been around for a while, and a lot of people actually do think this way.

If it makes you feel any better, though, I'm unconvinced that mass spamming applications is actually very effective, even if it does carry collateral damage.


Yeah, what I've heard about it is that recruiters hate it and have moved away from cold applications. Networking and working with recruiters is way nicer and a lot less work, in my experience.

And I of course think they're an asshole; I just think in general there's a pattern of scammy behavior in software that I don't really perceive in other industries. My education was not originally in computer science, and one of the books I read was To Engineer is Human by Petroski. The professional standards are just higher in fields like aerospace and civil engineering. You might say that software doesn't kill people when it fails, but I think that's underselling the powerful role software plays in our society. I think it's completely reasonable to push for higher ethical standards in the field, even if it means some undergrads are annoyed they have to take a humanities course.


Ethics is a cultural problem. You can't teach it in a college course.


Talking about it can't hurt


Considering the kinds of companies that so many software engineers end up working for, and to what ends, that's sort of laughable. How nice your shiny ethics might feel glued to your chest while you nevertheless go help the major tech companies do everything in their power to squeeze as much revenue out of users, and every ounce of personal privacy too, among other things including parasitic dark patterns, intentional user manipulation, blatant dishonesty, and industrial-scale data theft.


That's a reason in favor of teaching ethics




