Have you gone back to in-person whiteboards? More focus on practical problems? I really have no idea how the traditional tech interview is supposed to work now when problems are trivially solvable by GPT.
The last time I used a leetcode-style interview was in 2012, and it resulted in a bad hire (who just happened to have trained on the questions we used). I've hired something like 150 developers so far, and here's what I ended up with after a few years of trial and error:
1. Use recruiters and your network: Wading through the sheer volume of applications was nasty even before COVID; I don't even want to imagine what it's like now. A good recruiter or a recommendation can save a lot of time.
2. Do either no take-home test, or one that takes at most two hours. I do discuss the solution candidates came up with, so as long as they can demonstrate they know what they did there, I don't care too much how they did it. If I do this part, it's just to establish some baseline competency.
3. Put the candidate at ease - nervous people don't interview well, which is another problem with non-trivial tasks in technical interviews. I rarely do any live coding; if I do, it's pairing, and for management roles, e.g. to probe how they handle disagreement and such. Developers mostly shine when not under pressure, and I try to see that side of them.
4. Talk through past and current challenges, technical and otherwise. This is by far the most powerful part of the interview IMHO. Had a bad manager? Cool, what did you do about it? I'm not looking for them having resolved whatever issue we talk about, I'm trying to understand who they are and how they'd fit into the team.
I've been using this process for almost a decade now, and currently don't think I need to change anything about it with respect to LLMs.
I kinda wish it was more merit based, but I haven't found a way to do that well yet. Maybe it's me, or maybe it's just not feasible. The work I tend to be involved in seems way too multifaceted to have a single standard test that will seriously predict how well a candidate will do on the job. My workaround is to rely on intuition for the most part.
When I was interviewing candidates at IBM, I came up with a process I was really happy with. It started with a coding challenge involving several public APIs, in fact the same coding challenge that was given to me when I interviewed there.
What I added:
1. Instead of asking "do you have any questions for me?" at the very end, we started with that general discussion.
2. A few days ahead, I emailed the candidate the problem and said they were welcome to do it as a take-home problem, or we could work on it together. I let them know that if they did it ahead of time, we would do a code review and I would ask them about their design and coding choices. Or if they wanted to work on it together, they should consider it a pair programming session where I would be their colleague and advisor. Not some adversarial thing!
3. This is the innovation I am proud of: a segment at the beginning of the interview called "teach me something". In my email I asked the candidate to think of something they would like to teach me about. I encouraged them to pick a topic unrelated to our work, or it could be programming related if they preferred that. Candidates taught me things like:
• How to choose colors of paint to mix that will get the shade you want.
• How someone who is bilingual thinks about different topics in their different languages.
• How the paper bill handler in an ATM works.
• How to cook pork belly in an air fryer without the skin flying off (the trick is punching holes in the skin with toothpicks).
I listed these in more recent emails as examples of fun topics to teach me about. And I mentioned that if I were asked this question, I might talk about how to tune a harmonica, and why a harmonica player would want to do that.
This was fun for me and the candidate. It helped put them at ease by letting them shine as the expert in some area they had a special interest in.
"Teach me something" is how the test prep companies would interview instructors. Same idea—have a candidate explain a topic they are totally comfortable with—but they were more focused how engaging the lesson was, how the candidate handled questions they didn't expect, etc. moreso than I expect you would if you use this in a IC coding interview. It's a neat idea though, I can imagine lots of different ways it would be handy.
Kudos to you - this sounds like a fantastic interview format.
I especially like that you're prepping them very clearly in advance, giving them every opportunity to succeed, and clearly setting a tone of collaboration.
In-person design and coding challenges are a pressure cooker, and not real-world. However, giving people the choice seems like a great way to strike the balance.
Honestly, I'm really just commenting here so that this shows up in my history, so I can borrow heavily from this format next time I need to interview! :) Thanks again for sharing.
That sounds really cool. I wish I was running into more job interviews like the one you describe. The adversarial interviewing really hurts the entire feel of the process
I don't remember how I came up with the idea. Maybe I just like learning things.
One candidate even wrote after their interview, "that was fun!"
Have you ever had a candidate say that? This was the moment when I realized I might be on to something. :-)
Interviews are too often an adversarial thing: "Are you good enough for us?"
But the real question is: would we enjoy working together and build great things?
People talk about "signal" in an interview. Someone who has an interest they are passionate and curious about and likes to share it with others? That's a pretty strong signal to me.
If this became popular, wouldn't people start rehearsing for it like they do Leetcode interviews, and it would become another performance that people focus on and optimize for, rather than on the skills for the job?
Rehearsing how to teach something is not a bad thing.
And I'm sure you're going to get different questions about different aspects of the things you're trying to teach from different interviewers. The wonderful thing about this is that it models trying to teach a concept to a fellow coworker. How you handle the questions during the teaching time says a lot about the candidate.
Or would it train people to choose topics very carefully, such that a little teaching skill goes a long way?
It's not like the interviewer getting a lesson in tuning a harmonica is going to bust out a harmonica and start putting his newfound knowledge to work, or revisit the subject in 6 months to see if he's retained the knowledge, or bring in a panel of harmonica tuning experts to check there weren't any major gaps or mistakes in the lesson.
I think the lesson to be learned from the parent comment is to put candidates at ease and let them express their interests. I don't think it matters whether you choose to use "teach me something" specifically. What does matter is how you try to be accommodating towards the candidate, whether by asking about their hobbies, some recent news, a fond memory/project, etc.
I forgot to include something important in my initial comment. In my introductory email, I explained that the "teach me" segment is completely optional.
If someone didn't want to do it, that's fine and wouldn't be held against them. In practice, I think one person out of 20 chose not to do it.
And if they weren't a great teacher, that was fine too!
The purpose of the segment is to give the candidate a chance, if they want, to shine at something they are interested in, and help put them at ease by letting them start out being the expert.
I think the point is to remove the adversarial aspect and therefore see how they react when not under pressure. I wouldn't be surprised if OP does warn their candidates about it.
You have it exactly right. I always told the candidate in my introductory email that the segment is optional and I like to learn all varieties of new things and they should feel free to pick any topic they want.
And then when they were teaching me, I made sure to pay attention, ask questions about anything I didn't understand - not to judge their teaching skills but to show I was interested and listening.
I immediately want to learn about all these cool things you listed.
I work as a developer and as an interviewer (both freelance). Now I want to integrate your point 3. into my interviews, but not to choose better candidates, just to learn new stuff I never thought about before.
It is your fault that I now see this risk in my professional life, coming at me. I could get addicted to "teach me something".
'Hey candidate, we have 90 minutes. Just forget about that programming nonsense and teach me cool stuff'
You just need a small file, like a point file. Any gasheads remember those? And a single edge razor blade or the like to lift the reed.
To raise the pitch, you file the end of the reed, making it lighter so it vibrates faster.
To lower the pitch, you file near the attached end of the reed. I am not sure on the physics of this and would appreciate anyone's insight.
The specific tuning I've done many times is to convert a standard diatonic harp (the Richter tuning) to what is now called the Melody Maker tuning.
The Richter tuning was apparently designed for "campfire songs". You could just "blow, man, blow" and all the chords would sound OK.
Later, blues musicians discovered that you could emphasize the draw notes, the ones that are easy to bend to a flat note to get that bluesy sound. This is called "cross harp". For example, in a song in G you would use a C harp instead of one tuned in G.
The problem with cross harp is that the 7th is a minor 7th and you have no way to raise it up to a major 7th if that would fit your song. And the 2nd is completely missing! In fact you just have the tonic (G in this case) on both the draw and blow notes where you might hope to hit the 2nd (A). There is no A in this scale, only the G twice.
To imagine a song where this may be a problem, think of the first three notes of the Beatles song All My Loving. It starts with 3-2-1. Oops, I ain't got the 2. Just the 1 twice.
This is where the file comes in. You raise the blow 1st to a major 2nd. And you raise the minor 7th to a major 7th in both octaves.
Now you have a harp with that bluesy sound we all love, but in a major scale!
Re: lower pitch, I'd hazard a guess that you're basically reducing the restoring force, so the resonant frequency goes down. Think of the attachment point as a bunch of springs in parallel; you snip a few of them and the overall spring constant is reduced. Or another way to think of it: Imagine you had a reed of a given width and then added mass to the end by making it wider at the non-attached end. You'd expect the frequency to go down.
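As a toy illustration of that hand-waving (a lumped spring-mass model of my own, not real cantilever physics): the natural frequency goes like sqrt(stiffness / mass), so removing mass at the free tip raises the pitch, and removing stiffness near the base lowers it.

    # Toy lumped spring-mass model of a reed (illustration only, not real
    # cantilever physics): natural frequency f = sqrt(k / m) / (2 * pi).
    import math

    def frequency(stiffness_k: float, mass_m: float) -> float:
        """Natural frequency of a mass on a spring, in Hz."""
        return math.sqrt(stiffness_k / mass_m) / (2 * math.pi)

    base = frequency(stiffness_k=100.0, mass_m=1.0e-4)        # untouched reed
    tip_filed = frequency(stiffness_k=100.0, mass_m=0.9e-4)   # file the tip: less mass, pitch rises
    root_filed = frequency(stiffness_k=90.0, mass_m=1.0e-4)   # file the base: less stiffness, pitch falls

    print(f"original: {base:.1f} Hz, tip filed: {tip_filed:.1f} Hz, base filed: {root_filed:.1f} Hz")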
I'm sometimes tuning accordions, which is rather similar, but if you want to lower the tone by a lot, you can plop some solder close to the tip of the reed.
It's pretty clear why that's the opposite of filing off material close to the tip, so obviously the tone goes lower.
In my mental image, filing close to the base of the reed gives the reed a similar shape as putting extra material next to the tip (thinner at the base, thicker next to the tip), and that's why it behaves the same.
The solder on the tips reminds me of a doctor visit years ago. I thought I may have broken a finger, and when I got to the doctor's office I mentioned to the receptionist that I had a high-deductible (HSA-compatible) insurance plan.
The physician's assistant said, "We could send you over for an X-ray, but since you're paying out of pocket, we can start with a simple test." He pulled a contraption out of his desk drawer and asked, "Do you know what this is?"
I said, "Yeah, a tuning fork. And based on the size and those heavy weights on the tips, it must be tuned to a rather low frequency."
He said, "Yep. So we get it vibrating and then touch the base to your finger. If you have a fracture, it will hurt because of the broken bone ends jiggling against each other. Then we will go for the X-ray to get more details. If it doesn't hurt, you are good to go. Is that OK?"
"Sounds good to me!"
It didn't hurt at all, and I just had to pay for a simple office visit instead of an expensive X-ray.
Probably so. Or in some cases, multiple resonant frequencies all at once, like a bell. I asked my friend Miss Chatty (ChatGPT) about this and she had some interesting insights:
Once we are there, it then is all about techniques. Working from first principles is going to get us there, but there are likely pitfalls, traps, all manner of gotchas lying in wait...
And there we now have a basis for further discussion.
I spent a decade and change doing adult instruction in CAD. Early on, many were still transitioning off 2d hand drawings. Boy, was that an art! I got to do a few the very old school way and have serious respect for the people who can produce a complex assembly drawing that way. Same for many shapes that may feature compound curves, for example.
But I digress!
Asking them what they wanted or would teach me was very illuminating and frankly, fun all around! I was brought a variety of subjects and not a single one was dull!
Seems I had experiences similar to yours.
One of my questions was influenced by someone who I respected highly asking, "what books are on your shelf at home?"
This one almost always brought out something about the candidate I would have had no clue about otherwise. As time advanced, it became more about titles because fewer people maintain a physical book shelf!
I am! But I am now on the other side of the virtual table.
IBM conducted a mass layoff a few months ago, and as our little team lost our only customer at the time (this is public information), many of us were let go. Ah well.
One door closes, another opens.
If anyone out there is curious about "who is this guy with the strange and interesting ideas about interviewing", you can find my LinkedIn and other contact info in my HN profile.
I love this. I hope you don't mind, but I'm going to steal it for when I am back in the saddle interviewing folks. We need much more of this and a lot less of the adversarial BS.
(3) is biasing the process strongly in favour of people who spin a good story. If you're looking for a certain team culture then OK but this is going to neatly screen out anyone who just wants to do their job well and doesn't particularly know how to sell the extra-curriculars they have.
Sure, they can do that. Then the average interviewer is going to recommend hiring the person who was the best at spinning a good story rather than the person who has the clearest explanation (which might easily come across as boring or simplistic). The problem here isn't getting through the interview; it is what the interviewer does next in the hiring pipeline.
Somehow this reminds me of my old physics professor, who often had simplistic, almost boring explanations of things, like jiggling atoms making friends with each other, or how a train stays on the track.
... Classic interviewing techniques of "explain how X algo works" or "write code to solve Y" will unfairly bias against interviewees that don't test well under pressure, but would otherwise be a good coworker.
... "Teach me something interesting to you" or "tell me stories about past experiences" will unfairly bias against interviewees that are shy, soft spoken, or are mildly socially awkward, but would otherwise be a good coworker.
Given the above awareness of where bias can emerge, how should the interview be done in order to get a candidate that knows what they're doing, and works well with the rest of the team?
Other comments mention relying more on recruiters and referrals, but that isn't always an option.
I don't see what useful signals are supposed to come out of an interview. If referrals and recruiters aren't an option I'd probably try to skip the interview altogether and go with a long probation period (3-6 months). Or possibly have a short 20 minute free-form interview talking about their last day job and expectations of the new one with a very short list of major red flags ("doesn't want the job", "unable to form sentences") then block candidates who raise them.
- How do they take a business problem and model it into code
- How do they debug their own code
- Is their code easy to read
- Do they name their variables/fields/methods/classes in easy to understand and consistent ways or are the names confusing or inaccurate
- How do they take constructive criticism
- How collaborative are they
- Do they think about the problem first or do they just start hacking away
- When asked to add a feature to existing code, do they start hacking or do they write out a test describing the new functionality first
- When confronted with vague requirements, how well do they ask questions to get the information they need
- How much experience do they have with algorithms, database design, systems design, building things so they scale well
If it were possible to work all that out in the interview then there wouldn't be any bad hires.
As a wishlist I like it, I just don't see how you're going to assess all that in an interview. You'll notice that the technique of the day ("teach me something") doesn't address any of the dot points and that holds for ... pretty much any technique. Interviews are a weak process for assessing anything.
The long probation approach completely ignores the very real costs of onboarding someone new.
It takes time, money and people to bring someone in, and hiring is actually quite a risk for many companies (unless they're huge and/or in a hiring frenzy).
If a candidate doesn't work out, that's a lot of time and money down the drain, and potentially lost work, and disruption to teams and timelines, etc..
Most people don't get this part, and I think that's why they don't understand why interview processes are structured the way they are.
You really want to do the best possible evaluation, on all fronts, at the start. The longer a bad candidate stays in your pipeline or company, the more expensive and disruptive it gets.
Specifically for engineering, I think it could work if you really press on that it's about teaching something. A core part of being on an engineering team is walking each other through concepts, so judging how well someone can explain a concept is actually a good thing, I believe.
A big part of any engineering job is casual technical communication. This seems like an entirely fair line of questioning. And the candidate gets to pick their strongest topic. What more do you want?
I don't think this approach favors the talkers / story spinners too much. It is the other way around!
It is a nerd test. You can get every nerd talking if you ask them to tell you about a special skill or project they really care about.
I remember quite a lot of these lunch conversations:
Me: Hey, what's up? Do you like the salad?
Introvert Nerd: Mmmm-Hmmm.
Me: Great weather outside!
Introvert Nerd: Mmmm-Hmmm.
Me: I was hiking last weekend in the mountains and got caught in a storm.
Introvert Nerd: Mmmm-Hmmm.
Me: Does your work project proceed well?
Introvert Nerd: Mmmm-Hmmm.
Me: I've heard that you regularly cook medieval dishes and you created a food medievality detector using a Raspberry Pi and a horseshoe?
Introvert Nerd: Oh, yes! You know, measuring the medievality of a dish is not as simple as it sounds! Obviously, there are no American foods like tomatoes or potatoes allowed, but did you ever think which spices were common in Europe in the High Middle Ages and why that changed in the Late Middle Ages...
...
The problem is in leaving the topic of what to teach open. To some people, this may feel like freedom. To others, in the context of an interview where the purpose is to judge the candidate, it will just lead to a bunch of stress from trying to guess which types of things to teach are currently a la mode on the interview circuit.
For a fairly casual question I'd see it more like an ice breaker or a way to build some rapport, but it's down to how you ask the question as well. Letting people know in advance (as the parent suggested) rather than dropping it on the fly is a good enough accommodation for people who would be anxious about that.
I find interviews quite tense and I generally dislike having them (on both sides of the table), but throwing in a few disarming questions here and there throughout really takes the edge off. A conversational, or more cerebral, approach like that can suit some people far better than live code tests or quick-fire Q&A.
This post is some unintentional satire of how IBM operates (nobody invents anything at IBM anymore).
The opening question is a copy of what was done before (probably by someone who doesn't work at IBM anymore), and all the new stuff is stolen from outsiders.
I'm surprised at the negativity your approach seems to have sparked in a few, but I found it really great, probably very effective as well and will probably start to use it at some point.
Thank you! And please do feel free to use the idea, or change it and make it your own. The great thing is that it starts the interview with the candidate being the expert at something, and I am their student.
So was I! Frankly, having done similar things, I found their comment here lucid, well thought out and valuable enough to write some things in resonance.
My take is the negatives may just be people leaning hard on, "if it seems too good to be true...
> In my email I encouraged the candidate to teach me something unrelated to our work, or programming related if they preferred that.
What if I decide to teach you something about the Quran and you don't hire me?
Perhaps this is just urban legend but from the stories I've heard hiring for FAANG-type companies there are people out there interviewing with their only goal being baiting you into a topic they can sue you over.
Worst instance I have heard of is when an interviewer asked about the books the candidate liked to read (since the candidate said they're an avid reader in their resume) and he just said that he liked to read the Bible. After not getting hired for failing the interviews he accused the company of religious discrimination.
I'm by no means an expert in US law and don't know the people this happened to directly so maybe it's just one big fantasy story but it doesn't seem that far fetched that
- If you are a rich corporation then people will look to bait you into a lawsuit
- If you give software engineers (or any non-lawyers) free rein on how to conduct interviews then some of them can be baited into a situation that the company could be sued over way more easily than a rigid leetcode interview
I think nothing came of the fellow who liked reading the Bible but I would imagine the legal department has a say in what kind of interview process a rich company can and can not use.
I think you may have misunderstood the guidance from your legal team on this.
There is literally no way to prevent a candidate from disclosing a protected characteristic during an interview. Some obvious examples: they might show up to the interview in a wheelchair, they might be wearing a religious garment, they might be visibly pregnant, and so on...
What legal doesn't want is you asking questions directly intended to elicit that kind of information when the candidate didn't volunteer it. Asking the candidate a direct question like "are you planning to have kids?" makes it sound like that information will be used in the hiring decision.
Completely unrelated, but I find that other people will create problems by making an off-colour statement, something mildly offensive, or disrespectful and then when they get a reaction they say "why are you making a problem where none exists".
DARVO/gaslighting behaviour, and I wish it was rarer than it is.
I have found that if you tighten the parameters a bit, you can still get all the benefit of what this question is asking for. For example, teach me the rules of a sport, board game etc. You still get to see if they can present a coherent explanation of something relatively complex, but you can avoid these potentially dangerous topics.
People who are poor managers or in positions of power, but not emotionally regulated or unwilling to get therapy, find blaming others easier than self-awareness and doing the work of bettering themselves. Couple that with financial success, and why should they, when all our society values is how big your bank account balance is?
and like you said it's not that deep. pick something light.
a candidate that pulls the religious thing now is one that's going to drag their crazy shit into the office later, or completely misread the room / culture now and in the future.
like it's a technical job interview, talk about something vaguely technical, even if it's just how the convection settings work in your oven.
Companies with armies of attack lawyers riding helicopter gunships don't make great targets for this kind of thing, in general. I shudder to think about trying to take Apple or Disney to court.
It's somewhat overblown - obviously anyone can try to submit a demand letter about anything. My experience with legal in the hiring process is they want to avoid obvious own goals, and document the process so that clear reasoning can be expressed. Then, unless you really do something obviously discriminatory, you can tell people who claim they've been discriminated against to go pound sand.
There are lots of good tactical reasons to settle claims like that, so lawyers may advise you to settle, but if you're of the "we don't negotiate" mindset, in my experience most lawyers are quite happy to gear up for a fight with the right groundwork in place.
As someone with a pretty long career already, and who's comfortable talking about it, I was a bit surprised that in three interviews last year nobody asked a single thing about any of my previous work. One was live coding and tech trivia, the other two were extensive take-home challenges.
To their credit, I think they would have hired "the old guy" if I'd aced their take-homes, but I was a bit rusty and not super thrilled about their problem areas anyway so we didn't get that far. And honestly it seems like a decent system for hiring well-paid cogs in your well-oiled machine for the short term.
Your method sounds like what we were trying to do ten years ago, and it worked pretty well until our pipeline dried up. I wish you, and your candidates, continued success with it: a little humanity goes a long way these days.
So, did you find a company that you are happy with (interviewing or otherwise)? I would be really interested to know how you are dealing with tech landscape changes lately, and your plans for staying in tech ...
>>Do either no take home test, or one that takes at most two hours. I do discuss the solution candidates came up with, so as long as they can demonstrate they know what they did there, I don't care too much how they did it. If I do this part, it's just to establish some base line competency.
The biggest problem with take-home tests is not the people who don't show up because they couldn't finish the assignment, but that the people who do now expect to get hired.
95% of people don't finish the assignment. 5% do. Teams think that submitting the assignment with the 100% feature set, unit test cases, an onsite code review, and onsite additional feature implementation still shouldn't mean a hire (if nothing else, there are just not enough positions to fill). From the candidate's perspective, it's pointless to spend a week working, doing everything the hiring team asked for, and still receive a 'no'.
I think if you are not ready to pay 2x - 5x market comp, you shouldn't use take-home assignments to hire people. It is too much work to do only to receive a rejection at the end. Upping the comp absolutely makes sense, because working for a week for a chance at earning 3x more or so is a reasonable trade.
Most of the time, those take home tests cannot be done in 2 hours. I remember one where I wasn't even done with the basic setup in 2 hours, installing various software/libraries and debugging issues with them.
We did a lot of these assignments, and no one assumed that they would be hired if they completed it. It's about how you communicate your intent. I always told the candidates that the goal of the task is 1. to see some code and whether some really basic stuff is on point, and 2. to see that you can discuss and defend your code with someone.
If I have a public portfolio of existing projects on GitHub, couldn't that replace an assignment? Choose one of my projects (or let me choose one), and let's discuss it during the review interview.
>>We did a lot of these assignments and no one assumed that they will be hired if they complete it. Its about how you communicate your intent.
Be upfront that finishing the assignment doesn't guarantee a hire, and very likely the very people you want to hire won't show up.
Please note that as much as you want good people to participate in your process, most good talent doesn't like to waste its time and effort. How would you feel if someone wasted your time and effort?
I am in Germany, so it's by far not the same situation as in other areas of the world. If I got such an assignment myself and had the feeling that it would help the company, and also me, to verify whether it is a fit, I would do that 1-to-3-hour task very happily.
How does that process handle people who have been out of work for a few years and can pass a take-home technical challenge (without a LLM) but cannot remember a convincing level of detail on the specifics of their past work? I’ve been experiencing your style of interview a lot and running up against this obstacle, even though I genuinely did the work I’m claiming to have done.
Especially people with ADHD don’t remember details as long as others, even though ADHD does not make someone a bad hire in this industry (and many successful tech workers have it).
I do prefer take-home challenges to live coding interviews right now, or at least a not-rushed live coding interview with some approximate advance warning of what to expect. That gives me time to refresh my rust (no programming language pun intended) or ramp up on whichever technologies are necessary for the challenge, and then to complete the challenge well even if taking more time than someone who is currently fresh with the perfectly matched skills might need. I want the ability to show what I can do on the job after onboarding, not what I can do while struggling with long-term unemployment, immigration, financial, and family health concerns. (My fault for marrying a foreigner and trying to legally bring her into the US, apparently.)
And, no, my life circumstances do not make it easy for me to follow the common suggestion of ramping up in my “spare time” without the pressures of a job or a specific interview task. That’s completely different from when I can do on the job or for a specific interview’s technical challenge.
This is slightly tangential to your questions, but to address the "remembering details about your past work", I've long-encouraged the developers I mentor to keep a log/doc/diary of their work.
If nothing else, it's a useful tool when doing reviews with your manager, or when you're trying to evaluate your personal growth.
It comes in really handy when you're interviewing or looking for a new job.
It doesn't have to be super detailed either. I tell people to write 50% about what the work was, and 50% about what the purpose/value of the work was. That tends to be a good mix of details and context.
Writing in it once a month, or once a sprint, is enough. Even if it's just a paragraph or two.
Yeah, my resume does include a summary of what I did at each job, but it sounds like the brag doc idea would involve much more detail.
Honestly, putting those details in writing in a way that I retain after leaving might leave me vulnerable to claims of violating my corporate NDA. But legalities aside, yes, it would help with these kinds of interview issues - prospectively only of course, not retrospectively.
Annoyingly, it's also exactly the kind of doc that's difficult for people with ADHD to create and maintain rigorously, even though people with ADHD are more likely than average to need the reminders of those details. A lot of ADHD problems are that kind of frustrating catch-22, especially when interacting with a world that is largely designed around more typical brain types.
Got it. I will admit to being unfamiliar with the challenges of ADHD, so I apologize if my suggestion appeared to disregard said challenges.
If I can ask a follow-up question: Is it the act of writing it down, or the whole recall aspect of it that is challenging? Or something else?
I've personally started using audio transcription with a voice recorder to capture ideas more easily when I'm not at my computer (I hate typing on my phone keyboard), and that has made building these kinds of diary/log/notes documents a lot easier, because I can just speak my ideas into the recorder whenever it is convenient for me instead of waiting until I'm at a computer.
one of my buddies was a navy psychologist assistant. he was a corpsman (navy medic) who moved that way.
dude maintained a "me wall" of achievements, commendation medals, etc. plus a file full of other crap. he mentioned he got a lot of promotions because you have a section to fill out where you describe all the shit you did over the last year, and he always came with examples...
I can't say I've interviewed someone to which this applies - unfortunately! Probably just doesn't get surfaced by my channels.
I would definitely not expect someone out of work for a while to have any meaningful side projects. I mean, if they do, that's cool, but I bet the sheer stress of not having a job kills a lot of creativity and productivity. Haven't been there, so I can only imagine.
For such a candidate, I'd probably want to offer them a time limited freelance gig rather quickly, so I can just see what they work like. For people who are already in a job, it's more important to ensure fit before they quit there, but your scenario opens the door to not putting that much pressure on the interview process.
Thanks for being understanding and compassionate about the situation! I like your idea of a time limited freelance gig, though immigration obstacles do sometimes make that quite complicated (as in my current situation). It's way better than not considering that people might have a reason for a resume with gaps or other irregularities which is not being a bad employee.
Beyond immigration types of obstacles, somehow recruiters and hiring managers rarely consider how many bad managers, bad executives, and bad companies there are when evaluating gaps and short tenures in an employee's resume. But equally, discussing those matters during an interview risks being seen as unprofessional and unreasonably negative. "Why did you leave this company?" "Oh, the warnings I got about the leadership from a former employee before I joined turned out to be true, and they didn't want to hear professionally presented necessary feedback about emotional safety in an emergency incident response situation, so they fired me without a single meaningful 1:1 discussion other than trying to assign blame." / "Oh, the CEO was enough of a problem that the investors eventually replaced him in a subsequent funding round despite him being the majority shareholder, but that was long after I had resigned or been fired due to that CEO's particular problems." / "Oh, they communicated in a very idiosyncratic way which didn't work for me and which I haven't seen at any company before or since." / etc. Most of that doesn't fly in an interview, but equally, even if it did, saying too much of that sounds like making up excuses for oneself even when it's 100% true. Same thing for why a job search doesn't succeed quickly.
Ours is a messy and imperfect industry, and it sucks that interview candidates have to pretend otherwise to seem like they'll be reasonable employees. Meanwhile the companies and executives that act in those ways get to present whatever positive and successful image they want, and they get to praise each other for firing fast or cutting costs with attrition or layoffs.
Well, it could be "just" naive hiring processes. It's pretty natural for people to use themselves and their immediate circles as thought guinea pigs for any process they come up with. And unfortunately, quite often, people in positions of power don't work their way up there, so it takes more effort than they are usually willing to invest to relate. Which is a shame, because doing things beyond what everybody else is doing can be a huge advantage.
One of the problems I see in big corporates is that feedback cycles for bad policy can be so long, whoever caused something is far away when the effects hit. AKA nobody really cares. But to be fair, I've always tried to care, and worked with many others who do.
Just a suggestion, but I have done 3-5 projects a year for a long, long time, and as an executive (for almost a decade) had dozens of projects annually I was overseeing and/or contributing to. I am not a fan of LinkedIn, but I did eventually start logging at least some of the projects I do in their "projects" section. That helps me remember and revisit some of the projects, and when you go back 10 years later it's sort of joyful walk down memory lane sometimes.
I feel like take-home tests are meaningless, and I always have. Even more so now with LLMs, though 9/10 times you can tell if it's an LLM: people don't normally put trivial comments in the code such as
> // This line prevents X from happening
I've seen a number of those. The issue here is that you've already wasted a lot of time with a candidate.
So, being firmly against take-home tests or even leetcode, I think the only viable option is a face-to-face interview with a mixture of general CS questions (e.g. what is a hashmap, its benefits and drawbacks, what is a readers-writer lock, etc.) and some domain-specific questions: "You have X scenario (insert details here), which causes a race condition; how do you solve it?"
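To give a concrete flavor of that last kind of question (my own toy example, not something from the thread): the classic lost-update race on a shared counter, and the lock that fixes it.

    # Toy race-condition scenario: several threads increment a shared counter.
    # Without a lock, the read-modify-write can interleave and updates get lost.
    import threading

    counter = 0
    lock = threading.Lock()

    def unsafe_increment(n: int) -> None:
        global counter
        for _ in range(n):
            counter += 1          # not atomic: load, add, store can interleave

    def safe_increment(n: int) -> None:
        global counter
        for _ in range(n):
            with lock:            # serialize the read-modify-write
                counter += 1

    def run(worker) -> int:
        global counter
        counter = 0
        threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return counter

    print("unsafe:", run(unsafe_increment))  # may be < 400000 when updates are lost
    print("safe:  ", run(safe_increment))    # always 400000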
> I feel like take home tests are meaningless and I always have. Even more so now with LLMs
This has been discussed many times already here. You need to set an "LLM trap" (like an SSH honeypot) by asking the candidate to explain the code they wrote. Also, you can wait until the code review to ask them how they would unit test the code. Most cheaters will fall apart in the first 60 seconds. It is such an obvious tell. And if they used an LLM but can explain the code very well, then they will be a good programmer on your team, where an LLM is simply one more tool in their arsenal.
I am starting to think that we need two types of technical interview questions: Old school (no LLMs allowed) vs new school (LLMs strongly encouraged). Someone under 25 (30?) is probably already making great use of LLMs to teach themselves new things about programming. This reminds me of when young people (late 2000s/early 2010s) began to move away from "O'Reilly-class" (heh, like a naval destroyer class) 500 page printed technical books to reading technical blogs. At first, I was suspicious -- essentially, I was gatekeeping on the blog writers. Over time, I came to appreciate that technical learning was changing. I see the same with LLMs. And don't worry about the shitty programmers who try to skate by only using LLMs. Their true colours will show very quickly.
Can I ask a dumb question? What are some drawbacks of using a hash map? Honestly, I am nearly neck-bearded at this point, and I would be surprised by this question in an interview. Mostly, people ask how do they work (impl details, etc.) and what are some benefits over using linear (non-binary) search in an array.
The drawback is that elements in a hashmap can't be sorted, and accessing a specific element by key is slower than accessing something in an array by index.
Linear search is easier to implement.
These are all trivial questions you ask to determine if a person can develop code. The hard questions are about whether the person is the cream of the crop. The supply of developers is so high that most people don't ask trivial questions like that.
That's OK. I wrote: <<And if they used an LLM, but they can very well explain the code, well, then, they will be a good programmer on your team, where an LLM is simply one more tool in their arsenal.>> If anything, I would love it if someone told me that they used an LLM and explained what was good and bad about the experience. Or maybe they used it and the code was sub-par, so they need to make minor (or major) changes. Regardless, I think we are kidding ourselves if people will not make (prudent and imprudent!) use of LLMs. We need to adapt.
"Drawbacks" was the wrong word to use here, "potential problems" is what I meant - collisions. Normally a follow up question: how do you solve those. But drawbacks too: memory usage - us developers are pretty used to having astronomical amounts of computational resources at our disposals but more often than not, people don't work on workstations with 246gb of ram.
I think the better word is tradeoff, since there is no perfect data structure for every job. The hashmap has the advantage of O(1) access time but the drawbacks of memory usage, an unsorted nature, and a dependence on a good hashing function to minimize collisions. A vector is also O(1), but it has an upfront memory cost that cannot be avoided. A map has an O(log n) access cost, but it uses less memory, is sorted by nature, and its comparison function is easier to implement.
Three similar data structures, but each with its own tradeoffs.
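To make those tradeoffs concrete, a small sketch of my own (dict as the hash map, list as the vector, and a bisect-maintained sorted list standing in for an ordered map):

    # Illustrating the tradeoffs: hash map = O(1) average lookup but no sorted
    # order; vector = O(1) access by index only; sorted structure = O(log n)
    # lookup, with in-order traversal for free.
    import bisect

    hash_map = {"carol": 3, "alice": 1, "bob": 2}   # O(1) avg lookup, unsorted
    vector = [1, 2, 3]                              # O(1) access by index

    sorted_keys = ["alice", "bob", "carol"]         # kept sorted
    sorted_vals = [1, 2, 3]

    def sorted_lookup(key):
        i = bisect.bisect_left(sorted_keys, key)    # binary search, O(log n)
        if i < len(sorted_keys) and sorted_keys[i] == key:
            return sorted_vals[i]
        return None

    print(hash_map["bob"], vector[1], sorted_lookup("bob"))  # 2 2 2
    print(sorted_keys)  # iteration in key order comes for free here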
Good point about collisions. When I wrote the original post, I didn't think about that. As a primarily CRUD developer, I never think about collisions. The default general purpose hash map in all of my languages is fine. That said: It does matter, and it is a reasonable topic for an interview!
If you really need to test them / check that they haven't used an LLM or hired someone else to do it for them (which was how people "cheated" on take-home tests before), ask them to implement a feature live; it's their code, it should be straightforward if they wrote it themselves.
If you are evaluating how well people code without LLMs you are likely filtering for the wrong people and you are way behind the times.
For most companies, the better strategy would be to explicitly LET them use LLMs and see whether they can accomplish 10X what a coder 3 years ago could accomplish, in the same time. If they accomplish only 1X, that's a bad sign that they haven't learned anything in 3 years about how to work faster with new power tools.
A good analogy of 5 years ago would be forcing candidates to write in assembly instead of whatever higher level language you actually use in your work. Sure, interview for assembly if that's what you use, but 95% of companies don't need to touch assembly language.
> ... LET them use LLMs and see whether they can accomplish 10X what a coder 3 years ago could accomplish...
Do you seriously expect a 10x improvement with the use of LLMs vs no LLMs? Have you seen this personally? Are you one tenth the developer without an LLM? Or are the coding interview questions you ask or get asked "implement quicksort", or something like that?
Let's make it concrete: do you feel like you could implement a correct concurrent HTTP server in 1/10th the time with an LLM compared to without? Because if you just let the LLM do the work, I could probably find some issue in that code, or alternatively completely stump you with an architectural question unless you are already familiar with it, and you should not be having an LLM implement something you couldn't have written yourself.
In that case, could you begin proving that point by having it write an HTTP request parser? Let's make it easy and require a Content-Length header and no support for chunked encoding at first. You can pick any language you like, but since that's such critical infrastructure, it must export a C API. Let's also restrict it to HTTP 1/1.1 for the sake of time.
Considering this would probably take at most a day's work to get at least a workable prototype done, if not a full implementation, using an AI you should be able to do it in a lunch break.
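For scale, here is roughly what a stripped-down version of that exercise looks like (my own sketch, hedged: HTTP/1.0-1.1 only, Content-Length bodies, no chunked encoding, and skipping the C-API requirement entirely):

    # Minimal HTTP request parser sketch: request line + headers + a body
    # sized by Content-Length. No chunked encoding, no robust validation.
    def parse_request(raw: bytes):
        head, _, rest = raw.partition(b"\r\n\r\n")
        lines = head.split(b"\r\n")
        method, target, version = lines[0].split(b" ", 2)
        if version not in (b"HTTP/1.0", b"HTTP/1.1"):
            raise ValueError("unsupported HTTP version")

        headers = {}
        for line in lines[1:]:
            name, _, value = line.partition(b":")
            headers[name.strip().lower()] = value.strip()

        length = int(headers.get(b"content-length", b"0"))
        body = rest[:length]
        if len(body) < length:
            raise ValueError("body shorter than Content-Length")
        return method, target, version, headers, body

    req = (b"POST /submit HTTP/1.1\r\n"
           b"Host: example.com\r\n"
           b"Content-Length: 5\r\n"
           b"\r\n"
           b"hello")
    print(parse_request(req))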
To add: I can very well imagine this process isn't suitable for FAANG, so I can understand their university exam style approach to a degree. It's easy to arm chair criticise, but I don't know if I could come up with something better at their scale. These days, I'm mostly engaged by startups to help them build an initial team, I acknowledge that's different from what a lot of other folks hire for.
Why not? Plenty of large organizations hire this way. My first employer is bigger than any FAANG company by head count, and they hired this way. Why is big tech different?
When you're operating a small furniture company, your master craftsmen can hand-fit every part. Match up the hole that's slightly too big with the dowel that's slightly too big, which they put to one side the other day.
When you're operating a huge furniture company, you want every dowel and every hole the same size. Then any fool can assemble it, and you don't have to waste time and space on a collection of slightly-too-large dowels.
To scale up often means focusing on consistency and precision, and less on expert judgement and care.
Also, human judgement is loaded with bias. Height influences outcomes, and this is a statistically proven phenomenon. Ask any interviewer in this thread and they will claim height and good looks aren't a factor, but the statistics prove them wrong.
The literal only way to get rid of that is quantitative tests.
That's a nice story, and completely irrelevant. Different orgs scale different ways, and ultimately hiring is done at the team level. You have a vacancy or need on a team to fill. Big org or small org, the situation is the same.
Interestingly large companies will often have a hiring pipeline that isn't specific to a single team and role.
In the past Google made a lot of hires where they first give people a phone screen, then five in-person whiteboard interviews, then the interviewers send a dossier to a committee, then that committee decides whether to hire or not, then "team matching" lets hiring managers see the resumes.
So the interviews are basically conducted by random people, who have no idea what team the candidate will end up on.
Of course if you look at the number of employees Google has, and the average tenure, you can see why they make hires like Ikea makes chairs.
Yes that is how FAANG works, but they’re the oddity. Most large organizations don’t hire that way. Google invented this method of hiring, and others copied. It is debatable whether it has been good or not for them.
Usually what happens in a large org is (1) a department gets allocated a head count, and it eventually trickles down to team allocations; (2) a team needs a role that they can’t backfill from the rest of the company; (3) the team lead puts out an ad, receives resumes (directly or through HR), schedules interviews, and makes the hiring decision themselves. Exactly the same as a small company or startup in the last step.
Well, I respect the scale and speed. My process was still working fine at ~5 per month. I have doubts it'd work with orders of magnitude more. There's a lot of intuition and finesse in there, that is probably difficult to blindly delegate. Plus, big companies have very strong internal forces to eliminate intuition in favour of repeatable, measurable processes.
That’s cool. My main job is being a research scientist but i also work at a computational science consultancy that essentially builds numerical simulations for hire along with other software consultancy jobs. I occasionally have been asked to serve as an outside consultant for hiring data science positions. I’ve been wondering how to grow that a bit into more regular work because I enjoy it. Any thoughts on that?
Well, my contracts pretty much all came through my network, someone recommending me, and then clients recommending me, and so on. Not sure if I just got lucky or if this actually works, but my first step would be to tell some relevant folks you know that you're doing this now, see if they have any advice or know someone who might be interested. Another approach would be to ask the consultancy you work with if they want to add this service you can provide to their portfolio, see if it's something they could sell to existing or new clients.
However, these days, people seem barely OK with paying for a recruiter. They think that just because more people than usual are looking, they get to lean back and great candidates will show up.
IMHO, if anything, it got more difficult to hire. More noise to work through, people in existing roles are more reluctant to switch, candidates hustle more because they need a job etc. I absolutely think it's a useful service. But I don't know if it's easy to market.
> Put the candidate at ease - nervous people don't interview well
This is great advice. I have great success with it. I give the same 60 second speech at the start of each interview. I tell candidates that I realise that tech interviews are stressful -- "In 202X, the 'tech universe' is infinitely wide and deep. We can always find something that you don't know. If you don't have experience in a topic that we raise, let us know. We will move to a new topic. All, yes, all people that we interviewed had at least one topic where they had no experience, or none recent." Also, it helps to do "interview ramp-up", where you start with some very quick wins to build up confidence with the candidate. It is OK to tell them "I will push a bit harder here" so they know you are not being a jerk... only trying to dig deeper on their knowledge.
Putting candidate at ease is definitely important.
Another reason:
If you're only say one of four interviewers, and you're maybe not the last interviewer, you really want the candidate to come out of your interview feeling like they did well or at least ok enough, so that they don't get tilted for the next interview. Because even if they did really poorly in your interview, maybe it's a fluke and they won't fail the rest of the loop.
Which is then a really difficult skill as an interviewer - how do you make sure someone thinks they did well even if they did very poorly? Ain't easy if there's any technical guts in the interview.
I sure as shit didn't get any good at that until I'd conducted like 100+ interviews, but maybe I'm just a slow learner haha
I would never show an interviewer how I code, let alone allow them to give me 2 hours to solve a problem. It's contradictory that you know developers "mostly shine when not under pressure", yet you set 2 hours to solve a problem (sounds like you don't want to pay them for the take-home). The positive feedback you're getting for your flawed process just shows how much worse other, more common, recruiters are.
I’ve done the “at home” test for ML recently for a small AI consulting firm. It's a nice approach and got me to the next round, but the way the company evaluated it was to go through the questions and ask "fundamental ML bingo" questions. I don't think I had a single discussion about the company in the entire interview process. I was told up front "we probably won't get to the third question because it will take time to discuss theory for the first two".
If you're a company that does this, please dog food your problems and make sure the interview makes the effort feel valued. It also smells weird if you claim it's representative of a typical engineering discussion. We all know that consultancy is wrangling data, bad data and really bad data. If you're arguing over what optimiser we're choosing I'd say there's better ways to waste your customer's money.
On the other hand I like leetcode interviews. They're a nice equalizer and I do think getting good at them improves your coding skill. The point is to not ask ludicrously obscure hard problems that need tricks. I like the screen share + remote IDE. We used Code which was nice and they even had tests integrated so there wasn't the whiteboard pressure to get everything right in your head. You also know instantly if your solution works and it's a nice confidence if you get it first try, plus you can see how candidates would actually debug, etc.
Wow, that was a great write up. Can I interview with you? Lol, everything you wrote was really spot on with my own interview experiences. I tend to get super nervous during interviews and have choked up in many interviews that asked for live coding on crazy algorithm problems. The state of hiring seems to be really bad right now. But I'll take your advice and try to get in contact with some recruiters.
> 2. Do either no take home test, or one that takes at most two hours. I do discuss the solution candidates came up with, so as long as they can demonstrate they know what they did there, I don't care too much how they did it. If I do this part, it's just to establish some base line competency.
You need to do this to establish some baseline competency... for junior hires with no track record?
Recruiters have been notoriously bad in my experience. Relying on network has the potential to create bias and avoid good candidates that simply don't have an "in".
I've let people use GPT in coding interviews, provided that they show me how they use it. In the end, I'm interested in knowing how a person solves a problem and thinks about it. Do they just accept whatever crap the GPT gives them, can they take a critical approach to it, etc.
So far, everyone that elected to use GPT did much worse. They did not know what to ask, how to ask, and did not "collaborate" with the AI. So far my opinion is if you have a good interview process, you can clearly see who are the good candidates with or without ai.
Earlier this past week I asked Copilot to generate some Golang tests and it used some obscure assertion library that had a few hundred stars on GitHub. I had to explicitly ask it to generate idiomatic tests and even then it still didn't test all of the parameters that it should have.
At a previous job I made the mistake of letting it write some repository methods that leveraged SQLAlchemy. Even though I (along with my colleague via PR) reviewed the generated code we ended up with a preprod bug because the LLM used session.flush() instead of session.commit() in exactly one spot for no apparent reason.
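For anyone who hasn't hit this: a minimal sketch of the flush() vs commit() distinction (not the actual code from that job, just an illustration with a throwaway SQLite model) - flush() emits the SQL inside the open transaction, but only commit() makes it durable:

    # flush() sends pending SQL to the database within the current transaction;
    # commit() actually commits. If the session closes without a commit, the
    # transaction is rolled back and the flushed row is gone.
    from sqlalchemy import create_engine, Column, Integer, String
    from sqlalchemy.orm import declarative_base, Session

    Base = declarative_base()

    class User(Base):
        __tablename__ = "users"
        id = Column(Integer, primary_key=True)
        name = Column(String)

    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        session.add(User(name="alice"))
        session.flush()   # row is visible inside this transaction only
        # no session.commit(), so the change is discarded when the session closes

    with Session(engine) as session:
        print(session.query(User).count())  # 0 -- the flushed row did not persist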
LLMs are still not ready for prime-time. They churn out code like an overconfident 25-year-old that just downed three lunch beers with a wet burrito at the Mexican place down the street from the office on a rainy Wednesday.
I feel like I am taking crazy pills that other devs don't feel this way. How bad are the coders that they think these AIs are giving them superpowers? The PRs with AI code are so obvious, and when you ask the devs why, they don't even know. They just say, well, the AI picked this, as if that means something in and of itself.
AI gives superpowers because it saves you an insane amount of typing. I used to be a vim fanatic; I was very efficient, but whenever I changed language there was a period I had to spend getting efficient again. Set up some new snippets for boilerplate, maybe tweak some LSP settings, save some new macros.
Now in Cursor I just write "extract this block of code into its own function and set up the unit tests" and it does it, with no configuration on my part. Before, I'd have a snippet for the unit test boilerplate for that specific project, I'd have to figure out the mocks myself, etc.
Yes, if you use AI to just generate new code blindly and check it in with no understanding, you end up with garbage. But those people were most likely copy-pasting from SO before AI; AI just made them faster.
Hmm, that is interesting; reading is harder? You have to read a lot of code anyway, right? From team members, examples, 3rd party code/libraries? Through the decades of programming, at least, I became very proficient at rapidly spotting 'fishy' code and generally understanding code written by others. AI coding is nice because it is, for me, the opposite of what you describe; reading the code it generates is much faster than writing it myself, even though I am fast at writing it; not that fast.
I have said it here before: I would love to see some videos of HNers who complain AI gives them crap, as we are getting amazing results on large and complex projects... We treat AI code the same as human code: we read it and recommend or implement fixes.
Much, much harder. Sure, you can skim large volumes of code very quickly. But the type of close reading that is required to spot logic bugs in the small is quite taxing - which is the reason that we generally don't expect code review processes to catch trivial errors, and instead invest in testing.
But we are not talking about large volumes of code here; we are talking about: the LLM generates something, you check it and close-read it to spot logic bugs, and you either fix it yourself, ask the LLM, or approve. It is very puzzling to me how this is more work/taxing than writing it yourself, except in very specific cases.
Examples from everyday reality in my company: thousands of lines of React frontend code are entirely LLM-written (in very little time) and reviews catch all the issues, while on the database implementation we are working on we sometimes spend an hour on a few lines, and the things the LLM suggests never help. Reviewing such a small amount of code is of little use, because it's the result of testing a LOT of scenarios to get the most performance out in the real world (across different environments/settings). However, almost everyone in the world is working on (something like) the former, not the latter, so...
Shame we cannot combine the two threads we are talking about, but our company/client structure does not allow us to do this differently (in short: our clients have existing systems with different frontend tech; they are all large corps with many external and internal devs who have built some 'framework' on top of whatever frontend they are using; we cannot abstract/library-fy it to reuse across clients). I would if I could. And this is actually not a problem (beyond it being a waste, which I agree it is), as we have never delivered more for happier clients in our existence (around 25 years now) than in 2024 because of that. Clients see the frontend, and being able to over-deliver there is excellent.
I think this is great advice for folks who work on software that is contained enough that you can run the entire thing on your dev machine, and that happens to be written in the same language/runtime throughout.
Unfortunately I've made some career choices that mean I've very rarely been in that position - weird mobile hardware dependencies and/or massive clouds of micro services both render this technique pretty complicated to employ in practice.
Oh, we have automated tests out the wazoo. Mostly unit tests, or single-service tests with mocked dependencies.
Due to unfortunate constraints of operating in the real world, one can only run integration tests in-situ, as it were (on real hardware, with real dependencies).
When you type the code, you definitely think about it, deepening your mental model of the problem, stopping and going back and changing things.
Reading is massively passive, and in fact much more mentally tiring if the whole read is in detective mode: 'now where the f*ck are the hidden issues'. Sure, if your codebase is 90% massive boilerplate then I can see quickly generated code saving a lot of time, but such scenarios were normally easy to tackle before LLMs came along. Or at least the ones I've encountered in the past few decades were.
Do you like debugging by just tracing the code with your eyes, or by actually working on it with data and test code? I've never seen effective use of the former, regardless of seniority. But in the past months I've seen wild claims about the magic of LLMs that were mostly unreproducible by others, and when folks were asked for details they went silent.
Depends ofc on the complexity of the area, but... reading someone's code to me feels a bit like being given a single 2D photo of a machine from one projection, then having to piece together a 3D model of it in my head. Then figuring out if the machine will work.
When I write code, the hard part is already done -- the mental model behind the program is already in my head and I simply dump it to the keyboard. (At least for me, typing speed has never been a limiting factor.)
But when I read code, I have to reassemble the mental model "behind" it in my head from the output artifact of the thought process.
Of course one needs to read the code of co-workers and libraries -- but it is more draining, at least for me. Skimming is fast, but reading thoroughly enough to find bugs by reading alone requires building the full mental model of the code, which takes more mental effort.
There is a huge difference in how I read code from trusted experienced coworkers and juniors though. AI falls in the latter category.
(AI is still saving me a lot of time. Just saying I agree a lot that writing is easier than reading still.)
Running code in your head is another issue that AI won't solve (yet); we have had different people/scientists working on this, the most famous being Bret Victor, but also Jonathan Edwards [0] and Chris Granger (Light Table). I find the example in [0] the best: you are sitting there with your very logically weak brain trying to think out wtf this code will do, while there is a very powerful computer next to you that could just tell you. But it doesn't. And yet we are mostly restricted to first thinking out the code to at least some extent before we can see it in action; the same goes for the AI.
You mean like a blueprint of a machine? Because that is exactly how machines are usually presented in official documentation. To me the skill of understanding how "2d/code" translates to "3d/program execution" is exactly the skill that sets amateurs apart from pros, saying that, I consider myself an amateur in code and a professional in mechanical design.
"In the small", it's easy to read code. This code computes this value, and writes it there, etc. The harder part is answering why it does what it does, which is harder for code someone else wrote. I think it is worthwhile expending this effort for code review, design review, or understanding a library. Not for code that I allegedly wrote. Especially weeks removed, loading code I wrote into "working memory" to fix issues or add features is much much easier than code I didn't write.
Yes, I have to read what it writes, and towards the end it gets slow and starts making dumb mistakes (always; there's some magically bad length at which it always starts to fumble), but I feel like I got the advantages of pairing out of it without actually needing to sit next to another human? I'll finish the script off myself and review it.
I don't know if I've saved actual _time_ here, but I've definitely saved some mental effort on a menial script I didn't actually want to write, that I can use for some of the considerably more difficult problems I'll need to solve later today. I wouldn't let it near anything where I didn't understand what every single line of code it wrote was doing, because it does make odd choices, but I'll probably use it again to do something tomorrow. If it needs to be part of a bigger codebase, I'll give it the type-defs from elsewhere in the codebase to start with, or tell it it can assume a certain function exists
I think that spending a lot of time typing is likely an architectural problem.
But I do see how AI tools can be used for "oneshot" code where pondering maintainability and structure is wasted time.
For me, getting what's in my head out onto the screen as fast as possible increases my productivity immensely.
Maybe it's because I'm used to working with constant interruptions, but until what I want is on the screen, I can't start thinking about the next thing. E.g. if I'm making a new class, I'm not thinking about the implementation of the inner functions until I've got the skeleton of the class in place. The faster I get each stage done, the faster I work.
It's why I devoted a lot of time getting efficient at vim, setting up snippets for my languages, etc. AI is the next stage of that in my mind.
Maybe you can keep thinking about next steps while stuff is "solved" in your head but not on the screen. It also depends on the type of work you're doing. I've spent many hours to delete a few lines and change one, obviously AI doesn't help there.
That's certainly the case for myself, too, though I've got roughly two fewer decades in this than yourself!
But typing throughput has never been my major bottleneck. Refactoring is basically never just straight code transforms, and most of my time is spent thinking, exploring or teaching these days
> AI gives super powers because it saves you an insane amount of typing
I feel like I'm going a little bit more insane whenever folks say this, because one of the primary roles of a software engineer is to develop tools and abstractions to reduce or eliminate boilerplate.
What kind of software are you writing, where generating boilerplate is the limiting factor (and why haven't you fixed that)?
> because one of the primary roles of a software engineer is to develop tools and abstractions to reduce or eliminate boilerplate.
Is it? Says who? Not only do I see entire clans of folk appearing who say: DRY sucks, just copy/paste, it's easier to read and less prone to breaking multiple things with one fix than abstractions that keep functionality restricted to one location. But also: most programmers are there to implement the crap their bosses ask for, and that crap almost never includes 'create tools & abstractions' to get there.
I agree with you actually, BUT this is really not what most people working in programming believe their (primary) role entails.
See, THIS is a usage that makes sense to me. Using AI to manipulate existing code like this is great. I save a ton of time by pasting in a JSON response and saying something like "turn this into data classes"; it makes API work go so fast. On the other hand, I really don't understand devs who say they are using AI for ALL their code.
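To make the "turn this into data classes" example concrete, here's roughly the kind of before/after such a prompt produces (the JSON shape below is invented purely for illustration):

```python
from dataclasses import dataclass

# Example API response pasted into the prompt:
# {"id": 42, "name": "Ada", "address": {"city": "London", "postcode": "EC1A"}}


@dataclass
class Address:
    city: str
    postcode: str


@dataclass
class User:
    id: int
    name: str
    address: Address
```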
Copilot kind of auto-completes exactly what I want most of the time. When I want something bigger I will ask Claude for it, but I always know what I am going to get, and I could have written it myself; it would have just taken tons of typing. I feel like I'm an orchestrator of sorts.
So if you were to do a transformation like that you'd cut the code and paste it into a new function. Then you'd modify that function to make the abstraction work. An LLM will rewrite the code in the new form. It's not cut/paste/edit. It's a rewrite every time with the old code as reference.
Each rewrite is a chance to add subtle bugs, so I take issue with the description of LLMs "working on existing code". They don't use text editors to manipulate code like we do (although it might be interesting if they did) and so will have different issues.
Vim has a way to run shell programs with your selection as standard input as you'd know, and it will replace the selection with stdout.
So I type in my prompt e.g "mock js object array with 10 items each having name age and address" do `V!ollama run whatever` for example and it will fill it in there.
Now this is blocking and I have a hacky way to run it async and fill it in based on marks later in my vimrc. Neovim really since I use jobstart().
This also works with lots of other stuff, like quick code/mock generation e.g sometimes instead of asking an LLM I just write javascript/python inline and `vap!node`/python on it.
I do agree with the filling in of text, but only when the patterns are clear. Any kind of thinking about logic or use of libraries, I find, still leads me astray every time.
>I feel like I am taking crazy pills that other devs don't feel this way.
Don't take this the wrong way, but maybe you are.
For example, this weekend I was working on something where I needed to decode a Rails v3 session cookie in Python. I know, roughly, nothing about Rails. In less than 5 minutes ChatGPT gave me some code that took me around 10 minutes to get working.
Without ChatGPT I could have easily spent a couple hours putzing around with tracking down old Rails documentation, possibly involving reading old Rails code and grepping around to find where sessions were generated, hunting for helper libraries, some deadends while I tried to intuit a solution ("Ok, this looks like it's base64 encoded, but base64 decoding kind of works but produces an error. It looks like there's some garbage at the end. Oh, that's a signature, I wonder how it's signed...")
Instead, I asked for an overview of Rails session cookies, a fairly simple question about decoding a Rails session cookie, guided it to Rails v3 when I realized it was producing the wrong code (it was encrypting the cookie, but my cookies were not encrypted). It gave me 75 lines of code that took me ~15 minutes to get working.
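For the curious, the shape of that solution looks roughly like the sketch below. This is hedged, not the commenter's actual code, and it assumes an unencrypted, signed Rails 3 session cookie, i.e. base64(Marshal.dump(session)) followed by "--" and an HMAC-SHA1 hex digest keyed with the app's secret_token:

```python
import base64
import hashlib
import hmac
from urllib.parse import unquote


def decode_rails3_session(cookie_value: str, secret_token: str) -> bytes:
    """Verify and decode an unencrypted, signed Rails 3 session cookie."""
    cookie_value = unquote(cookie_value)             # browsers send it URL-escaped
    data, _, digest = cookie_value.rpartition("--")  # payload and its HMAC-SHA1 hex digest
    expected = hmac.new(secret_token.encode(), data.encode(), hashlib.sha1).hexdigest()
    if not hmac.compare_digest(digest, expected):
        raise ValueError("signature mismatch: wrong secret_token or tampered cookie")
    # The decoded bytes are a Ruby Marshal dump of the session hash; turning that
    # into a Python dict needs a Marshal parser, which is omitted here.
    return base64.b64decode(data)
```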
This is a "spare time" project that I've wanted to do for over 5 years. Quite simply, if I had to spend hours fiddling around with it, I probably wouldn't have done it; it's not at the top of my todo list (hence, spare time project).
I don't understand how people don't see that AI can give them "superpowers" by leveraging a developer's least productive time into providing their highest value.
> I don't understand how people don't see that AI can give them "superpowers" by leveraging a developer's least productive time into providing their highest value.
I'm unwilling to write code^1 that's not correct, or at least as correct as I'm able to make it. The single most frustrating thing in my daily life is dealing with the PoC other devs cast into the world that has more bugs than features, because they can't tell it's awful. I've seen code I've written at my least productive, and it's awful and I'm often ashamed of it. It's the same quality as AI code. If AI code allows me to generate more code that I'm ashamed of, when I'm otherwise too tired myself to write code, how is that actually a good thing?
I get that standards for exclusively personal toy projects and for stuff you want others to be able to use are different. But it doesn't add value to the world if you ship code someone else needs to fix.
^1 I guess I should say commit/ship instead, I write plenty of trash before fixing it.
Have you tried asking LLMs to clean up your code, making it more obvious, adding test harnesses, adding type hints or argument documentation, then reviewing it and finding out what you can learn from what it is suggesting?
Last year I took a piece of code from Ansible that was extremely difficult to understand. It took me the better part of a day to do a simple bug fix in it. So I reimplemented it, using several passes of having it implement it, reviewing it, asking LLMs to make it simpler, more obvious code. For your review: https://github.com/linsomniac/symbolicmode/blob/main/src/sym...
I think you're making my point for me. Isn't it "taking crazy pills" if you use it incorrectly, and then stand on that experience as proof that other devs are insane? :-)
I agree with your assessment in the situation you describe: greenfielding a project with tools you are unfamiliar with. For me LLMs have worked best when my progress is hindered primarily by a lack of knowledge or familiarity. If I'm working on a project where I'm comfortable with my tools and APIs, they can still be useful from time to time but I wouldn't call it a "superpower", more like a regular productivity tool (less impactful than an IDE, I would say). Of course this comment could be outdated in a few months, but that's what it feels like to me in the here and now.
IMHO, still feels like a superpower in the editor when I type "def validate_session(sess" and it says "Are you wanting to type: `ion_data: dict[str, str]`?" Especially as the type annotations get more convoluted.
Devs who don't feel that way aren't talking about the stuff you're talking about.
Look at it this way - a powerful debugger gives you superpowers. That doesn't mean it turns bad devs into good devs, or that devs with a powerful debugger never write bad code! If somebody says a powerful debugger gives them superpowers they're not claiming those things; they're claiming that it makes good devs even better.
The best debugger in the world would make me about 5% more efficient. That's about the percentage of my development time I spend going "WTF? Why is that happening?" That's the best possible improvement from the best possible debugger: about 5%.
The reason is that I almost always have a complete understanding of everything that is happening in all code that I write. Not because I have a mega-brain; only because "understanding everything that is happening all the time" becomes rather easy if all of your code is as simple as you can possibly make it, using clear interfaces, heavily leveraging the type system, keeping things immutable, dependency inversion, and aggressively attacking any unclear parts until you're satisfied with them. So debuggers are generally not involved. It's probably a couple times per week that I enter debug mode at all.
It sounds a little like saying "imagine the driving superpowers you could have if your car could perfectly avoid obstacles for you!" Okay, sure, that'd be life-saving sometimes, but the vast majority of the time, I'm not just randomly dodging obstacles. Planning ahead and paying attention kinda makes that moot.
Now imagine working on a 10+ years old codebase that 100s of developer hands have gone over. And several dependencies not being possible to even run locally because of being so out of date. Why work on that in the first place? Sometimes it pays really well.
Please read things in context - my comment was about what people mean when they talk about a thing giving developers superpowers. I was not making a claim about how much you, personally, benefit from debuggers.
Also: I don't use debuggers much either. Is Superman's heat vision not a superpower because he rarely needs to melt things? :P
It’s different if you work on a large legacy codebase where you have maybe a rough understanding of the high level architecture but nobody at the company has seen the code you’re working on in the last five years. There a debugger is often very helpful.
That is your own code and only your own libs then, no imports? Or you work in a language where imports are small (embedded?) so you know them all and maintain them all? Or maybe vanilla JS/C without libs? Because import one npm package and you have 100GB of crap deps you really don't understand, which happen to work at moment t0; but at t1, when one package updates, nothing works anymore, and you can't claim to understand it all, as many/most people don't write 'simple code'.
It's pretty uncommon. Most of my teammates' code is close to my standard -- I am referred to as the "technical lead" even though it's not an official title, but teammates do generally try to code to my standard, so it's pretty good code. If I'm doing a one-off bug fix or trying to understand what the code does, usually reading it is sufficient. If I'm taking some level of shared ownership of the code, I'm hopefully talking a lot with the person who wrote it. I'm very choosy about open-source libraries so it's rare that I need to dig into that code at all. I don't use any ecosystems like React that tempt you to just pull in random half-baked components for everything.
The conflux of events requiring me to debug someone else's code would be someone who wrote bad code without much oversight, then left the company, and there's no budget to fix their work properly. Not very common. Since I usually know what the code is supposed to be doing, such code can likely be rewritten properly in a small fraction of the original time.
> They just say, well the AI picked this, as if that means something in and of itself.
In any other professional field that would be grounds for termination for incompetence. It's so weird that we seem to shrug off that kind of behavior so readily in tech.
Nah, we've already had multiple cases of that: one with a lawyer at a big corp, and some others. The story is never straight-up 'the AI said so' but more like: 'we use different online and offline tools to aid us in our work, sometimes the results are less than satisfactory, and we try to correct those cases'. It is the same response, just showing vulnerability: we are only human, even with our tools.
I think what you're saying is a bit idealistic. We like to think that people get terminated for incompetence but the reality is more complicated than that
I suspect people get away with saying "I don't know why that didn't work, I did what the computer told me to do" a lot more frequently than they get fired for it. "I did what the AI said" will be the natural extension of this
Depending on what language you use and what domain your problem is in, current AIs can vary widely in how useful they are.
I was amazed at how great ChatGPT and DeepSeek and Claude etc are at quickly throwing together some small visualisations in Python or JavaScript. But they struggled a lot with Bird-style literate Haskell. (And just that specific style. Plain Haskell code was much less of a problem.)
Because there are plenty of devs who take the output, actually read if it makes sense, do a code review, iterate back and forth with it a few times, and then finally check in the result. It's just a tool. Shitty devs will make shitty code regardless of the tool. And good devs are insanely productive with good tools. In your example what's the difference with that dev just copy/pasting from StackOverflow?
Because on SO someone real wrote it, with a real reason and logic behind it. With AI we still need to double-check that what we are given makes any sense. And SO also has votes to show the validity.
I agree if devs iterated over the results it could be good, but that has not been what I have been seeing.
It is not a traditional tool, because the tools we had in the past produced expected results.
To go off on a tangent: yes, good developers can produce good code faster. And bad developers can produce bad code faster (and perhaps slightly better than before, because the models are mostly a bit smarter than copy-and-paste is).
Everyone potentially benefits, but it won't suddenly turn all bad programmers into great programmers.
In my experience the lack of correctness and accuracy (I have seen a lot of calls to hallucinated APIs) is made up for by the "eagerness" to fill out boilerplate.
It's indeed like having a super junior, drunk intern working for you, if we're running with that analogy.
Some work is done but you have to go over it and fix a bunch of things.
I don't follow this take. ChatGPT outputted a bug subtle enough to be overlooked by you and your colleague and your test suite, and that means it's not ready for prime time?
The day when generative AI might hope to completely handle a coding task isn't here yet - it doesn't know your full requirements, it can't run your integration tests, etc. For now it's a tool, like a linter or a debugger - useful sometimes and not useful other times, but the responsibility to keep bugs out of prod still rests with the coder, not the tools.
Yes and this means it doesn't replace anyone or make someone who isn't able to code able to code. It just means it's a tool for people who already know how to code.
> an overconfident 25-year-old that just downed three lunch beers with a wet burrito at the Mexican place down the street from the office on a rainy Wednesday
The LLMs are much more eager to please and to write lots of code. When I was younger, I would get distracted and play computer games (or comment on HN..), rather than churn out mountains of mediocre code all the time.
> The LLMs are much more eager to please and to write lots of code.
My process right now when working with LLMs is to do the following:
- Create problem and solution statement
- Create requirements and user stories
- Create architecture
- Create skeleton code
- Map the skeleton code
- Create the full code
At every step, where I don't need the full code, the LLM will start coding and I need to stop it and add "Do not generate code. The focus is still design".
One of my biggest issues with the LLM is how it always wants to give me a mountain of code. A lot of the time I'm using it for React, and it always gives me a full component no matter how much I specify that I just want the method. It will not remember this for more than one message and will go back to giving me as much code as possible.
Yeah, it almost feels like you are talking to somebody with OCD. The frustrating part is, output tokens are usually a lot more expensive than input tokens, so they are wasting energy and money :-). Also, the more they generate, the greater the chance it will create attention issues as the conversation progresses.
This is why I built my chat app to let me manipulate LLM responses. If I feel it is not worth knowing, I'll just erase parts of it to ensure the conversation doesn't get side tracked. Or I will go back to the original user message and modify it to say
### IMPORTANT
- Do not generate more code than required.
The nice thing about LLM conversations are, every time you chat, the LLM treats it as a first time conversation, so this trick will work if the model is smart enough.
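A minimal sketch of that trick (assuming an OpenAI-style chat-completions API; the message contents are invented and the final client call is left commented out): because each request is stateless, you can edit or erase earlier turns before the next call, and the model simply treats the edited history as the whole conversation.

```python
messages = [
    {"role": "user", "content": "Design the session-validation module. Do not generate code yet."},
    {"role": "assistant", "content": "<long reply that drifted into a wall of code>"},
]

# Erase the parts not worth keeping so they can't sidetrack later turns...
messages[1]["content"] = "<short summary of the design points only>"

# ...and/or edit the next user turn to carry a standing instruction.
messages.append({
    "role": "user",
    "content": "### IMPORTANT\n- Do not generate more code than required.\n\nNow refine the architecture.",
})

# response = client.chat.completions.create(model="gpt-4o", messages=messages)  # hypothetical client call
```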
Is that the LLM's fault, or SQLAlchemy's for having that API in the first place? Or was that a gap in your testing strategy, given that (if I'm reading it right) flush() doesn't persist anything to the database and is only intended as an intermediate step (and commit() calls flush() under the hood)?
I think we're in a period similar to self-driving cars, where the LLMs are pretty good, but not perfect; it's those last few percent that break it.
> At a previous job I made the mistake of letting it write some repository methods that leveraged SQLAlchemy. Even though I (along with my colleague via PR) reviewed the generated code we ended up with a preprod bug because the LLM used session.flush() instead of session.commit() in exactly one spot for no apparent reason.
I've had ChatGPT do the same thing with code involving SQLAlchemy.
You can't tell us that LLM's aren't ready for prime time in 2025 after you tried Copilot twice last year.
New better models are coming out almost daily now and it's almost common knowledge that Copilot was and is one of the worst. Especially right now, it doesn't even come close to what better models have to offer.
Also the way to use them is to ask for small chunks of code or questions about code after you gave them tons of context (like in Claude projects for example).
"Not ready for prime time" is also just factually incorrect. It is already being used A LOT. To the point that there are rumors that Cursor is buying so much compute from Anthropic that they are making their product unstable, because nvidia can't supply them hardware fast enough.
I stopped using AI for code a little over a year ago and at that point I'd used Copilot for 8-12 months. I tried Cursor out a couple of weeks ago for very small autocomplete snippets and it was about the same or slightly worse than Copilot, in my opinion.
The integration with the editor was neat, but the quality of the suggestions was no different from what I'd had with Copilot much earlier, and the pathological cases where it just spun off into some useless corner of its behavior (recommending code that was already in the very same file, recommending code that didn't make any sense, etc.) seemed to happen more than with Copilot.
This was a ridiculously simple project for it to work on, to be clear, just a parser for a language I started working on, and the general structure was already there for it to work with when I started trying Cursor out. From prior experience I know the base is pretty easy to work with for people who aren't even familiar with it (or even parsing in general), so I think given the difficulties that Cursor had even putting together pretty basic things it might be that a user of Cursor would see minimal gains in velocity and end up having less understanding in the medium to long term, at least in this particular case.
I tried it with Claude Sonnet 3.5 or whatever the name is, both tab-completed snippets and chat (to see what the workflow was like and to see if it gave access to something special).
The claim of the comment I replied to was "LLM's are not ready for prime time" and my opinion is that LLM prime time is already here. Using LLM's to code (or to learn how to code) is obviously super popular.
Who's talking about quality anyway?
Code quality is not and was never the number one most important thing in business, popularity of a product/service or keeping a job. You may find that unfortunate (I agree), but it's just how it is based on my own 15yr+ experience.
I imagine most of the things that would be good uses for seniors in AI aren't great uses for a coding interview anyway.
"Oh, I don't remember how to do parameterized testing in junit, okay, I'll just copy-paste like crazy, or make a big for-loop in this single test case"
"Oh, I don't remember the API call for this one thing, okay, I'll just chat with the interviewer, maybe they remember - or I'll just say 'this function does this' and the interviewer and I will just agree that it does that".
Things more complicated than that that need exact answers shouldn't exist in an interview.
> Things more complicated than that that need exact answers shouldn't exist in an interview.
Agreed, testing for arcane knowledge is pointless in a world where information lookup is instant, and we now have AI librarians at our fingertips.
Critical thinking, capacity to ingest and process new information, fast logic processing, software fundamentals and ability to communicate are attributes I would test for.
An exception though is proving their claimed experience, you can usually tease that out with specifics about the tools.
We do the same thing. It's perfectly fine for candidates to use AI-assistive tooling provided that they can edit/maintain the code and not just sit in a prompt the whole time. The heavier a candidate relies on LLMs, the worse they often do. It really comes down to discipline.
To me it's the lack of skill. If the LLM spits out junk you should be able to tell. ChatGPT-based interviews could work just as well to determine the ability to understand, review and fix code effectively.
>> If the LLM spits out junk you should be able to tell.
Reading existing code and ensuring correctness is way harder than writing it yourself. How would someone who can't do it in the first place tell if it was incorrect?
Make the model write annotated tests too, verify that the annotations plausibly could match the test code, run the tests, feed the failures back in, and iterate until all tests are green?
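A rough sketch of that loop (ask_llm is a hypothetical stand-in for whatever model call you use, and the test file name is illustrative): write the tests the model produces to disk, run them, and feed the failure output back until the suite is green or you give up.

```python
import subprocess
from pathlib import Path
from typing import Callable


def iterate_until_green(ask_llm: Callable[[str], str], prompt: str, max_rounds: int = 5) -> bool:
    """Ask the model for tests, run them, and loop the failures back until they pass."""
    for _ in range(max_rounds):
        Path("test_generated.py").write_text(ask_llm(prompt))
        result = subprocess.run(
            ["pytest", "test_generated.py", "-q"],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return True  # all tests passed
        # Append the failure output so the next round can address it.
        prompt += "\n\nThe tests failed with:\n" + result.stdout + result.stderr
    return False
```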
This has been my experience as well. The ones that have most heavily relied on GPT not only didn't really know what to ask, but couldn't reason about the outputs at all since it was frequently new information to them. Good candidates use it like a search engine - filling known gaps.
Yea I agree. I don't rely on the AI to generate code for me, I just use it as a glorified search engine. Sure I do some copypasta from time to time, but it almost always needs modification to work correctly... Man does AI get stuff wrong sometimes lol
I can't really imagine it being useful in a way where it writes the logical part of the code for you. If you are not being lousy, you still need to think about all the edge cases when it generates the code, which seems harder to me.
> you are not being lousy you still need to think about all the edge cases
This is honestly where I believe LLMs can really shine. I think we like to believe the problems we are solving are unique, but I strongly believe most of us are solving problems that have already been solved. What I've found is, if you provide the LLM with enough information, it will surface edge cases that you haven't thought of and implement logic in your code that you haven't thought of.
By chatting with the LLM, I created four user stories that I never thought of to improve user experience and security. I don't necessarily think it is about knowing the edge cases, but rather it is about knowing how to describe the problem and your solution. If you can do that, LLMs can really help you surface edge cases and help you write logic.
Obviously what I am working on is really not novel, but I think a lot of the stuff we are doing isn't that novel, if we can properly break it down.
So for interviews that allow LLMs, I would honestly spend 5 minutes chatting with it to create a problem and solution statement. I'm sure if you can properly articulate the problem, the LLM can help you devise a solution and a plan.
I like that you're open-minded enough to allow candidates to be who they are, and that you judge them on the outcome rather than using a prescribed, rigid method to evaluate them. I'm not looking to interview right now, but I'd feel very comfortable interviewing with someone like you; I'd very likely give my best in such an interview. I'd probably choose not to use an LLM during the interview unless I wanted to show how I brainstormed a solution.
Same thing here. The interview is basically representative of what we do, but it also depends on the level of seniority. I ask people just to share their screen with me and use whatever they want / feel comfortable with. Google, ChatGPT, call your mom, I don't care, as long as you walk me through how you're approaching the thing at hand. We've all googled tar xvcxfgzxfzcsadc, what's that permission for a .pem, is it 400, etc. No shame in anything, and we all use all of these things throughout the day. Let's simulate a small task at hand and see where we end up. Similarly, there is a bias where people leaning more on LLMs do worse than those just googling or, gasp, opening the documentation.
Yes, current Google search is somehow worse than it was around COVID or before. Using ChatGPT as a search engine can save time sometimes, and if you're somewhat knowledgeable, you can pinpoint the key info and cross-check it with a Google search.
What does effective use look like? I have attempted messing around with a couple of options, but was always disappointed with the output. How do you properly present a problem to a LLM? Requiring an ongoing conversation feels like a tech priest praying to the machine spirit.
Candidates generally use it in one of two ways: either as an advanced autocomplete or like a search engine.
They'll type in things like, "C# read JSON from file"
As opposed to something like:
> I'm working on a software system where ... are represented as arrays of JSON objects with the following properties:
> ...
> I have a file called ... that contains an array of these objects ...
> I have ... installed locally with a new project open and I want to ... How can I do this?
No current LLMs can solve any of the problems we give them so pasting in the raw prompt won't be helpful. But the set up deliberately encourages them to do some basic scaffolding, reading in a file, creating corresponding classes, etc. that an LLM can bang out in about 30 seconds but I've seen candidates spend 30 minutes+ writing it all out themselves instead of just getting the LLM to do it for them.
GitHub Copilot Edit can do the second version of this. It is pretty good at it too. It sometimes gets things wrong, but for your average code (and candidates typing in "C# read JSON from file" are way below average, unless they've never written C#), if you give it all the files for a specific self-contained part of the program, it can extend/modify/test/etc. it impressively well for an LLM.
The difference compared to where we were just 1-2 years ago is staggering.
High level - having a discussion with the LLM about different approaches and the tradeoffs between each
Low level - I'll write up the structure of what I want in the form of a set of functions with defined inputs and outputs but without the implementation detail. If I care about any specifics of the functions I'll throw some comments in there. And sometimes I'll define the data structures in advance as well.
Once all this is set up it often spits out something that compiles and works first try. And all the context is established so iteration from that point becomes easier.
> High level - having a discussion with the LLM about different approaches and the tradeoffs between each
I honestly can't imagine this. If the AI says "However, a downside of approach B is that it takes O(n^2) time instead of the optimal O(nlog(n))", what do you think the odds are that it literally made up both of those facts? Because I'd be surprised if they were any lower than 30%. It's an extremely confident bullshitter, and you're going to use it to talk about engineering tradeoffs!?
> Once all this is set up it often spits out something that compiles and works first try
I'm sorry, but I'm *extremely* doubtful that it actually works in any real sense. The fact that you even use "compiles and works first try" as some sort of metric for the code it's producing shows how easily it could slip in awful braindead bugs without you ever knowing. You run it and it appears to work!? The way to know whether something works -- not first try, but every try -- is to understand every character in the code. If that is your standard -- and it must be -- then isn't the AI just slowing you down?
> I honestly can't imagine this. If the AI says "However, a downside of approach B is that it takes O(n^2) time instead of the optimal O(nlog(n))", what do you think the odds are that it literally made up both of those facts? Because I'd be surprised if they were any lower than 30%. It's an extremely confident bullshitter, and you're going to use it to talk about engineering tradeoffs!?
Being confidently incorrect is not a unique characteristic of AIs, plenty of humans do it too. Being able to spot the bullshit is a core part of the job. If you can't spot the bullshit from AI, I wouldn't trust you to spot the bullshit from a coworker.
When I was last interviewing people (several years ago now), I’d let them use the internet to help them on anything hands on. I was astounded by how bad some people were at using a search engine. Some people wouldn’t even make an attempt.
My company, a very very large company, is transitioning back to only in-person interviews due to the rampant amount of cheating happening during interviews.
As an interviewer, it's wild to me how many candidates think they can get away with it, when you can very obviously hear them typing, then watching their eyes move as they read an answer from another screen. And the majority of the time the answer is incorrect anyway. I'm happy that we won't have to waste our time on those candidates anymore.
So far 3 of the 11 people we interviewed have been clearly using ChatGPT for the >>behavioral<< part of the interview (like, just chatting about background, answering questions about their experience). I find that absolutely insane, if you cannot hold a basic conversation about your life without using AI then something is terribly wrong.
We actually allow using AI in our in-person technical interviews, but our questions are worded to fail safety checks. We'll talk about smuggling nuclear weapons, violent uprising, staging a coup, manufacturing fentanyl, etc. (within the context of system design) and that gives us really good mileage on weeding out those who are just transcribing what we say into AI and reading the response.
> I find that absolutely insane, if you cannot hold a basic conversation about your life without using AI then something is terribly wrong.
I'm genuinely curious what questions you ask during the behavioral interview. Most companies ask questions like "recall a time when..." and I know people who struggle with these kinds of questions despite being good teammates, either because they find it difficult to explain the situation or due to stress. And the recruitment process is not a "basic conversation" — as a recruiter you're in a far more comfortable position. I find it hard to believe anyone would use an LLM if you ask them a question like "what were your responsibilities in your last role", and I do see how they might've primed the chat to help them communicate an answer to a question like "tell me about a situation when you had a conflict with your manager".
We usually just ask them to share their background, like the typical background exchange handshake at the beginning of any external call.
That normally prompts some follow-ups about specific work, specific projects, whether they know so-and-so at their old company. I call it behavioral because I don't have another word for it, but it's not brainteasers etc. like consulting/finance interviews.
I think you (your company) and many other commenters here are just trying too hard.
I recently led several interview rounds for a software engineering role and we have not had any issue with LLM use. What we do for the technical interview part is very simple: a live whiteboarding design task where we try to identify what the candidate's focus is, and we might pivot at any time or dig deeper into particular topics. Sometimes we will even go as detailed as talking about particular algorithms the candidate would use.
In general, I found that this type of interview is the most fun for both sides. The candidates don't feel pressure that they must do the only right thing as there is a lot of room for improvisation; the interviewers don't get bored with repetitive interviews over and over as new candidates come by with different perspectives. Also, there is no room for LLM use because the candidate has to be involved in drawing on the whiteboard and showing their technical presentation skills, which are very important for developers.
Unfortunately, we've noticed candidates who are on another call, with their screen shared to someone else who can hear both the interviewer and the candidate and who feeds back responses from ChatGPT.
I saw a pretty impressive cheat tool that could apparently grab the screen from the live share in response to an obscure keybind, run the text through OCR, and then solve the problem (or just look up a LC solution).
At that point it seems like trying too hard, but be aware there are theoretical approaches which are extremely hard to detect (the inevitable evolution of sticky notes on the desk, or wall behind the monitor).
> if you cannot hold a basic conversation about your life without using AI then something is terribly wrong.
I wouldn’t be surprised if the effect of Google Docs and Gmail forcing full AI, is a generation of people who can’t even talk about themselves, and can’t articulate even a single email.
Is it necessary? Perhaps. Will it make the world boring? Yes.
what actually happens to the interviewee? Do they suddenly go blank when they realise the LLM has replied "I'm sorry I cannot assist you with this", or they try to make something up?
Yeah pretty much, they either go silent for 2-3 minutes or leave the call and claim their internet has cut out and need to reschedule.
Just one time someone got mad and yelled at the interviewer about nothing specific, just stuff like I’m not who you are looking for, you will never find anybody to hire.
So depressing to hear that “because of rampant cheating”
As a person looking for a job, I’m really not sure what to do. If people are lying on their resumes and cheating in interviews, it feels like there’s nothing I can do except do the same. Otherwise I’ll remain jobless.
Here's the thing: 95% of cheaters still suck, even when cheating. It's hard to imagine how people can perform so badly while cheating, yet they consistently do. All you need to do to stand out is not be utterly awful. Worrying about what other people are doing is more detrimental to your performance than anything else is. Just focus on yourself: being broadly competent, knowing your niche well, and being good at communicating how you learn when you hit the edges of your knowledge. Those are the skills that always stand out.
Yea, but I also suck in 95% of FAANG-like interviews since I'm very bad at leetcode medium/hard type questions. It's just something that I never practiced. It's very tempting at this point to throw in the towel and just use some aid. No one cares about my intense career and the millions I helped my clients earn; all that matters (and sometimes directly affects comp rate) is how I do on the "coding task".
> I suck in FAANG interviews... it's just something I never practiced.
Well, sounds like you know the solution. Or set your sights on a job that interviews a different way.
I think it's mostly leetcode "easy", anyway. Maybe some medium. Never seen a hard, except maybe from one smartass at Google (they were not expecting a perfect answer). Out of a dozen technical interviews, I don't think I've ever needed to know a data structure more exotic than a hash map or binary search tree.
The amount of deliberate practice required to stand out is probably not more than 10-20 hours, assuming you do actually have the programming and CS skills expected for a FAANG job. It's unlikely you need to do months of grinding.
If 20 hours of work was all that stood between me and half a million dollars a year, I'd consider myself pretty lucky.
On the other hand, if 20 hours of leetcode practice is all that stands between you and half a million dollars a year, isn't that a pretty good indicator that the interview process isn't hiring based on your skills, talent and education, and instead on something you basically won't encounter in the workplace?
10-20 hours is assuming you’re qualified for the job and just bad at leetcode. I think many qualified people could pass without studying, especially if they’re experienced in presenting or teaching.
If you’re totally unqualified, 20 hours of leetcoding won’t get you a job at Meta.
Right. Almost any time somebody fails an interview it is not because of "very hard questions" but because they did not prepare properly in a sensible manner. People don't want whiteboarding, no programming questions, no mathematical questions, no fermi problems etc. which is plain silly and not realistic. One just needs to know the basics and simple applications of the above which is more than enough to get through most interviews. The key is not to feel overawed/overwhelmed with unknown notations/jargons which is what the actual problem is when people run away from big-O, DS/Algo, Recursion, application of Set Theory/Logic to Programming etc.
I don't approve of cheating but I think you're underestimating how hard some interview questions can be. Even competent people don't know everything and could draw a blank, in which case they would benefit from cheating despite being competent.
Not just difficult, but there's just so many of them (for the same company ofc). You could ace 3 interviews and not even be half way through the process. You have to be continually on top form for days/weeks on end.
A lot of these people also have a policy that even one person can fail you. So if you do 8 interviews with 2 people each, then there are up to 16 people in the process who can ruin it for you.
I think LLM performance on previously seen questions like interview questions is too good for it to be allowed. I wouldn't mind someone using an IDE or API docs, but you have to draw the line somewhere. It's like how you can't use a calculator that can do algebra on a calculus test. It just doesn't accomplish the goal of testing anything if you use too much tech, and using the current LLMs that all suck in general but can nail these questions that have a million examples online is bad. I would much rather see someone consult online references for information than to see them use an LLM.
Kids at my tiny high school football team did steroids to get an edge - no chance at a scholarship, either.
Different people have a different threshold for cheating no matter the stakes. I imagine some people cheat even if they know the answer - just to be sure.
It's much more widespread. Minor league player uses PEDs to make the major leagues. Middling major leaguer uses them to be an all-star. All-star uses them to make the hall of fame. In the context of programming, if some kind of cheating is what's necessary to nab a $150k job, a whole lot of people are going to cheat.
Yeah, we found this when we started doing take-home exams: it turns out that a junior dev who spends twice as much time on the problem as we asked them to doesn't put out senior-level code - we could read the skill level in the code almost instantly. Same thing with cheating like that - it turns out knowing the answer isn't the same thing as having experience, and it's pretty obvious pretty quickly which one you're dealing with.
I don't know, I kind of feel like leetcode interviews are a situation where the employer is cheating. I mean, you're admittedly filtering out a great number of acceptable candidates knowing that if you just find 1 in a 1000, that'll be good enough. It is patently unfair to the individuals that are smart enough to do your work, but poor at some farcical representation of the work. That is cheating.
In my opinion, if a prospective employee is able to successfully use AI to trick me into hiring them, then that is a hell of a lot closer to the actual work they'll be hired to do (compared to leetcode).
I say, if you can cheat at an interview with AI, do it.
I dunno why there is always the assumption in these threads that leetcode is being used. My company has never used leetcode-style questions, and likely never will.
I work in security, and our questions are pretty basic stuff. "What is cross-site scripting, and how would you protect against it?", "You're tasked with parsing a log file to return the IP addresses that appear at least 10 times, how would you approach this?" Stuff like that. And then a follow-up or two customized to the candidate's response.
I really don't know how we could possibly make it easier for candidates to pass these interviews. We aren't trying to trick people, or weed people out. We're trying to find people that have the foundational experience required to do the job they're being hired for. Even when people do answer them incorrectly, we try to help them out and give them guidance, because it's really about trying to evaluate how a person thinks rather than making sure they get the right answer.
I mean hell, it's not like I'm spending hours interviewing people because I get my rocks off by asking people lame questions or rejecting people; I want to hire people! I will go out of my way to advocate for hiring someone that's honest and upfront about being incorrect or not knowing an answer, but wants to think through it with me.
But cheating? That's a show stopper. If you've been asked to not use ChatGPT, but you use it anyway, you're not getting the benefit of the doubt. You're getting rejected and blacklisted.
>I dunno why there is always the assumption in these threads that leetcode is being used
because it matches my experience. I work in games and interviews are more varied (math, engine/language questions, game design questions, software design patterns). I'd still say maybe 30% of them do leetcode interviews, and another 40% bring in leetcode questions at some point. I hate it because I need to study too many other types of questions to begin with, and leetcode is the least applicable.
> "You're tasked with parsing a log file to return the IP addresses that appear at least 10 times, how would you approach this?"
Out of curiosity, did anyone just reply with `awk ... | sort | count ... | awk`? It's certainly what I would do rather than writing out an actual script.
Nobody has yet, but if they did I'd probably be ecstatic! We specifically tell candidates they can use any language they want. A combination of awk/sort/sed/count/etc is just as effective as a Python script!
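For what it's worth, the Python version of an acceptable answer is only a few lines. This is just an illustrative sketch; the file name and the assumption that the IP is the first whitespace-separated field are made up:

```python
from collections import Counter

counts = Counter()
with open("access.log") as log:           # assumed log file name
    for line in log:
        if line.strip():
            counts[line.split()[0]] += 1  # assumes the IP is the first field

for ip, n in counts.items():
    if n >= 10:
        print(ip)
```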
I once got a surprise leetcode coding interview for a security testing role that mentioned proficiency in a coding language or two as desirable but not essential.
I come from a math background rather than CS and code for fun / personal projects, so don't know the 'proper' names for some algorithms from memory. I could have done some leetcode prep / revision if I had any indication that it was coming up, though the interview was pretty much a waste of time. I told them that and made a stab at it, though they didn't seem interested in engaging at all and barely made eye contact during the whole interview.
> The employer sets the terms of the interview. If you don’t like them, don’t apply.
What you're missing here is that this is an individual's answer to a systemic problem. You don't apply when it's _one_ obnoxious employer.
When it's standard practice across the entire industry, we have a problem.
> submitting a fraudulent resume because you disagree with the required qualifications.
This is already worryingly common practice because employers lie about the required qualifications.
Honesty gets your resume shredded before a human even looked at it. And employers refusing to address that situation is just making everything worse and worse.
You make a valid point that while the rules of the game are known ahead of time, it’s strange that the entire industry is stuck in this local maximum of LeetCode interviews. Big companies are comfortable with the status quo, and small companies just don’t have the budget to experiment with anything else (maybe with some one-offs).
Sadly, it’s not just the interview loops—the way candidates are screened for roles also sucks.
I’ve seen startups trying to innovate in this space for many years now, and it’s surprising that absolutely nothing has changed.
>I’ve seen startups trying to innovate in this space for many years now, and it’s surprising that absolutely nothing has changed.
I don't want to be too crass, but I'm not surprised: people who can start up a business are precisely the ones who hyper-fixate on efficiency when hiring and try to find the best coders instead of the best engineers. When you need to put your money where your mouth is, many will squirm back to "what works".
Or he can simply choose to ignore the arbitrary and often pointless requirements, do the interview on his own terms, and still perform excellently. Many job requirements are nothing more than a pointless power trip from employers who think they have more leverage than they actually do.
You're absolutely right. Ditching the pointless corporate hoops, proving you can do the job, and getting paid like anyone else is what truly matters. Most hiring processes are just bureaucratic roadblocks that needlessly filter out great candidates. Unless you're working on something truly critical, there's no reason to play along with the nonsense.
> Wanting to be paid under false pretenses is the definition of fraud.
What? No, it isn't.
Regardless, if the job requirements state "X years of XYZ experience" and you do have X or more years of experience, then using AI to look up how to do a leetcode problem for some algorithm you haven't used since your university days is absolutely not "false pretenses", nor fraud.
> What do I care about the terms of the interview as long as they hire me?
well that's the neat part... they aren't going to. All this AI stuff just happened to coincide with a recession no one wants to admit, amplifying the issue.
So yea, even if I'm desperate I need to be mindful of my time. I can only do so many 4-5 stage interviews only to be ghosted, have the job close, or someone else who applied earlier get the position.
If you lie about your qualifications to a degree that can be considered fraud, employers can and will sue you for their money back and damages. Wait till you discover how mind-numbing the American legal system is!
Nonsense. I don't endorse lying about qualifications, but employers don't sue over this. Employment law in most US states wouldn't even allow for that with regular W-2 employees.
If a candidate were up front with me and asked if they could use AI, or said they learned an answer from AI and then wanted to discuss it with me, I'd be happy with that. But attempting to hide it and pretend they aren't using it when our interview rules specifically ask you not to do it is just being dishonest, which isn't a characteristic of someone I want to hire.
On principle, what you’re saying has merit. In practice, the market is currently rife with employers submitting job postings with inflated qualifications, for positions that may or may not exist. So there’s bad actors all around and it’s difficult to tell who actually is behaving with integrity.
Due to the prevalence of the practice this is tantamount to suggesting constructive unemployability.
People were up in arms about widespread doping during the Lance Armstrong era. But the only viable alternative to doping at the time was literally to not compete at all.
I wouldn't call it cheating, but most of the time it's just stupid. For the majority of software developer jobs, it would be more suitable to discuss the solution to a more complex problem than to randomly stress people out just because you think you should.
> It is patently unfair to the individuals that are smart enough to do your work, but poor at some farcical representation of the work. That is cheating.
On the other hand, if you have 1,000 candidates, and you only need 1, why not do it if the top candidate selected by this method can do well on the test and your work?
It's unfair, but it meets their objective of finding a high-quality candidate. Google admits they do this.
The companies that do this only do it because they can. They have to have hundreds of people applying. The companies that don’t do this basically don’t have many people applying.
> it feels like there’s nothing I can do except do the same.
Why does it feel like that when you’re replying to someone who already points out that it doesn’t work? Cheating can prevent you from getting a job, and it can get you fired from the job too. It can also impede your ability to learn and level up your own skills. I’m glad you haven’t done it yet, just know that you can be a better candidate and increase your chances by not cheating.
Using an LLM isn’t cheating if the interviewer allows it. Whether they allow it or not, there’s still no substitute for putting in the work. Interviews are a skill that can (and should) be practiced. Candidates are rarely hired for technical skill alone. Attitude, communication, curiosity, and lots of other soft skills are severely underestimated by so many job seekers, especially those coming right out of school. A small amount of strengthening your non-code abilities can improve your odds much faster than leetcode ever will. And if you have time, why not do both?
Note also "And the majority of the time the answer is incorrect anyway."
I haven't looked for development-related jobs this millennium, but it's unclear to me how effective a crutch AI is for interviews--at least for well-designed and run interviews. Maybe in some narrow domains for junior people.
As a few of us have written elsewhere, I consider not having in-person interviews past an initial screen sheer laziness and companies generally deserve whoever they end up with.
> it feels like there’s nothing I can do except do the same. Otherwise I’ll remain jobless.
Never buy into this mentality. Because once you do, it never goes away. After the interview, your coworkers might cheat, so you cheat too. Then your business competitors might cheat, so you cheat too. And on and on.
Sounds cheesy, but keep being honest. Eventually companies will realize (as we did years ago) that automating recruiting gets you automated candidates.
But YMMV. I have 9 years of experience and can still get interviews the old fashioned way.
When I was interviewing entry level programmers at my last job, we gave them an assignment that should only take a few hours, but we basically didn't care about the code at all.
Instead, we were looking to see if they followed instructions, and if they left anything out.
I never had a chance to test it out, since we hadn't hired anyone new in so long, but ChatGPT/etc would almost always fail this exam because of how bad it is at making sure everything was included.
And bad programmers also failed it. It always left us with a few candidates that paid attention, and from there we figure if they can do that, they can learn the rest. It seemed to work quite well.
I was recently laid off from that company, and now I'm realizing that I really want to see what current-day candidates would turn in. Oh well.
For those tests I never follow the rules; I just make something quick and dirty because I refuse to spend unpaid hours. In the interview the first question is why I didn't follow the instructions, and they think my reason is fair.
Companies seem to think that we program just for fun and ask us to make a full-blown app... also underestimating the time candidates actually spend making it.
I’ve screened a lot of resumes and given a lot of interviews over the years, and it’s usually obvious when people are trying the scattershot approach, they just don’t match. I feel like treating it like a quantity game is unlikely to improve your odds, and tbh spamming out hundreds or thousands of applications sounds like a miserable way to spend time. You could spend that time meeting and talking to people. I’ve never applied to more than 2 jobs at once, jobs that I actually want, and never had trouble getting at least one of them (and it still takes time and effort and some coding and interviews).
Maybe not at the resume screening phase, but it’s usually still obvious once the interviews start when people aren’t interested in your specific company. Some people get lucky, sure, but the downside is that you have to get lucky, it’s wasting valuable time on low probability events. If you’re familiar with the statistical process of importance sampling, in my experience on both sides of the interview table, it’s effective and worthwhile to spend more time curating higher quality samples than to scatter and hope.
>but it’s usually still obvious once the interviews start when people aren’t interested in your specific company.
Can you really blame them? If you're not a household name, why would you expect someone to spend hours researching your specific company?
On the other hand, it can come off as creepy if you're a small company and suddenly someone nerds out about how your CEO said this one thing at a talk years ago and knows your lead has cancer based on his personal blog. I'd rather just treat it as a transaction of my skills and services for money. We are not a family (multiple layoffs have taught me so).
> it’s effective and worthwhile to spend more time curating higher quality samples than to scatter and hope.
Not in this market. Too many ghost jobs, too many people ghosting after multiple rounds. Too many hiring freezes when you spend a month talking with a company. If you want respect from candidates, don't disrespect them.
Naw I don’t blame them. I’m not suggesting anyone spend hours researching each company. And I don’t expect candidates to do anything, I’m saying the candidates who do are the ones that tend to land the job, but it’s entirely the candidate’s choice. All it takes is minutes, really.
You sound like you’ve been burned. That sucks and I’m sorry, I sympathize. I’m hearing that the job market is very tough right now. A big part of that is because it’s extremely competitive. Taking it personally and assuming it’s disrespect isn’t going to help get the job though (even if there was disrespect… but that’s not the only explanation, so it’s a dangerous assumption).
>I’m saying the candidates who do are the ones that tend to land the job, but it’s entirely the candidate’s choice. All it takes is minutes, really.
Well, everyone has different experiences. I never felt like knowing about a company put me ahead in my early days. I guess I have a dump stat in Charisma (not surprised).
Like you said, the market is competitive. No one's going to take the nice guy over the one who blitzes the interview unless that nice guy has connections. Those few minutes across thousands of applications add up to days of research. I just lack that time and energy these days.
>You sound like you’ve been burned. That sucks and I’m sorry, I sympathize.
several times, yes. It's honestly worse than my first job search out of college 10 years ago.
>Taking it personally and assuming it’s disrespect isn’t going to help get the job though
I only ask for basic decency. Keep a candidate in the loop, don't drag the process on for the sake of it, any take home should warrant a response (even if it's a template rejection letter). i.e. respect people's time.
I haven't been burned in a lot of my interviews, I'm not talking about bummers like the several times I was interviewing before a hiring freeze. I don't even treat non-responses as an interview process. But several of them just end with absolutely no communication nor closure after speaking for weeks with recruiters and hiring managers.
I don't know what to call that in a day and age where AI is supposedly increasing efficiency, other than disrespect. This had never happened to me before 2023, which makes the times all the weirder.
My experience is that I've applied to companies where I was a perfect fit but did not get an interview, and then I've applied to companies where I had not used any of their tech stack and still got an interview. There are a lot of weird reasons; one common one is that they want to hire a specific person, maybe someone already in the company, but they still need to post a job ad due to company policy. At one company I got an interview even though I had no experience in their tech stack; they explained they needed to hold at least 5 interviews before they could hire, and they had already found their guy, so they interviewed other, non-qualified people so that their candidate/friend would stand out as the most qualified.
So never take hiring personally. It's just random. Do enough work to get an interview; many employers are very good at judging whether you will fit in or not, so leave that for them to figure out, and be yourself. And don't take it personally when you get rejected. There's still a shortage of experienced software engineers, and lots of jobs to apply to.
Also, if you get a bad feeling, just back out. It's when you've started turning down offers that you have become good enough at searching/interviewing, and that's when you will find something great. Try to have at least 3 offers before you accept one.
You don’t understand reality. If all companies have 1000 candidates your only approach is scattershot.
The only time the bespoke approach works is if you have like 30 candidates only. But then there are still issues here because the candidate is still one in thirty so if he does a bespoke approach 30 times it takes an inordinate amount of time.
Got any evidence to share? It’s simply not true that “all” companies get a thousand applicants, whether you mean per job or total. Startups aren’t inundated with applicants. Neither are schools or hospitals or most web design shops or hundreds of other non-tech places that employ programmers. Some of the biggest tech names do get a lot of applicants, sometimes, for certain jobs, but I suspect you’re probably ignoring the majority of non-FAANG type businesses. Kids are definitely disproportionately aiming for the jobs that they’ve heard stories about paying really well, like AI and Apple, Facebook, Nvidia, etc. Those jobs can be super competitive, and they generally just don’t hire from bootcamps. Spamming entry level bootcamp resumes at big tech companies isn’t going to improve anyone’s odds much or at all, but whatever you don’t have to take my word for it.
The industry (all industries, really) might want to reconsider online applications, or at least privilege in-person resume drop-offs, because the escalating AI application/evaluation war doesn't seem to be helping anyone.
This is a very strange statement. In what world did AI possibly shift power to the applicant?? Applicants have almost never been in a shittier position than they are now, and things are getting much, much worse by the day.
Yeah, I don't get this either. I've been looking for a job for like 3-4 years, even an entry-level one, since I graduated college in May of '22, and I still haven't found one. I'm probably doing something wrong (and that's a different discussion), but it's getting harder and harder to know if it's me or the AI applicants or the AI ATS system. And then we have the AI job seekers, which are AI-created accounts trying to find employment -- I've already started to see a few of these pop up on LinkedIn. They were banned, but still, the fact it's happening at all is a bit worrying, if not predictable.
The applicant can use AI to build CVs, tailor them and submit more, faster. Negating a lot of the automated algorithms that were being used to filter (torture) applicants.
How so? Tons of companies are moving to AI automated intake systems because they're getting flooded with low-quality AI-generated resumes. Of course, the original online application systems were terrible already, which is what encouraged people towards low effort in their applications, so it's become a stalemate.
Did it? What I see instead is total mistrust of the open resume pool, because the percentage of outright lies, from resume to behavioral to everything else, is just that high. So I see companies throwing up their hands and going back to maximum prioritization of in-network candidates, where someone vouches that the candidate is not a total waste of everyone's time.
The one who loses all power is the new junior straight out of school, who was already difficult to distinguish from many other candidates with similar resumes: now they compete with thousands upon thousands of total fakes who claim more experience anyway.
Undergrads have many more opportunities to differentiate themselves than they realize. It could be internships, research, TA, clubs, sports, volunteering, Greek life, etc. Those put them closer to being "in-network" with certain organizations and people.
Even something like citizenship is a differentiating factor: an undergrad who applies to, say, a national lab won't compete with foreign students by definition.
That's fine. The ones who are "good cheaters" are probably smarter than many honest people. Think about those school days where your smartest peers were cheating anyway, despite teaching you organically earlier on. Those kinds of cheaters do it to turn an A into an A+, not because they don't understand the material.
Interviews aren’t about solving problems. The interviewer isn’t interested in a problem’s solution, they’re interested in seeing how you get to the answer. They’re about trying to find out if you’ll be a good hire, which notably includes whether you’re willing and interested in spending effort learning. They already know how to use AI, they don’t need you for that. They want to know that you’ll contribute to the team. Wanting to use AI probably sends the wrong message, and is more likely to get you left out of the next round of interviews than it is to get you called back.
Imagine you need to hire some people, and think about what you’d want. That’ll answer your question. Do you want people who don’t know but think AI will solve the problems, or do you want people who are capable of thinking through it and coming up with new solutions, or of knowing when and why the AI answer won’t work?
> They’re about trying to find out if you’ll be a good hire, which notably includes whether you’re willing and interested in spending effort learning
I admire this worldview, and wish for it to be true, but I can't help but see it in conflict with much of what floats around these parts.
There's a recent thread on Aider where the authors proudly proclaim that ~80% of the code is written by Aider itself.
I've no idea what to make of the general state of the programming profession at all at the moment, but I can't help but feel learning various programming trivia has a lower return on investment than ever.
I get learning the business and domain and etc, but it seems like we're in a fast race to the bottom where the focus is on making programmers' skills as redundant as possible as soon as possible.
>I admire this worldview, and wish for it to be true, but I can't help but see it in conflict with much of what floats around these parts.
Honest interviewers may not realize how dishonest other interviewers have become in just the last 2-3 years. Interviewing today compared to COVID times is night and day, let alone compared to the gold rush of the 2010s.
Eh, I wish more people felt that way, I have failed so many interviews because I haven't solved the coding problem in time.
The feedback has always been something along the lines of "great at communicating your thoughts, discussing trade-offs, having a good back and forth" but "yeah, ultimately really wanted to see if you could pass all the unit tests."
Even in interview panels I've personally been a part of, one of the things we evaluate (heavily) is whether the candidate solved the problem.
Isn't one of the ways of solving the problem using all the tools at your disposal? At the end of the day, isn't having working code the fundamental goal?
I guess you could argue that the code needs to be efficient, stable, and secure. But if you could use "AI" to get part way there, then use smarts to finish it off, isn't that reasonable? (Devil's advocate)
The other big question is the legality of using code from an AI in a final commercial product.
Yes that’s a fair question. Some companies do allow LLMs in interviews and on the job. But again the solution isn’t what the interviewer wants, so relying on an LLM gives them no signal about your intrinsic capabilities.
Keep in mind that the amount of time you spend in a real job solving clear and easy interview style problems that an LLM can answer is tiny to none. Jobs are most often about juggling priorities and working with other people and under changing conditions, stuff Claude and ChatGPT can’t really help you with. Your personality is way more important to your job success than your GPT skills, and that’s what interviewers want to see… your personality & behavior when you don’t know the right answer, not ChatGPT’s personality.
Yeah everyone says that they are interested in how you got there but this isn’t true in reality from my experience. Your bias inevitably judges them on the solution because you have many other candidates who got the correct solution.
You’re right, interviewers will still care about whether you come up with a solution, and they care about the quality of the solution. The part you might be missing is that what I said and what you said aren’t mutually exclusive; they are both true. Interviewers do have to compare you to other candidates, and they are looking for the candidates that stand out. They want more than a binary yes/no signal, if at all possible. What I was trying to say is that the interviewer doesn’t need the solution to the problem they ask you to solve, what they need is to see how well you can solve it. I hope that’s stating the obvious, but it’s worth really letting it sink in. It’s super common for early-career programmers to be afraid of interviews and complain about them. Things change once you start doing the interviewing and see how the process works.
If you've been given the problem of "without using AI, answer this question", and you use an AI, you haven't solved the problem.
The ultimate question that an interview is trying to answer is not "can this person solve this equation I gave them?", it's usually something along the lines of "does this person exhibit characteristics of a trustworthy and effective employee?". Using AI when you've been asked not to is an automatic failure of trust.
This isn't new or unique to AI, either. Before AI people would sometimes try to look up answers on Google. People will write research papers by looking up information on Wikipedia. And none of those things are wrong, as long as they're done honestly and up front.
If you are pretending to have knowledge and skills you don't have you are cheating. And if you have the required knowledge and skill AI is a hindrance, not a help. You can solve the problem easily without it. So "is using ai cheating"? IDK, but logically you wouldn't use AI unless you were cheating.
Knowledge and skill are two different things. Sometimes interviewers test that you know how to do something, when in practice it's irrelevant if you A) know how to retrieve that knowledge and B) know when to retrieve it.
There is foundational knowledge you must have memorized through a combination of education and experience to be a software developer. The standard must be higher than "can use google and cut and paste." The answer can't always be - "I don't need to be able to recall that on command, I can google/chatgpt that when I end up needing it." Would you go to a surgeon who says "I don't need to know exactly where the spleen is, I can simply google it during surgery."
For the goal of the interview - showing your knowledge and skills - you are failing miserably. People know what LLMs can do, the interview is about you.
Some can be quite good at the cheating: at least good enough to get through multiple layers. I've been in hiring meetings where I was the only one of 4 rounds that caught the cheating, and they were even cheating in behaviorals. I've also been in situations with a second interviewer, where the other interviewer was completely oblivious even when it was clear I was basically toying with the guy reading from the AI, leading the conversation in unnatural ways.
Detection of AI in remote interviews, behavioral and technical, just has to be taught today if you are ever interviewing people that don't come from in-network recommendations. Completely fake candidates are way too common.
I'm at the same company I think. I don't get why we can't just use some software that monitors clicking away or tabbing away from the window, and just tell candidates explicitly that we are monitoring them, and looking away or tabbing away will appear suspect.
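Something like that is easy enough to prototype. A minimal sketch, assuming a browser-based interview pad where focus-loss events are simply logged for the interviewer to review; the event types and the reporting mechanism here are illustrative, not any particular vendor's API:

```typescript
// Minimal sketch: surface when the candidate tabs away or the window loses focus.
// The "report" sink is hypothetical; a real tool might POST to an interviewer dashboard.
type AttentionEvent = { kind: "hidden" | "visible" | "blur" | "focus"; at: string };

function report(event: AttentionEvent): void {
  console.log(`[monitor] ${event.kind} at ${event.at}`);
}

// Fires when the candidate switches tabs or minimizes the window.
document.addEventListener("visibilitychange", () => {
  report({ kind: document.hidden ? "hidden" : "visible", at: new Date().toISOString() });
});

// Fires when some other window (chat app, ChatGPT, etc.) takes focus.
window.addEventListener("blur", () => report({ kind: "blur", at: new Date().toISOString() }));
window.addEventListener("focus", () => report({ kind: "focus", at: new Date().toISOString() }));
```

It can't see a second device, of course; it only makes the common case (tabbing over to an AI on the same machine) visible, and candidates would be told up front that it's running.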
I haven’t been doing that much interviewing, but in the dozen or so candidates I’ve had I don’t think a single one has tried to use AI. I almost wish they would, as then at least I’d get past the first half of the question…
I'm using AI for interview screeners for nontechnical roles that require knowledge work. The AI interviewing app is very, very basic; it's just a wrapper put together by an eng, with enough features to prevent cheating.
Start with recording the session and blocking right-click, and you are halfway there. It's not hard (a rough sketch is below).
The AI app has helped me surface top candidates. I don't even look at resumes anymore. There's no point. I interview the top 10 out of 200, and then do my references and select.
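For what it's worth, the two measures mentioned above (blocking right-click and recording the session) are only a few lines in a browser. A rough sketch, assuming the candidate consents to the screen-capture prompt; nothing here is taken from the commenter's actual app:

```typescript
// Illustrative only: disable the context menu and record the shared screen.
document.addEventListener("contextmenu", (e) => e.preventDefault());

async function recordSession(): Promise<MediaRecorder> {
  // The browser asks the candidate to pick a screen or window to share.
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks: Blob[] = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    // A real tool would upload this for later review; here it just sits in memory.
    const recording = new Blob(chunks, { type: "video/webm" });
    console.log(`Captured ${recording.size} bytes of session video`);
  };
  recorder.start();
  return recorder;
}
```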
I mean, they could be googling things; I've definitely googled stuff during an interview. I do think in-person interviews are important though; I did some remote final interviews with Amazon and they were all terrible.
My startup got acquired last year so I haven't interviewed anyone in a while, but my technical interview has always been:
- share your screen
- download/open the coding challenge
- you can use any website, Stack Overflow, whatever, to answer my questions as long as it's on the screenshare
My goal is to determine if the candidate can be technically productive, so I allow any programming language, IDE, autocompleter, etc, that they want. I would have no problem with them using GPT/Copilot in addition to all that, as long as it's clear how they're solving it.
I recently interviewed for my team and tried this same approach. I thought it made sense because I want to see how people can actually work and problem solve given all the tools at their disposal, just like on the job.
It proved to be awkward and clumsy very quickly. Some candidates resisted it since they clearly thought it would make them judged harsher. Some candidates were on the other extreme and basically tried asking ChatGPT the problem straight up, even though I clarified up front "You can even use ChatGPT as long as you're not just directly asking for the solution to the whole problem and just copy/pasting, obviously."
After just the initial batch of candidates it became clear it was muddying things too much, so I simply forbade using it for the rest of the candidates, and those interviews went much smoother.
Over the years, I've walked from several "live coding" interviews. Arguably though, if you're looking for "social coders" maybe the interview is working as intended?
But for me, it's just not how my brain works. If someone is watching me, I'll be so self-conscious the entire time you'll get a stream of absolute nonsense that makes me look like I learned programming from YouTube last night. So it's not worth the time.
You want some good programming done? I need headphones, loud music, a closed door and a supply of Diet Coke. I'll see you in a few hours.
Yep, if I’m forced to talk through the problem, I’ll force myself to go through various things that you might want to hear, that I wouldn’t do.
Whereas my natural approach would be to take a long shower, work out, etc., and let my brain wander a bit before digging into it. But that wouldn't fly during an interview...
Ironically this is exactly how I am too. Even at work, if I'm talking through a problem on a presentation or with my boss, I'm much more scatterbrained, and I'll try to dodge those kinds of calls with "Just give me 30 minutes and I'll figure it out." which always goes better for me.
That said, now we're just talking about take home challenges for interviews and you always hear complaints about those too. And shorter, async timed challenges (something like "Here's a few hours to solve this problem, I'll check back in later") are now going to be way more difficult to judge since AI is now ubiquitous.
So I really don't think there's any perfect methodology out there right now. The best I can come up with is to get the candidate in front of you and talk through problems with them. The best barometer I found so far was to set up a small collection of files making up a tiny app and then have candidates debug it with me.
The interview works as intended because the main priority is to avoid hiring people who will be a negative for the company. Discarding a small number of good candidates is an acceptable tradeoff.
In an interview, the coding challenge is often to produce something new from scratch while being closely monitored by people you don't know, who control your financial future.
When working with a "junior," you'd already be fairly familiar with the code base, build system, and best practices. And with a junior, you're not likely to be solving things that require deep concentration, like never-before-seen problems or architectural work (or screwball interview-tests). And, unlike an interview, if something does require all my focus, it's very easy to defer. Take a break and think about it alone.
One example would be looking up syntax and common functions. In a high-pressure situation it's much tougher to bumble around Google and Stack Overflow, so this would be a way of solving for "I totally know how to do this thing but it's just not coming to mind at this moment," which is fair. Usually we the interviewers can obviously just tell them ourselves though, but that's what I was going for.
But yeah, the point is that once I applied it in practice it did quickly become confusing, so now I know from experience not to use it.
I think the other suggestions in this thread about how to use it are good ones, but they would present their own meta challenges for an interview too. Just about finding whatever balance works for you I guess.
Did you tell them that you “want to see how people can actually work and problem solve given all the tools at their disposal, just like on the job”? Just curious.
> "You can even use ChatGPT as long as you're not just directly asking for the solution to the whole problem and just copy/pasting, obviously."
No, it's not "obvious" whatsoever. Actually, it's obviously confusing: why are you allowing them to use ChatGPT but forbidding them from asking it the questions directly? Do you want an employee who is productive at solving problems, or someone who guesses your intentions better?
If AI is an issue for you then just ban it. Don't try to make the interview a game of who can outsmart whom.
See my answer to the other comment on this question. We figured there were some good use cases for AI in an interview that weren't just copy/pasting code, it's not about guessing intentions. It seemed most helpful to potentially unstick candidates from specific parts of the problem if they were drawing a blank under pressure, basically just an easier "You can look it up on Google" in a way that would burn less time for them. However we quickly found it was just easier for us to unstick them ourselves.
> If AI is an issue for you then just ban it.
Yes, that was the conclusion I just said we rapidly came to.
I've had a few people chuck the entire problem into ChatGPT, it was still very much useful in a few ways:
- You get to see how they then review the generated code, do they spot potential edge cases which the AI missed?
- When I ask them to make a change not in the original spec, a lot of them completely shut down because they either didn't understand the code generated well enough, or they themselves didn't really know how to code.
And you still get to see people who _do_ know how to use AI well, which at this point is a must for its overall productivity benefits.
The trick is to phrase the problem in a way that GPT4 will always give an incorrect answer (due to the vagueness of your problem) and that requires multiple rounds of guiding/correcting to solve.
That's pretty good because it can exhaust the context window quickly and then it starts spiraling out of control, which would require the candidate to act.
If a candidate only uses ChatGPT to code, all they can do is copy-paste the LLM-emitted code; then you ask for changes to the code (to reflect, for example, the evolution of the product).
There's more than one possible AI on the other end, so crafting something that will not annoy a typical candidate, but will lead every AI astray seems pretty difficult.
Maybe you could allow using AI, but only through the interviewer-provided interface. That interface would allow using any model the candidate likes, but before sending the response it will inject errors into the code (either randomly or through another AI prompt).
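A minimal sketch of that proxy idea, assuming the interview pad already has some way of calling whichever model the candidate picked. The function names and the toy mutations are hypothetical; a real version might instead ask a second model to introduce a subtler bug:

```typescript
// Hypothetical proxy: the candidate's prompt goes to the model of their choice,
// but any code in the answer is perturbed before they ever see it.
type ModelCall = (prompt: string) => Promise<string>;

function injectErrors(code: string): string {
  // Toy mutations: weaken one comparison and introduce one off-by-one.
  return code
    .replace("<=", "<")
    .replace(/\b(\w+)\.length\b/, "$1.length - 1");
}

async function proxiedAnswer(callModel: ModelCall, prompt: string): Promise<string> {
  const raw = await callModel(prompt);
  // Only touch fenced code blocks; leave the model's prose explanation intact.
  const fence = "`".repeat(3);
  const codeBlock = new RegExp(`${fence}[\\s\\S]*?${fence}`, "g");
  return raw.replace(codeBlock, (block) => injectErrors(block));
}
```

The candidate then has to notice and repair the planted defects, which is arguably a closer proxy for reviewing AI output on the job than banning the tool outright.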
I did this while hiring last year and the number of candidates who got stuff wrong because they were too proud to just look up the answer was shocking.
Exactly. You never know. Some interviewers will penalize you for not having something memorized and having to look it up, some will penalize you for guessing, some will penalize you for simply not knowing and asking for help. Some interviewers will penalize you for coming up with something quick and dirty and then refining it, some will penalize you for jumping right to the final product. There's no consistency.
> I would have no problem with them using GPT/Copilot in addition to all that, as long as it's clear how they're solving it.
Too many people are the opposite, so I would literally never tell you.
And this works.
what can we do to help that?
I’ve had interviews where AI use was encouraged as well.
But so many casual tirades against it don't make me want to ever try being forthcoming. Most organizations are realistically going to be 10 years behind the curve on this.
Screen share or in person are what I think the best ways are, even if neither is a great option.
I do not want AI. The human is the value add.
I understand that people won't feel super comfortable with this, and I try not to roast the candidate with leetcode. It should be a conversation where I surface technical reality and understanding.
I'm not doing any coding challenges that aren't real world.
If I see anything remotely challenging I dip out. Interviewing is just a numbers game nowadays, so I don't waste time on interviews if they seem like they're gonna burn me out for the rest of the day. Granted, I have 11 years of experience.
The difficulty of your questions has to change drastically if candidates are using good tooling. Many a problem that would take a reasonable candidate half an hour to figure out is 'free' for Claude, so your question might not show any signal. And if you tweak your questions to make sure they can't be auto-solved by a strong enough AI, then you'd better say AI is semi-required, because the difficulty level of the question you need to ask goes up quite a bit.
Some of the questions in our interview loop have been posted in github... which means every AI has trained on them specifically. They are, therefore, useless if you have AI turned on. And if you interview enough people, someone will post on github, and therefore your question will have a pretty short shelf life before it's in training and instantly solved.
It's pretty obvious when someone's input focus changes to nothing or when their mouse leaves the screen entirely, or you could just ask to see the display settings to begin. Doesn't solve for multiple computers but it's pretty obvious in real time when someone's actual attention drifts or they suddenly have abilities they didn't have before.
Either way, screen sharing beats whiteboards. Even if we throw our hands up and give up, we'll be firing frauds before the probationary period ends.
There is nothing fraudulent about using LLMs. If people can use them on the job, it's okay to use them on the interview. They're the calculators of tomorrow if not of today.
Interviewing just needs to adapt such as by assessing one's open source projects and contributions. Not much more is needed. And if the candidate completely misrepresents their open source profile, this can be handled by an initial contract-to-hire period.
I agree that there's nothing fraudulent with using a tool you would use on the job when you are interviewing. But in no way are LLMs equivalent to calculators. Calculators actually give the correct answer reliably, unlike LLMs. A sporadically reliable tool is worse than no tool at all.
LLMs have come a long way. If you give gpt-o3-mini the same interview question five times, chances are good that it will get it right all five times. Yes, it's not a calculator, but it's approaching one.
Using AI secretly in an interview setting where you were told the constraints excluded it, or where the interview required everything to be on the screen share even if AI was permitted, is fraudulent behavior. It's not much different than having a surrogate interviewee at that point. You'd only be doing it to deceive the interviewer.
Open source contributions is a bad metric for interviewing too. People have lives outside a computer, if they aren’t doing open source contributions in their free time outside of work I wouldn’t hold that against them. If someone has those that’s great and I’d take a look, but I’m not disqualifying someone else for not working for free. Someone doing OSS as an interviewing badge of honor is a chump in my book. At least do it for principled reasons.
Part of my resume review process is trying to decide if I can trust the person. If their resume seems too AI-generated, I feel less like I can trust that candidate and typically reject the candidate.
Once you get to the interview process, it's very clear if someone thinks they can use AI to help with the interview process. I'm not going to sit here while you type my question into OpenAI and try to BS a meaningful response to my question 30 seconds later.
AI-proof interviewing is easy if you know what you're talking about. Look at the candidates resume and ask them to describe some of their past projects. If they can have a meaningful conversation without delays, you can probably trust their resume. It's easy to spot BS whether AI is behind it or not.
Good interviews are a conversation, a dialog to uncover how the person thinks, how they listen, how they approach problems and discuss. Also a bit detail knowledge, but that's only a minor component in the end. Any interview where AI in its current form helps is not good anyway. Keep in mind that in our industry, the interview goes both ways. If the candidate thinks your process is bad then they are less inclined to join your company because they know that their coworkers will have been chosen by a subpar process.
That said, I'm waiting for an "interview assistant" product. It listens in on the conversation and silently provides concise extra information about the mentioned subjects that can be quickly glanced at without having to enter anything. Or does this already exist?
Such a product could be useful for coding too. Like watching over my shoulder and seeing: aha, you are working with so-and-so library, let me show you some key parts of the API in this window; or you are trying to do this-and-that, let me give you some hints. Not as intrusive as current assistants that try to write code for you, just some proactive lookup without having to actively seek out information. Anybody know a product for that?
That might be good for newbie developers but for the rest of us it'll end up being the Clippy of AI assistants. If I want to know more about an API I'm using, I'll Google (or ask ChatGPT) for details; I don't need an assistant trying to be helpful and either treating me like a child, or giving me info that maybe right but which I don't need at the moment.
The only way I can see that working is if it spends hundreds of hours watching you to understand what you know and don't know, and even then it'll be a bit of a crap shoot.
This, and tbh this has always been the best way. Someone who has projects, whether personal or professional, and has the capability to discuss those projects in depth and with passion will usually be a better employee than a leet code specialist.
Doesn't even have to be a project per se, if they can discuss some sort of technical topic in depth (i.e. the sort of discussion you might have when discussing potential solutions to a problem) then that's a great sign imo.
My resume has a bunch of personal projects on there as well as work experience and the project experience seems to not help at all. Just rejections after rejections.
My suggestion assumed an ideal world, which sadly this isn't. Your issue suggests your projects aren't tailored for each application, which could potentially be a reason. It is better to show why one project makes you a great fit, as opposed to how many projects you have done. Sometimes the person in charge of hiring may not have all the expertise in the area they are hiring for.
Agreed. This is why - while I won't ding an applicant for not having a public Github, I'm always happy when they do because usually they'll have some passion projects on there that we can discuss.
I have 23 years of experience and am almost invisible on GitHub. In all those years I've been fired from 4 contracts due to various disconnects (one culture misfit, two under-performances due to an illness I wasn't aware of at the time, and one because the company literally restructured over the weekend and fired 80% of all engineers), and I have been contracting a lot in the last 10 years (we're talking 17-19 gigs).
If you look solely at my GitHub you'd likely reject me right away.
I wish I had the time and energy for passion projects in programming. I so wish it was so. But commercial work has all but destroyed my passion for programming, though I know it can be rekindled if I can ever afford to take a properly long sabbatical (at least 2 years).
I'll more agree with your parent / sibling comments: take a look at the resume and look for bad signs like too vanilla / AI language, too grandiose claims (though when you are experienced you might come across as such so 50/50), or almost no details, general tone etc.
And the best indicator is a video call conversation, I found as a candidate. I am confident in what I can do (and have done), I am energetic and love to go for the throat of the problems on my first day (provided the onboarding process allows for it) and it shows -- people have told me that and liked it.
If we're talking passion, I am more passionate about taking a walk with my wife and discussing the current book we're reading, or getting to know new people, or going to the sauna, or wondering what's the next meetup we should be going to, stuff like that. But passion + work, I stand apart by being casual and not afraid of any tech problems, and by prioritizing being a good teammate first and foremost (several GitHub-centric items come to mind: meaningful PR comments and no minutiae, good commit messages, proper textual comment updates in the PR when f.ex. requirements change a bit, editing and re-editing a list of tasks in the PR description).
I already do too much programming. Don't hold it against me if I don't live on the computer and thus have no good GitHub open projects. Talk to me. You'll get much better info.
Ironically I'd probably have more github projects if I didn't spend 20 months looking for a full-time job.
And tbh, at the senior level they rarely care about personal projects. I must have had 60+ interviews and I feel a lack of a github cost me maybe 2 positions. When your job is getting a job, you rarely have the time for passion. I'm doing contract work in the meantime; it prevents gaps from showing, is more appealing than a personal project, and I can talk about it to the extent of my NDA (plenty of tech to talk about without revealing the project).
> Ironically I'd probably have more github projects if I didn't spend 20 months looking for a full-time job.
Same. I could afford not working throughout most of 2023, but I had to deal with ruined health, and my leeway didn't last as long as I hoped, so I was back on the treadmill just when I was starting to enjoy some freedom and peace of mind.
> And tbh, at the senior level they rarely care about personal projects. I must have had 60+ interviews and I feel a lack of a github cost me maybe 2 positions.
I have no idea how much it cost me, but I was told in no uncertain terms 10+ times that having a GitHub portfolio would have meant no take-home assignment and skipping parts of the interview I had already attended. So it definitely carries weight _and_ can help shorten hiring processes.
So I don't feel it was a deal-breaker for the people who interviewed me either but I think it would have netted me more points, so to speak.
Assuming you are graded and are the same person:
Without portfolio: 7/10
With portfolio: 8/10
...for example.
> I'm doing contract work in the meantime
Same x2, but it's mentally draining. No stability. That removes future bandwidth that would have been used for those passion projects.
TL;DR a lot of things conspire to rob you of your creative potential. :(
I would also add meticulous attention to documenting requirements and decisions taken along the development process, especially where compromises were made. All the "why's", so to speak.
But yes, commercial development, capital-A "Agile" absolutely kills the drive.
And yep I didn't want to make my comment too big. I make double sure to document any step-by-step processes on "how to make X or Y work", especially when I stumble upon a confusing bug in a feature branch. I go out of my way to devise a 100% reproducible process and document it.
Those, plus yours, and even others, are what makes a truly good programmer IMO.
Also because most people are busy with actual work and don't have the time to have passion projects. Some people do, and that's great, but most people are simply not passionate about labor, regardless of what kind of labor it is.
To add to this, lots of senior people in the consulting world are brought in under escalations. They often have to hide the fact they are an external resource.
Also if you have a novel or disclosure sensitive passion project, GitHub may be avoided even as a very conservative bright line.
As stated above I think it can be good to find common points to enhance the interview process, but make sure to not use it as a filter.
I really hate those who ask for GitHub profiles. Mine is pseudonymous and I don't want to share it with my employer or anyone I don't want to. Besides privacy, I do not understand why a company would even expect the candidate to have free contributions in the first place. Can't the candidate have other hobbies to enjoy or learn?
> If their resume seems too AI-generated, I feel less like I can trust that candidate and typically reject the candidate
So you just subjectively say "this resume is too perfect, it must be bullshit"? How the fuck is any actual, qualified engineer supposed to get through your gauntlet of subjectivity?
You'd be surprised at how good you can get at sniffing out slop, especially when it's the type prompted by fools who think it'll get them an easy win. Often the actual content doesn't even factor in - what triggers my mental heuristics is usually meta stuff like tone and structure.
I'm sure some small % of people get away with it by using LLaMA-FooQux-2552-Finetune-v3-Final-v1.5.6 or whatever, but realistically, the majority is going to be obvious to anyone that's been force-fed slop as part of their job.
> AI-proof interviewing is easy if you know what you're talking about. Look at the candidates resume and ask them to describe some of their past projects. If they can have a meaningful conversation without delays, you can probably trust their resume. It's easy to spot BS whether AI is behind it or not.
Generally, this is how to figure out if a candidate is full of crap or not. When they say they did a thing, ask them questions about that thing.
If they can describe their process, the challenges, how they solved the challenges, and all of it passes the sniff test: If they are bullshitting, they did crazy research and that's worth something too.
There are much more sophisticated methods than that now with AI, like speech to text to LLM. It's getting increasingly harder to detect interviewees cheating.
I think GP's point is that this says as much about the interview design and interviewer skill as it does about the candidate's tools.
If you do a rote interview that's easy to game with AI, it will certainly be harder to detect them cheating.
If you have an effective and well designed open ended interview that's more collaborative, you get a lot more signal to filter the wheat from the chaff.
> If you have an effective and well designed open ended interview that's more collaborative, you get a lot more signal to filter the wheat from the chaff.
I understood their point but my point is a direct opposition to theirs, that at some point with AI advances this will essentially become impossible. You can make it as open ended as you want but if AI continues to improve, the human interviewee can simply act as a ventriloquist dummy for the AI and get the job. Stated another way, what kind of "effective and well designed open ended interview" can you make that would not succumb to this problem?
Yes, that's eventually what will happen, but it becomes quite expensive, especially for smaller companies, and well, they might not even have an office to conduct the interview in if they're a remote company. It's simply best to hire slow and fire fast, you save more money that way over bringing in every viable candidate to an in-person interview.
If you're a small company you can't afford to fire people. The cost in lost productivity is immense, so termination is a last resort.
Likewise with hiring; at a small company you're looking to fill an immediate need and are losing money every day the role isn't filled. You wouldn't bring in every viable candidate, you'd bring in the first viable candidate.
FAANG hiring practices assume a budget far past any exit point in your mind.
They'd check their network for a seed engineer who can recognize talented people by talking to them.
To put the whole concern in a nutshell:
If AI was good enough to fool a seasoned engineer in an interview, that engineer would already be using the AI themselves for work and not need to hire an actual body.
My POV comes from someone who's indexed on what works for gauging technical signal at startups, so take it for what it's worth. But a lot of what I gauge for is a blend of not just technical capability, but the ability to translate that into prudent decisions with product instincts around business outcomes. AI is getting better at solving technical problems it's seen before in a black box, but it struggles to tailor that to any kind of context you give it to pre-existing constraints around user behavior, existing infrastructure/architecture, business domain and resource constraints.
To be fair, many humans do too, but many promising candidates even at the mid-level band of experience who thrive at organizations I've approved them into are able to eventually get to a good enough balance of many tradeoffs (technical and otherwise) with a pretty clean and compact amount of back and forth that demonstrates thoughtfulness, curiosity and efficacy.
If someone can get to that level of capability in a technical interviewing process using AI without it being noticeable, I'd be really excited about the world. I'm not holding my breath for that, though (and having done LOTS of interviews over the past few quarters, it would be a great problem to have).
My solution, if I were to have the luxury of having that problem, would be a pretty blunt instrument -- I'd instead change my process to actually have AI use of tools be part of the interviewing process -- I'd give them a problem to solve, a tuned in-house AI to use in solving the problem, and have their ability to prompt it well, integrate its results, and pressure check its assumptions (and correct its mistakes or artifacts) be part of the interview itself. I'd press to see how creatively they used the tool -- did they figure out a clever way to use it for leverage that I wouldn't have considered before? Extra points for that. Can they use it fluidly and in the heat of a back and forth of an architectural or prototyping session as an extension of how they problem solve? That will likely become a material precondition of being a senior engineer in the future.
I think we're still a few quarters (to a few years) away from that, but it will be an exciting place to get to. But ultimately, whether they're using a tool or not, it's an augment to how they solve problems and not a replacement. If it ever gets to be the latter, I wouldn't worry too much -- you probably won't need to do much hiring because then you'll truly be able to use agentic AI to pre-empt the need for it! But something tells me that day (which people keep telling me will come) will never actually come, and we will always need good engineers as thought partners, and instead it will just raise the bar and differentiation between truly excellent engineers and middle of the pack ones.
People don't really call the police, nor sue over this. But they can, and have in the past.
If it gets bad, look for people starting to seek legal recourse.
People aren't developers with 5 years experience, if all they can do is copy and paste. Anyone fraudulently claiming so is a scam artist, a liar, and deserves jail time.
So you create an interview process that can only be passed by a skilled dev, including them signing a doc saying the code is entirely their work, only referencing a language manual/manpages.
And if they show up to work incapable of doing the same, it's time to call the cops.
That's probably the only way to deal with scam artists and scum, going forward.
Can you cite case law around where some one misrepresented their capabilities in a job interview and were criminally prosecuted? Like what criminal statute specifically was charged? You won’t find it, because at worst this would fall under a contract dispute and hence civil law. Screeching “fraud is a crime” hysterically serves no one.
Fraud can be described as deceit to profit in some way. You may note the rigidity of the process above, where I indicated a defined set of conditions.
It costs employers money to on board someone, not just in pay, but in other employees training that person. Obviously the case must be clear cut, but I've personally hired someone who clearly cheated during the remote phone interview, and literally couldn't even code a function in any language in person.
There are people with absolutely no background as a coder, applying to jobs with 5 years experience, then fraudulently misrepresenting the work of others at their own, to get the job.
That's fraud.
As I said, it's not being prosecuted as such now. But if this keeps up?
> People aren't developers with 5 years experience, if all they can do is copy and paste. Anyone fraudulently claiming so is a scam artist, a liar, and deserves jail time.
I won't name names, but there are a lot of Consulting companies that feed off Government contracts that are literally this.
"Experience" means a little or a lot, depending on your background. I've met plenty of people with "years of experience" that are objectively terrible programmers.
There's candidates running speech-to-text that avoid the noticeable delays, but it's still possible to do the right kind of digging the AI will almost always refuse to do, because it's way too polite.
It's as if we were testing for replicants in Blade Runner: The AI response will rarely figure out you are aiming to look for something frustrating, that they are actually proud of, or figure out when you are looking for a hot take you can then disagree with.
The traditional tech interview was always designed to optimize for reliably finding someone who was willing to do what they were told even if it feels like busywork. As a rule someone who has the time and the motivation to brush up on an essentially useless skill in order to pass your job interview will likely fit nicely as a cog in your machine.
AI doesn't just change the interviewing game by making it easy to cheat on these interviews, it should be changing your hiring strategy altogether. If you're still thinking in terms of optimizing for cogs, you're missing the boat—unless you're hiring for a very short term gig what you need now is someone with high creative potential and great teamwork skills.
And as far as I know there is no reliable template interview for recognizing someone who's good at thinking outside the box and who understands people. You just have to talk to them: talk about their past projects, their past teams, how they learn, how they collaborate. And then you have to get good at understanding what kinds of answers you need for the specific role you're trying to fill, which will likely be different from role to role.
The days of the interchangeable cog are over, and with them easy answers for interviewing.
Have you spent a lot of time trying to hire people? I guarantee you there is no shadow council trying to figure out how to hire "busywork" worker bees. This perspective smells completely like "If I were in charge, things would be so much better." Guess what? If you were to take your idea and try to lead this change across a 100-person engineering org, there would be "out of the box thinkers" who would go against your ideas and cause dissent. At that point, guess what? You're going to figure out how to hire compliant people who will execute on your strategy.
"talk about their past projects, their past teams, how they learn, how they collaborate"
You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.
- “big” tech companies like Google, Amazon, Microsoft came up with these types of tech interviews. And there it seems pretty clear that for most of their positions they are looking for cogs
- The vast majority of tech companies have just copied what “big” tech is doing, including tech interviews. These companies may not be looking for cogs, but they are using an interview process that’s not suitable for them
- Very few companies have their own interview process suitable for them. These are usually small companies and therefore the number of engineers in such companies is negligible to be taken into account (most likely, less than 1% of the audience here work at such companies)
And what is wrong with being a cog? Not everyone is going to invent the next ai innovation and not everyone is cut out to build the next hot programming language.
Bugs need to be fixed. Features need to be implemented. If it weren't for cogs, you'd have people just throwing new projects over the fence and dropped 6 months after release. Don't want to be another cog? Join a startup. Plenty of those hiring. The reality is that when you work at a large company, you're one of 50,000 people. By definition, only 1% are in the top 1%.
Someone has to wash the dishes and clear the tables. Let's stop looking down at jobs just because it's not hot and sexy. People who show up and provide value is great and should be appreciated.
The interview process being a circus of how many hoops you'll jump through. Which in this case is upwards of 3 months of trivia, beauracracy, and politics. And these days they don't even give you the grace of a response; they may just ghost you.
But being a cog itself is personally fine. Work to live, not live to work. Leading people on only to drop them at the drop of a hat, though, is disrespectful of everyone's time. At least a 1-2 stage interview for a dishwasher or table busser only wastes a few hours per role applied for. Time is the most valuable resource we have; of course people want to use it carefully.
Human cogs are going to be phased out. I'm not an AI doomer who thinks engineers are going to be replaced across the board, but the need for a human being who functions like a robot is going away fast. We need humans to do what humans do well, and humans don't do well as cogs in a machine—machines are better at that role.
The days of leetcode interviews are numbered not because they're too easy to cheat at, but because they were always optimizing for the wrong traits in most companies that cargo culted them, and even the companies that used them correctly (Big Tech) are going to rapidly need a different type of interview for the new types of hires they need.
> You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.
This is the job of a good interviewer. I've given everything from terrible to great answers to the exact same questions, depending on the interviewer. If you literally just ask that question out of the blue, you'll either get a bad or a rehearsed response. If you establish some rapport and ask it in a more natural way, you'll get a more natural answer.
It's not easy, but neither is being on the other side of the interview, and that's never been accepted as an excuse.
> I guarantee you there is no shadow council trying to figure out how to hire "busywork" worker bees.
The council itself is made of "busywork" worker bees. Slaves hiring slaves - the vast majority of IT interviewers and candidates are idiot savants: they know very little outside of IT, and often don't even realize that there is more to life than IT.
> You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.
This was the norm until perhaps the last 10-15 years of software engineering.
> I guarantee you there is no shadow council trying to figure out how to hire "busywork" worker bees.
I didn't say that. I said that this style of interview was designed to hire pluggable cogs. As others have noted, that was the correct move for Big Tech and was cargo culted into a bunch of other companies that didn't know why their interviews were shaped the way they were.
> there would be "out of the box thinkers" who would go against your ideas and cause dissent. At that point, guess what? You're going to figure out how to hire compliant people who will execute on your strategy.
In answer to your original question: yes, I'm actively involved in hiring at a 100+ person engineering org that hires this way. And no, we're not looking to figure out how to hire compliant people, we're hiring engineers who will push back and do what works well, not just act because an executive says so.
> You have now excluded amazing engineers who suck at talking about themselves in interviews. They may be great collaborators and communicators, but freeze up selling themselves in an interview.
Only if you suck at making people comfortable and at understanding different (potentially awkward) communication styles. You don't have to discriminate against people for being awkward, that's a choice you can make. You can instead give them enough space to find their train of thought and pursue it, and it does work—I recently sat in on an interview like that with someone who fits your description exactly, and we strongly recommended him.
> what you need now is someone with high creative potential and great teamwork skills.
That’s exactly what we always needed, long before LLMs arrived. That’s why all the interviews I’ve seen or given were already designed around conversations.
I’m agreeing with you, but I’ve never seen these ‘interchangeable cog’ interviews you’re talking about.
Right, I agree. The leetcode interviews are a bad fit for almost every company—they only made sense in the Googles and Microsofts that invented them and actually did want to optimize for cogs.
I think every interviewer and hiring manager ought to know about or be trained on these tools; your intuition about a candidate's behaviour isn't enough. Otherwise, we will soon reach a tipping point where honest candidates are at a severe disadvantage.
Tbh I’m very happy these tools exist. If your company wants to ask dumb formulaic leetcode questions and doesn’t care about the candidate’s actual ability then this is what you deserve. If they can automate the interview so well then they should also be able to automate the job right? Or are your interview questions not representative of what the job actually entails?
I understand this sentiment for experienced developers. It is an imperfect signal. But what is in your opinion a better signal for junior or new grads?
Every alternative I can think of is either worse, or sounds nice but is impractical to implement at scale.
I don’t know about you, but most interviewers out there don’t have the ability to judge the technical merit of a bullshitter’s contribution to a class or internship project in half an hour, especially if it’s in a domain the interviewer has no familiarity with. And by the way, not all of them are completely dumb; they do know computer science, just perhaps not as well as an honest competitor.
>But what is in your opinion a better signal for junior or new grads?
They are juniors, so I don't expect them to be experts; I expect eagerness and passion. They spent 4 or more years focused on schooling, so show me the results of their projects. Let them talk and see how well they understand what they did. Side projects are even better for standing out.
And you know... apparently people can still fail FizzBuzz in 2025. If you really question their ability to code, ask the basics, not whether they can write a Sudoku verifier on the spot. If you aren't a sudoku game studio, I don't see the application beyond "can they work with arrays?"
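For calibration, that bar really is as low as it sounds; a throwaway FizzBuzz along these lines (just a sketch, not anyone's official screening question) is the level of "basics" I mean:

    # Classic FizzBuzz: print 1..100, substituting Fizz/Buzz/FizzBuzz.
    for i in range(1, 101):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)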
>I don’t know about you, but most interviewers out there don’t have the ability to judge the technical merit of a bullshitters’s contribution to a class or internship project in half an hour, specially if it’s in a domain interviewer has no familiarity with.
Everyone has a different style. Personally I care a lot less about programming proficiency and a lot more about technical communication. If they only wrote 10 lines of code for a group project but can explain every aspect of the project as if they had written it themselves, what am I really missing? In my experience, the odds of that sort of technical reasoning being accompanied by poor coding are a lot lower than the alternative: a Leetcode wizard who can't grasp architectural concepts or adjust to software tooling.
Yeah, I totally agree. During one of my interviews, the interviewers asked me to write "snake game" in React. I had spent the last week studying their open source project and learning how things were structured, and then the two-part interview consisted of parsing JSON and outputting it as markdown, and writing snake game. They weren't a game shop, so it really didn't make any sense that they would ask about that... It was really lame.
>But what is in your opinion a better signal for junior or new grads?
For students specifically, the strongest signal is if they've done research with a past collaborator of mine and my collaborator vouches for them. It's a great signal because it has a very high barrier to entry and absolutely does not scale.
Realistically, being able to speak confidently about something they did/built during their education is a decent proxy. If they can handle open-ended follow-up questions like "what did you learn?" and "what trade-offs did you make?" and "how would you tweak it under X different requirements?" then that's a great signal too.
These aren't "gotcha" questions, but they insist on the candidate being reasonably competent and (most importantly) an actual human who can think on their feet.
> Or are your interview questions not representative of what the job actually entails?
100% of all job interviews are a proxy. It is not possible to perform an interview in ~4 hours such that someone sufficiently engages in what the job “actually entails”.
A leetcode interview either is or isn't a meaningful proxy, and AI tools either do or don't invalidate that proxy.
Personally I think leetcode interviews are an imperfect but relatively decent proxy, and that AI tools render that proxy invalid.
Hopefully someday someone invents a better interview scheme that can reliably and consistently scale to thousands of interviews, hundreds of hires, and dozens of interviewers. That’d be great. Alas it’s a really hard and unsolved problem!
>> It is not possible to perform an interview in ~4 hours such that someone sufficiently engages in what the job “actually entails”.
>> ... leetcode interviews are an imperfect but relatively decent proxy.
I think all this is just the status quo that should be challenged instead of being justified.
When I conduct interviews (environment: a FAANG company), I focus on (a) fundamental understanding and (b) practical problems. None of the coding problems I pose are more than O(N) in complexity. Yet, my hiring decisions have never gone wrong.
>> How do you know? Every interview you conduct is for your team?
My apologies. I should have been more careful in making the claim. You are right in challenging me**.
>> What do you ask? If you want to challenge the status quo you have to offer a replicable alternative
I cannot share the specific problems publicly, as they would then become available for candidates to practice on, whereas the idea is to give them a fresh problem to think about. Further, the list of interviewers is often made available to candidates ahead of time, which can create a loophole, as interviewers tend to have a relatively small set of problems they keep repeating for different candidates***.
In general, however, I always use real-life problems. Similar examples from textbooks might be: (a) find the day of the week for a given date, (b) output a given number in textual form (e.g., 14630 -> Fourteen thousand, six hundred and thirty), (c) etc. Most of the problems I pose are motivated by actual use cases, while the rest are taken from my own prior interviews as a candidate. As I noted before, each one is at worst O(N) in time complexity, just like FizzBuzz. I have brought in more complex ones only when testing for specialized skills for specialized positions.
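To give a sense of the expected scale, a workable answer to (b) fits in a few dozen lines of Python. This is only a rough sketch of the idea, not the exact problem or solution I grade against:

    # Sketch: spell out an integer in English words, e.g.
    # 14630 -> "fourteen thousand, six hundred and thirty".
    ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
            "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
            "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
    TENS = ["", "", "twenty", "thirty", "forty", "fifty",
            "sixty", "seventy", "eighty", "ninety"]

    def under_hundred(n):
        if n < 20:
            return ONES[n]
        return TENS[n // 10] + ("-" + ONES[n % 10] if n % 10 else "")

    def under_thousand(n):
        if n < 100:
            return under_hundred(n)
        words = ONES[n // 100] + " hundred"
        return words + (" and " + under_hundred(n % 100) if n % 100 else "")

    def number_to_words(n):
        if n == 0:
            return "zero"
        parts = []
        # Peel off billions, millions, thousands, then the remainder.
        for value, name in [(10**9, "billion"), (10**6, "million"), (10**3, "thousand")]:
            if n >= value:
                parts.append(under_thousand(n // value) + " " + name)
                n %= value
        if n:
            parts.append(under_thousand(n))
        return ", ".join(parts)

    print(number_to_words(14630))  # fourteen thousand, six hundred and thirty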
I also focus a lot on fundamentals. There have been candidates hired for my own teams who could not fully solve the problem I posed, but who were unambiguously going in the right direction and did not say anything wrong or stupid while reasoning towards it. I chose to go ahead with them with some risk in mind, and these people proved to be stellar performers. When the fundamentals are understood well, gaps can be picked up on the job.
I still do stand by my original assertion that proxy problems can be avoided for interviewing. In general, the best method to solve a problem is to solve that problem itself and not another.
** Appendix: Here are the actual data points:
- For interviews within my own team where I have been an interviewer, my claim likely holds. However, this is a small number of hires (around six to eight), so the error margins would be large. Also, I would not have information about false rejects, hence '100% accuracy' cannot be claimed even in theory.
- I have been an interview 'bar raiser' at Amazon. Bar Raisers at Amazon are people highly trusted for interview outcome decisions and process control.
*** I have even had cases where (a) someone interviewed a candidate a second time after a gap and posed exactly the same problem, and (b) a recruiter leaked the questions an interviewer frequently asked to the candidate.
I think this is the first interview cheating tool I've seen that feels morally justified to me. I wonder if it will actually change company behavior at all.
Anyone I know who actually got a job through a leetcode-style interview in the last 2 years cheated. They would get their friends to mirror their monitor and then type in the answers from ChatGPT, LOL.
I strongly disagree. This is nothing. You can sort out if someone is using something like this to cheat. You have a conversation. You can ask conceptual questions about algorithms and time complexity and figure out their level and see how their sophistication matches their solution on the LeetCode problem or whatever. Now, if you have really bad intuition or understanding of human behavior then yeah it would probably be hard but in that case being a good interviewer is probably hopeless anyway.
The key is having interviewers who know what they are talking about, so in-depth, meandering discussions can be had about personal and work projects, which usually makes it clear whether the applicant knows what they are talking about. Leetcode was only ever a temporary interview technique, and this 'AI' prominence in the public domain has simply sped up its demise.
You ask a rote question and you'll get a rote answer while the interviewee is busy looking at a fixed point on the screen.
You then ask a pointed question about something they know or care about, and suddenly their face lights up, they're animated, and they are looking around.
You know, this makes me wonder if a viable remote interview technique, at least until real-time deepfaking gets better, would be to have people close their eyes while talking to them. For somebody who knows their stuff it'll have zero impact; for someone relying entirely on GPT, it will completely derail them.
A filter could probably do it already. There are already filters to make you appear to be looking at the camera no matter where your eyes are pointing.
That’s an interesting idea. Sadly I think the next AI interviewing tool to be developed in response would make you look like your eyes are closed. But in the interim period it could be an interesting way to interview. Doesn’t really help for technical interviews where they kinda need to have their eyes open, but for pre-screens maybe…
This is the way. We do an intro call, an engineering chat (exactly as you describe), a coding challenge, and 2 team chat sessions in person. At the end of that, we usually have a good feeling about how sharp the candidate is, if they like to learn and discover new things, and what their work ethic is. It's not bulletproof, but it removes a lot of noise from the signal.
The coding challenge is supposed to be solved with AI. We can no longer afford not to use LLMs for engineering, as it's that much of a productivity boost when used right, so candidates should show how they use LLMs. They need to be able to explain the code of course, and answer questions about it, but for us it's a negative mark if a candidate proclaims that they don't use LLMs.
> The coding challenge is supposed to be solved with AI. We can no longer afford not to use LLMs for engineering, as it's that much of a productivity boost when used right, so candidates should show how they use LLMs. They need to be able to explain the code of course, and answer questions about it, but for us it's a negative mark if a candidate proclaims that they don't use LLMs.
Do you state this upfront or is it some hidden requirement? Generally I'd expect an interview coding exercise to not be done with AI, but if it's a hidden requirement that the interviewer does not disclose, it is unfair to be penalized for not reading their minds.
I would say as long as it is stated you can complete the coding exercise using any tool available it is fine. I do agree, no task should be a trick.
I am personally of the view you should be able to use search engines, AI, anything you want, as the task should be representative of doing the task in person. The key focus has to be the programmer's knowledge and why they did what they did.
One client of mine has a couple of repositories for non-mission-critical things like their fork of an open source project, decommissioned microservices, an SVG generator for their web front-end, etc.
They also take this approach of "whatever tool works," but their coding test is "here's some symptoms of the SVG generator misbehaving, figure out what happened and fix it," which requires digging into the commit history, issues, actually looking at the SVG output, etc.
Once you've figured out how the system architecture works, and which component is most likely to be causing the problem, you have to convert part of the code to use a newer, undocumented API exposed by an RPC server that speaks a serialization format no LLM has ever seen before. Doing this is actually way faster and more accurate with an AI, if you know how to centaur with it and make sure the output is tested to be correct.
This is a much more representative test of how someone's going to handle doing actual work knocking issues out.
Well, the challenge involves using a Python LLM framework to build a simple RAG system for recipes.
It's not a hidden requirement per se to use LLM assistance, but the candidate should have a good answer ready why they didn't use an LLM to solve the challenge.
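For a rough idea of what the retrieval half of such a challenge looks like, here is a toy sketch. The bag-of-words similarity is a stand-in for the framework's embedding model so it runs on its own, and the recipe strings are made up:

    import math
    from collections import Counter

    def embed(text):
        # Stand-in "embedding": a bag-of-words vector. The real challenge
        # would call the LLM framework's embedding model here instead.
        return Counter(text.lower().split())

    def cosine(a, b):
        dot = sum(a[t] * b[t] for t in set(a) & set(b))
        norm_a = math.sqrt(sum(v * v for v in a.values()))
        norm_b = math.sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    def retrieve(query, recipes, k=3):
        # Rank recipes by similarity to the query and return the top k,
        # which would then be stuffed into the LLM prompt as context.
        query_vec = embed(query)
        return sorted(recipes, key=lambda r: cosine(query_vec, embed(r)), reverse=True)[:k]

    recipes = [
        "Crispy air fryer pork belly with pricked skin",
        "Classic margherita pizza with fresh basil",
        "Slow-cooked beef ragu with pappardelle",
    ]
    print(retrieve("pork belly in the air fryer", recipes, k=1))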
Why is it a negative that the candidate can solve the challenge without using an LLM? I don’t really understand this.
Also, what is a good answer for not using one? Will you provide access to one during the course of the interview? Or am I just expected to be paying for one?
It's not negative that the candidate can solve it without an LLM, but it is positive if the candidate can use the LLM to speed up the solution. The code challenge is timeboxed.
We are providing an API key for LLM inference, as implementing the challenge requires this as well.
And I haven't heard a good answer yet for not using one; ideally, the candidate knows how to mitigate the drawbacks of LLMs while still benefiting from their utility.
A good answer in this situation would focus on demonstrating that you made a conscious decision based on the problem requirements and the approach that best suited the task. Here’s an example of a thoughtful response:
"I considered various approaches for solving this problem. Initially, I thought about using an LLM, as it's great for natural language processing and generating text-based solutions. However, for this particular challenge, I felt that a more algorithmic or structured approach was more appropriate, given the problem's nature (e.g., the need for performance optimization, a specific coding pattern, or better control over the output). While LLMs are powerful tools, they may not always provide the precision and control required for highly specific, performance-critical tasks, so I chose to solve the problem through a more traditional method. That said, if the problem had been more open-ended or involved unstructured data like text generation, I would definitely consider leveraging an LLM."
This answer reflects the candidate's ability to critically assess the problem and use the right tools for the job, showing maturity and sound judgment.
Ah, so you expect mind readers who can divine something from your brain that goes against 99.99% of interviewers' practices and would get them instantly disqualified from an overwhelming majority of interviews. Nice work, good luck finding candidates.
Indeed, looks like it is just an unspoken rule and an interview trick after all. I would not want to interview with this person, much less work with them.
> as it's that much of a productivity boost when used right
Frankly, if an interviewer told me this, I would genuinely wonder why what they're building is such a simple toy product that an LLM can understand it well enough to be productive.
I've always just tried to hold a conversation with the candidate: what they think their strengths and weaknesses are, plus a little probing.
This works especially well if I don't know the area they're strongest in, because then they get to explain it to me. If I don't understand it then it's a pretty clear signal that they either don't understand it well enough or are a poor communicator. Both are dealbreakers.
Otherwise, for me, the most important thing is gauging: Aptitude, Motivation and Trustworthiness. If you have these three attributes then I could not possibly give a shit that you don't know how kubernetes operators work, or if you can't invert a binary tree.
You'll learn when you need it; it's not like the knowledge is somehow esoteric or hidden.
This is how I interview potential hires. I’ll admit I haven’t interviewed someone below a senior level in probably 10 years, so I interview someone that has a resume with experience that I can draw from. I read what they’ve worked on and just go from there. I hope I never have to submit someone to some stupid take home test or Leet Code interview.
As someone currently job searching it hasn’t changed much, besides companies adding DO NOT USE AI warnings before every section. Even Anthropic forces you to write a little “why do you want to work here DO NOT USE AI” paragraph. The irony.
Applying at Anthropic was a bad experience for me. I was invited to do a timed set of leetcode exercises on some website. I didn't feel like doing that, and focused on my other applications.
Then they emailed me a month later after my "invitation" expired. It looked like it was written by a human: "Hey, we're really interested in your profile, here's a new invite link, please complete this automated pre-screen thingie".
So I swallowed my pride and went through with that humiliating exercise. Ended up spending two hours doing algorithmic leetcode problems. This was for a product security position. Maybe we could have talked about vulnerabilities that I have found instead.
I was too slow to solve them and received some canned response.
fyi, that's because (from experience) the last job req I publicly posted generated almost 450 responses, and (quite generously) over a third were simply not relevant. It was for a full-stack rails eng. Here, I'm not even including people whose experience was django or even React; I mean people with no web experience at all, or were not in the time zone requested. Another 20% or so were nowhere near the experience level (senior) requested either.
The price of people bulk applying with no thought is I have to bulk filter.
So you allow yourself to use AI in order to save time, but we have to put up with the shit[1] companies make up? That's good, it's for the best if I don't work for a company that thinks so lowly of its potential candidates.
[1]: Including but not limited to: having to manually fill a web form because the system couldn't correctly parse a CV; take-home coding challenges; studying for LeetCode interviews; sending a perfectly worded, boot-licking cover letter.
For the time being, I’ve banned LLMs in my interviews.
I want to see how the candidate reasons about code. So I try to ask practical questions and treat them like pairing sessions.
- Given a broken piece of code, can you find the bug and get it working?
- Implement a basic password generator, similar to 1Password (with optional characters and symbols)
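For the second one, something on the order of this sketch (Python's secrets module here purely as an example; the exact requirements vary) is a perfectly fine answer:

    import secrets
    import string

    def generate_password(length=20, use_digits=True, use_symbols=True):
        # Build the allowed alphabet from the selected character classes.
        alphabet = string.ascii_letters
        if use_digits:
            alphabet += string.digits
        if use_symbols:
            alphabet += "!@#$%^&*()-_=+"
        # secrets rather than random, since the output is a credential.
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password(16, use_symbols=False))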
If you can reason about code without an LLM, then you’ll do even better with an LLM. At least, that’s my theory.
I never ask trick questions. I never pull from Leetcode. I hardly care about time complexity. Just show me you can reason about code. And if you make some mistakes, I won’t judge you.
I’m trying to be as fair as possible.
I do understand that LLMs are part of our lives now. So I’m trying to explore ways to integrate them into the interview. But I need more time to ponder.
Thinking out loud, here’s one idea for an LLM-assisted interview:
- Spin up a Digital Ocean droplet
- Add the candidate’s SSH key
- Have them implement a basic API. It must be publicly accessible.
- Connect the API to a database. Add more features.
- Set up a basic deployment pipeline. Could be as simple as a script that copies the code from your local machine to the server.
Anything would be fair game. The goal would be to see how the candidate converses with the LLM, how they handle unexpected changes, and how they make decisions.
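To make the scope concrete, the starting point on the droplet could be as small as this sketch; FastAPI and SQLite are just example choices here, not requirements of the exercise:

    # Minimal API backed by a database; run with: uvicorn main:app
    import sqlite3
    from fastapi import FastAPI

    app = FastAPI()
    DB_PATH = "notes.db"

    def get_db():
        conn = sqlite3.connect(DB_PATH)
        conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
        return conn

    @app.post("/notes")
    def create_note(body: str):
        conn = get_db()
        cur = conn.execute("INSERT INTO notes (body) VALUES (?)", (body,))
        conn.commit()
        return {"id": cur.lastrowid, "body": body}

    @app.get("/notes")
    def list_notes():
        conn = get_db()
        rows = conn.execute("SELECT id, body FROM notes").fetchall()
        return [{"id": note_id, "body": body} for note_id, body in rows]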
Changed enormously. Both resumes and interviews are effectively useless now. If our AI agents can't find a portfolio of original work that's nearly exactly what we want to hire you for, then you aren't ever going to hear from us. If you are one of the 1 in 4000 applicants who gets an interview, then you're already 70% likely to get an offer, and the interview is mostly a formality.
What worked for me is just ignoring the job listing websites, and calling recruiters directly on the phone. Don’t bother hitting “easy apply” just scroll to the bottom and call the number.
I’ve also been asked for the first time in ages to come to the company’s office to do interviews.
What do you tell them on the phone? Are they prepared for just "Hi I want to apply for the $job position"? And do they have an answer besides "cool, use the website"?
They put their phone number there because they want you to call it. I say "I saw this position <position name> advertised on LinkedIn and I'm interested, is this still available?"
Last time I did this, they told me it was, but that they were at a late stage of interviewing, so I shouldn't bother applying for that one; they still took down my details and had other jobs that matched what I was looking for. Recruiters are salespeople, and you just reverse cold-called them, making their job easier. The majority of applications are AI bots and people who don't live in the country the job is listed in. By making a phone call you are at the top of the list of "most likely to be a legitimate applicant".
And when was this? I can't remember the last time anyone had their phone number publicly displayed on LinkedIn. And now messaging recruiters is a paid feature. The market's only making it more difficult to reach a human.
US here. When I've tried similar tactics in other contexts, I tend to get an answer along the lines of "cool, use the website", and politely trying to get me off the phone. But maybe it's worth a shot. :)
Yeah, it may be a cultural difference. The US has a huge fear of doxxing in the modern world. It can be traced back decades, to when a crazed fan murdered a celebrity in their home. Easily accessible firearms definitely don't help.
This even applies to businesses in some cases. You trying to walk in and talk to someone is a security threat compared to the times where you could do that and walk out with a job offer. US companies absolutely hate unsolicited calls from non-businesses.
>If our AI agents can't find a portfolio of original work nearly exactly what we want to hire you for
that'd be a huge issue for most candidates (and basically all top candidates), because "exactly what you want to hire them for" is probably not open source code to begin with.
>If you are one of the 1 in 4000 applicants who gets an interview then you're already 70% likely to get an offer and the interview is mostly a formality.
That has not been my experience at all in 2023/2024.
I thought that meant what you typically write in the "Experience" section. GP, am I wrong?
Is everyone writing a "Projects" section by rewording what they wrote in "Experience"?! For me, "Projects" should strictly be personal projects. If not, maybe that's what I'm missing.
I actually believe it would be possible to provide a read-only clone URL as a resume link, but I don't know of a way to make a link to a browsable version (short of a proxy server type of setup or, of course, a slim server protected by HTTP basic auth).
I'm saying the sections of the resume don't matter at all. The resume is basically ignored. You either have public code you can point to on Github or you aren't ever hearing from us.
I’m curious to hear a bit more about your rationale for this. Is it because trust is otherwise hard to establish between you and the candidate? Is it like “if we can’t see the candidate’s code then we have no evidence they can code”?
Essentially, yes. Public portfolios come in different flavors though. Most often it's code. But sometimes it's research, a blog, transcripts of talks ripped from YouTube.
That’s the reality for most people: creating many things under NDA, with tools watching for IP theft, so not a single line of code can leave the company. I know a guy who has a portfolio, but he’s a freelance web designer.
Tell this to the army of lawyers at my ex-ex-workplace. Every document I printed was reviewed by our security officer (I ran into him by accident months after I left and we had a chat).
My first read of this was that they made a joke (not wise when scheduling interviews, sure, but maybe funny) by intentionally responding that way.
This is because my brain couldn't fathom what is likely the reality here -- that someone was just pumping your email thru AI and pumping the response back unedited and unsanitized, and so the first thing you got back was just the first "part" of the AI response.
I'm with you. Looking at the way people respond online to things now since LLMs and GenAI went mainstream is baffling. So many comments along the lines of "this is AI" when there are more ordinary explanations.
Yeah I don't know about this specific situation, but as someone who is on the job market, is a good developer, but can come off as a little odd sometimes, I often wonder how often I roll a natural 1 on my Cha check and get perceived as an AI imposter.
That's a good point. The major LLMs are all tilted so much towards a weird blend of corpo-speak with third-world underpaid English speaker influence (e.g. "delve", from common Nigerian usage) that having any quirks at all outside that is a good sign.
Your perception of the reality is spot on. For this round I was hiring for entry level technical support and we had limited time to properly vet candidates.
Unfortunately, what we end up doing is making some assumptions. If something seems remotely fishy, like that “Memory updated” or a typeface change (ChatGPT doesn’t follow your text formatting when pasting into your email compose window), it raises a lot of eyebrows and very quickly leads to a rejection. There are other cases where your written English is flawless, but your phone interview indicates you don’t understand the English language nearly as well as your email/Indeed/etc. correspondence suggests.
Mind you, this is all before we even get to the technical knowledge part of any interview.
On a related hire, I am also in the unfortunate position where we may have to let a new CS grad go because it seemed like every code change and task we gave him was fully copy/pasted through ChatGPT. When presented with a simple code performance and optimization bug, he was completely lost on general debugging practices which led our team to question his previous work while onboarding. Using AI isn’t against company policy (see: small team with limited resources), but personally I see over reliance on ChatGPT as much, much worse than blindly following Stack Overflow.
A friend of mine works with industrial machines, and was once tasked with translating a machine's user manual, even though he doesn't speak English. I do, and I had some free time, so I helped him. As an example, I was given the user manual for a different but similar machine.
1. The manual was mostly a bunch of phrases that were grammatically correct, but didn't really convey much meaning
2. The second half of the manual talked about a different machine than the first half
3. It was full of exceptionally bad mistranslations, and to this day "trained signaturee of the employee" is our inside joke
Imagine asking ChatGPT to write a manual, except ChatGPT has Down syndrome and a heart attack, so it gives you five pages of complete bullshit. That was the real manual that shipped with a machine costing 100,000€ or so. And nobody bothered to proofread it even once.
I once worked in the US for a Japanese company that had their manuals "translated" into English and then sent on for polishing. Like the parent, it would be mostly "a bunch of phrases that were grammatically correct, but didn't really convey much meaning" . I couldn't spend more than an hour a day on that kind of thing; more than that and it would start to make sense.
It's not the solution itself that is interesting to me, it's first finding out whether the person can go through the motions of solving it. Like reading instructions, submitting solutions etc. It filters out those who can't code at all or who can't read instructions. A surprisingly large chunk. If the person also pipes the problem through an LLM, good.
To then select a good developer I'd test communication skills. Have them communicate what the pros/cons of several presented solutions are. And have them critique their own solution. To ensure they don't have canned answers, I might just swap the problem/solutions for the in-person bit. The problem they actually solved and how they did it isn't very important. It's whether they could read and understand the problem, formulate multiple solutions, describe why one would be chosen over another. Being presented with a novel problem and being asked on the spot to analyze it is a good exercise for developers (Assuming software development is the job we're discussing here).
Just take the time to talk to people. The job is about reading and writing human language more than computer programming. Especially with the arrival of AI when every junior developer is now micro managing even more junior AI colleagues.
> hiring market tightened up... that doesn't mean there isn't one
A tightened market is one thing; the absolute insanity of the recruitment process in the last couple of years, with AI now thrown into the mix, is really something to behold. Test these waters at your own peril.
yeah the name of the game now is just to avoid any company that has shitty recruitment. you can tell in an instant if they are worth your time or not, which I'd say in Canada is about 90% a waste of time.
someone who actually wants to hire will want you and wants to do whatever they can to get a good candidate.
Realistically it's just the blind leading the blind. People have forgotten that the interview process was designed to avoid false positives, and that the companies who were most selective were providing top-1% comp and had brands that could carry that weight. If you are Google and you are handing out $500K in RSUs on top of $300K+ in salary, you'd better damn well pick the right candidate...
For some random SMB in shipping or something to be bashing people over the head with 10 step-leetcode-full-panel-10-hour-systems-design interviews they just don't get it. For starters they probably don't even have the talent to properly evaluate the prospect. So who are they helping?
- For your sanity you want to make sure there aren't obvious signs of something being a ghost job, an H1B hire, an internal hire, or a farce because "we're always growing" (lying).
- expect longer processes. It hasn't happened to me, but 7+ stage interview processes are not uncommon these days, even outside of tech
- accept that some processes will be frozen right under your nose, especially with longer processes crossing quarters
- expect less respect in the process. It feels like they do not want you. You are expendable
- don't bother negotiating in this market. You get a number and take it unless you already have a job. Even then they may simply pass on you for someone more desperate. BTW, wages are being suppressed; you're probably not getting pre-2023 salaries right now.
Yeah... if you're not being abused at work, I'd just weather it out.
On our side we've transitioned to only in person interviews.
The biggest thing I've noticed is that take-home challenges have lost all value, since GPT can plausibly solve almost anything you throw at it, and the result doesn't give you any indication of how the candidate thinks.
And to be fair, I want a candidate that uses GPT / Cursor / whatever tools get the job done. But reading the same AI solution to a coding challenge doesn't tell me anything about how they think or approach problems.