Because as someone who interviews, I know you can use AI; anyone can. It makes it harder for me to judge the pitfalls and the design and architecture decisions that proper engineering roles require. Especially for senior and above applications, I want to assess how you think about problems, because that gives the candidate a chance to show their experience, their technical understanding, and their communication skills.
We don’t want to work with AI; we are going to pay the person for the person’s time, and we want to employ someone who isn’t switching off half their cognition when a hard problem approaches.
> No, not everyone can really use AI to deliver something that works
"That works" is doing a lot of heavy lifting here, and really depends more on the technical skills of the person. Because, shocker, AI doesn't magically make you good and isn't good itself.
Anyone can prompt an AI for answers; it takes skill and knowledge to turn those answers into something that works. By prompting AI for simple questions you don't train your skill/knowledge to answer the question yourself. Put simply, using AI makes you worse at your job - precisely when you need to be better.
"Put simply, using AI makes you worse at your job - precisely when you need to be better."
I don't follow.
Usually jobs require delivering working things. The better a worker knows his tools (like AI), the more he will deliver, and the better he is at his job.
If he cannot deliver reliably working things because he does not understand the LLM output, then he fails at delivering.
You cannot just reduce programming to "deliver working things", though. For some tasks, sure, "working" is all that matters. For many tasks, though, efficiency, maintainability, and other factors are important.
You also need to take into account how to judge if something is "working" or not — that's not necessarily a trivial task.
Completely agree. I'm judging the outputs of a process; the inputs to that process interest me only as a matter of curiosity.
If I can't tell the difference, or if the AI helps you write drastically better code, I see it as no more and no less than, for example, pair programming or using assistive devices.
I also happen to think that most people, right now, are not very good at using AI to get things done, but I also expect those skills to improve with time.
Sure, but the output of your daily programming work isn't just the code you write for the company. It's also your own self-improvement, how you work with others, etc. For the record, I'm not just saying "AI bad"; I've come around to some use of AI being acceptable in an interview, provided it's properly assessed.
> Sure, but the output of your daily programming work isn't just the code you write for the company. It's also your own self-improvement, how you work with others, etc
Agreed, but I as the "end user" care not at all whether you're running a local LLM that you fine tune, or storing it all in your eidetic memory, or writing it down on post it notes that are all over your workspace[1]. Anything that works, works. I'm results oriented, and I do care very much about the results, but the methods (within obvious ethical and legal constraints) are up to you.
[1] I've seen all three in action. The post-it notes guy was amazing though. Apparently he had a head injury at one point and had almost no short term memory, so he coated every surface in post-its to remind himself. You'd never know unless you saw them though.
I think we're agreeing on the aim—good results—but disagreeing on what those results consist of. If I'm acting as a 'company', one that wants a beneficial relationship with a productive programmer for the long-term, I would rather have [ program that works 90%, programmer who is 10% better at their job having written it ] as my outputs than a perfect program and a less-good programmer.
I take epistemological issue with that, basically, because I don't know how you measure those things. I believe fundamentally that the only way to measure things like that is to look at the outputs, and whether it's the system improving or the person operating that system improving I can't tell.
What is the difference between a "less good programmer" and a "more good programmer" if you can't tell via their work output? Are we doing telepathy or soul gazing here? If they produce good work they could be a team of raccoons in a trench coat as far as I'm aware, unless they start stealing snacks from the corner store.
There is also a skill in prompting the AI for the right things in the right way in the right situations. Just like everyone can use google and read documentation, but some people are a lot better at it than others.
You absolutely can be a great developer who can't use AI effectively, or a mediocre developer who is very good with AI.
> not everyone can really use AI to deliver something that works.
That's not the assumption. The assumption is that if you prove you have a firm grip on delivering things that work without using AI, then you can also do it with AI.
And that it's easier to test you when you're working by yourself.
I see this line of "I need to assess your thinking, not the AI's" thinking so often from people who claim they are interviewing, but they never recognize the elephant in the room for some reason.
If people can AI their way into the position you are advertising, then at least one of the following two things has to be true:
1) the job you are advertising can be _literally_ solved by AI
2) you are not tailoring your interview process properly to the actual job that the candidate will need to do, hence the handwave-y "oh well harder problems will come up later that the AI will not be able to do". Focus the interview on the actual job that the AI can't do, and your worries will disappear.
My impression is that the people who are crying about AI use in interviews are the same people who refuse to make an effort themselves. This is just a variation of the meme where you are asked to flip a red-black tree on a whiteboard, but then you get the job, and your task is to center a button with CSS. Make an effort and focus your interview on the actual job, and if you are still worried people will AI their way into it, then what position are you even advertising? Either use the AI to solve the problem then, or admit that the AI can't solve it and stop worrying about people using it.
Right now there’s a lot of engineering that falls outside SWE — think datacenter architecture or integrations, or security operations, or designing automotive assemblies. There are also dozens of components of a job that require context windows measured in _years_ of experience — knowing what will work at scale and what won’t, client interfacing, and communicating decisions down through organisations to the customer support networks that have to act on them.
When we’re hiring for my role, Security Operations, I can’t have someone googling or asking AI what to do during a cyber security incident, but they can certainly use AI as much as they want when writing automations.
I reject candidates at all stages for all sorts of reasons, but more and more candidates believe the job can be done with AI. If we wanted AI, we would probably go to it wholesale rather than pay the person asking for the job to do the typing for us.
We’re not crying due to AI, we’re crying over the dozens of lost hours of interviews we’re having to conduct where it’s business critical that people know their stuff — engineering positions with consequences (banks, infrastructure, automotive). There isn’t space for “well I didn’t write the code”.
>We don’t want to work with AI; we are going to pay the person for the person’s time
If your interview problems are representative of the work that you actually do, and an AI can do it as well as a qualified candidate, then that means that eventually you'll be out-competed by a competitor that does want to work with AI, because it's much cheaper to hire an AI. If an AI could do great at your interview problems but still suck at the job, that means your interview questions aren't very good/representative.
It’s more like “can you design something with the JWT flow”, and the candidates scramble to learn what JWT is during the interview to impress us, instead of asking us to remind them of the specifics of JWT. They then get it wrong and waste the interviewer’s time.
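For context, what a question like that is presumably after isn't memorised specifics, just the shape of the flow: sign a short-lived token once the user authenticates, verify it on every request. A rough sketch of that, assuming the PyJWT library; the secret, claims and 15-minute expiry below are made up for illustration:

    # Hedged sketch of a minimal JWT issue/verify flow using PyJWT.
    # The secret, claims and expiry are illustrative only; a real design
    # would also cover key rotation, audience checks and refresh tokens.
    import datetime
    import jwt  # pip install PyJWT

    SECRET = "replace-me"  # hypothetical HS256 shared secret

    def issue_token(user_id: str) -> str:
        """Issue a short-lived signed token once the user has authenticated."""
        now = datetime.datetime.now(datetime.timezone.utc)
        claims = {
            "sub": user_id,                               # who the token is about
            "iat": now,                                   # issued at
            "exp": now + datetime.timedelta(minutes=15),  # short expiry limits blast radius
        }
        return jwt.encode(claims, SECRET, algorithm="HS256")

    def verify_token(token: str) -> dict:
        """Verify signature and expiry on every request; raises if either fails."""
        return jwt.decode(token, SECRET, algorithms=["HS256"])

Knowing roughly that much, and asking the interviewer to fill in the specifics, is the kind of answer that would have been fine.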
Then they shouldn't use libraries, open source code or even existing compilers. They shouldn't search online (man pages are OK). They should use git plumbing commands and sh (not bash or zsh). They should not have potable water in their house but distill river water.
I'm currently finalising a Security Operations app, North (https://north.sh), that centralises triage for security alerts into an intuitive interface to better help Security Operations teams, MSSPs & SOCs.
It tries to deal with alert fatigue through de-duplication (customisable aggregation and correlation rules), manages and runs detection rules against different logging platforms (Elastic, Splunk and ALA/Azure) with Validation and Simulation testing, and aims to lower the time it takes to determine malicious activity by presenting as much relevant information per security alert as possible.
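To make the de-duplication idea concrete: this is purely a conceptual sketch, not North's actual rule format or code, and the aggregation fields and 30-minute window are invented; the point is just collapsing alerts that share an aggregation key within a time window.

    # Conceptual illustration only: de-duplicate alerts by an aggregation key
    # within a time window. Field names and the window size are made up.
    from collections import defaultdict
    from datetime import timedelta

    WINDOW = timedelta(minutes=30)             # hypothetical correlation window
    AGG_FIELDS = ("rule_id", "host", "user")   # hypothetical aggregation key

    def deduplicate(alerts):
        """Collapse alerts sharing the aggregation key within the window.

        Each alert is a dict with a datetime 'timestamp' plus the key fields.
        """
        groups = defaultdict(list)
        for alert in sorted(alerts, key=lambda a: a["timestamp"]):
            key = tuple(alert.get(f) for f in AGG_FIELDS)
            bucket = groups[key]
            if bucket and alert["timestamp"] - bucket[-1]["timestamp"] <= WINDOW:
                bucket[-1]["count"] += 1       # fold into the existing aggregate
            else:
                bucket.append({**alert, "count": 1})
        return [agg for bucket in groups.values() for agg in bucket]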
Hoping to launch sometime before the end of the year. If you're interested, I'm always free to talk via alex@sinn.io, or you can sign up to the newsletter.
Hey Team, the SigmaHQ team and I have been working over the last 11 months, and we're finally happy to release a brand new documentation suite and website to help bring more Security & Detection engineers to Sigma and the benefits of its ecosystem.
Please let us know what you think & feel free to ask any questions!
Any plans to add more backends to pySigma or to reach parity with sigmac? How about support to convert to Sigma instead of just from it? It would be a great way to share intel.
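For anyone following along, a pySigma "backend" is the piece that turns a Sigma rule (YAML) into a query for a specific SIEM. A rough sketch of that conversion, assuming the pysigma and pysigma-backend-splunk packages; the rule itself is a trivial made-up example:

    # Convert a minimal (made-up) Sigma rule into a Splunk query with pySigma.
    # Assumes: pip install pysigma pysigma-backend-splunk
    from sigma.collection import SigmaCollection
    from sigma.backends.splunk import SplunkBackend

    RULE = r"""
    title: Suspicious whoami execution
    logsource:
        product: windows
        category: process_creation
    detection:
        selection:
            Image|endswith: '\whoami.exe'
        condition: selection
    """

    # Prints a list containing one Splunk search string for the rule above.
    print(SplunkBackend().convert(SigmaCollection.from_yaml(RULE)))

The "convert to Sigma" direction being asked about would be the reverse: taking an existing vendor query and emitting the YAML.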
I see random GitHub repos with Sigma rules pop up; it would be nice if you guys came up with a community repo anyone can dump into without going through your PR process (think AlienVault OTX but for Sigma).
It's also not clear on Nextron Systems' website whether they offer paid/private/supported rules to compete with the likes of socprime.
Australia-based, community-focused SaaS platform for sports teams, communities and societies. Looking for a co-founder in business/marketing or PHP/Vue development. Email hn@platformapp.io for more info.