Good for you for trying to do right by your team, but oof. An entirely junior team with no tech leadership is going to have problems beyond mentorship.
For reference, 14 yoe and currently in management.
Today, I don't think the tools are good enough to make a material difference. It may help a bad engineer tread water, but it won't take you from good to great. It may save you time writing basic boilerplate and individual functions, but I suspect 99% of engineers don't struggle with that. What's hard about our jobs is knowing how to orchestrate the whole thing and put structure around complexity. AI can't do that yet.
When I use it personally, it feels like a harder context switch: describing in English what I already know how to code, then still having to review the output to make sure it's accurate. It often feels like a waste of time.
Whenever the AI gets better, we'll have to use it to be productive, I have no doubt. But the pool of engineers will change too: there will be a category of engineers who can't debug the AI's output but who can still write crazy prompts.
Maybe I'm old, but I'll only be worried about AI when it can write and maintain a full app with no human intervention.
I wish this was the stance that companies I'm applying to would take.
Words from a recruiter at a company I won't name for now:
"Unfortunately, at this time, we do not offer a take-home test option to our candidates. It is definitely something under discussion, and we will continue to evaluate this as we scale. The decision stems from a couple of our leaders who have had unfortunate experiences in the past with candidates who used outside resources to complete their tests, which has given them concern in allowing this as an option moving forward."
And then, once I was rejected (presumably for continuing to push the take-home option and framing it as a disability accommodation), they added:
"I know we talked about adjusting the process with you for your preferences to do a take home test in lieu of live coding, or at least have you speak with a hiring manager before doing the live coding which is what we would be able to do if the team had interest in moving forward. However, the team did reach the conclusion that if doing the live coding wasn't something you were going to be interested in/had general trepidation around, would they really be getting a great read of your skillset if it's not something you're jazzed about?"
I hate how much employers seem to not want to evaluate candidates based on real conversations and instead rely on arbitrary assessments that don't map to the real-world day-to-day work.
Larger companies are sophisticated enough to handle accommodation requests. Smaller companies use them as a way of answering the implicit question of "Is this potential employee going to be litigious?"
My advice: if the company is smaller, wait to ask about accommodations until after gaining employment and showing that you're an asset.
> not want to evaluate candidates based on real conversations
The problem is bias. Study after study has shown that those "casual chat" interviews are worse than useless at measuring anything at all.
Kahneman's book, Noise, has entire chapters on this problem. The only solutions that empirically seem to work are a) interview panels and b) pre-defined standard rubrics with clear evaluation criteria.
Defining those rubrics is hard and the results aren't perfect. But when done well, you can get up to about a 70% correlation with on-the-job performance. Nobody is known to have achieved better.
Everything you mention seems orthogonal to whether a coding evaluation is given as take-home or live.
I’m glad to have a vigorous discussion about code I wrote during my own time. Go ahead and create a standard rubric that covers the project itself and the follow-up discussion. This is what I’ve done when hiring. It’s great because it demonstrates the employer knows what they’re looking for from the role, and that the team has sufficient experience to conduct a conversation in the relevant domains.
When I hear there’s a minimal number of interviews, and the main one is an intense live leetcoding session, I tell them I have no interest but if anything changes on their end I’ll be glad to provide sample work and have a discussion about it. The problem is these live sessions are extremely draining to prep for, provide no gain for the candidate (unlike writing code that can be retained), and they reveal practically nothing about the company.
This is the big one, for me personally at least. I spend most of my time writing C# and Vue. When I need to write Python, or React, or Go, which happens from time to time, it will take me 10 minutes of back-and-forth with ChatGPT instead of an hour, or multiple hours, looking up tutorials and just figuring out what I even need to Google to find what I'm looking for.
I've tried using ChatGPT for my strengths and sometimes it helps for minutiae but for the most part it's faster for me just to write the code.
I mean, even for stuff I know really well it can help.
You just have to get used to the way it "thinks" and how it "understands" your request in order to write better prompts. You can even get away with sending it half-incomplete sentences, if you really know what matters to it, to save even more time.
And of course, if someone won't start using it regularly, they'll never reach the point where it's faster to ask it for a function or a script than to write it themselves.
In these kinds of sessions, are you using a ChatGPT-style tool or something like GitHub Copilot? I haven't started using these tools, but it does sound like there might be a significant benefit in some use cases.
I guess I am a little biased against the AI tools, as I've made a successful 34-year career in software development without using them, but I'm also aware that the world can change overnight.
I use both (ChatGPT 3.5 or 4, and Copilot); they complement each other IMO. (I also tried Chat Copilot, which is awful and offers the worst of both worlds.)
Copilot's efficiency is directly linked to the readability of your code and the quality of your comments, so if you have a messy file, it can be better to ask ChatGPT in natural language. ChatGPT is also the better choice when important code related to what you're writing is spread across a lot of files, because Copilot won't necessarily take everything into account.
On the other hand, Copilot is better for one-liners, small functions, and boilerplate, while ChatGPT can often do more complex stuff on the first try if your prompt is good enough and you don't need it to call anything created after 2021. ChatGPT can also sometimes be useful for debugging.
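To illustrate the comment-quality point: a descriptive comment like the one below is the kind of prompt Copilot tends to complete well. This is a hypothetical example; the function body here is hand-written to show the sort of completion you'd typically get back, not actual Copilot output.

```python
# Parse "key=value" pairs from a semicolon-separated config string,
# ignoring empty segments and trimming whitespace around keys and values.
def parse_config(raw: str) -> dict[str, str]:
    pairs = {}
    for segment in raw.split(";"):
        segment = segment.strip()
        if not segment or "=" not in segment:
            continue  # skip empty or malformed segments
        key, value = segment.split("=", 1)
        pairs[key.strip()] = value.strip()
    return pairs

print(parse_config("host = localhost; port=8080;; debug = true"))
# {'host': 'localhost', 'port': '8080', 'debug': 'true'}
```

The precise comment is doing the real work: it pins down the delimiter, the edge cases, and the trimming behavior, which is exactly the context a messy file fails to provide.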
I'd say I autocomplete lines or functions a few dozen times per hour with Copilot, and ask ChatGPT a question or two every hour.
One more thing to take into account: if you've been a professional for 34 years, it seems likely that you don't work with the latest popular language or framework. OpenAI's models are an order of magnitude worse at languages less popular than Python or JS, because they had less training data for less popular/older languages.
To the contrary, the tools already make a material difference. I wrote 10k lines of code in the last 3 weeks--maybe 60% of that was generated by ChatGPT. Sure, I could have done it myself, but if it wasn't for AI my hands would have literally fallen off.
Instead of painstakingly reviewing the output of AI, just write a bunch of tests. It's something you would do regardless.
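A minimal sketch of that approach, using a hypothetical `slugify` helper as a stand-in for AI-generated code: rather than reading every line of the output, pin down the behavior you actually need with assertions.

```python
import re

# Stand-in for an AI-generated helper (hypothetical example).
def slugify(title: str) -> str:
    """Lowercase, drop non-alphanumerics, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Instead of line-by-line review, assert the contract you care about.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces   everywhere  ") == "spaces-everywhere"
assert slugify("Already-Slugged") == "already-slugged"
assert slugify("") == ""
```

If the generated code passes the contract you'd have written anyway, the review burden drops to skimming for anything the tests don't cover (performance, security, weird dependencies).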
Hopefully it will eliminate all the boring shit like managing JIRA, giving status updates, and following up with communication tasks. Another large part of my day is also troubleshooting random things, so hopefully AI will benefit my team before I have to get engaged.
I don't think AI poses a risk when it comes to setting engineering priorities and building the roadmap. If it could do that, it could probably just build the entire system anyway.
EMs are there for the human aspect of engineering, so I also doubt it will impact hiring or EM-engineer ratios.
I do expect the bar to being an EM to be higher as the job will be more technical and less project management.
Correct. It can only help by analyzing JIRA and surfacing what could be important; human review would still be required. And it can't do anything if people aren't responding to communication tasks.
When you say 'analytics database', what kind of performance are you implying? Massive queries that respond in 10min? How tuned were things for the queries you were running?
I'm currently working through an analytics architecture and I'm having to defend against "why aren't you using Postgres?" whenever I bring up OLAP DBs.
What process is going to help senior contractors understand the codebase they've been in for 2+ years? We can create spike tickets all day, but at a point, that's just a distraction.
My point is: every single item on your list is fixable. Don't attempt to fix every one; start small and create a roadmap and timeline. Hold people accountable, especially contractors. You'd be amazed how quickly things get better when you hold people accountable.
So sorry, that was autocorrect on the phone. I meant SDLC, the software development life cycle, and most probably the project management methodology too. The symptoms you describe are synonymous with a disjointed organization that lacks good processes and communication. You are correct to bring it up to management, and you are definitely not the problem.
Splitting teams with interconnected and related work, I do something like https://agilesquads.org/. For each sprint, we'll have a planning cadence where we scope 2 weeks of work e.g. Feature X and Feature Y. Each squad would get assigned a feature and they're able to focus on delivering that feature. The benefit of this over a real "split" is that you can have both "teams" working on the same roadmap/project/feature and/or change how many engineers are working towards a single feature. When you include "squad leads", it's also great for career development and leadership.
On product work, we do an oncall rotation of 1 engineer per week that triages issues and handles prod outages. This may solve for your 'help desk' type work.