I wouldn’t say never. I spent the first 10 years of my career loving the craft of coding. Then I moved up in seniority, and naturally my focus and priorities had to shift. Even before AI I didn’t code that much, focusing more on design, planning, reviewing, firefighting, and team leadership (still an IC).
One exciting thing about AI is that when I have an idea and can visualize it, instead of writing a ticket that sits in the backlog, I can use AI to vibe it up with just a couple hours of attention I can spare. Sometimes it works, sometimes it doesn’t. But it’s fun and satisfying to get more shit done and scratch the same builder-and-solver itches in my 10% time.
And you don’t blame humans anyway lol. Everywhere I’ve worked has had “blameless” postmortems. You don’t remove human review unless you have reasonable alternatives, like high test coverage and other automated checks.
Modern science encourages publishing non-surprising results.
I’ve also seen my manager LARP as an engineer by asking a model to generate a best-practices doc for a service repo without supplying any additional context. So this sort of paper helps discourage that behavior.
Reminds me of https://slatestarcodex.com/2020/05/28/bush-did-north-dakota/