Code quality no longer carries the same weight it did pre-LLMs. It used to matter because humans were the ones reading and writing it, so you had to optimize for readability and maintainability. These days what matters is that the AI can work with it and that you can reliably test it. Obviously you don't want code quality to go totally down the drain, but there's a balance to strike.
Optimize for consistency and a well thought out architecture, but let the gnarly looking function remain a gnarly function until it breaks and has to be refactored. Treat the functions as black boxes.
Personally, the only time I open my IDE to look at code is when something is mission-critical or very nuanced. For the rest, I trust my agent to deliver acceptable results.
Today Claude Code built several features and fixed a laundry list of bugs while I was at the movie theatre. When I came home I did about 15 minutes of reviewing its work and doing manual testing, found some issues, made a list and fired it off to Claude again before going to bed.
I’m not mentally taxed at all, in fact I’m excited to be building something 24/7 without sitting at my keyboard night and day typing out code and ruining my physical health.
When I reflect on the old days of coding (pre-2024), I have a hard time thinking about how many days of my life I spent manually coding away at the keyboard - it makes me queasy and uncomfortable realizing how much time I lost.
Tomorrow morning I’m going to go to the gym and have Claude bang out several more features while I’m exercising - and I’m stoked to review the results and keep the ball rolling.
I’m so happy. I can think about what actually matters and tackle hard problems that were otherwise bottle-necked by how fast I could type syntax correctly on the keyboard.
That’s exactly what we’re doing. Connect the agent to your GitHub issues, tell it to implement each one, and loop until it’s finished. There’s more nuance to it than that, but at a high level, yeah, that’s how some people are using it.
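The loop itself is simple. Here's a minimal TypeScript sketch of the idea, with the agent invocation abstracted behind a callback - `runAgent`, the `Issue` shape, and the retry policy are all illustrative assumptions, not a real API:

```typescript
// Sketch of "implement each issue, spin until finished".
// `runAgent` stands in for whatever actually drives the coding agent
// (e.g. shelling out to a CLI) — a hypothetical hook, not a real API.
type Issue = { number: number; title: string };

async function workThroughIssues(
  issues: Issue[],
  runAgent: (issue: Issue) => Promise<boolean>, // resolves true once the issue is done
  maxAttempts = 5, // cap retries so a stuck issue can't spin forever
): Promise<number[]> {
  const completed: number[] = [];
  for (const issue of issues) {
    for (let attempt = 0; attempt < maxAttempts; attempt++) {
      if (await runAgent(issue)) {
        completed.push(issue.number);
        break;
      }
    }
  }
  return completed;
}
```

The cap on attempts is the part people usually add after the first runaway loop; the "more nuance" mentioned above (review gates, CI checks between attempts) would slot in around the `runAgent` call.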
I’m also interested in what CS curriculums look like right now, and what students actually think of them. I suspect nothing has changed in terms of curriculum other than schools being more rigorous about “academic dishonesty,” like detecting ChatGPT-generated answers.
What I hope will change is fewer people going into the CS field on the promise of a high-paying career. That sentiment alone has produced an army of CRUD monkeys who will, over time, be eaten by AI.
CS is not a fulfilling career choice if you don’t enjoy it, and it’s not even that high-paying unless you’re well above average at it. None of that has changed with AI.
I think the right way to frame career advice is to encourage people to discover what they’re actually curious about and interested in - skills that can be turned into a passion, not just a 9-to-5.
Self-hosted uptime monitoring, configured entirely in TypeScript and built on Next.js and Bun.
No UI forms. No vendor lock-in. Just code in your repo.
Monitors, dashboards, alerts, incidents, and status pages — all defined as TypeScript files and Markdown, version-controlled alongside your application.
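To give a feel for the config-as-code approach, here's a hedged sketch of what a monitor definition might look like - the file path, field names, and schema are illustrative assumptions, not the project's actual API:

```typescript
// monitors/api.ts — illustrative shape only; the real schema may differ.
type HttpMonitor = {
  name: string;
  url: string;
  intervalSeconds: number; // how often to probe the endpoint
  expect: { status: number }; // the check fails unless the response matches
};

// In a real repo this would be exported and picked up by the runner.
const apiHealth: HttpMonitor = {
  name: "api-health",
  url: "https://example.com/api/health", // hypothetical endpoint
  intervalSeconds: 60,
  expect: { status: 200 },
};
```

Because it's just a typed object in your repo, the compiler catches malformed config before it ever deploys, and every change to an alert threshold shows up in a diff.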
You can one-click deploy it to Vercel, or run it anywhere you can run Node, Bun, or Docker.
Your monitoring config lives in your repo like everything else.
Want to know what's monitored and why? Read the code.
Need to review a change? It's in the PR. Need to roll back a bad alert? Git revert.
No more clicking through dashboards wondering who changed what and when.
Built with Next.js, Drizzle ORM, and Bun. Runs with SQLite for simplicity or PostgreSQL for production.
Fully open source and ready to use today.
Would love to hear what you think and what features you'd want to see next. If you find it useful, leave a star on GitHub!