Is there any way to simulate (maybe even interactively) things like focus and zoom? It would be cool to have some way to shift lenses (or lens groups) along the optical axis and visualize how light rays get projected onto the image plane.
That would be cool indeed! It's not really a focus of this project, though - and kinda complex, because it's all in Python. Only the rendering widget is in JS, and it just passively displays the input data it gets as JSON.
Check out this project[1], which kinda does that, although it's 2D only as far as I know. But it's fully interactive, which is super neat.
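For anyone curious what the simplest 1D version of that would look like, here's a minimal paraxial sketch - plain numpy and an ideal thin lens, not this project's actual code - that slides a lens along the optical axis and reports the blur spot a point source makes on a fixed image plane.

```python
# Minimal paraxial ("ABCD matrix") sketch, not this project's API: slide an
# ideal thin lens along the optical axis and watch where a fan of rays from an
# on-axis point source lands on a fixed image plane.
import numpy as np

def propagate(d):
    """Transfer matrix for free-space propagation over distance d."""
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    """Transfer matrix for an ideal thin lens of focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def blur_radius(lens_pos, f=50.0, sensor_pos=300.0, max_angle=0.05, n_rays=11):
    """Largest ray height on the sensor for a point source at the origin."""
    heights = []
    for angle in np.linspace(-max_angle, max_angle, n_rays):
        ray = np.array([0.0, angle])                  # (height, angle) at the source
        ray = propagate(lens_pos) @ ray               # source -> lens
        ray = thin_lens(f) @ ray                      # refraction at the lens
        ray = propagate(sensor_pos - lens_pos) @ ray  # lens -> image plane
        heights.append(abs(ray[0]))
    return max(heights)

# "Focusing": sweep the lens position; the blur collapses near the conjugate
# distance (about 63.4 units for f=50 and a sensor 300 units from the source).
for z in np.linspace(55.0, 75.0, 9):
    print(f"lens at {z:5.1f} -> blur radius {blur_radius(z):.3f}")
```

Sweeping lens_pos is the 1D analogue of dragging a focus slider; the blur radius bottoms out at the conjugate distance, and a lens group would just mean multiplying more matrices into the chain.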
A few of our mid-level engineers started sending in PRs with worse-than-usual code in the last few months. It all compiles, but:
- There are a bunch of missing edge cases and requirements (details that were discussed but never make it onto tickets)
- Sometimes we see completely useless codepaths: if statements and function calls that don't lead anywhere, or don't need to be called.
- We follow a few very specific patterns for code safety reasons, and suddenly it looks like those are being completely disregarded.
- Our integration tests started being super flaky.
After several one-on-ones with them, we realized they were using some AI tool. No clue which one.
I know that these tools are going to get better, but I fear that junior/mid-level developers are going to handicap their development if they use them like they do now.
Code quality suffers too. I'm also afraid that in the short to medium term, codebases are going to get _a lot worse_, introducing some very expensive tech debt.
On the bright side, I now know that I don't want to work at Gumroad. I already spend a huge amount of time reviewing PRs. I don't want to waste even more because someone didn't prompt an AI accurately enough. What a waste of time and resources.
The quality of a PR is a reflection of the author, not of the tools the author uses. The last thing an engineer should do before sending out a PR is to review it themselves. A PR is a work product with their name on it, and is a reflection of their ability. This was true before LLM tools, and should be true after.
The trouble with AI and junior engineers is that they tend not to have the skills to tell what is good or bad. It's the same problem we used to have with code taken from Stack Overflow with no understanding.
It's surprisingly hard to reliably catch either issue in code review. Indeed, it's hard to do code review that catches the majority of any class of problem.
I don't have any evidence for it, but I feel that watching someone (or something) perform a task has less pedagogical value than performing that task oneself.
Same for attention: it's easier to let your mind wander if you're, e.g., taking the back seat while pair programming.
AI tools also don't really "reason", do they? Even if you use a reasoning model, they perform the most statistically likely steps with the context and instructions that they're provided, so you lose that "deliberateness" that enables you to best understand the problem that you're solving.
> The last thing an engineer should do before sending out a PR is to review it themselves. A PR is a work product with their name on it, and is a reflection of their ability.
Right, but your understanding of a PR is different when you're reviewing your own work vs. someone else's, right? For me personally, I have to expend more effort reviewing someone else's work.
If you can see how all of this adds up, I hope you understand why AI ends up being more of a handicap than a tool.