> It's common for engineers to end up working on projects which they don't have an accurate mental model of. Projects built by people who have long since left the company for pastures new. It's equally common for developers to work in environments where little value is placed on understanding systems, but a lot of value is placed on quickly delivering changes that mostly work. In this context, I think that AI tools have more of an advantage. They can ingest the unfamiliar codebase faster than any human can, and can often generate changes that will essentially work.
Reason: you cannot evaluate the work accurately if you have no mental model. If there's a bug stemming from the system's unwritten assumptions, you may not catch it.
Having said that, it also depends on how important it is to write bug-free code in the given domain, I guess.
I like AI particularly for greenfield stuff and one-off scripts, as it lets you go faster there. Basically, you build up the mental model as you're coding with the AI.
Not sure whether this breaks down at a certain codebase size, though.
Just anecdotally - I think your reason for disagreeing is a valid statement, but not a valid counterpoint to the argument being made.
So
> Reason: you cannot evaluate the work accurately if you have no mental model. If there's a bug stemming from the system's unwritten assumptions, you may not catch it.
This is completely correct. It's a very fair statement. The problem is that a developer coming into a large legacy project is in this spot regardless of the existence of AI.
I've found that asking AI tools to generate a changeset in this case is actually a pretty solid way of starting to learn the mental model.
I want to see where it tries to make changes, what files it wants to touch, what libraries and patterns it uses, etc.
It's a poor man's proxy for having a subject matter expert in the code give you pointers. But it doesn't take anyone else's time, and as long as you're not just trying to dump the output into a PR, it can actually be a pretty good resource.
The key is not letting it dump out a lot of code, and instead using it for directional signaling.
e.g. prompts like "Which files should I edit to implement a feature that does [detailed description of feature]?" or "Where is [specific functionality] implemented in this codebase?" have been real timesavers for me.
The actual code generation has probably been a net time loss.
> I've found that asking AI tools to generate a changeset in this case is actually a pretty solid way of starting to learn the mental model.
This. Leveraging the AI to start developing the mental model is an advantage. But using the AI is a non-trivial skill set that needs to be learned. Skepticism of what it's saying is important. AI can be really useful, just like a 747 can be useful, but you don't want someone picked off the street at random flying it.
Got up to the TL;DR paragraph. This was a major red flag given the initial presentation of the discovery of a bottleneck:
> When a NOTIFY query is issued during a transaction, it acquires a global lock on the entire database (ref) during the commit phase of the transaction, effectively serializing all commits.
Am I missing something? This seems like something the original authors of the system should have done due diligence on before implementing a write-heavy workload.
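To be concrete, the pattern being described is roughly this; a sketch assuming node-postgres, with the table, channel, and function names made up:

```
// Hypothetical write path: every transaction issues a NOTIFY, so every
// COMMIT has to take PostgreSQL's single global notification-queue lock,
// which serializes otherwise-concurrent commits under a write-heavy load.
const { Pool } = require('pg');
const pool = new Pool();

async function insertAndNotify(payload) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    await client.query('INSERT INTO events (payload) VALUES ($1)', [payload]);
    // Cheap to issue here, but the cost is paid at commit time.
    await client.query("NOTIFY events_channel, 'new event'");
    await client.query('COMMIT');
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}
```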
I think it's just difficult to predict how heavy is heavy enough to make this a problem. FWIW I had worked at a startup with a much more primitive data storage system where serialized commits were actually totally fine. The startup never outgrew that bottleneck.
If “doing due diligence” involves reading the source code of a database server to verify a design, I doubt many people writing such systems do due diligence.
The documentation doesn’t mention any caveats in this direction, and they had 3 periods of downtime in 4 days, so I don’t think it’s a given that testing would have hit this problem.
Conversely, this is exactly why I believe LLMs are sentient (or conscious, or what have you).
I basically don't believe there's anything more to sentience than a set of capabilities, or at the very least there's nothing beyond that which I should give weight to in my beliefs.
Another comment mentioned philosophical zombies; another way to put it is that I don't believe in philosophical zombies.
But the only evidence I have for not believing in philosophical zombies is people displaying certain capabilities that I can observe.
Therefore I should not require further evidence to believe in the sentience of LLMs.
Question: why would you do this in the current year? Is it that much more performant? I might be ignorant, but frameworks seem to be the lingua franca for a reason: they make your life much easier to manage once set up!
Well, one nice thing about doing things this way is that it'll still be as viable in five years as it is today.
I'm sitting on two UIs at work that nobody can really do anything with, because the cutting-edge, mainstream-acceptable frameworks they were built in are now deprecated and very difficult to even reconstruct with all the library churn. As a result, they're effectively frozen: we can't practically tweak them without someone dedicating a week just to put all the pieces back together enough to rebuild the system... and then that week of work largely has to be done again in a year if we have to tweak them again.
Meanwhile the little website I wrote with just vanilla HTML, CSS, & JS is chugging along, and we can and have pushed in the occasional tweak to it without it blowing up the world or requiring someone to spend a week reconstructing some weird specific environment.
I'm actually not against web frameworks in the general sense, but they do need to pull their weight, and I think a lot of people underestimate their long-term expense for a lot of sites. Doing a little bit of JS to run a couple of "fetch" commands is not that difficult. It is true that if you start building a large enough site, you will eventually reconstruct your own framework out of necessity, and it'll be inferior to React, but there are a lot of sites under that "large enough" threshold.
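For the kind of small site I mean, the vanilla version really is about this short; a sketch where the /api/items endpoint and the item-list element are made-up examples:

```
// Minimal vanilla-JS data fetch + render: no framework, no build step.
async function loadItems() {
  const response = await fetch('/api/items');
  if (!response.ok) throw new Error(`Request failed: ${response.status}`);
  const items = await response.json();

  // Re-render the (hypothetical) list element from the JSON payload.
  const list = document.getElementById('item-list');
  list.replaceChildren(...items.map(item => {
    const li = document.createElement('li');
    li.textContent = item.name;
    return li;
  }));
}

document.addEventListener('DOMContentLoaded', loadItems);
```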
Perhaps the best way to think about it is that this is the "standard library" framework that ships in the browsers, and it's worth knowing about so that you can analyze when it is sufficient for your needs. Because if it is sufficient, it has a lot of advantages. If it isn't, then by all means go and get something else... again, I'm definitely not in the camp of "frameworks have no utility". But this should always be part of your analysis because of the unique benefits it has that no other framework has, like its 0KB initial overhead and generally unbeatable performance (because all the other frameworks are built on top of this one).
Because of the ridiculous complexity in their setup and operation, their glacially slow performance if it is actually written in JavaScript and not a non-web platform language that compiles to native code, and their extreme volatility and instability/pace of deprecation.
The benchmarks I've seen actually show web components being slightly slower than the best frameworks/libraries.
The idea is: no build steps initially, minimal build steps later on, no dealing with a constant stream of CVEs in transitive dependencies, no slowdown in CI/CD, much more readable stack traces and profiling graphs when investigating errors and performance, no massive node_modules folder, etc. Massive worlds of complexity and security holes and stupid janky bullshit all gone. It will probably also be easier to serve as part of the backend API rather than requiring a separate container/app/service just running Node.js, and it can probably just be served as static content; at the least, the lack of Node build steps should make that more feasible.
It's a tradeoff some people want to make and others don't. There isn't a right and wrong answer.
Frameworks can help in many cases, especially for complex or high-scale apps. But in simple CRUD apps, particularly in consulting environments, they often add more complexity than value.
I've seen this play out repeatedly. A company needs a basic CRUD system. Consultancy A brings in designers who produce “wireframes” (usually full-featured eye candy) and UX-centric “journeys.” To justify its bloated fees, the consultancy builds something super-slick using frameworks to match the designs. But once the budget runs out, the customer struggles to maintain it. Eventually, the app becomes unmanageable.
So the consultancy sends in a designer again to produce new mockups and “journeys.” The cycle repeats, this time with another framework.
For example: I have a fairly complex app, at least for a side project, but so far I've managed to keep everything without a single external dependency or any build tool. Now I have to convert it all to React, and I'm not very happy about that. I'll have to add a ton of tooling, and, most importantly for me, I won't be able to make any changes without having the tooling set up beforehand.
Mostly for the virtue signaling. There's something to be said for going with the fundamentals, but people who loudly preach fundamentals tend to be, well...
Frameworks are fine when you just need to get a job done quickly. And they're ideal when you're in a typical corporate code factory with average turnover, and for the same reasons Java is ideal there.
React is the new Java.
But when you need something the framework can't provide, good luck. Yes, high performance is typically one of those things. But a well-engineered design is almost always another, costing you maintainability and flexibility.
This is why you almost always see frameworks slow to a crawl at a certain point in terms of fixing bugs, adding features, or improving performance. I'd guess React did this around like 2019 or so.
Overly hated on and conflated with frameworks, yes.
React requires very little from you; even less if you don't want to use JSX. But because Facebook pushes puzzlingly heavy starter kits, everyone thinks React needs routers or NextJS. What barebones React is, at the end of the day, is hardly what I'd call a framework, and it's well suited to little JS "islands."
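To illustrate what I mean by barebones, JSX-free React, here's a sketch assuming React 18's UMD builds are loaded from plain <script> tags; the counter-root element ID is made up:

```
// A single React "island" mounted onto one element of an otherwise
// plain HTML page: no JSX, no bundler, no router.
const { createElement, useState } = React;

function Counter() {
  const [count, setCount] = useState(0);
  return createElement(
    'button',
    { onClick: () => setCount(count + 1) },
    `Clicked ${count} times`
  );
}

// Everything outside this element stays ordinary HTML/CSS/JS.
ReactDOM.createRoot(document.getElementById('counter-root'))
  .render(createElement(Counter));
```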
Seems to be the referenced paper?
If so, it was previously discussed here: https://news.ycombinator.com/item?id=43697717