One way that I could imagine a human-only HN could evolve in the coming AI wasteland: motivated individuals join small local groups and are validated face-to-face at meet-ups. Local trusted leads gatekeep their chapter’s posts, and this scalable moderation works up the tree. Bad leaves get culled reasonably fast, and maybe there are controls at the top level that let you see more content “lower down the tree” if you’re ok with lower SNR. Latency to get a post widely distributed grows, but I don’t see that as a massive problem.
> coming AI wasteland: motivated individuals join small local groups and are validated face-to-face at meet-ups. Local trusted leads gatekeep their chapter’s posts, and this scalable moderation works up the tree. Bad leaves get culled out reasonably fast,
I've been thinking the same. One way to moderate is to bring back physical consequences.
I'd also like to see an "Order of the White Lotus" community (or Fight Club if you prefer) where people who collectively agree not to use AI against each other can come together. They can still use AI (e.g. out of necessity), just not against other members knowingly.
I suspect that whatever form it takes, the incentive to hack your way in and pollute the space will be very high. So the more successful the community becomes, the harder it will be to keep in order.
In my recent experience, local meetups and groups are unexpectedly prone to self-promotion and low-effort spamming.
Local groups have a problem where members admit their friends or pressure others into inviting their friends who are not a net positive, but it feels too impolite to refuse or to kick someone out. Meeting someone in person also develops a sense of a social bond that makes it harder to downvote or flag their posts.
Local groups have always been a haven for affinity fraud, too. Running a scam is easier when you can smile, be charismatic, and pretend to be a personal friend before springing your ask on your victims.
This sounds like a failure of leadership. Our coding meetups already implement what the GP suggested [0], and we also enforce our written guidelines (in this case, politely removing the bad eggs).
Fully agree: I believe my decades of software engineering experience definitely help me fly LLM tools better than less experienced folks.
But the much more interesting question to me: as LLM coding becomes the norm, does it drive the cost of self or small-company generated software to 0?
Like many SW architects/engineers my not-so-developed work-in-retirement plan is to assemble a small team of people I’ve loved working with over the years, start an LLC, and try to make a reasonable (not posh) living doing what we love: making software to solve problems.
On the one hand, it’s clear LLM coding can accelerate and amplify our efforts; on the other, many people claim there’s no possibility of a moat and that your solution/innovation can be cloned in a matter of days, i.e. the value of your software is exactly 0.
Not sure which future will be closer to reality. A backup plan that seems reasonable in the 0-value case is to focus our effort on creating actual physical gadgets and systems in the embedded realm, which conceivably can be designed and prototyped by a small team… It seems like these would still be valuable.
My team has experienced this over the past 6 months for sure.
The core of the article is “AI-assisted development potentially short-circuits this replenishment mechanism. If new engineers can generate working modifications without developing deep comprehension, they never form the tacit knowledge that would traditionally accumulate. The organization loses knowledge not just through attrition but through insufficient formation.”
But is it possible this phenomenon is transient?
Isn’t part of the presumed value-add of LLM coding agents in the meta-realm around coding? E.g., well-structured human+LLM-generated code (greenfield in particular) will be organized in such a way that the human won’t have to develop deep comprehension until it’s needed (say, for a bug fix or optimization), and then only for a working set of the code, with the LLM bringing the person up to speed on the working set in question and also providing the architectural context to frame that working set properly.
In my view, current LLMs still produce far too much bloat and too many unclean solutions when not targeted at very specific issues/features, which makes LLMs essentially a requirement for any debugging or feature work for the lifecycle of the product/service.
After a multi-decade career that spanned what is rapidly seeming like the golden age of software development, I have two emotions: first, gratitude; second, a mixture of resignation, maudlin reflection, and bitterness that I am fighting hard to resist.
As someone who’s always wanted to “get home and code something on my own”, I do have a glimmer of hope that I wonder if others share. I’ve worked extensively with Claude and there’s no question I am now a high velocity “builder” and my broad experience has some value here. I am sad that I won’t be able to deeply look at all the code I am producing, but I am making sure the LLM and I structure things so that I could eventually dig in to modules if needed (unlikely to happen I suppose).
Anyway, my hope/question: if I embrace my new role as fast system builder and am creative in producing systems that solve real problems “first”, is there a path to making that a career (i.e. 4 friends and I cranking out real production software that fills a real niche)? There must be some way for this to succeed; I am not yet buying the “everything will be instantly copyable and so any solution is instantly a commodity” argument. If that’s true, then there is no hope. I am still in shape, though, so going pro in pickleball is always an option, ha ha.
Unfortunately you aren't a high velocity builder. The velocity curve has now shifted and everyone having Claude blast out loc after loc is now a high velocity builder. And when everyone is a high velocity builder...nobody is.
Fair point, but my hope is that the creativity involved in deciding what to build, with the choice informed by engineering experience (the project/value will not be obvious to everyone) will allow differentiation.
"creativity involved in deciding what to build, with the choice informed by engineering experience (the project/value will not be obvious to everyone) will allow differentiation."
How? Anyone upon seeing your digital product can just prompt the same thing in no time. If you can prompt it, I can prompt it and so can a million other people.
Nobody, whether an individual or a business, holds any uniqueness or advantage. All careers and skill sets are leveled and worthless. Implementation skills are worthless. Creativity is worthless.
Agree on data value, but as mentioned above, I am not yet buying the “everything will be instantly copyable and so any solution is instantly a commodity” argument. A CRUD web app, sure; something with significant back-end complexity or a multi-service, systems-level solution, not so much. Perhaps optimistic, admittedly. Cheers.
Thanks very much for this awesome write up! It’s detailed labor-of-love work like this that helps others (like me!) make great jumps in learning. So appreciated.
Politics aside, according to a pretty comprehensive study (118 missions) it does seem that SpaceX is much more efficient than NASA [1]. Data like this would suggest privatization of space missions is a good idea. Maybe this conclusion is biased somehow, or perhaps the purpose of a dedicated govt org is different in some way that justifies its budget and scope despite the difference in efficiency?
Efficiency is important for public institutions, but it is not the highest priority. The highest priority is public service. These institutions should have the public good as their north star, not shareholder value.
No, the public is the customer. The difference is between the postal service being profitable and not losing your packages. If not losing packages costs too much for them to be profitable, then the public would want them to operate at a loss but make sure all packages arrive safely.
They compare cost, speed-to-market, schedule, and scalability, but it looks like they ignore failed launches and consider all missions successful?
I couldn't find a comparison of the number of launch failures between the two, my recollection is that this happened a lot more often in SpaceX rockets. But maybe that's included in the cost overrun figures and still puts SpaceX ahead by an order of magnitude.
I agree with the thesis of the paper, that platforms and incremental advances are more efficient and more economical. I don't quite agree that an incremental approach would have worked well for the NASA efforts in the 60s and 70s. Perhaps it should be considered as an option for these large organizations, but I'm not convinced it's always better.
Also, to do this study fairly, you would have to set up SpaceX to not benefit from any of the advances made by NASA for the decades beforehand. Some step-function style advances did happen under NASA supervision that benefitted the entire scientific community.
Also, it looks like the paper explicitly said it wasn't doing a public/private sector comparison so much as observing that SpaceX, doing repeatable stuff in LEO on short timelines, delivered without the cost overruns of NASA doing more complex one-offs over longer timelines, and concluding that, surprise surprise, the repeatable, incremental-improvement stuff had much better cost control than the deep-space science missions and space station enhancements.

Yes, if you look at the raw number of missions SpaceX has operated, most of them have been successful Falcon 9 launches, and most of those have been to deploy minisats to a standard design; its track record on these is excellent (including adding reusability). NASA's track record would look a lot better if it mostly launched satellite constellations to LEO too, and better still if it held off on planning anything in deep space, but that's not really what NASA is for.

If you look at SpaceX in terms of private programmes rather than missions, the Falcon 9 is outstanding and the Starlink minisats work, the Falcon Heavy seems fine, Starship has been going on a very long time (including work before the Starship name was coined, like the Raptor engine) and hasn't achieved anything useful yet, and the stated goal of going to Mars hasn't got off the drawing board. But they're very, very good at building and delivering significant improvements on the repeatable stuff that isn't NASA's focus.
Also, if you're doing a fair comparison between public and private sector you've got to consider all the launch startups that aren't SpaceX, including the ones that haven't successfully launched...
So just like for-profit health care customers must avoid rural regional hospitals hollowed out by VCs to be able to gain the better outcomes in a system hurdling toward catastrophe, space explorers must carefully avoid paths of a growing amount of space debris in a LEO system also hurdling toward catastrophe.
In both cases, I don't think the system works well without assuming spherical cows or something like that.
Edit: ah, I see it's "hurtling." Although I guess in both cases you have to dodge larger and larger geographic regions to claim success, so a bit like hurdling. :)
SOFA works just fine with marriage, just tweak the vows:
“… to have and to hold from this day forward, for better, for worse, for richer, for poorer, in sickness and in health, to love and to cherish, until I feel like this is done and want to move on. And done is when I say it’s done.”
I am a C/C++ dev learning Rust on my own, and enjoying it. I am finally starting to enjoy the jiu jitsu match with the compiler/borrow-checker and the warm “my code is safe” afterglow … but I have a question for the more experienced Rust devs out there, particularly in light of the OP’s observation about “lots of unsafe” in the Rust embedded realm (which makes sense).
If your Rust project leans heavily on unsafe code and/or many libraries that use lots of unsafe, then aren’t you fooling yourself to some degree; i.e. trusting that the unsafe code you write or that written by the 10 other people who wrote the unsafe libs you’re using is ok? Seems like that tosses some cold water on the warm afterglow.
> As long as the unsafe parts are safe, you can rest assured that the safe parts will be safe too.
That is not true. It is possible to have two pieces of validated unsafe code that are "safe" in isolation but, when used in the same codebase, create something unsafe. This is especially true in embedded contexts, where you are often writing code that touches fixed memory offsets and other shared globals like peripherals.
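A minimal sketch of how that can happen (hypothetical driver modules, with a plain `static mut` standing in for a memory-mapped peripheral register):

```rust
// Invariant each module relies on: it is the SOLE accessor of SHARED_REG.
static mut SHARED_REG: u32 = 0;

mod driver_a {
    // Sound in isolation: if driver_a is the exclusive user of the
    // register, this unsafe block upholds its own invariant.
    pub fn write(v: u32) {
        unsafe { crate::SHARED_REG = v; }
    }
}

mod driver_b {
    // Also "sound in isolation" -- but linking both drivers into one
    // program silently breaks the exclusivity each one assumed.
    pub fn read_modify_write(mask: u32) -> u32 {
        unsafe {
            crate::SHARED_REG |= mask;
            crate::SHARED_REG
        }
    }
}

fn main() {
    driver_a::write(0b0100);
    // driver_b's read-modify-write now clobbers state driver_a thinks
    // it owns exclusively, even though neither unsafe block is "wrong"
    // on its own terms.
    let v = driver_b::read_modify_write(0b0001);
    println!("{v}"); // prints 5
}
```

Each `unsafe` block passes review on its own; the bug only exists in the composition.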
In some cases you might have the excuse that, well, the unsafe element did say on the tin not to do this. For example, if I use Bob's "I need exclusive control of GPIOs 2, 3 and 6" unsafe code and also Kate's "I need exclusive control of GPIOs 1, 2 and 4" unsafe code, then it's my fault: they both told me their requirements, and the requirements clash.
But in general this is specifically a bug in the unsafe code. The Rustonomicon is very clear that it's not the safe code's fault when your unsafe code doesn't work. In the scenario with conflicting libraries, I guess it's the fault of whoever linked the conflicting libraries, but it's definitely never the safe code.
> Another way to see the benefit of this approach is that if you have a memory violation, then you only have to look in the unsafe blocks.
Not really. Safety is non-local. It is possible to break unsafe code by feeding it inputs from safe Rust that don't uphold the invariants that make the unsafe code sound. So it's not enough to look in the unsafe blocks; you have to consider all the contexts that invoke the unsafe code.
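A minimal sketch with a hypothetical `Buf` type: the `unsafe` block below is only sound because every *safe* method upholds the `len <= data.len()` invariant, so a bug in plain safe code elsewhere in the module can make it UB.

```rust
struct Buf {
    data: Vec<u8>,
    len: usize, // invariant: len <= data.len()
}

impl Buf {
    fn new(data: Vec<u8>) -> Self {
        let len = data.len(); // safe code establishes the invariant
        Buf { data, len }
    }

    fn last(&self) -> Option<u8> {
        if self.len == 0 {
            return None;
        }
        // SAFETY: relies on len <= data.len(). Nothing in this block
        // enforces that; if some safe method (say, a buggy
        // `set_len(&mut self, n)`) ever sets `len` too large, this
        // becomes UB even though the bug is nowhere near this block.
        Some(unsafe { *self.data.get_unchecked(self.len - 1) })
    }
}

fn main() {
    let b = Buf::new(vec![10, 20, 30]);
    assert_eq!(b.last(), Some(30));
}
```

Auditing `last()` alone tells you nothing; you have to audit every safe path that can touch `len`.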
>If your Rust project leans heavily on unsafe code and/or many libraries that use lots of unsafe, then aren’t you fooling yourself to some degree; i.e. trusting that the unsafe code you write or that written by the 10 other people who wrote the unsafe libs you’re using is ok? Seems like that tosses some cold water on the warm afterglow.
It's true that you have to trust your dependencies (unsafe or not). Not needing to trust that developers know what they are doing was never something a programming language could provide. We can only carve out some specific properties that we can machine-check in a limited way.
There are limits on what a type system can do (Rice's theorem, Gödel's incompleteness theorem), and in addition there are limits on what a non-dependent type system can do.
Therefore, you either need unsafe (something that adds operations that the type system doesn't model) or you can't write some perfectly OK programs.
Basically, the Rust type system is a toy model of your computer's abilities and the domain you want to model. And so is any other type system. The type systems of systems languages at least have some inkling of the actual machine--which is not necessarily the case in non-systems languages.
Ask a computer engineer what they think about this toy model's misconceptions: that reading and writing the same location via the memory bus affect the same thing, that reading the same memory location twice in a row on a single CPU is guaranteed to give you the same value, that reading a memory location can't change it, that writing to one memory location can't automatically change some other aliased location, that writing a memory location from CPU 1 means CPU 2 can immediately read the new value, etc. I could go on (memory barriers, cache coherency, paging, ...).
This is not specific to Rust.
I'm not sure why we are having new "unsafe" discussions lately. Java and .NET have unsafe as well. Didn't we already have that discussion around the year 2000, and everyone made their peace with it?
What changed? Are there new arguments?
If you want some empirical testing of whether your unsafe blocks are broken, run your program under Miri.
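For instance, a typical invocation looks like this (assuming rustup with a nightly toolchain; Miri ships as a rustup component):

```shell
# Install the Miri component on nightly
rustup +nightly component add miri

# Interpret the test suite, flagging UB such as out-of-bounds
# reads or invalid aliasing in unsafe blocks
cargo +nightly miri test
```

Caveat: Miri only checks code paths it actually executes, so its coverage of your unsafe blocks is only as good as your tests.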
Now you could say that you could just make better and better type systems that encompass everything as it really is. To that I say: (1) you can't do that in principle; (2) if you could, humans wouldn't be able to use it practically anymore; and (3) it would be too much effort for something that only a tiny minority of programs need in some places. The toy model is pretty good 95% of the time!
“List six habits you wish to adopt, assign them to different times of the day, and aim to consistently perform at least four.”
SIX? Um, how about we start with, like, one?
That aside, a concise article with good advice IMO, but I would add “find a partner and be accountable”, especially for eliminating addictive / tempting bad habits or replacing them with good ones.