They brought up some performance-related edge case that I've never run into, even with extremely heavy usage, including building my own agent that wraps around CC and runs several sessions in parallel... So yeah, I failed to see the relevance.
I don't think this is true. It's just much easier to bring people in when you have access to them and can ask directly, basically.
In my short experience in public service, I met a great number of people who were not in lockstep with the so-called "values they try to force" (i.e. the political plans of the current government), so it seems they're not doing a great job of "forcing" those values if that's the plan.
My great concern with regard to AI use is that it's easy to say "this will not impact how attentive I am", but... that's an assertion that one can't prove. It is very difficult to notice a slow-growing deficiency in attentiveness.
Now, is there hard evidence that AI use leads to this in all cases? Not that I'm aware of. But there's also no easy way to tell the difference between "I don't think this is impacting me, but it is" and "it really isn't".
It comes down to two unevidenced assertions - "this will reduce attentiveness" vs "no it won't". But I don't feel great about a project like this just going straight for "no it won't" as though that's something they can assert with high confidence.
you have decades of experience reviewing code produced at industrial scale to look plausible, but with zero underlying understanding, no mental model, and no reference to ground truth?
glad I don't work where you do!
it's actually even worse than that: the learning process that produces it doesn't care about correctness at all, not even slightly
the only thing that matters is producing plausible-enough-looking output to con the human into pressing "accept"
(can you see why people would be upset about feeding output generated by this process into a security critical piece of software?)
The statement that correctness plays no role in the training process is objectively false. It's untrue for text LLMs, and even more so for code LLMs. What would be correct is that the training process and the architecture of LLMs cannot guarantee correctness.
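To make that concrete: code models are commonly fine-tuned with execution feedback, where generated programs are run against unit tests and the pass rate feeds into the reward. Here's a minimal toy sketch of that kind of reward signal (my own illustration; the function names and test setup are hypothetical, not any particular lab's pipeline):

    # Illustrative only: a toy "execution feedback" reward for generated code.
    from typing import Callable, Dict, List

    def correctness_reward(candidate_src: str, tests: List[Callable[[Dict], bool]]) -> float:
        """Run a candidate solution and return the fraction of unit tests it passes."""
        namespace: Dict = {}
        try:
            exec(candidate_src, namespace)   # execute the generated code in a scratch namespace
        except Exception:
            return 0.0                       # code that doesn't even run gets zero reward
        passed = 0
        for test in tests:
            try:
                passed += bool(test(namespace))
            except Exception:
                pass                         # a crashing test counts as a failure
        return passed / len(tests)

    # Two hypothetical candidate completions for "write add(a, b)", scored on the same tests.
    tests = [lambda ns: ns["add"](2, 3) == 5,
             lambda ns: ns["add"](-1, 1) == 0]
    good = "def add(a, b):\n    return a + b\n"
    bad  = "def add(a, b):\n    return a - b\n"
    print(correctness_reward(good, tests))   # 1.0 -> high reward
    print(correctness_reward(bad, tests))    # 0.0 -> low reward
    # In RL-style fine-tuning this reward would drive a policy-gradient update,
    # which is one concrete way correctness enters the training signal.

That's obviously a simplification, but the point stands: test execution gives the training loop a direct, if partial, correctness signal.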
If you read the thread, you'll see the strategy you're replying to is about a workplace scenario where outright rejection is, for whatever reason, forbidden; not an open-source situation where "no" is readily available.
It makes even less sense in a work context. This behavior will permanently alienate this user & potential customer. I've seen this exact scenario play out many times before.
I find that the "what if a cabal of small server operators defeds your server" risk is very frequently overblown, although it is good to be aware of it.
As a correction, though - emails get bounced based on opaque reputation rules (domain-related or otherwise) all the time. Email and fedi are very similar in this respect.
"What bad technical choices?"
"These ones"
"Ok but they're fast-growing, so..."
Does being a fast-growing product mean you have security chops, or is this a total non-sequitur?