
"I doubt they have good security chops because they make bad technical choices"

"What bad technical choices?"

"These ones"

"Ok but they're fast-growing, so..."

Does being a fast-growing product mean you have security chops, or is this a total non sequitur?


They brought up some performance-related edge case that I've never run into, even with extremely heavy usage, including building my own agent that wraps around CC and runs several sessions in parallel... So yeah, I fail to see the relevance.

I don't think this is true. It's just much easier to bring people in when you have access to them and can ask directly, basically.

In my short experience in public service, I met a great number of people who were not in lockstep with the so-called "values they try to force" (i.e. the political plans of the current government), so it seems they're not doing a great job of "forcing" those values, if that's the plan.


They have enough problems keeping the people they currently have in the military on the same page.

https://www.cbc.ca/news/politics/rcmp-caf-charges-terrorism-...

The general response to this was amazement that the MP or RCMP actually did anything about it, given what occurs within those organizations.


My great concern with regard to AI use is that it's easy to say "this will not impact how attentive I am", but... that's an assertion one can't prove. It is very difficult to notice a slow-growing deficiency in attentiveness.

Now, is there hard evidence that AI use does lead to this in all cases? Not that I'm aware of. But by the same token, there's no easy way to tell the difference between "I don't think this is impacting me, but it is" and "it really isn't".

It comes down to two unevidenced assertions: "this will reduce attentiveness" vs. "no it won't". But I don't feel great about a project like this going straight for "no it won't" as though that's something they can assert with high confidence.

From where does that confidence come?


> From where does that confidence come?

From decades of experience, quite honestly.


How can you have decades of experience in a technology less than a single decade old? Sounds like one of those HR minimum-requirement memes.

Decades of programming and open source experience.

You have decades of experience reviewing code produced at industrial scale to look plausible, but with zero underlying understanding, no mental model, and no reference to ground truth?

Glad I don't work where you do!

It's actually even worse than that: the learning process that produces it doesn't care about correctness at all, not even slightly.

The only thing that matters is producing plausible-enough-looking output to con the human into pressing "accept".

(Can you see why people would be upset about feeding output generated by this process into a security-critical piece of software?)
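
For anyone curious, a minimal sketch of the plain next-token pretraining objective being described here; PyTorch and the toy sizes are illustrative assumptions, not anyone's actual training stack:

    import torch
    import torch.nn.functional as F

    vocab_size = 100
    logits = torch.randn(8, vocab_size)           # model scores for 8 positions
    targets = torch.randint(0, vocab_size, (8,))  # tokens the corpus actually had next

    # Cross-entropy against the observed continuation: the loss asks whether
    # you predicted what the text said next; nothing in it scores whether the
    # output is true or the code is correct.
    loss = F.cross_entropy(logits, targets)
    print(loss.item())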


The statement that correctness plays no role in the training process is objectively false. It's untrue for text LLMs, and even more so for code LLMs. What would be correct is that the training process and the architecture of LLMs cannot guarantee correctness.

> The statement that correctness plays no role in the training process is objectively false.

This statement is objectively false.


I'm just an AI researcher, what do I know?

> I'm just an AI researcher, what do I know?

Me too! What do I know?

(At least now we know where the push for this dreadful policy is coming from.)


The whole purpose of RLVR (reinforcement learning from verifiable rewards) alignment is to ensure objectively correct outputs.
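
For the unfamiliar, a minimal sketch of what a "verifiable" reward looks like for code generation, assuming a pytest-based harness (the file names and test command are illustrative, not any particular lab's setup); the point is that the reward comes from executing the output against checks, not from how plausible it looks:

    import subprocess

    def verifiable_reward(candidate_program: str) -> float:
        """Return 1.0 if the generated program passes the tests, else 0.0."""
        with open("candidate.py", "w") as f:
            f.write(candidate_program)
        try:
            result = subprocess.run(
                ["python", "-m", "pytest", "tests/", "-q"],  # assumed test layout
                capture_output=True,
                timeout=60,
            )
        except subprocess.TimeoutExpired:
            return 0.0  # hanging programs score zero
        return 1.0 if result.returncode == 0 else 0.0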

Implementing a policy and personally advocating (in speech) for a political party line are two different things.

There was no speculation about a causal relationship in the post you're replying to.

In fact, by my reading, that post is saying that the actual cause is immaterial in the face of campaigning that essentially tried to be the cause.

If you've read the thread, the strategy you're replying to is about a workplace scenario where outright rejection is, for whatever reason, forbidden; not an open source situation where "no" is readily available.

It makes even less sense in a work context. This behavior will permanently alienate this user and potential customer. I’ve seen this exact scenario play out many times before.

I find that the "what if a cabal of small server operators defederates your server" risk is very frequently overblown, although it is good to be aware of it.

As a correction, though: emails get bounced based on opaque reputation rules (domain-related or otherwise) all the time. Email and fedi are very similar in this respect.


The article you linked is not the "randomized study" you mentioned. Your argument would be more solid if you linked to a source that backs it up.


There's a whole lot of self-referencing going on in that article (the author published in "Queer Majority" and "Quillette" and not much else...).


Do the examples given at the end of that paragraph not qualify, in your view?

