To me this seems to be lacking in usefulness. Maybe if I were hard "no AI" it might look useful, but how do you trust people anymore? You can't really follow the "trust but verify" philosophy here.
The only way to really have a GenAI free codebase today is simply by not accepting any external contributions.
Not surprised at all. Rage bait has been leading to clicks since before the dawn of AOL.
About that data though: just publish it. Throw the data and tooling up on GitHub, or Hugging Face if it's a massive dataset. I'd be interested in comparing methodologies for deriving sentiment.
159 stories that hit score 100 in my tracking, with HN points, comments, and first-seen timestamp.
Methodology:
- Snapshots every 30 minutes (1,576 total)
- Filtered to score=100 (my tracking cap)
- Deduped by URL, kept first occurrence
- Date range: Dec 2025 - Jan 2026
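The filter-and-dedupe step above can be sketched roughly like this. The field names (`url`, `score`, `seen_at`) are assumptions, since the actual snapshot schema isn't published:

```python
def dedupe_first_seen(snapshots, score_cap=100):
    """Keep only rows at the tracking cap, then keep the earliest-seen
    row per URL. Field names are illustrative, not the real schema:
    each row is a dict with 'url', 'score', and an ISO 'seen_at' string.
    """
    # Filter to rows that hit the cap (scores are clamped at score_cap).
    rows = [r for r in snapshots if r["score"] >= score_cap]
    # ISO-8601 timestamps sort correctly as strings; earliest first.
    rows.sort(key=lambda r: r["seen_at"])
    seen_urls = set()
    deduped = []
    for row in rows:
        if row["url"] not in seen_urls:
            seen_urls.add(row["url"])
            deduped.append(row)
    return deduped
```

With 30-minute snapshots the same story shows up many times, so "first occurrence" here means the earliest snapshot in which it appears at the cap.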
For sentiment, I ran GPT-4 on the full article text with a simple positive/negative/neutral classification. Not perfect but consistent enough to see the 2:1 pattern.
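One practical wrinkle with a "simple positive/negative/neutral classification" is that models rarely answer with a bare label, so you need a normalization step. A minimal sketch (this is my illustration, not the commenter's actual code):

```python
def normalize_label(raw_reply: str) -> str:
    """Map a model's free-form reply onto positive/negative/neutral.

    Models often pad the answer ("Sentiment: Positive."), so scan the
    lowercased reply for the first recognized label. Anything
    unrecognized falls back to 'neutral' rather than being dropped.
    """
    text = raw_reply.strip().lower()
    for label in ("positive", "negative", "neutral"):
        if label in text:
            return label
    return "neutral"
```

Keeping the fallback consistent matters more than getting it "right": as the comment says, the labels only need to be consistent enough for a 2:1 ratio to show through.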
Thought about this during the morning. I'll run the posts through ministral3:3b, mistral-small3.2:24b, and gpt-oss:20b this weekend to build a sentiment mapping and see what I get. I'm optimistic about ministral3:3b, but the other two are pretty good at this type of stuff.
I’d urge you to report it anyway. As someone who does use these tools, I’m always on the lookout for other people pointing this type of stuff out. The .claude directory usage does irk me, and the vague telegraphing of how some of the bash commands work bugs me too. Why can it run some commands without asking me? I know why, I’ve seen the code, but that should be clearer in the UI. The first time it executed a bash command without asking me I was confused and somewhat livid, because it defied my expectations. I actually read the output it produces, because it couldn’t code its way out of a paper bag without supervision.
It is only funny until that vibe coder is building the data warehouse that holds your data and doesn’t catch the vulnerability that leads to your data leaking.
Perhaps I can laugh at the next Equifax of the world as my credit score gets torched and some dude from {insert location} uses my details to defraud some other party. Which I won't find out about until a debt collector shows up months later.
> It is only funny until that vibe coder is building the data warehouse that holds your data and doesn’t catch the vulnerability that leads to your data leaking.
This is unacceptable. Why would I patronize a business that hires vibe coders? I would hope their business fails if they have such pitiful security and such open disdain for their clients.
Not sure where you're looking, but you have companies like Mistral releasing Ministral 3 models for edge compute. Works fine on gaming PCs.
Edit: I doubt the "defaults" companies ship are going to matter much. We see what's happening with Microsoft, and I've not met anyone who's happy with how LLMs are being shoveled into every digital product possible. Seriously, Microsoft doesn't need Copilot in their OS; they need to fix their stupid start menu decisions first.
You're wrong. The house literally will come up to someone that is winning a lot and tell them to leave. "You're too good, we have to ask you to leave."
This is pretty naive. What happens when you develop and extend such a system so it can track who you interact with? What about social credit scores? You might go out to a social event with a very distinguished social credit score of 820 and get knocked down to 69 just because you were in proximity to Bob and Alice, who happen to be on some blacklist for their work in cryptography.
What you're staring at is the gateway tech that brings in a dystopian society. At first stuff like this is fairly benign, but slowly over time it ramps up into truly awful outcomes.
I mean public venues in the US use this stuff to kick out people that they don't like, or that work for firms that have been involved in lawsuits. That is no different than the start of a social credit score and it's happening already.
You miss the point. This is a law enforcement tool. The average American doesn’t want a surveillance state and that’s literally what’s happening. The legal aspect of it is not in question here.
Just because something is legal doesn’t make it right. Anyone deploying or involved with this technology should be embarrassed and ashamed of themselves.
That's not the point the video makes. Flock didn't invent CCTV. Not that I am trying to defend mass surveillance or incompetent silicon valley companies.
Flock "invented" CCTV in the USA that doesn't require going to multiple locations and asking for their tapes in order to track someone across locations.
Rather just see them get Flocked honestly. Seems like the type of tech a child would dream up only to realize when it's too late that it's dystopian, creepy, and a detriment to society.
Edit: Also does this mean OpenAI can bring back the Sky voice?