Nothing will be done until one of the investors in the tech ends up embarrassed by weaponization of the tech against themselves. These people have no clue how creepy some of their technological betters can be. I once witnessed a coworker surveilling his own network to make sure his girlfriend wasn't cheating on him (this was before massive SSL adoption). The guy had just gotten a role doing networking at my company, and thankfully he wasn't there for very long after that.
> Nothing will be done until one of the investors in the tech ends up embarrassed by weaponization of the tech against themselves.
I propose that it become mandatory for all senior management, board members, and investors in Flock to have these Condor cameras and their ALPR cameras installed in front of their houses, along their routes to work, along the routes to nearby entertainment precincts, outside their children's schools, and outside their spouses' workplaces (or the places they regularly visit if they don't work), all of which must be unsecured and publicly available at all times.
(Yes I know, I'm dreaming. I reckon every Meta employee's children should be required to have un-parental-controlled access to Facebook/WhatsApp/Messenger/et al...)
As O’Brien passed the telescreen a thought seemed to strike him. He stopped, turned aside and pressed a switch on the wall. There was a sharp snap. The voice had stopped.
Julia uttered a tiny sound, a sort of squeak of surprise. Even in the midst of his panic, Winston was too much taken aback to be able to hold his tongue.
‘You can turn it off!’ he said.
‘Yes,’ said O’Brien, ‘we can turn it off. We have that privilege.’
It's 2025. The ISP gateway I got comes with more default security than these cameras. The barrier to entry on security is lower than it ever has been in history. Whoever let this past the QC phase is an idiot.
It's all a matter of perspective. I'm sure that to some executive somewhere, the people who approved all of this are seen as heroes, as they shaved 0.7% or whatever off the cost of development and therefore made shareholders more money.
Until there are laws in place that make people actually responsible for creating these situations, it'll continue, since for a company, profit goes above all.
It probably makes close to no difference in development or production, but it does significantly cut down on the number of tech support calls from people who can't figure out how to set the password, or who immediately forget the password they set. If it has no password, then you can just plug it in and have it work. Sure, it's totally insecure, but it's also trivial to install.
Generating a password that is unique to the device and printing it on a sticker on the underside of the device isn't exactly rocket science. ISPs somehow figured this out at least two decades ago, which was the first time I came across it myself. Surely whoever developed this IP camera has an engineering department that has seen something like this in the wild before?
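To put a number on how small this ask is: the whole per-device password scheme fits in a few lines. A minimal sketch (the alphabet, label format, and function names here are my own invention, not any vendor's actual scheme):

```python
import secrets
import string

# Sketch of per-device password generation at manufacture time.
# Uppercase letters and digits only, minus look-alike characters,
# so support calls don't devolve into "is that a zero or an O?"
ALPHABET = (string.ascii_uppercase + string.digits).translate(
    str.maketrans("", "", "O0I1")
)

def device_password(length: int = 10) -> str:
    """Generate a random password unique to one unit, for the sticker."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# What would get printed on the underside of the device:
print(f"PASSWORD: {device_password()}")
```

Sticking to unambiguous letters and digits also sidesteps the "hold shift and press 3" support call the sibling comment jokes about.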
Yep, but if you do that you need to staff a help line with people who can say "turn the box over and look at the sticker, no the sticker with the numbers on it, it's white with black letters and says PASSWORD in a big font, no the password isn't literally PASSWORD, it's the line below that with the strange letters, yes, to type that one you need to hold the shift key and press 3..."
Remember that ISPs often have people who come to your home to hook stuff up.
Yes, which costs money, which is exactly my original point. It's not because "Oh I'm so hassled because customers are dumb", it's "No, hiring people to do support would cost us money, which we don't want".
> Remember that ISPs often have people who come to your home to hook stuff up.
I can't recall a single time a technician wasn't required to come to my flat/house to install a new router. I'm based in Spain, maybe it's different elsewhere, but I think it's pretty much a requirement; you can't set up the WAN endpoint or ISP router yourself.
Last time I moved I opted for the "self install" kit, which was fine because I'm technical and the previous owners already had the service so there was nothing that needed to be done except hooking up the pre-configured modem. Saved me $200 in truck roll fees.
Interesting stuff. I've asked if I could do the installation myself every single time I've moved to a new place, and never has the ISP (three different ones) said yes. There isn't any installation fee here (probably by law?), so that isn't an issue; it's just a hassle to coordinate having to meet between 12:00 and 18:00 or some super wide window for them to come and install it.
In the US for the past 5+ years Xfinity/Comcast, Charter, and whatever CenturyLink is called these days have all heavily pushed the "self-install kit" option vs traditional tech install each time I've moved.
It worked 4 out of 5 times (all with cable). The only time it failed was because I had apparently subscribed to a DSL plan from CenturyLink without realizing it, and they needed to wire up the extra lines upstream for the "modern" version of DSL to work in my apartment. This after they had insisted multiple times that the self-install kit was 100% plug-n-play at my new address, despite my intense skepticism, since I really needed reliable internet from day one of COVID remote work.
I was seriously missing Comcast/cable by the time that 1 yr contract was up, the devil you know and all...
So you're trying to justify this type of rampant negligence in tech? Do you think justifying such malfeasance makes up for the fact that we literally have surveillance networks that bad actors can tap to do really awful things?
Anyone that cares about their perspective has missed the point.
I don't think the person you're replying to is justifying it, but saying there are no laws to prevent the abuse.
Personally I think tech CEOs should be put in stocks in the town square on the regular, but they're protected from any form of repercussions except in extreme cases of fraud. Even then, they're only held accountable when the money people have their money affected, not when normal people are bulldozed by the abuse.
If I was 10 years younger, I might agree that they aren't justifying it, but I have enough experience with passive speech to just not let it pass anymore.
Regarding a remedy, we really needed laws on this stuff yesterday. The problem is that we'd have to gut First Amendment freedoms for some of this, which won't go anywhere, because there will always be too much overreach with today's representatives.
You should probably read the comment you're replying to before replying
> Until there are laws in place that make people actually responsible for creating these situations, it'll continue, since for a company, profit goes above all.
They obviously meant that we ought to be holding these people responsible.
> You should probably read the comment you're replying to before replying
Congrats, you spotted the thing we agreed on between comments. If you can't see the agreement from the part that was echoed, I don't know what to tell you. The education system is failing everyone in it these days.
> So you're trying to justify this type of rampant negligence in tech?
I don't know how you reached that conclusion; I'm obviously not trying to justify anything. But maybe something I said was unclear? What exactly gave you the idea that I'm trying to justify any of this?
Nothing against you personally, just so you know. But I have to point out that anyone caring about the reasons for Flock's shortcomings on stuff like this is just crafting soft reasons they can use to justify things later. To be up front, I don't care about their reasons, because the entire business model is frankly disgusting and an affront to a functioning society. This is the type of tech that evolves into social credit scores and precog crime units, stopping crime before it happens.
At the end of the day, your rationalization only affords comfort to those who have a vested interest in this stuff being successful, and it needs to be made clear to the people driving this that they're not doing something popular or even good.
Why stick your neck out and swim upstream to do a good job that will not be recognised as such?
Fix the corporate incentives and engineers will be able to do the right thing without suffering. Not everyone gets the luxury of a secure career doing morally ok things.
Counterpoint: whoever let this past the QC phase got paid very generously, and everyone involved is ignoring the laws that already exist to combat this, because law enforcement, too, gets paid generously. And the laws that forbid that aren't getting enforced because the police don't police the police, and dad has made it perfectly clear that flagrantly ignoring the law is fine if you're in power.
What makes you think QA/QC is paid handsomely? It's a bloody cost center mate, and you can't measure "damage prevented" consistently, or at least in a way most high-risk tolerating exec types won't immediately undermine.
I seem to recall license plates are required to be illuminated as well. What's stopping someone from just adding an additional IR light to those enclosures? Couldn't you just slap in a bright enough additional IR light that makes it impossible to even see the plate clearly through their cameras?
Personally, if I cared enough to obfuscate my plate info from these devices, I would just taint their data by wrapping my car in art themed around various different "plates". I like cars, and the exterior has traditionally been treated like art. Tainting data is just as effective at making the core dataset useless as omitting data in the first place.
> What's stopping someone from just adding an additional IR light to those enclosures
Nothing.
> Couldn't you just slap in a bright enough additional IR light that makes it impossible to even see the plate clearly through their cameras?
You could, but it will only work at night (and even then, I don't know if the amount of light you could concentrate in that area would be enough to blow out the letters), because all of these cameras have switchable IR cutoff filters.
Most digital camera sensors see further into the infrared than humans do, and that can do some odd things. It made the news with a Sony camera (I think) that had a mode specifically to use this; it turns out it sort of sees through many swimsuits. Or a video I've seen of firefighters caught in a burnover: the fire looked very weird!
I genuinely don't know. The hack here is exploiting overexposure artifacts from the camera sensor. As to whether they have mitigating features to filter non-visible light, I'm actually curious.
I just imagine the most hilarious form of this idea being a panel that lies behind the plate as part of the car, the panel containing an array of IR LEDs that flood everything behind the car with invisible light. Imagine going outside, seeing nothing, but popping open your phone's camera app and everything is illuminated for some reason. Would be wild.
Edit: I have no concept of what camera sensors are doing these days.
The thing I really don't get is why people are devaluing their own voices with slop. Too many people are just pressing what they think is an "easy" button and calling it good. What they lose in the process is their voice, their opinions, and some of the intangible humanity that was encoded in their content.
At first I just blocked the people doing it, but today I just stay off the platforms where I'm seeing these people. If things stay the same, in the future I can only imagine that small gated communities are where real humans are communicating (think places like lobste.rs, but for normies). Smaller Discord communities are still working.
It's just wild to me how some platforms are fine with bots fluffing content creation. There comes a point where people will realize all the messaging on platforms like Facebook is AI-generated, and they'll just leave for greener pastures.
I believe people really want to connect with real people, and that just can't happen if people aren't being themselves. I think there are a lot of creative people who are up in arms against the right things, but for the wrong reasons. I don't think the legal IP implications are as damning as the societal implications of everyone filtering their speech through AI.
Sorry, this came off way more ranty than I would like, but it's been bubbling in my head for a while now. So much so I'm actively doing something about it.
Personally, the only place I filter my own speech through an AI is when I couldn't care less about the person I'm communicating with but I need them to get the message (typically because I have too many four-letter words to say about the subject matter).
> The thing I really don't get is why people are devaluing their own voices with slop. Too many people are just pressing what they think is an "easy" button and calling it good. What they lose in the process is their voice, their opinions, and some of the intangible humanity that was encoded in their content.
Have you considered that people producing material that is regarded, by its creator or audience, as "content" very often aren't trying to encode their voice, their opinions, or any of their intangible humanity in it? They are working a system and seeking to provide what the system demands. They aren't sacrificing those things; those things were never the point of the activity.
(OTOH, the people who do see themselves as creating art, including those who use AI among their tools, probably do see that as the point, but they aren't doing what they see as pushing an "easy" button.)
Kinda funny if it returned an ad for puppy chow. Realistically I doubt an ad presented would actually be tied to the context beyond the seed used for a vector lookup.
I tried out EmbeddingGemma a few weeks back in AB testing against nomic-embed-text-v1. I got way better results out of the nomic model. Runs fine on CPU as well.
Not a lawyer, but if you're talking USA, would recommend checking regularly for updates: https://www.copyright.gov/ai/
Even if AI output isn't copyrightable, could you really argue a solid case if I taint random parts of the source code with my own isms? That's something I actually do. I don't care if the AI-generated portions of my code base are copyrightable; perhaps my licensing isn't valid for those portions, but throughout my code there are going to be parts I hand-crafted or patterned out because the AI just can't get it right. Those snippets are proverbial land mines for a copyright infringer in waiting.
I've seen that before, and I don't know how I feel about that pattern in general. When I see any of the tools I use do stuff like `bash(cmd... I didn't ask your permissions - hehe!~)`, I get a bit pissed that it wasn't a straight-up standalone tool. The number of times it's gaslit me into panicking isn't zero.
Isolation is a smart idea for these things, as you literally can't verify their behavior beyond "it's kinda doing the thing most of the time". My chat app just kinda settles for "you can require permissions and audit", which isn't a silver bullet when the AI can churn out more code than a human can read in a reasonable amount of time.
might be good to get your hands around this early.
reading up on how crush, goose, and opencode handle this may be a good idea.
i've been trying to build a web native terminal assistant for a while (just a side project) and this is easily the thing that keeps me up at night.
### Primary Sources:
- *Anthropic Engineering Blog: "Making Claude Code more secure and autonomous with sandboxing"*
Detailed article on Claude Code's sandboxing features, including OS-level primitives (e.g., Linux Bubblewrap, macOS Seatbelt) for filesystem and network isolation.
[Read here](https://www.anthropic.com/engineering/claude-code-sandboxing) (Published Oct 20, 2025).
- *Claude Code Documentation: Sandboxing*
Official docs covering setup, configuration, security benefits (e.g., prompt injection protection), and limitations of filesystem/network isolation in Claude Code.
[Read here](https://code.claude.com/docs/en/sandboxing).
- *Claude Blog: "Beyond permission prompts: making Claude Code more secure and autonomous"*
Overview of sandboxing in Claude Code, emphasizing boundaries for safer agent execution.
[Read here](https://claude.com/blog/beyond-permission-prompts-making-cla...) (Published Oct 31, 2025).
### Additional Resources:
For broader context on sandboxing agentic AI:
- *arXiv Paper: "Securing AI Agent Execution"*
Research on isolation techniques for AI agents, including risk assessment.
[Read here](https://arxiv.org/abs/2510.21236) (Published Oct 24, 2025).
- *HopX Documentation*
Practical guide to sandboxing for AI agents (e.g., using Firecracker micro-VMs).
[Read here](https://hopx.ai/) (Open-source SDK available at [GitHub](https://github.com/hopx-ai/sdk)).
### Cursor
Cursor uses local-first editing with optional sandboxing via Docker containers for isolated execution (no default vendor-owned sandboxes). It respects user-defined rules without overriding them.
### OpenAI Codex
Codex primarily relies on API-based execution with optional user-managed sandboxes (e.g., via Firecracker or custom proxies). It emphasizes provider retention policies but lacks built-in native sandboxing like Claude Code.
- *Render Blog: Testing AI Coding Agents (2025)*
Benchmarks Codex's handling of isolation in production tasks, including Docker-based sandboxes.
[Read here](https://render.com/blog/ai-coding-agents-benchmark) (Published Aug 12, 2025).
### Goose AI (Codename Goose)
Goose uses container-based isolation via tools like Container Use (built on Dagger) for git-branch-isolated environments, emphasizing safe experimentation without affecting the host.
- *GitHub Discussion: Goose vs Claude Code*
Community analysis comparing Goose's local isolation to Claude Code's cloud sandboxes.
[Read here](https://github.com/block/goose/discussions/3133) (Ongoing, started Jun 27, 2025).
I'm inclined to isolate the chat processes only if I keep the bash tool, and I'm undecided on whether I'm keeping it in. Implementing an MCP server in Python is dead easy, and so is making a bash tool.
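For a sense of scale on the "bash tool is easy" claim, here's a minimal sketch of a permission-gated bash tool. The `ALLOWED` set, the `run_bash` name, and the allowlist design are all my own invention for illustration, not any particular tool's API:

```python
import shlex
import subprocess

# Hypothetical allowlist: only these command names may run.
ALLOWED = {"ls", "cat", "echo", "grep"}

def run_bash(cmd: str, timeout: int = 10) -> str:
    """Run a shell command only if its first word is allowlisted."""
    first = shlex.split(cmd)[0] if cmd.strip() else ""
    if first not in ALLOWED:
        # Deny by default instead of prompting; a real tool might
        # surface an interactive permission request here.
        return f"permission denied: '{first}' is not allowlisted"
    result = subprocess.run(
        cmd, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return result.stdout + result.stderr

print(run_bash("echo hello"))        # allowed
print(run_bash("rm -rf /tmp/x"))     # denied
```

Of course, an allowlist on the first word is exactly the kind of check a determined model can route around (`echo` into a file, then source it), which is why process-level isolation still matters even with this in place.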
Right now I'm more interested in getting ACP working for gemini-cli and claude-code so it can use their models. My first goal is just to make a manually operated tool that gets out of the way, with sane permissions out of the box.
If anyone wants to come along and add an optional feature today, I'm happy to merge under the same license. Otherwise, I will eventually add this feature; I'm just not sure whether it will be sooner or later.