Not every state (or country, province, region, whatever) has laws that restrict or prohibit this kind of activity. A complaint to a public prosecutor may not be a good option for many people, especially in the US, which historically has had very permissive laws about how corporations can handle user data (with some exceptions like the CCPA, though it's TBD whether that legislation would do much here).
My approach has been to lock AI assistants (for me, that's just Apple Intelligence, as far as I can help it) out of integrations with the vast majority of apps, and especially chat and email apps.
At some point, some reverse engineer will publish a writeup either confirming or denying how local these models are, how much data (and maybe even what data) is being sent up to the mothership, and how these integrations appear to be implemented.
It's not perfect, and it only offers a point-in-time view of the situation, but it's the best we can do in an intensely closed-source world. I'd be happier if these companies published the code (regardless of the license) and allowed users to test for build parity.
I don't have a definitive answer, but there's probably demand for these outside of adventure tourists trying to get to some of the most remote road-connected points in the world.
The James Bay Road exists essentially as a service road for a bunch of hydroelectric infrastructure that's part of Quebec's James Bay Project. I've never gotten past planning a trip up, but I gather much of the traffic on these roads is transport trucks delivering supplies to these remote locations (beyond what can normally be shipped up there by Hydro Quebec's aviation fleet, which as I understand it is mostly wet-leased from Air Inuit and can land on many of the unimproved strips near the major project sites).
Anyway, little outposts like these might've been maintained by Hydro Quebec on an emergency basis for these transports, by volunteer (sort-of) trail associations, by the province itself, or by some combination of the three.
Who's regularly editing files big enough to benefit from 120 GB/s of throughput in any meaningful way with an interactive editor, rather than just pushing them through a script or tool, or throwing them into an ETL pipeline, depending on the size and nature of the data?
I work exclusively outside of the Windows ecosystem, so my usual go-to would be to pipe files like this through some filtering tools on the CLI long before I crack them open with an editor.
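For what it's worth, here's a minimal sketch of the kind of pre-filtering I mean (the file name and patterns are made up; in practice this is usually a grep/awk one-liner, but the idea is the same):

```python
# Minimal sketch: stream-filter a huge file before ever opening it in an
# editor. "huge_dump.log" and the patterns are hypothetical stand-ins.
import re

pattern = re.compile(r"ERROR|WARN")

with open("huge_dump.log", errors="replace") as src, \
        open("interesting_lines.log", "w") as dst:
    for line in src:  # read lazily, line by line, so memory use stays flat
        if pattern.search(line):
            dst.write(line)
```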
"Better tools for the job" isn't always "the tool currently bringing in the $$$$$$$$$$$$$$$". So you live with it.
Sure, maybe by switching to Linux you can squeeze out an extra CPU core's worth of performance, after you fire your entire staff, replace them with Linux-experienced developers, and rewrite everything to work on Linux.
Or, live with it and make money money money money.
To turn this around: you can have fun and, at the same time, ask whether something is meaningful outside of the fun. If it is, great. If it's not, no harm done.
I'm not saying that doing this can't be fun, or even good to learn from, but when it's touted as a feature or a spec, I do have to ask whether it's a legitimate selling point.
If you build the world's widest bike, that's cool, and I'm happy you had fun doing it, but it's probably not the most useful optimization goal for a bike.
Not a great analogy. This editor is really fast. Speed is important, to a point. But having more of it isn't going to hurt anything. It is super fun to write fast code though.
Not on the regular, but there are definitely times I load positively gigantic files in emacs for various reasons. In those times, emacs asks me if I want to enable "literal" mode. Don't think I'd do it in EDIT, though.
As a specific benchmark, no. But that wasn't the point of linking to the PR. Although the command looks like a basic editor, it is surprisingly featureful.
Fuzzy search, regular expression find & replace.
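For anyone wondering what "fuzzy search" typically means here, a toy sketch (the file list is invented, and real implementations score and rank matches rather than just filtering):

```python
# Toy illustration of the idea behind fuzzy search: treat the query as an
# in-order subsequence of each candidate. Real editors score and rank matches;
# the file list here is invented.
def fuzzy_match(query: str, candidate: str) -> bool:
    chars = iter(candidate.lower())
    return all(ch in chars for ch in query.lower())

files = ["src/editor/buffer.rs", "docs/README.md", "src/ui/statusbar.rs"]
print([f for f in files if fuzzy_match("edbuf", f)])  # ['src/editor/buffer.rs']
```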
I wonder how much work is going to continue going into the new command? Will it get syntax highlighting (someone has already forked it and added Python syntax highlighting: https://github.com/gurneesh9/scriptly) and language server support? :)
Right, these are more useful features, IMO, than the ability to rip through 125GB of data every second. I can live without that, but syntax highlighting is a critical feature, and for some languages LSP support is a really big nice-to-have. I think both of those are, in this day and age, really legitimate first-class/built-in features. So are fuzzy searching and PCRE find & replace.
Add on a well-built plugin API, and this will be nominally competitive with the likes of vim and emacs.
Code signing doesn't stop redistribution of unmodified copies of software, and it allows for cryptographic attestation of its origin (when used properly). If you modify the software, you'll have to re-sign it and make sure your code's consumers trust that signature's chain of trust.
DRM prevents you from redistributing the original media (with varying degrees of effectiveness) and, nominally, doesn't do much for cryptographic attestation.
These are two very different systems for different purposes.
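To make that distinction concrete, here's a rough sketch of a signature check using Python's cryptography package, assuming an RSA key and a detached signature (the file names are placeholders). Nothing about it prevents copying the file; it only tells you whether the bytes still match what the key holder signed:

```python
# Rough sketch of what code signing buys you: origin attestation, not copy
# protection. Anyone can copy signed.bin; verifying only tells you whether it
# still matches the publisher's key. File names and key type are assumptions.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

with open("publisher_pub.pem", "rb") as f:
    pub_key = serialization.load_pem_public_key(f.read())

with open("signed.bin", "rb") as f:
    artifact = f.read()
with open("signed.bin.sig", "rb") as f:
    signature = f.read()

try:
    pub_key.verify(signature, artifact, padding.PKCS1v15(), hashes.SHA256())
    print("Valid: unmodified, and attributable to whoever holds the private key.")
except InvalidSignature:
    print("Invalid: the artifact was modified, or it was signed by a different key.")
```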
In what way does code signing prevent you from using your computer as you want?
As far as I know you can run unsigned code pretty easily still (especially, though not uniquely, as a technical user), and the process of stripping attestation/signing information from an executable on most popular platforms is well-documented with freely-available tools in most cases.
I'm almost certain there are ways to disable code signature checking completely on the major OSes if you really want to, but why you'd want to do that, I don't get.
Is your argument that running code with an invalid signature should happen with no notice, no hurdles, no nothing, by default?
I cannot place a file in my profile directory and have Firefox execute it without having it approved by Mozilla. I booted my old PC recently to check on something, opened Firefox by opening an HTML file, and discovered that it had disabled all my extensions, making it less secure: every webpage could have done RCE had I changed tabs.
Then there's Secure Boot, which requires MSFT's permission to use an OS, and cell phones on which you cannot run your own code without the manufacturer's permission.
I hope you don't still think the R in DRM stands for rights.
When using the Debian builds of Firefox at least, you can just symlink the extension directory into the system Firefox extensions directory, even if the extension is in your home directory somewhere.
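Something like this, roughly (the paths are assumptions and vary by distro and Firefox build, so check where your system Firefox looks for extensions first):

```python
# Rough sketch of the symlink approach. Paths are assumptions and vary by
# distro and Firefox build; check your system's extensions directory before
# trying this. Usually needs root.
import os
from pathlib import Path

# A hypothetical packed extension sitting somewhere in your home directory.
src = Path.home() / "extensions" / "myextension@example.com.xpi"

# A common system-wide extensions directory on Debian-style installs; the GUID
# is Firefox's application ID. Adjust for your setup.
system_dir = Path("/usr/share/mozilla/extensions/{ec8030f7-c20a-464f-9b0e-13a3a9e97384}")

os.symlink(src, system_dir / src.name)
```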
- Ground-satellite-ground relaying, with at least one geostationary satellite between the ground stations. The satellites being geostationary means that the path of the beam through the atmosphere is constant, and air traffic can be routed around them in a reasonable way. (You could do this with lower orbits, but it'd be a hassle with current aviation industry technology).
- Beaming by tunnel! Can't get hit by a laser if it's under your feet. Obviously, this negates the benefits of the technology and just turns it into fibre lines without the fibre, over shorter distances, with all the pain in the ass of laying underground cable.
In the first approach, there'd have to be an effective exclusion zone around the receiving station (how big, I don't know), and it'd be nice to have the satellites fail safe, so that if they end up pitching or rolling off axis, the beam is shut off before it becomes an accidental weapon.
You could also fire smaller, lower-power beams from multiple sources that converge on the receiver, like a gamma knife: nine emitters around a large dish could fire beams that pass close together, but only at their intersection would the combined power reach full strength.
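Back-of-the-envelope on that idea (all numbers invented):

```python
# Back-of-the-envelope for the "gamma knife" idea: split the link power across
# N converging beams so that no single beam is dangerous on its own.
# All numbers are invented for illustration.
total_power_kw = 90.0
n_beams = 9

per_beam_kw = total_power_kw / n_beams
print(f"Each beam carries {per_beam_kw:.1f} kW "
      f"({per_beam_kw / total_power_kw:.0%} of the total)")
print(f"Only at the common focus do all {n_beams} beams add back up to {total_power_kw:.0f} kW")
```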
His views are being implemented by the sitting US government. I understand the moral dilemma of it not being good to platform and broadcast these views, but under the circumstances, I'd argue there's also a duty to inform people of this looming danger.
At a micro level, sure, you could argue that kill/no-kill decisions are being made by frontline enlisted soldiers, but the stress, the exhaustion, and being able to see your enemy downrange add a degree of humanization and discretion.
Wars are brutal. Some warring factions are incredibly brutal. But taking a human life can't be reduced to a `KILL? [Y/N]` decision made by some MAJ somewhere. Killing enemy soldiers is a last resort (formally, this is the concept of _military necessity_), and normalization of it as anything else is a mistake.
I also think it's a fallacy to compare this to the self-driving car trolley problem; the timeline of decisions, length of the decision chain, and ultimate goal of the decision-making process are _aggressively_ different.
My overarching points are, I guess:
War is a tragedy, obviously, but I don't think there's a way to avoid it right now. In the absence of some way to stop war forever (hah), we can't trivialize taking human lives. It's not lost on me that this has been happening for a while at this point, and that it's getting worse. I still oppose it, and I think we all have a responsibility to be more critical of this regime of warfare.
I don't know how to counter the argument of "well, if we don't do it, _someone else_ will, eventually." This is some really fucked up, self-justifying inductive reasoning that can't easily be countered by calling out the moral bankruptcy of the premise. In the past, mutual disarmament treaties have been a down-the-line bandaid for this kind of thought process, but the nuclear rearmament we're seeing in the world right now shows it's not a panacea.
You could start by not inventing vague hypotheticals to argue against, and instead engaging with observable, measurable strategic and tactical reality?
War is studied. There are journals, papers and research on war fighting at all possible levels.
In the most recent action by Ukraine, you can observe actual reality: what did they attack? Military equipment of the enemy. Why did they attack it? To degrade the enemy's ability to sustain and rotate the forces attacking them. What was it for? Well, for one thing, it will hopefully considerably reduce their ability to bomb civilian targets.
In this specific case, I agree that it was fine -- using drones with limited decision-making ability to strike targets like parked aircraft is okay, as long as there's an overwhelming likelihood of the drones not getting false positives from invalid targets.
My response was more aimed at the parent comment to my previous one, which seemed to paint delegating kill/no-kill decisions with a brush of "I don't know why this is such a big deal."
And in what way is OpenAPI an abandonment of REST? It's an API documentation system that can be leveraged for generating REST server boilerplate code. If anything, it builds up the quality-of-life around REST.
I haven't seen a REST API in production for many years, maybe 15?
That's anecdotal, obviously, but almost every API I use today (if not every single one) is an RPC call returning JSON.
Edit: to be clear, the distinction between what REST was originally defined as and what we use today often doesn't really matter. We use JSON APIs today; it is what it is. This is a case where it really matters, though: LLM companies are now trying to push an entirely new protocol that tries to do roughly what REST did in the first place.
So the things we call "REST" in 2025 are not quite the same as the original specification of REST. One key aspect that has been abandoned is that the data sent should be self-describing. That is, it shouldn't require any additional information (e.g., separate API documentation for JSON endpoints) to be useful.
There's a great chapter on this in Hypermedia Systems[1]. It talks about both this and HATEOAS (Hypermedia As The Engine Of Application State).
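A quick sketch of the difference, with invented resource and link names: the first payload only makes sense with out-of-band documentation (e.g. an OpenAPI spec), while the second tells the client what it can do next, in-band:

```python
# Contrast between a bare RPC-over-JSON payload and a self-describing,
# hypermedia-style one. Resource and link names are invented for illustration.
plain_rpc_response = {
    "id": 42,
    "status": "shipped",
}

hateoas_response = {
    "id": 42,
    "status": "shipped",
    "_links": {
        "self":   {"href": "/orders/42"},
        "cancel": {"href": "/orders/42/cancel", "method": "POST"},
        "items":  {"href": "/orders/42/items"},
    },
}
```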