> Seeing is an inferior means of knowing where the cursor is compared to intuition. When I move the cursor, I know where it is with no conscious effort [...]
Whether you realize it consciously or not, visual feedback is a critical part of this loop.
> As soon as the off-screen action finishes, the mouse cursor snaps back to the position it would have otherwise been in.
The cursor jumping to the edge of the screen - somewhere the user never saw it, and which may be outside the application - seems worse than any current issue, while still being insufficient for most legitimate use-cases.
I don't really see any fake cursor approach that isn't going to behave awkwardly in practice - e.g. is it your real (invisible?) cursor or your fake cursor that can click to focus another application, and what happens to your cursors when you do so?
Just letting the user deny mouse control for an app (like on Wayland) seems sufficient to solve your annoyance. Maybe add a separate permission for control while unfocused, since that's rarer. No need to break all windowed applications that have reason to capture/move the mouse.
> But if it's windowed then it should be impossible.
I have one monitor, so fairly often have games/editors windowed with something else alongside them (a video, documentation, …). There are also uses where the mouse is only captured temporarily - like FPS-controls flying mode in Godot and Blender. Some image editors also allow for things like moving the cursor with arrow keys, which I find useful.
> It’s amazing how everyone thinks this sculpture’s message doesn’t apply to them. “My side’s flags are different, it’s the other side’s flags that are bad”
The sculpture's message isn't "flags are bad" - it's using a flag as a metaphor for nationalism/blind patriotism (based on the rest of the statue, the location chosen, what it's a response to, and Banksy's other works).
You can meaningfully test whether one slot machine hits the jackpot more often than another - it's just that the methodology should involve a large number of repeats rather than a few anecdotes. There are some LLM leaderboard sites that do this with blind comparisons.
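As a sketch of what that could look like (all counts invented for illustration, and Fisher's exact test is just one reasonable choice):

```python
# Comparing two "slot machines" (or two models via blind pairwise
# comparisons) with a hypothesis test over many repeats, rather than
# a few anecdotes. The counts below are made up for illustration.
from scipy.stats import fisher_exact

trials_a, hits_a = 10_000, 130  # machine A: pulls and jackpots
trials_b, hits_b = 10_000, 172  # machine B

table = [[hits_a, trials_a - hits_a],
         [hits_b, trials_b - hits_b]]
odds_ratio, p_value = fisher_exact(table)

print(f"p = {p_value:.4f}")
# Small p: the difference in hit rates is unlikely to be chance at this
# sample size. Large p: collect more repeats before concluding anything.
```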
I'd imagine it's not that they lacked the time to email linux-distros, but that they were unaware they were supposed to do so.
Feels like the more sensible process would be for kernel maintainers to announce when a version contains a fix for a high-impact security vulnerability and for distro maintainers to pay attention to that. Could be done without revealing what the vulnerability actually is in most cases, trusting the kernel maintainer's judgement. There does seem to be a public linux-cve-announce mailing list.
> Could be done without revealing what the vulnerability actually is in most cases
No it can’t. The bad actors that should actually worry most people are actively combing through commits on mainstream codebases, using a combination of automation/AI and manual review to pluck vulns out by their remediations.
The patch itself can be made to look fairly innocuous, as was done here. That won't always prevent bad actors from finding the vulnerability, but it seems better to at least not unnecessarily increase that risk.
I’m saying it doesn’t matter how innocuous you try to make the patch when there are known bad actors directly evaluating every commit for “so did this close a vuln”, using both AI and human expertise.
This is true no matter what, but the comment I’m replying to was also pitching that maintainers actively call out that the patch includes a high sev security fix:
> for kernel maintainers to announce when a version contains a fix for a high-impact security vulnerability
I'm suggesting that less information about the vulnerability could be circulated than in the current process, not more, because distro maintainers can trust a bare "version X contains a fix for a high-impact security vulnerability" coming from a kernel maintainer - whereas they'd need some information/proof of that claim when it comes from an outsider.
The information exposed in the current process was: code changes in the git commits and a commit message that did not mention the vulnerability.
In the current model, attackers are actively looking at all commits as potential vulnerabilities, regardless of what anybody says or doesn’t say about them.
You can’t make the commits not exist, or not be visible, because that’s a core part of how the kernel is developed and released.
So anything you do with notifications to distro maintainers about the vuln, or the existence of a vuln, or a nudge to patch with no context, or whatever, is totally irrelevant and does not change the calculus: the moment the fix is committed, bad actors who were not already aware notice it.
This is, of course, to say nothing of bad actors who had already found the vulnerability on their own.
> The information exposed in the current process was: code changes in the git commits and a commit message that did not mention the vulnerability.
The idea with the current process is for the researcher to email distro maintainers about the vulnerability - that's the part I believe could make more sense to be done by a kernel maintainer, so that less information about the vulnerability has to be shared around (distro maintainers can just trust the word of the kernel maintainer that a vulnerability exists and is serious, without further evidence).
> [...] existence of a vuln, or a nudge to patch with no context, or whatever, is totally irrelevant and does not change the calculus: the moment the fix is committed, bad actors who were not already aware notice it
I'd claim this is too much binary thinking - not every vulnerability will be immediately found by every bad actor just from a well-disguised commit (like here). There are many more commits refactoring parts of code or removing complexity that don't hide a known vulnerability. More information available about the vulnerability will expedite its discovery and exploitation, so it makes sense to minimize that information where it's not necessary.
Again, the current flow was that the researchers only reported the vuln to kernel devs, who then committed a fix with no flag that it was a security fix.
A proposal where there is an unmarked commit and then anyone tells anyone anything about the fix - including that it is a security remediation - is strictly more information being disclosed into the world.
Also you’re just wrong about the ability of bad actors to identify vuln remediations (and consequently vulns) by looking at commits to major projects. I don’t know what else to say here other than that this is happening, and is easily attainable via the current combination of human expertise and automated tools.
> Again, the current flow was that the researchers only reported the vuln to kernel devs
That's what happened in this case, but the current idea/expectation (according to what was linked in the comment chain I replied to) seems to be that the researcher would email the distro maintainers with information:
> > Notify security@kernel.org, linux-distros@vs.openwall.org and relevant maintainers of the vulnerability; establishing details, embargo period, CVE request and possible fix
This is the process I'm suggesting could make more sense if it was instead the kernel maintainers alerting distro maintainers (with no more detail than necessary) after a fix has made it in. Should also be less fallible than relying on the researcher to do so.
> Also you’re just wrong about the ability of bad actors to identify vuln remediations (and consequently vulns) by looking at commits to major projects. I don’t know what else to say here other than that this is happening, and is easily attainable via the current combination of human expertise and automated tools
I don't deny that bad actors can figure out vulnerabilities from tracking commits, but removing or refactoring some subsystem does not immediately give all bad actors a full list of vulnerabilities with the old version. Developing an abusable exploit chain (as may be shown by the researcher to justify high priority patching) is also not necessarily trivial even if they do figure out an issue that was fixed.
From the paper, they're using structured JSON schema mode as opposed to freeform answers, so it can't. Models do typically caveat their answers for questions like this, in my experience.
They'll qualify their answers in English but as the article mentions, if your prompt asks for a confidence score, that "uncertainty" doesn't translate into low numerical confidence.
Quantifying their own confidence is also something they're not good at, and the format would prevent them from refusing to do so, or from preceding it with a caveat, if that's what you'd want of them. Particularly since the response format seems backwards: it asks for confidence, then the carbs estimate, then observations/notes, rather than letting the model base the carbs estimate on the observations/notes and then the confidence estimate on both of those.
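As a hedged sketch of the fix (field names invented, not from the paper; this assumes a structured-output implementation that emits keys in schema order, as constrained-decoding implementations typically do):

```python
# Hypothetical response schema, reordered so the model generates its
# observations first, then the estimate, then the confidence - later
# fields are then conditioned on earlier ones during generation.
schema = {
    "type": "object",
    "properties": {
        "observations": {"type": "string"},  # free-text notes first
        "carbs_grams": {"type": "number"},   # estimate based on the notes
        "confidence": {                      # last, conditioned on both
            "type": "number", "minimum": 0, "maximum": 1,
        },
    },
    "required": ["observations", "carbs_grams", "confidence"],
    "additionalProperties": False,
}
```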
> They'll qualify their answers in English but [...]
The key part IMO is that the default user-facing chat, as a normal user would use it, gives a warning. I don't think expectations of there being no "wrong way" to use the model can necessarily extend to API usage with a long custom system prompt and a restricted output format.
I'd go further than that and say for me personally, the fact it's just a file is a selling point, not a "good enough" concession. I can just put passwords.kdbx alongside my notes.txt and other files (originally on a thumbdrive, now on my FTP server) - no additional setup required.
There will be people who use multiple devices but don't already have a good way to access files across them, but even then I'm not fully convinced that SaaS specifically for syncing [notes/passwords/photos/...] really is the most convenient option for them, as opposed to just being a well-marketed local maximum. Easy to add one more subscription, easy to suck it up when terms change to forbid syncing your laptop, easy to pray you're not affected by recurring breaches, ... but I'd suspect it often (not always) adds up to more hassle overall.
> presumably this compromise was only found out because a lot of people did update
This was supposedly discovered by "Socket researchers", and the product they're selling is proactive scanning to detect/block malicious packages, so I'd assume this would've been discovered even if no regular users had updated.
But I'd claim even for malware that's only discovered due to normal users updating, it'd generally be better to reduce the number of people affected with a slow roll-out (which should happen somewhat naturally if everyone sets, or doesn't set, their cool-down based on their own risk tolerance/threat model) rather than everyone jumping onto the malicious package at once and having way more people compromised than was necessary for discovery of the malware.
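For illustration, a minimal sketch of what a per-user cool-down could look like, checked against npm's public registry metadata (the package name and cutoff are placeholders; real tooling would hook this into dependency resolution):

```python
# Filter out package versions published within the last COOLDOWN_DAYS,
# using the npm registry's public per-version "time" metadata.
import json
import urllib.request
from datetime import datetime, timedelta, timezone

COOLDOWN_DAYS = 7  # pick based on your own risk tolerance/threat model

def versions_past_cooldown(package: str) -> list[str]:
    url = f"https://registry.npmjs.org/{package}"
    with urllib.request.urlopen(url) as response:
        meta = json.load(response)
    cutoff = datetime.now(timezone.utc) - timedelta(days=COOLDOWN_DAYS)
    eligible = []
    for version, published in meta["time"].items():
        if version in ("created", "modified"):
            continue  # registry bookkeeping entries, not actual versions
        if datetime.fromisoformat(published.replace("Z", "+00:00")) <= cutoff:
            eligible.append(version)
    return eligible

print(versions_past_cooldown("left-pad"))  # placeholder package
```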
The cooldown is a defence against malicious actors compromising the release infrastructure.
Having the forge control it half-defeats the point; the attackers who gained permission to push a malicious release might well have also gained permission to mark it as "urgent security hotfix, install immediately, 0 cooldown".
I have not heard anyone seriously discuss that cooldown prevents compromise of the forge itself. It’s a concern but not the pressing concern today.
And no, however the compromise of packages on the forge happens, that is not the same thing as marking "urgent security hotfix", which would require manual approval from the forge maintainers, not an automated process. The only automated process would be a blackout period where automated scanners try to find issues, and a cool-off period where the release progressively rolls out to 100% of all projects that depend on it over the course of a few days or a week.
By "release infrastructure" I didn't mean gain admin access to github.com, I meant gaining the credentials to push out a release of that particular package.
To my understanding, Office (or "Microsoft 365") itself becoming "Copilot" was just confused messaging about the "Office Hub" app/shortcut being repurposed.
The article quote was being given as the supposed source for "Claude Code also found one thousand false positive bugs, which developers spent three months to rule out", so should substantiate that claim - which it doesn't.
If the claim was instead just "a good portion of the hundreds more potential bugs it found might be false positives", then sure.