Seems bad. "An attacker was able to achieve code execution in the content process by exploiting a use-after-free in Animation timelines. We have had reports of this vulnerability being exploited in the wild."
It seems to be JavaScript-free from the description, which makes it even scarier. Imagine the libwebp decoder bug except embedded media blocking doesn't really work (who blocks CSS?).
I'd be interested to know whether blocking CSS animations is sufficient to avoid this recent vulnerability. Either way, it confirms my opinion that UI animations are an anti-feature.
As a uBlock Origin filter (paste in Settings > My Filters):
! No CSS animations
##*,::before,::after:style(transition:none !important;animation-delay:0ms !important;animation-duration:0ms !important)
! No CSS animations (different method)
##*,::before,::after:style(animation-timing-function:step-start !important;transition-timing-function:step-start !important)
There's other (often perf heavy) CSS clutter that's nice to get rid of:
! No image filters
##*,::before,::after:style(filter:none !important)
! No text-shadow
##*,::before,::after:style(text-shadow:none !important)
! No box-shadow
##*,::before,::after:style(box-shadow:none !important)
! No rounded corners
##*,::before,::after:style(border-radius:0px !important)
No rounded corners is fun. You realize many loading spinners are actually CSS rounded corners! YouTube becomes almost unrecognizable — mercifully — especially if you also revert the new TikTok-inspired font.
Firefox doesn't seem to support the CSS animation-timeline property; I think this refers to the JS AnimationTimeline API? In that case, the "dom.animations-api.timelines.enabled" flag should control it.
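If it helps, the same pref can be flipped persistently from user.js rather than about:config; a one-line sketch, assuming the pref name above is correct:

    user_pref("dom.animations-api.timelines.enabled", false);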
He works (or recently worked) for Mozilla on security-related projects. The code commit fixing the issue was isolated to the /dom/ directory in the source tree, and Firefox does not support CSS Animation Timelines. The Animation Timelines code is not directly accessed by web devs, and it appears the only way to execute that code is via the JS API for Animation Timelines. I'm not a web security expert, but the signs seem to point to him being correct.
A note for Ubuntu users: if Firefox is installed using `snap` (the default) and you run `snap refresh`, it will output "All snaps up to date" - but this is not true!
You have to close Firefox, then run `snap refresh` for snap to upgrade Firefox...
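Roughly, the workaround looks like this (a sketch; exact messages vary by snapd version):

    $ snap refresh
    All snaps up to date.   # misleading: the firefox refresh was held back
    $ pkill firefox         # close Firefox first
    $ snap refresh firefox  # now the update actually proceeds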
Neither am I, but it seems that snap refreshes can be inhibited programmatically if they might cause damage when performed in the background. So it is technically correct that no snap refreshes can be performed at this point, but the message doesn't clearly state that some refreshes have been inhibited (possibly because there would be tons of them if they were exhaustively listed?).
It doesn't update actively running application containers.
You don't actually need to stop it before running “snap refresh” though; it'll just be out of date as long as it is kept open. Once the application stops running, the updated image will be used the next time it is run.
[caveat: I'm not a snap user myself currently, so my information may be inaccurate, take with a pinch of your favourite condiment]
That is Firefox's standalone behavior when it detects that its files have changed and differ from the ones loaded by the current instance. In theory, what snap does avoids changing a program's files while it is running.
Now, I don't know what your setup looks like, but I don't think anything is distributed as snaps by default on Arch.
At least AFAIK it's mostly an Ubuntu & derivatives thing.
The difference being that no one in their right mind is thinking of rewriting a browser in Java to also make it faster, while that's exactly what Servo/Stylo etc are all about.
For what it's worth, Python was also considered at some point for use in the Firefox codebase. I don't remember the rationale for not adopting it, but I think the idea was "we all like Python, but we already have one messy language (JavaScript), let's not make it two".
CVE affected range is always far too wide. It obviously can't affect anything before ~75 or so because firefox didn't have the timeline api before then. It's annoying that they don't distinguish an unknown lower bound.
Well, I think their thinking is that:
* we don’t want users to run 75
* 75 is so riddled with CVEs by now, who cares if there is one more
But I agree it appears lazy, because it would have been easy to determine in that case, if I understood you correctly. Someone would have had to test it though, at the very least.
It all has to do with resource management here.
It's obvious that laying off people who were working hard at making the non-profit's flagship product more robust wasn't going to result in an increase in that product's security. Could the whole lay-off have been prevented? That would require some analysis of the numbers, and insights I lack.
Could at least some terminations have been avoided? Freezing the CEO's income until some agreed metrics improved, and using the amount thus spared to save some employees' salaries, was certainly an option here, wasn't it?
Claiming "think of my family, look how much more some other people earn elsewhere" while almost simultaneously (at the organization level at least) putting so many people in a jobless position is, to my mind, a rather bold display of cognitive dissonance to throw at the world.
If pointing out "odd financial priorities" of a non-profit is flame bait, one might wonder how humanity is supposed to mend all organizational dysfunctions it can ever fall into.
It’s pretty relevant considering the continued mismanagement of Mozilla.
Nobody would care about Mozilla in 2024 without Firefox, but Firefox development seemingly takes a back seat to a variety of other pet projects that Mozilla’s management tries (and keeps failing, over and over) to chase.
For example, they’ve been trying a pivot to become a community-focused privacy company the last couple of years, yet are fine with implementing ad topics.
AFAIK, Safari advocated against it over privacy concerns, didn't it? If so, what is Mozilla doing?
Or their partnering with a shady company for removing data from data brokers.
Before the privacy pivot, there was the “we want to make browsing better” pivot with their acquisition of Pocket that went nowhere.
From the outside Mozilla looks like a low-scoring charity grift you’d find on CharityNavigator with how far they deviate from the missions they claim to support.
Why managed when it could be in Rust and have both performance and safety?
The Servo team should never have been laid off. Yes, I'm aware a team is working on it now, but it isn't moving at the same speed or with the same enthusiasm as when it was funded by Mozilla, is it?
Having done concurrency in Java and Rust, my experience is that Rust's concurrency primitives are an order of magnitude better than Java's. I haven't tested C#'s.
C# doesn't have Send and Sync, that is true. It frequently does not need either, because it uses GC instead of affine types for automatic memory management. Synchronization is indeed "just don't write bugs", where Rust offers a massive upgrade, but .NET CoreCLR's memory model is stricter than C's, e.g. object reference assignment has release semantics, so quite a few footguns are luckily avoided: https://github.com/dotnet/runtime/blob/main/docs/design/spec...
'&' and '&mut', however, are your 'ref readonly' and 'ref' respectively.
Is there anything in C# which ensures what Rust does statically, that you must acquire a lock to access data protected by a mutex? Rust also makes MutexGuard not sendable, i.e. the mutex won't be released on a different thread from the one on which it was acquired.
With traditional mutex APIs it's just far too easy to get it wrong. I think you just have to structure your thread-related APIs to be misuse resistant. As humans we're just not good enough at not making mistakes.
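To make the Rust side of that concrete, here's a minimal sketch using only the standard library (my own illustration, not code from the thread):

    use std::sync::Mutex;

    fn main() {
        // The data lives *inside* the Mutex; the type system offers
        // no way to reach it except through the guard that lock() returns.
        let counter = Mutex::new(0u32);
        {
            let mut guard = counter.lock().unwrap();
            *guard += 1;
        } // guard dropped here, lock released on this same thread

        // Both of these are compile errors, not runtime bugs:
        // let n = *counter + 1;                // can't deref a Mutex directly
        // let g = counter.lock().unwrap();
        // std::thread::spawn(move || drop(g)); // MutexGuard is !Send

        println!("{}", *counter.lock().unwrap());
    }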
> I think you just have to structure your thread-related APIs to be misuse resistant
The premise of this stands. C# approaches this in a more traditional way, by exposing a set of synchronization primitives. It's still a step above C and, usually, C++, because you don't need e.g. atomic reference counting for objects shared by multiple threads.
Concurrent access itself can be protected as easily as doing
    lock (obj) {
        // critical section: only one thread inside at a time
    }
This, together with the thread-safe containers provided by the standard library (ConcurrentDictionary, ConcurrentStack, etc.), is usually more than enough.
What Rust offers in comparison is strong guarantees for complex scenarios, where you usually have to be much more hands-on. In C#, you can author types which e.g. provide an "access lease" that looks like `using var scope = service.EnterScope(); ...`, where `using` turns into a try-finally block whose finally block calls .Dispose() on the scope and is guaranteed to execute.
It's a big topic, so if you have a specific scenario in mind - let me know.
Thanks! To be fair, there are certain advanced scenarios which Rust's mutex model can't handle either -- sometimes you want to protect writes via a mutex, but allow reads without them (maybe you're okay with torn state on reads). This is a rare, expert use case with architecture-specific considerations that must be handled with care.
I do think Rust's mutexes handle almost every use case that can be thrown at them, though, and in a way where it's next to impossible to get it wrong. I think if you're writing a browser engine in the 21st century you should bake in parallelism and concurrency from the start, and Rust is the most suitable language to do that in.
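For the "mutex-protected writes, unsynchronized reads" scenario mentioned above, the closest safe-Rust expression I know of uses per-field atomics; a minimal sketch (my own illustration, with made-up names):

    use std::sync::Mutex;
    use std::sync::atomic::{AtomicU64, Ordering};

    // Writers serialize on the mutex; readers skip it entirely. A reader
    // may observe `hits` from one update and `misses` from the next (the
    // torn multi-field state we said we can tolerate), but each individual
    // load is still atomic.
    struct Stats {
        write_lock: Mutex<()>,
        hits: AtomicU64,
        misses: AtomicU64,
    }

    impl Stats {
        fn record(&self, hit: bool) {
            let _guard = self.write_lock.lock().unwrap(); // writers only
            if hit {
                self.hits.fetch_add(1, Ordering::Relaxed);
            } else {
                self.misses.fetch_add(1, Ordering::Relaxed);
            }
        }

        // Lock-free read path.
        fn snapshot(&self) -> (u64, u64) {
            (
                self.hits.load(Ordering::Relaxed),
                self.misses.load(Ordering::Relaxed),
            )
        }
    }

    fn main() {
        let stats = Stats {
            write_lock: Mutex::new(()),
            hits: AtomicU64::new(0),
            misses: AtomicU64::new(0),
        };
        stats.record(true);
        println!("{:?}", stats.snapshot());
    }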
> At the end of the day web browser is just bunch of parsers and compilers working together
At the end of the day, OS is just a bunch of command lines being piped together. /sarcasm
Sure, you are just missing: rendering, layout, security, network traffic for sockets, low-level control over hardware, writing a decent enough VM, image processing, video playback, music playback, compression, decompression, self-update, decryption, don't forget add-ons (people love add-ons), also add-on security and isolation, web edit and debug tools, network analysis tools, etc.
Why would you need to reinvent the networking layer instead of just sending http requests via a mature, battle-tested lib available in your programming ecosystem, e.g. from MSFT? Same with crypto, sockets, compression, etc.?
Video and audio I mentioned.
Extensions are tricky, right, but more from a privacy standpoint, because after all you can just expose too much.
All the major browsers came out when Windows XP had substantial market share.
So browser vendors couldn't rely on the platform to provide up-to-date SSL support. Or MP3 support. Or MPEG-4 support. Or PDF support. This established the norm that browsers would ship their own video support, their own SSL support, and so on.
And Google realised they like the power this gives them - if Google wants to replace HTTP with QUIC or introduce a new video DRM standard, or a new video codec like VP9 - they don't need the cooperation of anyone outside of Google.
If Chrome bundles DRM support (allowing it to play Netflix), and its own HTTP/2 stack for speed - are you going to release a browser that's slower and doesn't play Netflix? Doesn't sound like a recipe for big market share.
Many of these components were made part of the ecosystem long after they were introduced in Firefox. Also, the more platform-specific you go for each component, the more you're going to introduce subtle incompatibilities between Firefox running on different versions of Windows, or in Firefox for Windows vs. macOS vs. Linux. Also, for a very, very long time, Microsoft had an extremely poor record on security fixes. So what happens when you rely on a Microsoft http library and Microsoft takes a year or two to fix a 0-day?
There are benefits to this approach, of course, but the costs would have been consequential.
Browsers are using new http features much earlier than they're available in the system libraries. Browsers supported http2 and 3 before they were standardised enough to include in systems. .net http client still can't even tell you about http2 early hints as far as I understand it.
It's going to be the same for crypto and compression. Systems don't ship with brotli for example. The battle tested implementations come to the browsers first in many cases - or at least they're battle tested at the point anyone starts including them in .net or Java.
> Why would you need to reinvent the networking layer instead of just sending http requests via a mature, battle-tested lib available in your programming ecosystem, e.g. from MSFT?
Because modern browsers are essentially cross-compatible OSes.
All this should require less than 10 minutes including setup and such.
Lastly, I want to make a disclaimer that you do not need the C# Dev Kit extension (which requires an account, something that annoys many people, including me) for VS Code, only the base C# one, which is what gives you the language server, debugger, etc. If you are using VSCodium, which cannot use the closed-source vsdbg component that the base extension uses, you can replace it with https://github.com/muhammadsammy/free-vscode-csharp which uses an open-source debugger from Samsung instead. It can be rough around the edges but works well enough in standard scenarios. Just don't use Debugger.WriteLine over Console. :D
> Throughout 2024, so far, Mozilla had to fix zero-day vulnerabilities on Firefox only once.
> On March 22, the internet company released security updates to address CVE-2024-29943 and CVE-2024-29944, both critical-severity issues
Vulnerabilities will be found in everything. Firefox is a fully internationalised application and it is FOSS. The team responsible for Firefox is doing a good job.
Sounds like Mozilla should invent a low level language with great safety guarantees, maybe even call it after some form of oxidation process[1]. Then make a browser engine called after a motor[2], and then NOT axe the team responsible for it[3].
A long time ago, the possibility of using Java or C# in Gecko (the core of Firefox) was pondered.
Java was rejected because of the huge memory requirements and the unpredictable (and sometimes lengthy) garbage-collection pauses.
C# was rejected because (at the time) it was too tied to the Microsoft ecosystem and there was no way to get it to build on all the platforms for which Firefox is available. I don't remember garbage-collection pauses being discussed, but they would also be an issue.
I think of browsers these days as on par with OSes. I mean, they provide a runtime to execute binary code (wasm). They do process management and scheduling. They do a lot of things which, up until 15 years ago, we thought belonged to the realm of Operating Systems.
And history has shown that when you need to do that kind of low level code, it's nigh on impossible to achieve acceptable results with a garbage collected language. Many people tried, none really succeeded.
It seems to me that both C# and Java have built their own niches and are hard to impossible to realistically use outside of them, such as for writing a web browser.
That's not completely accurate. The plan is to use Swift for "security critical" areas like decoding data. It's unlikely core components like the layout/CSS engine will be converted to Swift.
> if it means some perf drop, modern hardware will get it back in X years
I think the unfortunate reality is that other browsers will also take advantage of that speed boost, sites will get even more bloated because they can and it will stay unusable for a long long time.
I can't find the link right now but I seem to remember that Firefox already replaced some internal native subsystems with the same code compiled to WASM - or maybe even compiled to WASM and then translated back to C, which basically adds a runtime memory safety layer to unsafe C code at the cost of some performance (I believe this is the RLBox project; I think it covered a couple of media codecs, but I'm not sure).
Not sure why you think that WASM is less secure than JS though. Even if the WASM heap has internal corruption there's no way for this to do damage outside the WASM sandbox that wouldn't be possible in JS.
> I can't find the link right now but I seem to remember that Firefox already replaced some internal native subsystems with the same code compiled to WASM - or maybe even compiled to WASM and then translated back to C
Only parts of the browser are running in multiple small isolated WASM sandboxes, and those sandboxes are isolated from the outside world about as well as if they were running in their own process.
Compartments of internally unsafe sandboxes are what we have now, with browsers employing native-code sandboxes and isolated renderer processes etc. It gets leaky.
Java applets didn't sandbox shit though, because you could call straight into your own native code via JNI (I know because I used exactly that approach to integrate a Win32 game client into browsers). The only thing the applet launcher did was ask whether it was ok to run the applet.
This is true, but adding sandboxing to browsers has been a huge part of driving up the difficulty/cost of browser exploits, and driving down the frequency of their use.
And also, we'll pay for a bypass of the wasm sandbox. (Actually, looking at our table, I'm going to try and get the bounty amount upped...)
They're increased, and some things are just obviously slow, at least without extra effort to set up things like gpu pass-through. But is it worth basically turning back the clock on your computer's performance a few years to live in a world where a random click from HN or reddit can't quietly compromise your entire computer? I think so.
Probably the biggest thing is to have a lot of ram, because if you're really using the virtualization it's a bit ram inefficient.
Many things I expected to be hard or annoying just turn out to be non-issues. Qubes has lots of good automation to make it pretty seamless to use multiple VMs.
I was already a fedora user, so I just copied my old home into a new app vm and was instantly productive. Then over time I weaned myself off the monolithic legacy vm into partitioned VMs.
Please define GPU intensive. Using a gpu for video decoding can mean smaller battery usage in some cases.
Also it is not only about doing these tasks at the same time: if you need to shut down one VM context to be able to start another because they can't be used concurrently, it makes for a very tedious user experience.
A normal qubes user workflow is doing all your gpu-requiring stuff in a single appvm -- you're not forced into isolation that doesn't work for you.
But you're also not running qubes if minimizing battery usage is a high priority for you.
As far as the tedium, perhaps a little, but bringing up a terminal on a non-currently-running app vm takes about 5 seconds for me, so it's faster than you might expect.
I think in general my view is that qubes has serious operating costs but they are much less than I anticipated.
And what's the real alternative? It's still better than carrying 5 laptops in terms of ease and usability.
We live in a world where browsers are constantly required, but where there probably hasn't been a single day since their initial releases when Chrome and Firefox were without an RCE vulnerability (though often not a publicly known one).
USB-C PD has made laptop battery life less of an issue than it used to be, because carrying around extra battery life with external batteries is so easy. I typically carry one or two of these in my backpack: https://iniushop.com/products/iniu-b64-140w-27-000mah-fast-c...
That's enough for hours of intensive usage on my laptop, like running nodes and compiling stuff. And as you know, I run Qubes too!
I originally got an external battery when I went to Ukraine while Russia was trying to destroy the electricity grid; I got the largest battery I could legally carry on a plane (typically 100Wh is the limit). I ended up liking them enough to buy a few more.
flatpak or firejail would have protected you from this vulnerability, not sure what they're on about here. They aren't 100% proof against everything, of course.
The Firefox Flatpak has neither write nor read permission to your home directory. At least that's my take from browsing file:///home/myuser. If you try to open or save a file using the native dialogs, you do grant the appropriate permission on demand, but that goes through the xdg portal, outside the app scope, which is specifically designed for this.
I couldn't reproduce the tty example, but it might well be a mistake on my side. Other than this, the sandboxing spec itself is as safe as I'd expect. I reckon that Wayland applications not packaged to require $HOME access or certain dbus services are not known to escape the sandbox. This seems to be the case for Firefox, afaict.
If you do anything with a GPU anywhere, you can essentially forget it. Or at least this was the case a few years ago when I briefly toyed with using qubes seriously.
The situation has indeed improved and things have gotten smoother over the past few years IME! (AMD)
If you run multiple GPUs (so one for your GUI and the rest for whatever else you want to do), PCIe passthrough for the latter is pretty straightforward these days.
You can set up a dedicated gpu qube for the former - recommended but optional.
Yes, but it's a compromise, because I'm not happy spinning up tons of kernels and trying to share access to devices that do not want to be shared, either.
You're right that the trusted codebase is huge, but I sincerely do not know how big a problem this is in practice, hence the question.
Qubes does have answers to the device stuff, like sticking network devices in a network vm, which only talks to a firewall vm, which talks to your other vms. There is a reasonable gui interface where you can just plug devices into particular VMs for other things.
In my usage I've never felt the need to share stuff other than the network/sound/storage devices that qubes makes just work. Other devices you tend to just plug into the particular VM that needs them. YMMV.
I would say that perhaps containers could do just as well, or some other technology. The thing qubes brings to the table is that other people are doing most of the heavy lifting to make a usable desktop out of a highly virtualized system.
There may be path-dependent reasons why the qubes approach isn't the best possible... but it doesn't matter, because so much stuff just working is worth so much. That's the compromise we always make when running a distribution... one could M-x butterfly and write one's own kernel from scratch, or whatever. Or you can run a system created by others. Their system may have decisions you disagree with or that are objectively bad, but they saved you 12 months of tinkering with the dynamic linker -- well worth it. :)
For me, the alternative of having my whole laptop compromised by some browser zero-day, or because a malicious party sent me some malware document, was just not viable. I was already carrying two laptops for isolation, and suffering some anxiety from the residual risk. But in my case I've been targeted specifically (due to cryptocurrency bullshit); a friend and former colleague was hit with an astonishingly sophisticated attack that used stuff like BMC vulnerabilities on his web server and then traversal with X11 forwarding and the like, all just to break into his desktop.
So I'd probably be using qubes today even if I could only move the mouse with my tongue and the computer were slowed down to the speed of a 486sx. But the incorrect belief that it would be that kind of hit really delayed my adoption. It's a hit, it's real, but at least for my usage it was far smoother than I expected.
I think right now the only obvious wart I experience is that full screen video stutters pretty badly. So I just don't watch video full screen on the laptop now. There are things that might fix it, but I haven't bothered even trying.
There are benefits I didn't expect too. For example, the operating system image in a normal application VM isn't persistent, only your home directory. So you can just scribble all over the OS install in an app vm and it'll go away when you restart it. If you want a change to be persistent, you change the underlying templatevm. So to get something working I can totally take a chainsaw to my configuration, confident I won't get stuck with anything broken. Once I figure out the changes, I can apply just the required steps in the template.
Another benefit is that updating fedora versions is a riskless breeze: install a new template vm, shut down your app vms, click to change template, and restart them. If some particular app vm is broken, switch it back and worry about it when you have time.
See:
- NVD page for CVE-2024-9680: https://nvd.nist.gov/vuln/detail/CVE-2024-9680
- Mozilla security advisory: https://www.mozilla.org/en-US/security/advisories/mfsa2024-5...