Mozilla fixes Firefox zero-day actively exploited in attacks (bleepingcomputer.com)
196 points by timokoesters 33 days ago | 143 comments



Seems bad. "An attacker was able to achieve code execution in the content process by exploiting a use-after-free in Animation timelines. We have had reports of this vulnerability being exploited in the wild."

See:

- NVD page for CVE-2024-9680: https://nvd.nist.gov/vuln/detail/CVE-2024-9680

- Mozilla security advisory: https://www.mozilla.org/en-US/security/advisories/mfsa2024-5...


Ticket in Tor Browser: https://gitlab.torproject.org/tpo/applications/tor-browser/-...

From the description, the exploit seems to be JavaScript-free, which makes it even scarier. Imagine the libwebp decoder bug, except this time embedded media blocking doesn't really help (who blocks CSS?).


I block CSS animations:

https://news.ycombinator.com/item?id=33223080

I'd be interested to know if it's sufficient to avoid this recent vulnerability. Either way, it confirms my opinion that UI animations are an anti-feature.


As a uBlock Origin filter (paste in Settings > My Filters):

  ! No CSS animations
  ##*,::before,::after:style(transition:none !important;animation-delay:0ms !important;animation-duration:0ms !important)
  
  ! No CSS animations (different method)
  ##*,::before,::after:style(animation-timing-function:step-start !important;transition-timing-function:step-start !important)
There's other (often perf heavy) CSS clutter that's nice to get rid of:

  ! No image filters
  ##*,::before,::after:style(filter:none !important)
  
  ! No text-shadow
  ##*,::before,::after:style(text-shadow:none !important)
  
  ! No box-shadow
  ##*,::before,::after:style(box-shadow:none !important)

  ! No rounded corners
  ##*,::before,::after:style(border-radius:0px !important)

No rounded corners is fun. You realize many loading spinners are actually CSS rounded corners! YouTube becomes almost unrecognizable — mercifully — especially if you also revert the new TikTok-inspired font:

  ! Un-bold Youtube
  youtube.com##*:style(font-weight:400 !important)


Firefox doesn't seem to support CSS animation-timeline; I think this refers to the JS AnimationTimeline API? In that case, the "dom.animations-api.timelines.enabled" flag should control it.


The vulnerability did require JavaScript to trigger.

I think it would be a labor of love and craftsmanship to exploit a content process today without using JavaScript.


> The vulnerability did require JavaScript to trigger.

Can you back this up with a citation?


He works (or recently worked) for Mozilla on security-related projects. The code commit fixing the issue was isolated to the /dom/ directory in the source tree, and Firefox does not support CSS Animation Timelines. The Animation Timelines code is not directly accessed by web devs, and it appears the only way to execute that code is via the JS API for Animation Timelines. I'm not a web security expert, but the signs seem to point to him being correct.

Once again, JS proves to be a security risk.


Is this karma for dropping Rust? (please don't explain how Rust actually wouldn't fix this)



A note for Ubuntu users: if Firefox is installed using `snap` (the default) and you run `snap refresh`, it will output "All snaps up to date" - but this is not true! You have to close Firefox, then run `snap refresh` for snap to upgrade Firefox...


Not an Ubuntu or snap user, but curious: why?


The snap store also complains regularly that it can't update the snap store because the snap store is running. It is just terrible software overall.


Neither am I, but it seems that snap refreshes can be inhibited programmatically if they might cause damage when performed in the background. So it is technically correct that no snap refreshes can be performed at this point, but the message doesn't clearly state that some refreshes have been inhibited (possibly because there would be tons of them if they were exhaustively listed?).


It doesn't update actively running application containers.

You don't actually need to stop it before running “snap refresh” though; it'll just stay out of date as long as it is kept open. Once the application stops running, the updated image will be used the next time it is launched.

[caveat: I'm not a snap user myself currently, so my information may be inaccurate, take with a pinch of your favourite condiment]


Interesting. On Arch, Firefox just refuses to keep working after I've updated, and asks me to restart it.


That is Firefox's standalone behavior when it detects that its files have been changed and differ from the ones loaded by the current instance. In theory, what snap is doing avoids changing a program's files while it is running.


Yeah, that makes sense.


Now, I don't know what your setup looks like, but I don't think anything is distributed as a snap by default on Arch. At least AFAIK it's mostly an Ubuntu & derivatives thing.


Well, it was more broken in more interesting ways before they implemented the forced restart


Yeah, I remember. That was fun. haha


ugh thanks, I just thought that the snap release hadn't been cut yet, lol. pretty dumb UI decision


Red Hat Bugzilla has a tiny bit more info about dates (looks like very recent?) and is public:

https://bugzilla.redhat.com/show_activity.cgi?id=2317442

The bug likely affects Thunderbird as well, by the looks of things.


Would Rust and its memory safety stuff have prevented this?


In some sense, yes. Use-after-free is impossible in safe Rust (i.e., as long as you don't use the `unsafe` keyword).
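
A minimal sketch of the kind of thing the borrow checker rejects (my own toy example, nothing to do with the actual Firefox bug):

  fn main() {
      let animation = String::from("timeline");
      let r = &animation; // shared borrow of the data
      println!("{r}");    // fine: the data is still alive here

      // The use-after-free version does not compile:
      //   drop(animation); // error[E0505]: cannot move out of `animation`
      //   println!("{r}"); //               because it is borrowed
  }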


I'd argue that this is a good example of how rust could've prevented use-after-free, but y'know, I'd obviously be glazing.


It's not as easy as everyone might think. You could take a look at this thread titled 'My big problem with Rust is too much "unsafe" code' [1].

[1] https://news.ycombinator.com/item?id=41792477


Java too!


The difference being that no one in their right mind is thinking of rewriting a browser in Java to also make it faster, while that's exactly what Servo/Stylo etc are all about.


And don't forget Ada.


And OCaml or Haskell :)


Why go fancy? Even Python saves you from use-after-free.


Come on, I had to outbid Ada somehow :)

For what it's worth, Python was also considered at some point for use in the Firefox codebase. I don't remember the rationale for not adopting it, but I think the idea was "we all like Python, but we already have one messy language (JavaScript), let's not make it two".


Brainfuck too!


> The vulnerability impacts the latest Firefox (standard release) and the extended support releases (ESR).

Does that mean it impacts Firefox 131.0.+, Firefox ESR 115.16.+ and Firefox ESR 128.3.+?

I.e. Firefox 130.0.+ or Firefox ESR 114.+.+ are fine? It's not clear to me when the vulnerability was introduced...


> This vulnerability affects Firefox < 131.0.2, Firefox ESR < 128.3.1, and Firefox ESR < 115.16.1.

https://nvd.nist.gov/vuln/detail/CVE-2024-9680


CVE affected ranges are always far too wide. This one obviously can't affect anything before ~75 or so, because Firefox didn't have the timeline API before then. It's annoying that they don't distinguish an unknown lower bound.


Well, I think their thinking is that:

- we don't want users to run 75

- 75 is so riddled with CVEs by now, who cares if there is one more

But I agree it appears lazy, because it would have been easy to determine in this case, if I understood you correctly. Someone would have had to test it though, at the very least.


Got my update on Ubuntu this morning, but not seeing any updates for Firefox Android in Google Play yet.


We need a browser written in a managed language.

Even if it means some perf drop, modern hardware will get it back in X years, while safety will be significantly improved.


Rust was created at Mozilla and currently 11.7% of the Firefox source code is in Rust:

https://4e6.github.io/firefox-lang-stats/

That's down from 12.49% at the peak in July 2020 so I assume the conversion work was halted after the layoffs in 2020:

https://docs.google.com/spreadsheets/d/1flUGg6Ut4bjtyWdyH_9e...


Android code, which is quite considerable in size, was recently imported into mozilla-central.


I was thinking it looked like non-Rust code diluted the percentage in the graphs, rather than any amount of Rust code being removed.



What's that got to do with anything? The CEO situation is awful, but this is just flame bait on your part.


It all has to do with resource management here.

It's obvious that laying off the people who were working hard at making the non-profit's flagship product more robust wasn't going to result in an increase in that product's security. Could the whole lay-off have been prevented? That would require some number analysis here, and insights I lack.

Could at least some of the terminations have been avoided? Freezing the income of the CEO until some agreed metrics improved, and using the amount thus spared to save some employees' salaries, was certainly an option here, wasn't it?

Claiming "think of my family, look how much more some other people earn elsewhere" while almost simultaneously (at organization level at least) putting so many people in a jobless position, that’s a rather bold cognitive dissonance to throw at the world to my mind.

If pointing out "odd financial priorities" of a non-profit is flame bait, one might wonder how humanity is supposed to mend all organizational dysfunctions it can ever fall into.


It’s pretty relevant considering the continued mismanagement of Mozilla.

Nobody would care about Mozilla in 2024 without Firefox, but Firefox development seemingly takes a back seat to a variety of other pet projects that Mozilla’s management tries (and keeps failing, over and over) to chase.

For example, they’ve been trying a pivot to become a community-focused privacy company the last couple of years, yet are fine with implementing ad topics.

AFAIK didn’t Safari advocate against it over privacy concerns? If so, what is Mozilla doing?

Or their partnering with a shady company for removing data from data brokers.

Before the privacy pivot, there was the “we want to make browsing better” pivot with their acquisition of Pocket that went nowhere.

From the outside Mozilla looks like a low-scoring charity grift you’d find on CharityNavigator with how far they deviate from the missions they claim to support.


Why managed when it could be in Rust and have both performance and safety?

The Servo team shouldn't ever have been laid off. Yes, I'm aware a team is working on it now, but it isn't moving with the same speed and enthusiasm as when it was funded by Mozilla, is it?


I'm aware of Rust, but there is C#/Java too, with a way bigger ecosystem, a bigger community, and a lower barrier to entry.

At the end of the day, a web browser is just a bunch of parsers and compilers working together, plus some video/audio


Microsoft is rewriting C# stuff in Rust! https://news.ycombinator.com/item?id=39240205


The problem with writing a browser in C# or Java is that neither of them can provide anywhere close to the level of thread safety that Rust does.


Both Java and C# have thread-safety primitives that are also pretty easy to use, e.g. the Java concurrency package.


Having done concurrency in Java and Rust, my experience is that Rust's concurrency primitives are an order of magnitude better than Java's. I haven't tested C#'s.


The cool thing about Rust is that it forces you to use thread-safety primitives. It's called fearless concurrency.


No, that's not what I'm talking about. C# and Java have nothing like the Send and Sync traits, and they don't have & and &mut.
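
A toy sketch of what Send buys you (my own example, not tied to any real code): Rc uses non-atomic reference counting, so it isn't Send, and the compiler statically refuses to move it to another thread, while Arc is fine:

  use std::rc::Rc;
  use std::sync::Arc;
  use std::thread;

  fn main() {
      let shared = Arc::new(42);
      let handle = thread::spawn(move || println!("{shared}")); // ok: Arc<i32> is Send
      handle.join().unwrap();

      // The Rc version is rejected at compile time:
      //   let shared = Rc::new(42);
      //   thread::spawn(move || println!("{shared}"));
      //   // error[E0277]: `Rc<i32>` cannot be sent between threads safely
      let _keep_import: Rc<i32> = Rc::new(0); // only here so the Rc import is used
  }
Java and C# will happily let you hand any object to any thread; whether that's safe is entirely on you.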


C# doesn't have Send and Sync, that is true. It frequently does not need either, because it uses GC instead of affine types for automatic memory management. Synchronization is indeed "just don't write bugs", where Rust offers a massive upgrade, but .NET CoreCLR's memory model is stricter than the C one - for example, object reference assignment has release semantics - so quite a few footguns are luckily avoided: https://github.com/dotnet/runtime/blob/main/docs/design/spec...

'&' and '&mut', however, are your 'ref readonly' and 'ref' respectively.


Is there anything in C# which statically ensures what Rust does: that you must acquire a lock to access the data protected by a mutex? Rust also makes MutexGuard not Send, i.e. the mutex can't be released on a different thread from the one where it was acquired.

With traditional mutex APIs it's just far too easy to get it wrong. I think you just have to structure your thread-related APIs to be misuse resistant. As humans, we're just not good enough at not making mistakes.
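
For reference, the shape I mean in Rust (a generic sketch, not from any real codebase): the data lives inside the Mutex, so the only way to reach it is through the guard returned by lock(), and the lock is released when the guard is dropped:

  use std::sync::{Arc, Mutex};
  use std::thread;

  fn main() {
      // The u64 lives *inside* the mutex; there is no other way to name it.
      let counter = Arc::new(Mutex::new(0u64));

      let mut handles = Vec::new();
      for _ in 0..4 {
          let counter = Arc::clone(&counter);
          handles.push(thread::spawn(move || {
              let mut guard = counter.lock().unwrap(); // the only path to the data
              *guard += 1;
          })); // guard dropped at the end of the closure, on the locking thread
      }
      for h in handles {
          h.join().unwrap();
      }
      println!("{}", *counter.lock().unwrap()); // prints 4
  }
Forgetting to lock isn't a misuse you can even express here; it's a compile error, because the data has no name outside the mutex.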


> I think you just have to structure your thread-related APIs to be misuse resistant

The premise of this stands. C# approaches this in a more traditional way, exposing the usual set of synchronization primitives. It's still a step above C and, usually, C++, because you don't need e.g. atomic reference counting for objects shared by multiple threads.

Concurrent access itself can be protected as easily as doing

  lock (obj) {
    // critical section
  }
This, together with the thread-safe containers provided by the standard library (ConcurrentDictionary, ConcurrentStack, etc.), is usually more than enough.

What Rust offers in comparison is a strong guarantee for complex scenarios, where you usually have to be much more hands-on. In C#, you can author types which e.g. provide an "access lease" that looks like `using var scope = service.EnterScope(); ...`, where `using` turns into a try-finally block whose finally clause, guaranteed to execute, calls .Dispose() on the scope.

It's a big topic, so if you have a specific scenario in mind - let me know.


Thanks! To be fair, there are certain advanced scenarios which Rust's mutex model can't handle either -- sometimes you want to protect writes via a mutex, but allow reads without them (maybe you're okay with torn state on reads). This is a rare, expert use case with architecture-specific considerations that must be handled with care.

I do think Rust's mutexes handle almost every use case that can be thrown at them, though, and in a way where it's next to impossible to get it wrong. I think if you're writing a browser engine in the 21st century you should bake in parallelism and concurrency from the start, and Rust is the most suitable language to do that in.


> At the end of the day, a web browser is just a bunch of parsers and compilers working together, plus some video/audio

That's... an interesting reduction :) I guess it's about as true as saying that the Linux Kernel is a bunch of I/O and a scheduler?


Servo exists, in Rust. I don't know of any browser engine in C#/Java?

Also, modern browsers as a whole outsize entire OSes (sans browser)...


> I don't know of any browser engine in C#/Java?

A famous one is HotJava. According to Wikipedia, it was also the first web browser to support Java applets.


> A famous one is HotJava.

"Final release: Late 2004; 20 years ago"

I guess I should've specified "not completely and utterly dead"? ;D

(Also, the size and complexity of a browser at that point in time was arguably still a whole lot less than a modern one)


It was also a mess :)


> At the end of the day, a web browser is just a bunch of parsers and compilers working together

At the end of the day, an OS is just a bunch of command lines being piped together. /sarcasm

Sure, you are just missing: rendering, layout, security, network traffic for sockets, low-level control over hardware, writing a decent enough VM, image processing, video playback, music playback, compression, decompression, self-update, decryption, don't forget add-ons (people love add-ons) plus add-on security and isolation, web edit and debug tools, network analysis tools, etc.

You know, little things.


Why would you need to reinvent the networking layer instead of just sending HTTP requests via a mature, battle-tested lib available in your programming ecosystem, e.g. from MSFT? Same with crypto, sockets, compression, etc.?

Video and audio I mentioned.

Extensions are tricky, right, but more from a privacy standpoint, cuz after all you can just expose too much.


All the major browsers came out when Windows XP had substantial market share.

So browser vendors couldn't rely on the platform to provide up-to-date SSL support. Or MP3 support. Or MPEG-4 support. Or PDF support. This established the norm that browsers would ship their own video support, their own SSL support, and so on.

And Google realised they like the power this gives them - if Google wants to replace HTTP with QUIC or introduce a new video DRM standard, or a new video codec like VP9 - they don't need the cooperation of anyone outside of Google.

If Chrome bundles DRM support (allowing it to play Netflix), and its own HTTP/2 stack for speed - are you going to release a browser that's slower and doesn't play Netflix? Doesn't sound like a recipe for big market share.


Many of these components were made part of the ecosystem long after they were introduced in Firefox. Also, the more platform-specific you go for each component, the more subtle incompatibilities you introduce between Firefox running on different versions of Windows, or between Firefox for Windows vs. macOS vs. Linux. Also, for a very, very long time, Microsoft had an extremely poor record on security fixes. So what happens when you rely on a Microsoft HTTP library and Microsoft takes a year or two to patch a 0-day?

There are benefits to this approach, of course, but the costs would have been consequential.


Browsers are using new HTTP features much earlier than they're available in the system libraries. Browsers supported HTTP/2 and HTTP/3 before they were standardised enough to include in systems. The .NET HTTP client still can't even tell you about HTTP/2 early hints, as far as I understand it.

It's going to be the same for crypto and compression. Systems don't ship with Brotli, for example. The battle-tested implementations come to the browsers first in many cases - or at least they're battle-tested by the point anyone starts including them in .NET or Java.


Sure, not being on the leading edge is a disadvantage, but I guess you could still handle 99.x% of web pages.


> Why would you need to reinvent the networking layer instead of just sending HTTP requests via a mature, battle-tested lib available in your programming ecosystem, e.g. from MSFT?

Because modern browsers are essentially cross-compatible OSes.


So is .NET


You've clearly never tried to use .NET on a non-MS OS. Sure, it's possible. But it's also a royal pain, at least the last time I checked.


I've had .NET web apps in prod on Linux since 2018, and it works seamlessly.

Things changed after Core.


I'm happy to hear of your experience, but all we have at this point is a sample size of 2 with one good experience and one poor one…

(I've tried [and failed/given up after about 2 weeks] to get a .net desktop application to build on Linux.)


I hope you haven't been trying to use Mono or something similarly obscure for this, which unfortunately happens from time to time.

If you have the .NET SDK installed (you can get it with apt/dnf install dotnet8 or dotnet-sdk-8.0), you only need the following:

  dotnet new install Avalonia.Templates
  dotnet new avalonia.app
  dotnet run
If you don't like XAML, you can use https://github.com/AvaloniaUI/Avalonia.Markup.Declarative to write declarative SwiftUI-like code. You can also use F# if that's your cup of tea: https://github.com/fsprojects/Avalonia.FuncUI.

If you prefer GTK, there are rich GObject bindings that are a successor to GTK#: https://gircore.github.io/

Here are samples that demonstrate basic GTK4 usage scenarios: https://github.com/gircore/gir.core/tree/main/src/Samples/Gt...

All this should require less than 10 minutes including setup and such.

Lastly, a disclaimer: you do not need the C# Dev Kit extension for VS Code (it requires an account, which annoys many people, including me), only the base C# one, which is what gives you the language server, debugger, etc. If you are using VSCodium, which cannot use the closed-source vsdbg component that the base extension uses, you can replace it with https://github.com/muhammadsammy/free-vscode-csharp which uses an open-source debugger from Samsung instead. It can be rough around the edges but works well enough in standard scenarios. Just don't use Debugger.WriteLine over Console. :D


From the fine article:

> Throughout 2024, so far, Mozilla had to fix zero-day vulnerabilities on Firefox only once.

> On March 22, the internet company released security updates to address CVE-2024-29943 and CVE-2024-29944, both critical-severity issues

Vulnerabilities will be found in everything. Firefox is a fully internationalised application and it is FOSS. The team responsible for Firefox is doing a good job.


>Vulnerabilities will be found in everything.

Different ratios, different consequences, etc.


Sounds like Mozilla should invent a low-level language with great safety guarantees, maybe even name it after some form of oxidation process [1]. Then make a browser engine named after a motor [2], and then NOT axe the team responsible for it [3].

I think the last part might be crucial.

[1] https://www.rust-lang.org/

[2] https://servo.org/

[3] https://paulrouget.com/bye_mozilla.html


I'm aware of Rust, but there is C#/Java too, with a way bigger ecosystem, a bigger community, and a lower barrier to entry.


A long time ago, the possibility of using Java or C# in Gecko (the core of Firefox) was pondered.

Java was rejected because of the huge memory requirements and the unpredictable (and sometimes lengthy) garbage-collection pauses.

C# was rejected because (at the time) it was too tied to the Microsoft ecosystem and there was no way to get it to build on all the platforms for which Firefox is available. I don't remember garbage-collection pauses being discussed, but they would also be an issue.


I think of browsers these days as on par with OSes. I mean, they provide a runtime to execute binary code (wasm). They do process management and scheduling. They do a lot of things which, up until 15 years ago, we thought belonged to the realm of operating systems.

And history has shown that when you need to do that kind of low-level code, it's nigh on impossible to achieve acceptable results with a garbage-collected language. Many people have tried; none really succeeded.

Hence why Rust was made


It seems to me that both C# and Java have built their own niches and are hard or impossible to realistically use outside of them, for example to write a web browser.


Ladybird[1] is switching to Swift[2].

[1] https://ladybird.org

[2] https://news.ycombinator.com/item?id=41208836


Swift didn't save Apple from RCEs in BlastDoor.


That's not completely accurate. The plan is to use Swift for "security critical" areas like decoding data. It's unlikely core components like the layout/CSS engine will be converted to Swift.


> if it means some perf drop, modern hardware will get it back in X years

I think the unfortunate reality is that other browsers will also take advantage of that speed boost, sites will get even more bloated because they can, and it will stay unusable for a long, long time.


Run your browser in a namespace. You can use Firejail for that.


They already are partly in JS so there's a smooth path.

(Wasm isn't safe but could be a building block too)


I can't find the link right now but I seem to remember that Firefox already replaced some internal native subsystems with the same code compiled to WASM - or maybe even compiled to WASM and then translated back to C, which basically adds a runtime memory safety layer to unsafe C code at the cost of some performance (I think it was a couple of media codecs, but not sure).

Not sure why you think that WASM is less secure than JS though. Even if the WASM heap has internal corruption there's no way for this to do damage outside the WASM sandbox that wouldn't be possible in JS.


> I can't find the link right now but I seem to remember that Firefox already replaced some internal native subsystems with the same code compiled to WASM - or maybe even compiled to WASM and then translated back to C

Was it this one? https://hacks.mozilla.org/2021/12/webassembly-and-back-again...

Or perhaps this one? https://hacks.mozilla.org/2020/02/securing-firefox-with-weba...


If your browser is running in a wasm sandbox, it's a minor comfort that "only" your browser gets compromised, given that it contains all your creds, etc.


Only parts of the browser are running in multiple small isolated WASM sandboxes, and those WASM sandboxes are isolated from the outside world about as well as if they were running in their own processes.


Compartments of internally unsafe sandboxes are what we have now, with browsers employing native-code sandboxes, isolated renderer processes, etc. It gets leaky.


> there's no way for this to do damage outside the WASM sandbox

Java applets promised a sandbox, and then we had years of continuous vulnerabilities escaping said sandbox.


Java applets didn't sandbox shit though, because you could call straight into your own native code via JNI (I know because I used exactly that approach to integrate a Win32 game client into browsers). The only thing the applet launcher did was ask whether it was ok to run the applet.


You're probably thinking of Microsoft's Java. I was talking about proper Java.


This is true, but adding sandboxing to browsers has been a huge part of driving up the difficulty/cost of browser exploits, and driving down the frequency of their use.

And we'll also pay for a bypass of the wasm sandbox. (Actually, looking at our table, I'm going to try and get the bounty amount upped...)


It's fixed in the developer edition 132.0b5 too, if you were wondering.


I was indeed wondering. Thank you.


This seems quite bad, but how practical is it?

Like, the attacker will get read and write access to part or all of some other object allocated on the heap when the memory is reused?

Seems hard to do anything useful with.


I wonder how many skilled black hats work for Iran, China, or Russia.

And I can imagine that those countries use front companies to buy exploits.

I just hope those black hats understand that their discoveries might land in the wrong hands.

I guess those black hats don't like authoritarian regimes.



Regain your ability to sleep at night: https://www.qubes-os.org/


From your experience, what are the system requirements needed to use it comfortably as your daily driver?


They're increased, and some things are just obviously slow at least without extra effort to setup things like gpu pass-through. But is it worth basically turning back the clock on your computer's performance a few years to live in a world where a random click from HN or reddit can't quietly compromise your entire computer? I think so.

Probably the biggest thing is to have a lot of RAM, because if you're really using the virtualization, it's a bit RAM-inefficient.

Many things I expected to be hard or annoying just turn out to be non-issues. Qubes has lots of good automation to make it pretty seamless to use multiple VMs.

I was already a Fedora user, so I just copied my old home into a new app VM and was instantly productive. Then over time I weaned myself off the monolithic legacy VM into partitioned VMs.


> "obviously slow at least without extra effort to setup things like gpu pass-through."

AFAIK, unless you have a desktop computer filled with GPUs in PCI Express slots, there is no way you can use GPU passthrough with multiple VMs.

Doesn't that kind of defeat the purpose of Qubes OS?


I dunno about that? Do you use more than one GPU-intensive task at a time?


Please define GPU-intensive. Using a GPU for video decoding can mean lower battery usage in some cases.

Also, it is not only about doing these tasks at the same time: if you need to shut one VM context down to be able to start another because they can't be used concurrently, it makes for a very tedious user experience.


A normal Qubes user workflow is doing all your GPU-requiring stuff in a single AppVM - you're not forced into isolation that doesn't work for you.

But you're also not running qubes if minimizing battery usage is a high priority for you.

As for the tedium: perhaps a little, but bringing up a terminal on a not-currently-running app VM takes about 5 seconds for me, so it's faster than you might expect.

I think in general my view is that qubes has serious operating costs but they are much less than I anticipated.

And what's the real alternative? It's still better than carrying 5 laptops in terms of ease and usability.

We live in a world where browsers are constantly required, but where there probably hasn't been a single day since their initial releases when Chrome and Firefox were without an RCE vulnerability (though often not a publicly known one).


USB-C PD has made laptop battery life less of an issue than it used to be, because carrying around extra battery life with external batteries is so easy. I typically carry one or two of these in my backpack: https://iniushop.com/products/iniu-b64-140w-27-000mah-fast-c...

That's enough for hours of intensive usage on my laptop, like running nodes and compiling stuff. And as you know, I run Qubes too!

I originally got an external battery when I went to Ukraine while Russia was trying to destroy the electricity grid; I got the largest battery I could legally carry on a plane (typically 100Wh is the limit). I ended up liking them enough to buy a few more.


> a world where a random click from HN or reddit can't quietly compromise your entire computer

Doesn't Flatpak also solve this?


No, Flatpak is very much not a security sandbox.


Do you mean you don't trust it? Because they do describe its sandboxing as a security feature.


Flatpak or Firejail would have protected you from this vulnerability; not sure what they're on about here. They aren't 100% proof against everything, of course.


The Firefox Flatpak has write access to your home directory, so it can simply edit your bashrc even if there are no more direct escapes, no?


The Firefox Flatpak has neither write nor read permission to your home directory. At least that's my takeaway from browsing file:///home/myuser. If you try to open or save a file using the native dialogs, you do grant the appropriate permission on demand, but that goes through the xdg portal, outside the app scope, which is specifically designed for this.


It's easy for something with arbitrary code execution to escape the sandboxing. https://hanako.codeberg.page/


I couldn't reproduce the tty example, but it might as well be a mistake on my side. Other than that, the sandboxing spec itself is as safe as I'd expect. I reckon that Wayland applications not packaged to require $HOME access or certain D-Bus services are not known to escape the sandbox. This seems to be the case for Firefox, afaict.




It depends on how you use it. The more you leverage its strengths (by splitting things out into AppVMs), the larger the overhead in memory and CPU.

I've run through a few QubesOS installations over the years and would say for me it's ~2x memory and a couple of cores overhead.


So aim for at least 64GB of memory?


Yeah. I have to stay conscious of resource usage at 32 and can't run all the stuff I'd really want (which TBF is a fair bit).


If you do anything with a GPU anywhere, you can essentially forget it. Or at least that was the case a few years ago when I briefly toyed with using Qubes seriously.


Important consideration, thank you.

Edit: Seems like someone has managed to get CUDA to work, with some effort.

https://forum.qubes-os.org/t/nvidia-gpu-passthrough-into-lin...


The situation has indeed improved and things have gotten smoother over the past few years IME! (AMD)

If you run multiple GPUs (so one for your GUI and the rest for whatever else you want to do), PCIe passthrough for the latter is pretty straightforward these days.

You can set up a dedicated gpu qube for the former - recommended but optional.


Does virtualization have that big a security benefit over containers? It's certainly a lot more expensive.


Containers share the same kernel as the host. If you're happy sharing millions of lines of monolithic C between trust domains ...


Yes, but it's a compromise, because I'm not happy spinning up tons of kernels and trying to share access to devices that do not want to be shared, either.

You're right that the trusted codebase is huge, but I sincerely do not know how big a problem this is in practice, hence the question.


Qubes does have answers to the device stuff, like sticking network devices in a network VM, which only talks to a firewall VM, which talks to your other VMs. There is a reasonable GUI where you can just plug devices into particular VMs for other things.

In my usage I've never felt the need to share anything other than the network/sound/storage stuff that Qubes makes just work. Other devices you tend to just plug into the particular VM that needs them. YMMV.

I would say that perhaps containers could do just as well, or some other technology. The thing Qubes brings to the table is that other people are doing most of the heavy lifting to make a usable desktop out of a highly virtualized system.

There may be path-dependent reasons why the Qubes approach isn't the best possible... but it doesn't matter, because so much stuff just working is worth so much. That's the compromise we always make when running a distribution... one could meta-x butterfly and write one's own kernel from scratch, or whatever. Or you can run a system created by others. Their system may have decisions you disagree with or that are objectively bad, but they saved you 12 months of tinkering with the dynamic linker - well worth it. :)

For me, the alternative of having my whole laptop compromised by some browser zero-day, or because a malicious party sent me some malware document, was just not viable. I was already carrying two laptops for isolation, and suffering some anxiety from the residual risk. In my case I've been targeted specifically (due to cryptocurrency bullshit); a friend and former colleague was hit with an astonishingly sophisticated attack that used stuff like BMC vulnerabilities on his web server and then traversal with X11 forwarding and the like, all just to break into his desktop.

So I'd probably be using Qubes today even if I could only move the mouse with my tongue and the computer were slowed down to the speed of a 486SX. But the incorrect belief that it would be that kind of hit really delayed my adoption. It's a hit, it's real, but at least for my usage it was far smoother than I expected.

I think right now the only obvious wart I experience is that full screen video stutters pretty badly. So I just don't watch video full screen on the laptop now. There are things that might fix it, but I haven't bothered even trying.

There are benefits I didn't expect, too. For example, the operating system image in a normal application VM isn't persistent - only your home directory is. So you can just scribble all over the OS install in an app VM and it'll go away when you restart it. If you want a change to be persistent, you make it in the underlying TemplateVM. So to get something working I can totally take a chainsaw to my configuration, confident I won't get stuck with anything broken. Once I figure out the changes, I can apply just the required steps in a template.

Another benefit is that updating Fedora versions is a riskless breeze - install a new template VM, shut down your app VMs, click to change the template, and restart them. If some particular app VM is broken, switch it back and worry about it when you have time.


Containers aren't a security measure, so you'd be comparing a stick of wood to a car in this case.


It references "Bug 1923344" but when I click the link I get "You are not authorized to access bug 1923344."


> It references "Bug 1923344" but when I click the link I get "You are not authorized to access bug 1923344."

They usually make the bug reports public eventually.


This is a feature.


Until the next one...

It has been like that for most 'internet software' in the last decades; no light at the end of this tunnel.


Fixed many months ago, just being made public now, according to the bug tracker. Why the 7-month delay?


"Fixed in Firefox 131.0.2" which was released 21 hours ago? (https://ftp.mozilla.org/pub/firefox/releases/131.0.2/)


Because if you make it public too early, it gives attackers time to write exploits targeting unpatched versions.

Firefox is used in other projects, so the patch needs to spread, and that takes time.


What are you talking about?

The fix was released today, and FF says they received the report 25 hours before that: https://infosec.exchange/@attackanddefense/11328207943028074...


I didn't get the ESR 128.3.1 update until yesterday.


Why the need for patch releases like 128.3.1, then?



Citation needed.



