That depends, doesn't it? You'll be safe from Little Snitch, but Little Snitch will have less power to protect you.
Also, I wouldn't be surprised if people working on the Network Extension framework were exactly the kind of people who want to be able to continue using Little Snitch themselves.
I assume they and other devs in the space are also working with Apple to get the necessary API calls for feature parity with prior versions.
Apple will make them waste lots of time with one support team while another team implements iSnitch, part of the next macOS, using private APIs.
>Badly written device drivers can cause severe damage to a system (e.g., BSoD and data corruption) since all standard drivers have high privileges when accessing the kernel directly. The User-Mode Driver Framework insulates the kernel from the problems of direct driver access, instead providing a new class of driver with a dedicated application programming interface at the user level for interrupts and memory management.
If an error occurs, the new framework allows for an immediate driver restart without impacting the system.
Has Windows suffered from this change, or has the added stability of having a graphics stack capable of restarting itself on error instead of blue-screening the entire machine been a good thing?
Anyone who remembers WinNT 3.51 or so will likely remember the horrible video performance before most Windows graphics code was moved into the kernel in win32k.sys...
I think they value the security and reliability gained by evicting those kernel extensions, and nobody dreams of using this in some high-performance production switch, so I think it's ok.
But yes, network perf is needed only for workflows that involve large remote resources, not for all video editing use cases out there.
You'd maybe have a point if it were an L4, but mach ports are now cited as an example of how not to do microkernel IPC because of how much overhead they incur.
When your only goal is to avoid throughput bottlenecks, you don't need anything fancy. Avoid having a context switch per packet and you're most of the way there. A context switch every millisecond, or something in that order of magnitude, is completely harmless to throughput. If it causes your core to process 10% fewer packets than if it had zero context switches, then use 1.5 cores. Context switches take nowhere near a millisecond each.
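A quick back-of-envelope sketch makes the amortization argument concrete. The per-switch and per-packet costs below are assumed round numbers for illustration, not measurements of any particular system:

```python
# Back-of-envelope: what fraction of CPU time goes to context switches?
# Both cost constants are assumptions chosen for illustration only.

SWITCH_COST_US = 2.0    # assumed cost of one context switch, microseconds
PACKET_COST_US = 0.05   # assumed per-packet processing cost, microseconds

def overhead_fraction(packets_per_switch: int) -> float:
    """Fraction of CPU time spent on context switching when one
    switch is amortized over `packets_per_switch` packets."""
    useful_work = packets_per_switch * PACKET_COST_US
    return SWITCH_COST_US / (SWITCH_COST_US + useful_work)

# One switch per packet: the switch dominates everything.
print(f"per packet:    {overhead_fraction(1):.0%}")     # ~98% overhead
# One switch per 1000 packets: overhead becomes marginal.
print(f"per 1000 pkts: {overhead_fraction(1000):.1%}")  # ~3.8% overhead
```

The exact numbers don't matter much; the point is that the overhead fraction collapses as soon as switches are amortized over batches, which is why a switch per batch is harmless while a switch per packet is not.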
> Another factor that negatively affects performance is context switching. When an application in the user space needs to send or receive a packet, it executes a system call. The context is switched to kernel mode and then back to user mode. This consumes a significant amount of system resources.
And they're talking about the socket API, so when they say "a packet" they really mean "any number of packets".
The rest is mainly about metadata that needs to be maintained specifically because kernel and user are in different address spaces and can't directly share in-memory data structures, and the problem is exacerbated by splitting the device driver away from the network stack, as macOS is doing.
The only part that isn't ultimately about the user/kernel split and its costs is the general protocol handling in the network stack, and that was always the most specious of DPDK's claims anyway.
Just so you know, you're talking to someone who used to write NAS drivers.
It's completely different if you have one switch per packet vs. one switch per thousand packets.
You're taking things to a ridiculous extreme to imply that any amount of context switching is a deal-breaker. There is a specific number of context switches before you reach 1%, 10%, 50% overhead. There are many reasons to avoid context switches besides overhead, but they are all either based on the underlying implementation or simply not critical to throughput. You're oversimplifying, despite your credentials. The implementation can be changed/fixed without completely purging context switches. There are many tradeoffs, and doing pure user-space is a viable way to approach things, but it's not the only approach.
Memory sharing and metadata slowness are easy bottlenecks to hit, but the way you avoid them, by changing data structures and how you talk to the different layers of code and the device, works whether you put the stack in the kernel, in pure user space, or split it between the two.
Wouldn’t these be Meltdown mitigations?
Anything tun-based tends to have the same problem.
The network stack in Windows is part of the kernel, and I haven't heard of userspace implementations of it like DPDK or PF_RING on Linux. GP is wrong about their performance though, as a userspace implementation can actually improve it (Cloudflare has a good article on this).
Also, it's not like limiting vulnerabilities to user space is always a big improvement. If someone hacks my user account on a single-user computer, they have access to all the data I care about anyway. They could ransomware my stuff even without kernel access.
It's all about the threat model. Engineers at the company were considered trusted actors and they were the only ones permitted to connect. If that layer failed, there is no way cache invalidation errors would be the fastest way in.
Security mechanisms which prevent you from doing the thing you're setting out to do are worthless. Making a computer too slow to be useful is one of the ways to do that. In this specific case, if moving Little Snitch's functionality to userland means that the performance hit of running it was large enough that I have to turn it off when doing network performance sensitive things (say, video conferences) then it'd be a net loss in security compared to the status quo of it running in kernel mode.
It's a long journey - started using Windows. macOS is a nice stopgap, but the long-term destination is probably something like OpenBSD or Qubes OS.
Perhaps eventually replacing much of the old software on my machine with stuff written in memory safe languages like Rust. There's some far-off efforts like Redox OS that may well end up being an option for me.
I keep my eye on security developments and I try to improve my situation as and when I have the time/energy to.
EDIT: To say, I have also switched from iOS to Android - after many years of waiting till Android itself became more secure. I've also dumped a lot of non-free software like Google Authenticator for free alternatives like andOTP. I'd like to eventually run something like Replicant or whatever is current/actively developed in the future.
But this is just me. Everyone should use what they are comfortable with.
Apple will just slowly write itself into the equation so that Little Snitch can no longer mess with whatever muddled idea Apple seems to think is important.
Already with Catalina you have to connect to Apple and ask permission before you can even install Little Snitch. That means Little Snitch can't protect you from Apple, even if you've told Apple "my machine doesn't connect to the internet".
And your machine contacts Apple every bit as often as Microsoft machines do, even though their philosophy is supposed to be different.
Bottom line: you should not have to ask Apple's permission to do anything with your machine.
Of those commercial vendors, I have worked with Xenix, DG/UX, Solaris, AIX, HP-UX, and Tru64.
None of them ever cared about being philosophers.
Also, if you interface directly to your WAN, you can see all the bots/worms/etc. that try to connect to your IP. I got a surprising number of NetBIOS queries from Iran (I'm assuming from EternalBlue-based malware trying to connect), but I highly recommend NOT doing this. It's the wild west outside your firewall.
You mean outside my $5 NAT WiFi router last updated 6 years ago (because the manufacturer won't maintain it any more and the ISP never gave me the admin password anyway)?
I never trust ISP provided equipment to do my routing, if I can't use it in modem mode (or provide my own modem) then a DMZ and port forwarding have to do ... but I'd sooner just choose another provider.
The LS proxy completely overwhelmed me. I thought I could be savvy and limit traffic. Yeaaaaah no. Once I started observing what was actually flying around it's... it's just insanity how many requests are made in just a few seconds. What else can I do but throw up my hands and hope for the best? But I guess it won't matter soon.
I subscribe to rule groups through hostblocker.app, which pulls HOSTS files from different known websites and compiles them into a .lsrules file that Little Snitch can use.
While I cannot vouch for the website's underlying code (I did not write it and cannot find an open source implementation), it only provides rules, and I can edit any rule group to my liking after subscribing to it.
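The compilation step described above is simple enough to sketch yourself. Here's a minimal Python version that parses HOSTS-style blocklist lines into a JSON rule group; note that the key names (`"name"`, `"description"`, `"denied-remote-domains"`) are my assumption about the .lsrules schema, so verify them against Little Snitch's rule group documentation before relying on this:

```python
import json

def hosts_to_lsrules(hosts_text: str, name: str) -> str:
    """Convert HOSTS-style blocklist text into a JSON rule group.

    The top-level keys used here are an assumption about the
    .lsrules format, not taken from official documentation.
    """
    domains = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments
        if not line:
            continue
        parts = line.split()
        # HOSTS blocklist lines look like "0.0.0.0 ads.example.com"
        if len(parts) >= 2 and parts[0] in ("0.0.0.0", "127.0.0.1"):
            domains.append(parts[1])
    group = {
        "name": name,
        "description": "Compiled from a HOSTS blocklist",
        "denied-remote-domains": sorted(set(domains)),
    }
    return json.dumps(group, indent=2)

sample = "# comment\n0.0.0.0 ads.example.com\n127.0.0.1 tracker.example.net\n"
print(hosts_to_lsrules(sample, "My blocklist"))
```

Because the output is just JSON, you can inspect and hand-edit the resulting rule group before subscribing to it, which is exactly the audit step mentioned above.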
If you allowed crowdsourced rules, someone could sneak in a rule that says to allow their random app to connect to a random host, which is how malware exfiltrates your private data.
How does this combo help you not be overwhelmed?
I haven't used micro snitch, though.
I can second this. Also, if you're overwhelmed, you probably have a lot of garbage applications making too many outbound network requests to trackers and who knows what else, which is actually nice to know about. Chrome-based browsers and Electron-based apps seem to be particularly awful about this. My HP printer app tries to connect to Google Analytics, and Microsoft RDP phones home before connecting to a session. I really value that kind of insight and enjoy being able to control it.
Micro snitch just alerts me when my mic or camera are activated. I can live with that.
EDIT: I can't find anything that references kernel extensions in the conformance section of the spec, so maybe 10.16 will adhere to the UNIX03 standard after all.
UNIX certification doesn't say anything about how a kernel should be implemented, or what kind of driver architecture is used.
> When will Little Snitch be updated to the new APIs?
> The replacement APIs that are currently available (NetworkExtension framework on macOS 10.15.4) are not yet completely sufficient to implement the full functionality of Little Snitch. But we are working closely with Apple to fill the remaining gaps and we expect that a beta version of macOS 10.16 (most likely available at the next WWDC) or even an upcoming version of 10.15 will provide what is missing. As soon as the APIs allow us, we will complete the transition of Little Snitch to the new NetworkExtension API. It’s our goal to provide a public beta in June 2020 and a stable version in October.
If they (and Apple) can keep to that timeline, I expect they will.
At least a future version of LS will work with 10.16.
Now the install base is huge and the threats are different.
Counterpoint: Microsoft's install base is enormous and has been for decades. They very very rarely intentionally break backwards compatibility.
Edit: This is what I was thinking of:
I would like to see a comparison between recent versions of macOS and Windows 10.
You must be thinking about regular userspace applications, when you should be thinking about drivers. Microsoft can't get through two consecutive major OS versions without overhauling a driver subsystem and leaving a whole class of older devices with reduced or no functionality.
It was a painful transition, sure. However it was worth it just for reliability reasons.
Previously, poorly written graphics drivers were the leading cause of blue screens.
After the change, the graphics subsystem restarted itself on error instead of taking the whole machine down or allowing the driver to corrupt system memory.
Plus it isn't like they aren't providing an upgrade path.
Likewise all major engines have already added Metal support.
2. Companies may not care to update software if it no longer generates revenue. That is not lazy, it is for-profit. Especially publicly traded ones that do not care one bit about the customer.
3. It may not be possible to move to 64-bit without a major rewrite.
4. It may not be possible because dependencies are not 64-bit.
5. Not everyone uses a major engine (I guess you are talking games here?).
6. Even if they do, updating is never trivial and is definitely expensive, especially for games. See reasons 1-4 as well.
In short: no, it is not possible, for myriad reasons. Killing 32-bit support is a very bad move for customers.
Apple is working closely with Little Snitch to provide them with APIs with the features they need. Fine.
But would Little Snitch exist if there were no Kernel Extensions?
"Here's to the crazy ones..." Oh wait, there are none left.
What if we'll be missing out on other groundbreaking future apps that need kernel space information to function?
It starts with a request to Google.com from Google Software Updater. But if you block that and the follow-ups enough times, in the end it will even try curl'ing directly to IPs...
In the event that the entire concept of kernel extensions is removed (which seems unlikely), Hackintosh developers could just recompile the kernel. Or have the bootloader patch the kernel binary. (Fun fact: Clover already allows any user to do Find ==> Replace on arbitrary strings or hex sequences in the kernel.)
You can do this stuff on a real Mac too btw, as long as SIP is off.
Now, if Apple actually put a concerted effort into screwing Hackintosh users, they could probably kill the scene relatively easily. But, they don't seem interested in doing that. Their attitude since the initial Intel release of Tiger has seemingly been indifference.
No, not really. macOS's kernel, and especially its kernel extensions, are closed source.
Many kernel extensions are closed source, but that's not relevant here. What matters are Hackintosh kernel extensions like FakeSMC, which could absolutely be integrated into the kernel if necessary.
Edit: I just realized who I was talking to; you're more knowledgeable about iOS and macOS internals than I am! Are you referring to something different? I know absolutely that you can compile custom versions of Darwin, because as I mentioned it's done frequently for Hackintosh stuff.
This is why I specifically said that macOS is closed source. Darwin is kinda open source, but it becomes less and less relevant as Apple fails to update it and leaves parts out.
> Many kernel extensions are closed source, but that's not relevant here.
Of course it's relevant: if you don't have those extensions, your custom compiled kernel isn't booting on your genuine Mac hardware. You can try to rip the binary extensions from the OS, but no guarantees on how well that's going to work.
The question was: in a world where kexts don't exist, could Hackintosh still work?
I'm saying, yes, as long as it's still possible to compile custom versions of the kernel, because you could just make whatever adjustments you wanted to the kernel directly.
As of today, it is absolutely possible to compile your own version of Darwin and use it to boot up a Hackintosh, or a real Mac. Perhaps in this theoretical world where kexts don't exist, this would cease to be true, but that would be a separate change, no?
Yes, we are, I lost track of the argument. Sorry about that: you're right.
But you said:
>macOS's kernel, and especially its kernel extensions, are closed source
So, yeah, looks like Catalina was a stepping stone.
I'm not sure how active it is (no recent activity and there seem to be a lot of forks)
Maybe there's something with a central server and an agent installed on every device connecting but I doubt it's as easy and pretty as LS.
Google changes extension model for Chrome, breaking ad blockers, reaction seems to be that it's an obvious power grab.
Apple changes extension model, breaking network blocker, reaction seems to be favorable.
Interestingly, Apple made this exact change in Safari first.
- there isn’t an obvious motive to handicap third-party developers here (applies to Safari adblocking as well)
- the stated motive of increased security and stability is entirely plausible
FWIW I tend towards assuming good faith in the case of Chrome as well. If we accept the reason given for Safari (because plausible & no obvious other, see above), it wouldn’t make much sense not to see the exact same motivation to be sufficient for Chrome. I’m not sure if Google having another, less user-friendly additional reason should make much of a difference. It’s only if the specifics around the alternative APIs diverge that Google’s ad business might become relevant again.