Little Snitch and the deprecation of kernel extensions (blog.obdev.at)
339 points by guessmyname 6 days ago | 182 comments





This is good news. Moving to the Network Extension framework means that Little Snitch's filtering will run entirely in user space, which is not only great for security but will also allow the code to be written in a higher-level language such as Swift.

> great for security

That depends, doesn't it? You'll be safe from Little Snitch, but Little Snitch will have less power to protect you.


That's not necessarily true. The article mentions this. While ObDev still doesn't have all the APIs necessary to implement all the features of Little Snitch using NetworkExtensions, they are working on it with Apple and feature-parity is not expected to be an issue for the 10.16 release.

Are they a big enough developer to really influence Apple's API design? I'd think companies like VMware etc. are more likely to be able to push and prod Apple to do something than a little guy like the Little Snitch dev.

I don't think they'd say "we're working with Apple" if that wasn't the case.

Also, I wouldn't be surprised if people working on the Network Extension framework were exactly the kind of people who want to be able to continue to use Little Snitch themselves.


My guess: There are people inside apple that rely on and value LittleSnitch and they have enough influence to work with the development team and ensure APIs are built to align with Apple's Security/Privacy goals while still allowing LittleSnitch to function as it currently does.

Little Snitch might be the penultimate use case for Apple. If they’re using all the features and are a small highly competent team, they’re probably much easier to work with and can iterate much faster than a huge company the size of VMWare etc.

If they're the penultimate, who's the ultimate?

Apple?

Little Snitch is wildly successful, and there are many people in security, myself included, who can not and will not use a macOS daily driver without it being available.

Have you tried "Hands Off!"? I found it even better than Little Snitch. With that said, I suspect Hands Off! would have the same problem re: APIs.

It’s good but not as good as LS in network management. However, HO does allow for read/write management, which is a significant advantage over LS, which does not have that capability.

there's more than just little snitch here, like "hands off!" [0]

i assume they and other devs in the space also are working with apple to get the necessary API calls for feature parity with prior versions.

[0] https://www.oneperiodic.com/products/handsoff/


we all know how well working with apple/microsoft/google/ibm/whoever-owns-the-platform ends.

apple will make them waste lots of time with one support team while another team implements iSnitch, part of next osx, using private apis.


What if that has an impact on performance? Kernel-user space communication usually means copying data into different portions of memory, plus a context switch.

Windows moved basic graphics driver functionality to user space many, many years ago. (Windows Vista)

>Badly written device drivers can cause severe damage to a system (e.g., BSoD and data corruption) since all standard drivers have high privileges when accessing the kernel directly. The User-Mode Driver Framework insulates the kernel from the problems of direct driver access, instead providing a new class of driver with a dedicated application programming interface at the user level of interrupts and memory management.

If an error occurs, the new framework allows for an immediate driver restart without impacting the system.

https://en.m.wikipedia.org/wiki/User-Mode_Driver_Framework

Has Windows suffered from this change or has the added stability of having a graphics stack capable of restarting itself on error instead of blue screening the entire machine been a good thing?


I don't remember UMDF as supporting video drivers. It was mostly pluggable stuff like USB storage, sound, etc. But I haven't touched that stuff since 2005 or so.

Anyone that remembers WinNT 3.51 or so would likely remember the horrible video performance before most windows graphics code was moved to the kernel in win32k.sys...


Graphics and network stacks are very different. It sounds like macOS is going to have the entire network stack in the kernel except for extensions; this could be the worst of all worlds for performance.

Gotta get rid of all my Mac OS network switches....

I think they value the security and reliability from evicting those kernel extensions and nobody dreams of using this in some high performance production switch so I think it’s ok.


A lot of Macs get used for video editing where they love their SANs. This cuts into video editing perf.

Just to clarify, this would only be a problem if the user had network extensions installed (and potentially lots of them, depending on implementation)? It could have negligible impact if the video editing workstation didn't have these installed, if I'm reading this right?

I'm just saying there's a lot of use cases where people actually saturate their network connections on workstations, and you shouldn't discount them just because 'I'm not running mac as a switch'.

But yes, network perf is needed only for workflows that involve large remote resources, and not to all video editing use cases out there.


You can still saturate your network connection. It just takes a bit of context switching to user land, so if you have a many-core hyperthreaded CPU, your 1 gig network connection will easily get filled without a blink. This is like the SSL argument at the beginning of encrypting everything: the world was going to end, and then it didn't.

Context switches can absolutely cut into maximum bandwidth and leave you unable to saturate a network.

In certain machines with very weak CPUs and/or many very powerful connections. For a workstation, assuming mild levels of competence, there's no issue.

No, on workstations and servers, particularly in a post spectre world, putting your network drivers into user space will absolutely destroy your perf because of the added context switches.

You'd maybe have a point if it were an L4, but mach ports are used as an example now of how not to do microkernel IPC because of how much overhead they use.


A few thousand context switches per second is minor enough even with spectre mitigations, and if you need more than that you failed the "mild levels of competence" test.

Because those DPDK guys are just a bunch of clowns I guess, trying to avoid even the normal one user/kernel transition.

They have a completely different goal, much harder than merely saturating a single network port.

..no, you fundamentally have a 1 to 1 relationship with a core/port with DPDK. And a lot of the use case is very much normal server style work loads, it's not just people running network switches with it.

According to https://blog.selectel.com/introduction-dpdk-architecture-pri... they are largely trying to avoid bottlenecks that exist inside the Linux kernel itself, bottlenecks that happen even with zero context switches. That's a totally different problem. Also to avoid having a system call per packet, which falls under "mild levels of competence" for an API designed this decade. Userspace networking also exists to eke out absolute minimum latency, which you don't need just to saturate a port.

When your only goal is to avoid throughput bottlenecks, you don't need anything fancy. Avoid having a context switch per packet and you're most of the way there. A context switch every millisecond, or something in that order of magnitude, is completely harmless to throughput. If it causes your core to process 10% fewer packets than if it had zero context switches, then use 1.5 cores. Context switches take nothing anywhere near a millisecond each.
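The order-of-magnitude argument above can be sketched as a back-of-envelope calculation. The per-switch cost and packet sizes below are illustrative assumptions, not measurements:

```python
# Back-of-envelope: how much of a core do context switches consume?
# All cost figures here are illustrative assumptions, not measurements.

def core_fraction(switches_per_sec, switch_cost_us):
    """Fraction of one core spent on context switches."""
    return switches_per_sec * switch_cost_us / 1_000_000

# A few thousand switches/sec at an assumed ~5 microseconds each
# (pessimistic, post-Spectre) burns about 1% of a core:
print(core_fraction(2_000, 5))  # 0.01

# One switch *per packet* at 1 Gbit/s of 1500-byte frames
# (~83k packets/sec) is a very different story:
packets_per_sec = int(1e9 / 8 / 1500)
print(core_fraction(packets_per_sec, 5))  # ~0.42 of a core
```

This is the distinction being argued: a switch every millisecond is noise, a switch every packet is not.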


Your citation literally says

> Another factor that negatively affects performance is context switching. When an application in the user space needs to send or receive a packet, it executes a system call. The context is switched to kernel mode and then back to user mode. This consumes a significant amount of system resources.

And they're talking about the socket API, so when they say "a packet" they really mean "any number of packets".

The rest is mainly about metadata that needs to be maintained specifically because kernel and user are in different address spaces and can't directly share in-memory data structures, and that is additionally exacerbated by splitting the device driver away from the network stack like macOS is doing.

The only part that isn't ultimately about the user/kernel split and its costs is the general protocol stuff in the network stack, and that was always the most specious of DPDK's claims anyway.

Just so you know, you're talking to someone who used to write NAS drivers.


> And they're talking about the socket API, so when they say "a packet" they really mean "any number of packets".

It's completely different if you have one switch per packet vs. one switch per thousand packets.

You're taking things to a ridiculous extreme to imply that any amount of context switching is a deal-breaker. There is a specific number of context switches before you reach 1%, 10%, 50% overhead. There are many reasons to avoid context switches besides overhead, but they are all either based on the underlying implementation or simply not critical to throughput. You're oversimplifying, despite your credentials. The implementation can be changed/fixed without completely purging context switches. There are many tradeoffs, and doing pure user-space is a viable way to approach things, but it's not the only approach.

Memory sharing and metadata slowness is an easy bottleneck to have, but the way you avoid it, by changing data structures and how you talk to different layers of code and the device, can be done whether you put it in the kernel, in pure user space, or split it between the two.


> A few thousand context switches per second is minor enough even with spectre mitigations

Wouldn’t these be Meltdown mitigations?


Actually, almost all of the networking stack is moving out of the kernel with Skywalk.

This is quite interesting information! I wish it were closer to the top of this thread.

Where can I find more info about Skywalk?

This is the only public place: http://newosxbook.com/bonus/vol1ch16.html. Skywalk is an asynchronous networking interface designed to be adaptable to a number of different needs. I'm not really a networking person so I don't know a lot about it, but mostly the kernel gets out of your way and writes out a bunch of data in a ring buffer of some kind asynchronously or something. The goal is to be able to have different use cases customized to their own needs, so the HTTP stack can write its own custom stuff to be optimized for that use case and the same for the IDS stuff or bluetooth stuff etc. Most people quoted a 30-50% reduction in overall cpu usage for a wide range of scenarios.
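The ring-buffer idea described above can be illustrated with a toy single-producer/single-consumer ring. This is a generic sketch of the pattern (producer fills slots, consumer drains a batch per wakeup), not Skywalk's actual data layout:

```python
# Toy SPSC ring buffer illustrating the "kernel writes packets into a
# shared ring, userspace drains them asynchronously in batches" idea.
# Generic sketch only; not Skywalk's real structure.

class Ring:
    def __init__(self, slots):
        self.buf = [None] * slots
        self.head = 0  # producer writes here
        self.tail = 0  # consumer reads here

    def push(self, pkt):
        nxt = (self.head + 1) % len(self.buf)
        if nxt == self.tail:
            return False  # ring full: producer drops or backpressures
        self.buf[self.head] = pkt
        self.head = nxt
        return True

    def drain(self):
        out = []
        while self.tail != self.head:  # many packets, one wakeup
            out.append(self.buf[self.tail])
            self.tail = (self.tail + 1) % len(self.buf)
        return out

r = Ring(8)
for i in range(5):
    r.push(f"pkt{i}")
print(r.drain())  # ['pkt0', 'pkt1', 'pkt2', 'pkt3', 'pkt4']
```

The point of the batching is exactly the context-switch argument elsewhere in this thread: one transition can move many packets.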

Doesn't the network stack end with sending everything to userspace (user applications) in the end anyway? As long as it doesn't take multiple round trips...

Multiple round trips is exactly what I'm concerned about. Imagine a connection going from, say, Safari to the kernel to Little Snitch to the kernel to the NIC. It may not work this way though.

Anything tun-based tends to have the same problem.


The firewall is also part of the kernel (dunno about macOS though), so the traffic might never come out to user space at all.

I think it is likely that only the slow path (first packet of each flow) will move to userland. The fast path will still be handled in kernel.

If this were a big issue, people would have noticed a correspondingly large impact on Windows gaming performance.

All GPU consumers in Windows (AFAIK even the OS itself, besides the bootloader) are userspace programs calling APIs, hence a userspace driver isn't a big problem.

The network stack in Windows is a part of the kernel and I haven't heard of userspace implementations of it like DPDK or PF_RING in Linux. GP is wrong about the performance of them though, as you can actually enhance it in a userspace mode (good article from Cloudflare [0]).

[0] https://blog.cloudflare.com/why-we-use-the-linux-kernels-tcp...


We shouldn't ever trade security for performance. Doing that is how Microsoft ended up putting shit like font rendering into the kernel. Made Windows very fast, but made it so much worse when a bug was found.

That's pretty broad. I have a gaming machine with practically no personal data on it, I just want it to be fast. But the tradeoffs for my work machine are way different. Security is ALWAYS a tradeoff. If we wanted perfect airline security we'd fly naked.

Also, it's not like limiting vulnerabilities to user space is always a big improvement. If someone hacks my user account on a single-user computer, they have access to all the data I care about anyway. They could ransomware my stuff even without kernel access.


The trade off is not installing Little Snitch.

A gaming machine with no personal data on it. We call that a console, and they are indeed built for speed above all else.

Consoles are really built for a price point above all else. Hence why they're always lacking in performance compared to contemporary gaming PCs.

They also take security very very seriously.

Consoles take DRM safety seriously, the fact that that aligns with user security is purely coincidental.

Except companies are really quite adept at identifying the person connected to all the "no personal data".

Correct, it's an inversely proportional relationship: security vs. convenience and/or performance. I couldn't care less if my gaming box gets owned, but many others are much more serious about their gaming and would hence have other workarounds.

This is impossible... perfect security would require not having ANY performance. All security is about trade offs, and the answer can't be "trade everything for security"

Sometimes it does make sense to trade security for performance.

I completely disagree. Can you give me an example? Perhaps you can change my mind.

We ran a 100-petabyte cluster with all Meltdown/Spectre mitigation turned off because there was no foreign code running on it that didn't have access to the data itself.

It's all about the threat model. Engineers at the company were considered trusted actors and they were the only ones permitted to connect. If that layer failed, there is no way cache invalidation errors would be the fastest way in.


A machine which is turned off is much slower and more secure than a machine which is turned on, but for some reason people insist on turning their computers on.

Security mechanisms which prevent you from doing the thing you're setting out to do are worthless. Making a computer too slow to be useful is one of the ways to do that. In this specific case, if moving Little Snitch's functionality to userland means that the performance hit of running it was large enough that I have to turn it off when doing network performance sensitive things (say, video conferences) then it'd be a net loss in security compared to the status quo of it running in kernel mode.


/dev/random vs /dev/urandom, you could argue that a new seed via /dev/random is somewhat better, but you wouldn't block everything constantly to get new entropy

/dev/urandom is better than /dev/random in almost every case, so much so that on macOS they are identical.
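The practical upshot for programs is to just read the system CSPRNG and not block waiting on entropy estimates. In Python that's `os.urandom`, which draws from the same pool as `/dev/urandom` on macOS and Linux:

```python
import os

# Read 16 bytes from the OS CSPRNG (the /dev/urandom pool on
# macOS/Linux). Non-blocking and suitable for keys, nonces, tokens.
key = os.urandom(16)
print(len(key))  # 16

# Reading the device node directly is equivalent on these systems:
# with open("/dev/urandom", "rb") as f:
#     key = f.read(16)
```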

I can't give you an example, but it's perfectly plausible that many users don't store sensitive data on their computer and/or are not careless with downloading and running programs. These users might prefer the extra speed.

Frequently security = correctness.

Agreed. Remember ancarda, it's always about the threat modeling. Every scenario has different business/user needs, and therefore different tradeoffs that can/will be made. Sometimes, it does make sense to trade security for performance. (N.B. not always; actually, probably not most of the time.)

I trust that you're typing your comment from OpenBSD? After all, it's the only modern OS that doesn't compromise against security.

Not yet, though I am working on replacing proprietary software I use with free software that's Linux/BSD compatible.

It's a long journey - started using Windows. macOS is a nice stopgap, but the long term destination is probably something like OpenBSD or Qubes OS.

Perhaps eventually replacing much of the old software on my machine with stuff written in memory safe languages like Rust. There's some far-off efforts like Redox OS that may well end up being an option for me.

I keep my eye on security developments and I try to improve my situation as and when I have the time/energy to.

EDIT: To say, I have also switched from iOS to Android - after many years of waiting till Android itself became more secure. I've also dumped a lot of non-free software like Google Authenticator for free alternatives like andOTP. I'd like to eventually run something like Replicant or whatever is current/actively developed in the future.


I did the opposite: went from Android to iOS because of security. I'd rather live in this "walled garden" than with the vulnerabilities that pop up in Android now and then, the malware that's always popping up in their app store, and finally the fact that Google is always looking over your shoulder at everything you do; no matter how much you "turn off" things in the OS, it still phones home. Microsoft is the same. I'm tired of it. Not to mention that Android manufacturers' idea of an "update" to the OS means you basically have to buy a newer model, as they often lag months behind on software/security updates from Google, while Apple supports their phones and tablets with updates for years. For instance, Google only provides updates to their Pixel phones for 3 years. Meanwhile, my wife's iPhone 6s is still chugging along with the latest OS after 5 years.

But this is just me. Everyone should use what they are comfortable with.


And here it is, just a few hours after I wrote this and here's yet another story about malware on the Google app store:

https://arstechnica.com/information-technology/2020/03/found...


Install an antivirus for your phone. Problem solved?

This isn't even remotely true, OpenBSD needs to be a usable system too.

Trading security for performance is never a good idea. In this case the downside might be that traffic is able to pass through undetected as a result of moving to user space. If your goal is security through monitoring, can you really trust monitoring software that can't see everything?

The whole point of this application is that the information is bounced to user space.

Only when necessary.

no.

Apple will just slowly write itself into the equation so that little snitch can no longer mess with whatever muddled idea apple seems to think is important.

Already with Catalina you have to connect to apple and ask permission before you can even install little snitch. That means little snitch can't protect you from apple, even if you've told apple "my machine doesn't connect to the internet".

And your machine contacts apple every bit as often as microsoft machines even though their philosophy is supposed to be different.

bottom line: you should not have to ask apple permission to do anything with your machine.


Apple has no reason to care about UNIX philosophy at this point though, do they?

UNIX philosophy is a cargo cult worshiped by FOSS followers that never worked with commercial UNIX vendors, or ever bothered reading GNU man pages end to end.

From those commercial vendors, I have worked with Xenix, DG/UX, Solaris, Aix, HP-UX, Tru64.

None of them ever cared about being philosophers.


It may be a technically superior API but even so I'm not thrilled that if I want to stay current with MacOS updates past the phase-out period then I have to pay for a Little Snitch 5 license. v4 works fine for me and without this API deprecation issue I almost certainly wouldn't be interested in upgrading.

Little Snitch 4 is a rather impressive piece of software. The map is my favorite part. It's not always accurate, but it's absolutely wild to see the places apps want to ship data off to.

Also if you interface directly to your WAN, you can see all the bots/worms/etc that try to connect to your IP. I got a surprising amount of netbios queries from Iran (I'm assuming from EternalBlue based malware trying to connect), but I highly recommend NOT doing this. It's the wild west outside your firewall.


> It's the wild west outside your firewall.

You mean outside my $5 NAT WiFi router last updated 6 years ago (because the manufacturer won't maintain it any more and the ISP never gave me the admin password anyway)?


This right here is the very reason OpenWRT exists.

I never trust ISP provided equipment to do my routing, if I can't use it in modem mode (or provide my own modem) then a DMZ and port forwarding have to do ... but I'd sooner just choose another provider.


At least you need to know the settings for this. I tried attaching my laptop directly to the cable and that didn't work (I would put my own router if it did). There probably is some sort of PPPoE over a statically-configured network.

better than nothing... arguably...

I use both Little Snitch and Micro Snitch.

The LS proxy completely overwhelmed me. I thought I could be savvy and limit traffic. Yeaaaaah no. Once I started observing what was actually flying around it's... it's just insanity how many requests are made in just a few seconds. What else can I do but throw up my hands and hope for the best? But I guess it won't matter soon.


Little Snitch definitely needs a social feature where you can crowdsource good rules from other people and see what rules are common within the communities for certain apps.

While the social part is not there, the technology is.

I subscribe to rule groups through hostblocker.app, which pulls HOSTS files from different known websites and compiles them into a .lsrules file which Little Snitch can use.

While I cannot vouch for the website's underlying code (I did not write it and cannot find an open source implementation), it only provides rules, and I can edit any rule group to my liking after subscribing to it.
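The HOSTS-to-.lsrules conversion described above is mechanical: .lsrules is a JSON format, and a minimal converter to deny rules might look like the sketch below. The field names follow ObDev's published rule-group schema as I recall it, so treat them as assumptions and check the current docs; `hostblocker.app`'s actual code is unknown.

```python
import json

# Convert a HOSTS-style blocklist into a Little Snitch rule-group file.
# ASSUMPTION: the .lsrules keys below ("name", "rules", "action",
# "process", "remote-domains") match ObDev's current schema; verify
# before relying on this.

def hosts_to_lsrules(hosts_text, name="Imported blocklist"):
    domains = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()   # strip comments
        if not line:
            continue
        parts = line.split()
        # HOSTS lines look like "0.0.0.0 ads.example.com"
        host = parts[1] if len(parts) > 1 else parts[0]
        if host not in ("localhost", "broadcasthost"):
            domains.append(host)
    return json.dumps({
        "name": name,
        "rules": [{
            "action": "deny",
            "process": "any",
            "remote-domains": domains,
        }],
    }, indent=2)

sample = "0.0.0.0 tracker.example.com\n# a comment\n0.0.0.0 ads.example.net\n"
print(hosts_to_lsrules(sample))
```

The output file can then be added to Little Snitch as a rule-group subscription.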


That would introduce a major attack vector. One bad actor could introduce a rule that allows their malware to work.

No, not as long as the rules are outgoing-only or block-only. Most people do not have any filtering of outgoing traffic.

I think you misunderstand the purpose of Little Snitch. The entire product is to warn you about outgoing connections. One of the use cases of that is seeing if a random app is connecting to an unknown host.

If you allowed crowdsourced rules, someone could sneak in a rule that says to allow their random app to connect to a random host, which is how malware exfiltrates your private data.


TIL. Thanks. Yes, for whitelisting it does not work.

Back when I was using OS X and Little Snitch, the first-run experience was horrible. I reinstalled OS X, installed Little Snitch first, then installed my apps one by one with it running, and got a slightly better experience. But that requires a reinstall of your OS (or maybe creating a new user would be enough).

Little Snitch supports the same blocklists uBlock Origin supports. It's a somewhat obscure feature, but easy to set up.

This is precisely why I've never installed it, because I figure it would overwhelm me.

How does that combo help you not be overwhelmed?


It is really obnoxious when you first install it, but once you have set up your rules for your most frequently used applications, it isn't really an issue for me. You can also export and import your rules if you want to move them to a new system.

I haven't used micro snitch, though.


> once you have set up your rules for your most frequently used applications, it isn't really an issue

I can second this. Also, if you're overwhelmed, you probably have a lot of garbage applications making too many outbound network requests to trackers and who knows what else, which is actually nice to know about. Chrome-based browsers and Electron-based apps seem to be particularly awful about this. My HP printer app tries to connect to Google Analytics; Microsoft RDP phones home before connecting to a session. I really value that kind of insight and enjoy being able to control it.


I removed little snitch. :)

Micro snitch just alerts me when my mic or camera are activated. I can live with that.


Are they ever activated at times you aren’t expecting them to be? I assume I would have a noticed the little green light on my mbp if the camera was being activated against my wishes.

I've been using Little Snitch since 2.0 and I agree, it's very impressive software. I had the same reaction to seeing the map features -- eye opening to say the least and a very, very interesting feature!

It's kind of peaceful watching attacks crash against your webserver/firewall, like waves at shore.


I'd like to see a similar map built into pihole. Seems like a natural fit. This way you could get a map for connections made by various apps on your phone too.

Background: Apple is abolishing (third-party) kernel extensions to increase security:

https://developer.apple.com/system-extensions/


I always felt a little queasy installing a .kext from some random foreign-language websites (be it FTDI, or Alfa drivers, or even RealTek updates). I can feel the bias in me, "Oh no, this must be bad because it's foreign," which is absurd, but I still shouldn't be asked to sudo something when I buy offbrand hardware.

You're right, Scottish English does seem like a foreign language sometimes...

https://en.wikipedia.org/wiki/FTDI


Huh. My bias runs so deep that I somehow misremembered a Chinese-language document from FTDI, which I guess I made up in my mind. Unless, like some companies, their drivers are done in their Taiwan office. It's quite common for multinational companies to distribute projects to different sites.

Assuming the poster is American, then yup .. still foreign.

If they keep all third parties out of their kernel, could this ease a possible x86-to-ARM transition?

Potentially, though there are other solutions that could be used. There was a product they used a while back during the transition to x86 architecture that did code translation, for example. https://en.m.wikipedia.org/wiki/QuickTransit

That would work with user-space extensions, but would it work in the kernel?

That was and still is an impressive piece of code. Another more recent one is the RPCS3 PS3 emulator, emulating the asymmetric PPC Cell chip on modern x86 processors.

(Better known as Rosetta, when Apple licensed it.)

Yes, absolutely.

And Apple takes another step closer toward a proprietary OS away from UNIX. Perhaps 10.16 will lose certification [1].

[1] https://www.opengroup.org/openbrand/register/

EDIT: I can't find anything that references kernel extensions in the conformance [2] section of the spec, so maybe 10.16 will adhere to the UNIX03 standard after all.

[2] https://pubs.opengroup.org/onlinepubs/009695399/


POSIX does not standardise kernel extensions. You can't use Linux kernel extensions on other OSes for example.

You can't even use Linux kernel extensions across different versions of Linux…

QNX is also a certified UNIX for embedded deployment, yet it is a micro-kernel OS.

https://blackberry.qnx.com/en/resource-center/qnx-certificat...

UNIX certification doesn't say anything about how a kernel should be implemented, or what kind of driver architecture is used.


Anyone can buy UNIX certification. It just means you set fire to an appropriately sized bundle of cash. In return you get a nice sticker from the trademark holder. It is not especially meaningful.

There are so many people using it at Apple that I can't imagine LS5 not working on 10.16 when it ships to the general public.

From the very end of the article:

> When will Little Snitch be updated to the new APIs?

> The replacement APIs that are currently available (NetworkExtension framework on macOS 10.15.4) are not yet completely sufficient to implement the full functionality of Little Snitch. But we are working closely with Apple to fill the remaining gaps and we expect that a beta version of macOS 10.16 (most likely available at the next WWDC) or even an upcoming version of 10.15 will provide what is missing. As soon as the APIs allow us, we will complete the transition of Little Snitch to the new NetworkExtension API. It’s our goal to provide a public beta in June 2020 and a stable version in October.

If they (and Apple) can keep to that timeline, I expect they will.


"we are working closely with Apple to fill the remaining gaps" - definitely sounds like it. I think Apple has made the right call tightening security around kernel extensions but I'm glad they're working with 3rd party developers (even if it's only big ones) to ensure the functionality is still there. They also mentioned the existing version will still work, it will just need to be explicitly enabled.

> Yes. We are going to release an update of Little Snitch that will be compatible with macOS 10.16.

At least a future version of LS will work with 10.16.


I hope this goes over better than the Sign in with Apple deadline that was attempted. That seemed like a pretty big flop.

Sign in with Apple can't be a flop; it's required to pass app review.

Apple has really done a 180 degree turn from back in the early OS X days, when they actually did quite a bit of work to keep existing applications functional. Forget binary compatibility, now even existing APIs are disappearing left and right.

That makes sense though, right? 15 years ago the number of people using OS X was a fraction of what it is today. They had to be very protective of that customer base.

Now the install base is huge and the threats are different.


> Now the install base is huge and the threats are different.

Counterpoint: Microsoft's install base is enormous and has been for decades. They very very rarely intentionally break backwards compatibility.


Counterpoint: Microsoft's obsession with backward compatibility is why there are so many zero day exploits for their OS. Complexity comes with a cost.

One doesn't even have to look that far. They haven't shipped a Windows update on time for years due to bugs pushing the dates back. I can't imagine that their backwards compatibility requirements haven't been part of the problem. In fact, wasn't there a specific issue that was linked back to compatibility with CP/M not that long ago?

Edit: This is what I was thinking of: https://www.itnews.com.au/news/how-a-1974-bug-still-bites-wi...


That is not a counterpoint, just a downside (which can be offset by taking several approaches).

I would like to see a comparison between recent versions of macOS and Windows 10.


> They very very rarely intentionally break backwards compatibility.

You must be thinking about regular userspace applications, when you should be thinking about drivers. Microsoft can't do two consecutive major OS versions without overhauling a driver subsystem and leaving a whole class of older devices with reduced or no functionality.


Microsoft moved the graphics drivers to user space way back at Windows Vista.

It was a painful transition, sure. However it was worth it just for reliability reasons.

Previously, poorly written graphics drivers were the leading cause of blue screens.

After the change, the graphics subsystem restarted itself on error instead of taking the whole machine down or allowing the driver to corrupt system memory.

https://en.m.wikipedia.org/wiki/User-Mode_Driver_Framework


During the early OS X days Apple was battling for survival; they were pretty much like this during the Mac OS days.

Plus it isn't like they aren't providing an upgrade path.


Well, for many things they aren't providing anything: 32-bit (already yanked) and OpenGL (soon).

Lazy companies have had 10 years to migrate to 64-bit.

Likewise all major engines have already added Metal support.


1. Companies cannot be lazy if they do not exist anymore.

2. Companies may not care to update software if it no longer generates revenue. That is not lazy, it is called for-profit. Especially publicly-traded ones that do not care one bit about the customer.

3. It may not be possible to move to 64-bit without a major rewrite.

4. It may not be possible because dependencies are not 64-bit.

5. Not everyone uses a major engine (I guess you are talking games here?).

6. Even if they do, updating is never trivial and is definitely expensive, especially for games. See reasons 1-4 as well.

In short: no, it is not possible for a myriad reasons. Killing 32-bit support is a very bad move for customers.


What worries me about this move from Apple is that it may stifle creativity on the platform.

Apple is working closely with Little Snitch to provide them with APIs with the features they need. Fine.

But would Little Snitch exist if there were no Kernel Extensions?


They've been taking that direction for years.

"Here's to the crazy ones..." Oh wait, there are none left.


Yes? Clearly the market is there. And writing kernel extensions is a major PITA. One benefit of working in user space is that you can (usually) do so in the language of your choosing. Little Snitch 0.0.1alpha would have been a lot easier to prototype in Swift than in C.

I believe GP is saying that if the deprecation of kernel extensions had happened before Little Snitch was written, then LS could never have been written after that point, because they wouldn't have had the required leverage to get Apple to expose the APIs they need.

What if we'll be missing out on other groundbreaking future apps that need kernel space information to function?


That’s it.

Little Snitch also nicely shows how Google makes increasingly desperate attempts to invisibly update its software in the background.

It starts with a request to Google.com from the Google Software Updater. But if you block that and the follow-ups enough times, in the end it will even try curl'ing directly to IPs...


Or it just assumes that name resolution is broken for some benign reason.

Exactly, that's just good programming.

People write exploits that target Google software. What would you like them to do?

I guess it will be even more difficult to run Hackintoshes with 10.6

If you have Hackintosh-level access, you can inject kexts anyway.

Exactly.

In the event that the entire concept of kernel extensions is removed (which seems unlikely), Hackintosh developers could just recompile the kernel. Or have the bootloader patch the kernel binary. (Fun fact: Clover already allows any user to do Find ==> Replace on arbitrary strings or hex sequences in the kernel.)

You can do this stuff on a real Mac too btw, as long as SIP is off.

Now, if Apple actually put a concerted effort into screwing Hackintosh users, they could probably kill the scene relatively easily. But, they don't seem interested in doing that. Their attitude since the initial Intel release of Tiger has seemingly been indifference.


> Hackintosh developers could just recompile the kernel.

No, not really. macOS's kernel, and especially its kernel extensions, are closed source.


Darwin is open source. AMD Hackintosh users frequently compile custom Darwin kernels in order for macOS to run (although within the past year or so, this has fallen out of favor compared to binary patches).

Many kernel extensions are closed source, but that's not relevant here. What matters are Hackintosh kernel extensions like FakeSMC, which could absolutely be integrated into the kernel if necessary.

Edit: I just realized who I was talking to; you're more knowledgeable about iOS and macOS internals than I am! Are you referring to something different? I know absolutely that you can compile custom versions of Darwin, because as I mentioned it's done frequently for Hackintosh stuff.


> Darwin is open source.

This is why I specifically said that macOS is closed source. Darwin is kinda open source, but it becomes less and less relevant as Apple fails to update it and leaves parts out.

> Many kernel extensions are closed source, but that's not relevant here.

Of course it's relevant: if you don't have those extensions, your custom compiled kernel isn't booting on your genuine Mac hardware. You can try to rip the binary extensions from the OS, but no guarantees on how well that's going to work.


...are we talking about different things? I don't understand why that matters.

The question was: in a world where kexts don't exist, could Hackintosh still work?

I'm saying, yes, as long as it's still possible to compile custom versions of the kernel, because you could just make whatever adjustments you wanted to the kernel directly.

As of today, it is absolutely possible to compile your own version of Darwin and use it to boot up a Hackintosh, or a real Mac. Perhaps in this theoretical world where kexts don't exist, this would cease to be true, but that would be a separate change, no?


> ...are we talking about different things?

Yes, we are, I lost track of the argument. Sorry about that: you're right.


>I specifically said that macOS is closed source

But you said:

>macOS's kernel, and especially its kernel extensions, are closed source


Yes, I forgot the "kernel" part there. But I don't think that changes my point?

I think the biggest problem in the future will be Apple's security chip. Every new Mac includes one, and it gets more deeply integrated with every version of macOS. My assumption is that at some point essential parts of the OS and of macOS programs will depend on the presence of the security chip, and Apple will cut off support for hardware without one. Just a matter of time. The question is: how will the Hackintosh community solve this problem?

Run it in a very thin hypervisor that sort of looks like bluepill and emulate the security chip's API?

Not for a while, at least. There’s a number of Macs that don’t have the chip that Apple is still selling.

(Small typo correction: 10.16, 10.6 is Snow Leopard).

I think we will be able to go forward with custom kernels or some hack failing that.

Showing the deprecation message before the API that replaces it is actually out? Isn't that a bit of an a-hole move? I know everyone here is a developer and hates code older than a month, but really? Nobody gonna call them out on that?

That's exactly what deprecation messages are for: to call attention to the fact that it's going away. Once it's gone, a deprecation message is useless.

I never looked before but "ls /dev/bpf*" shows a lot of Berkeley packet filters. Maybe that reflects a movement toward user-space monitoring?

Interesting. I get 256 on Catalina (0-255), as opposed to 4 (0-3) on Mojave. /dev doesn't appear to be dynamic as it is on Linux, so they've chosen to pre-create more device files. More importantly, on Catalina the permissions are now ug=rw (0660) and with a group name of "access_bpf", whereas on Mojave they were u=rw (0600) and "wheel".

So, yeah, looks like Catalina was a stepping stone.


I think dtrace monitoring can be enabled, but it requires disabling some system security settings (SIP), if I remember correctly. So I guess if they go that route, they still need to beef up security.

Archive before it gets hugged to death... https://archive.is/7HxHk

This article sparked my interest in Little Snitch again, so I tried to upgrade from Little Snitch 3; sadly, upgrading doesn't work.

A port to Linux would be nice, just saying!

There is a project called OpenSnitch that supposedly does similar things on Linux:

https://github.com/evilsocket/opensnitch

I'm not sure how active it is (no recent activity and there seem to be a lot of forks)


I think Hackintosh enthusiasts are also an intended target of this phase-out... these systems heavily rely on kexts...

If there's a will, there's a way.

So basically they will charge me once more for a compatibility fix.

What's the cleanest way to monitor your entire network similar to little snitch?

The difference is granularity - inside your computer you know which application is doing it. On a network level you only see which device it is.

Maybe there's something with a central server and an agent installed on every device connecting but I doubt it's as easy and pretty as LS.


Install a private CA root cert on all the machines in your network, and set up a router that's able to MitM TLS sessions to do deep packet inspection. Palo Alto Networks' kit has this kind of capability.

Most enterprise networks do this, but you'll have major issues with IoT devices and devices/apps that do certificate pinning. You'll probably have to put those on your guest network... Assuming you have one.
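For the private-CA half of that setup, the trust anchor is just a self-signed root certificate. A minimal openssl sketch (all names and the validity period are placeholders; the actual on-the-fly signing of forged leaf certs is done by the DPI appliance, not shown here):

```shell
# Generate a private CA root key and self-signed certificate (placeholders).
# The .crt is what you would install into the trust store of every managed
# machine; the appliance holds the .key and signs forged leaf certs with it.
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout myca.key -out myca.crt \
  -subj "/CN=Example Internal CA"

# Inspect what was generated:
openssl x509 -in myca.crt -noout -subject -dates
```

Certificate-pinned apps break under this scheme precisely because they refuse any chain that doesn't end at their hard-coded key, no matter what the OS trust store says.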

So will the deprecation break Hackintoshes?

No. Hackintosh is a hardware and firmware platform, mostly at a lower level than macOS. Barring custom Apple hardware, anything that runs on Apple hardware will run on Hackintosh. Even custom hardware can be worked around as long as it is not critical (eg a custom CPU).

Snitches get stitches

It's interesting to compare and contrast community reactions to apple vs google policies, as well as how the companies interface with popular software.

Google changes extension model for Chrome, breaking ad blockers, reaction seems to be that it's an obvious power grab.

Apple changes extension model, breaking network blocker, reaction seems to be favorable.


Maybe because they are not comparable?

> Google changes extension model for Chrome, breaking ad blockers, reaction seems to be that it's an obvious power grab.

Interestingly, Apple made this exact change in Safari first.


- They have a replacement API and, per the article, are working with the developer to make it a smooth transition.

- there isn’t an obvious motive to handicap third-party developers here (applies to Safari adblocking as well)

- the stated motive of increased security and stability is entirely plausible

FWIW I tend towards assuming good faith in the case of Chrome as well. If we accept the reason given for Safari (because plausible & no obvious other, see above), it wouldn’t make much sense not to see the exact same motivation to be sufficient for Chrome. I’m not sure if Google having another, less user-friendly additional reason should make much of a difference. It’s only if the specifics around the alternative APIs diverge that Google’s ad business might become relevant again.


And were shouted at for it.

Not really.


