Network.framework: A modern alternative to sockets (developer.apple.com)
193 points by pjmlp on June 8, 2018 | 153 comments



I think quite a few of the hyperbolic comments here haven't seen any of this video and are just reacting to the title.

From what I can gather from watching this so far, it just sounds like a layer you can optionally choose to use on top of sockets that manages things like TLS, multiple networks (4G/3G/Wi-Fi), DNS resolution, proxies, etc. for you, so you don't need to write it all yourself from scratch every time.

Maybe I'm wrong about that, or maybe none of this is actually as hard as it sounds in practice. But this seems like it might be at least somewhat useful.
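
For the curious, here's a minimal sketch of what that looks like from Swift, going by the API as presented (the host and port are just placeholders, not anything from the talk):

  import Network

  // One connection object covers DNS resolution, TLS, and interface selection.
  let connection = NWConnection(host: "example.com", port: 443, using: .tls)

  connection.stateUpdateHandler = { state in
      switch state {
      case .ready:
          print("connected over whichever path the system picked")
      case .failed(let error):
          print("connection failed: \(error)")
      default:
          break
      }
  }

  connection.start(queue: .main)

  // Send and receive without touching getaddrinfo, sockaddr, or Secure Transport.
  connection.send(content: "hello".data(using: .utf8),
                  completion: .contentProcessed({ error in
      if let error = error { print("send error: \(error)") }
  }))

  connection.receive(minimumIncompleteLength: 1, maximumLength: 64 * 1024) { data, _, _, _ in
      if let data = data { print("received \(data.count) bytes") }
  }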


> I think quite a few of the hyperbolic comments

That's what you get from things like deprecating OpenGL and replacing it with a non-multiplatform alternative: a bad reputation. They should ask Microsoft whether it's easy to get rid of once you have it.


I'm not a graphics programmer, but everything I've read about OpenGL tells me moving to Metal isn't some embrace/extend maneuver but more likely due to the fact that OpenGL is steaming ancient garbage. Kind of like the types who once complained that MacOS wasn't built on X Window (chortle).

BSD sockets isn't a bastion of extensible interface design, it basically amounts to one giant ioctl() in every operating system. Darwin has weird escape hatches built into it (AF_SYSTEM) much like Linux netlink, and protocol parameter documentation is strewn across 5+ man pages each documenting a different set of magic value combinations for non-portably controlling some part of the stack. TCP_USER_TIMEOUT? That's one of many, many Linux socket options, but it's part of our precious so-portable socket interface we must maintain forever!

OS X chooses to bury its system call interface too, it's always been considered a private detail of the C library. I see no reason why socket() and friends couldn't similarly be forgotten. If some clean 21st century redesign of an ancient API has knock-on effects for the structure of code in open source projects across the ecosystem, that's probably a good thing in the long run. Progress never happens by hugging the past, especially when that past is ugly and filled with horrors

Try not to look at this as "hey, MacOS is breaking precious UNIX", but more as "hey look, why can't Bonjour+multipath TCP be a one liner on Linux too?"


I've held the title of graphics programmer professionally a few times and I do think it was a bad idea. If you're targeting Android/Linux/Windows/OSX it's really handy to be able to run the same APIs on the desktop for debugging and faster iteration time.

The real knife in the back was when they forked for Metal instead of adopting Vulkan. I've not used Vulkan in anger yet (just a couple of toy demos), but everything I saw pointed to it solving all the major issues people have with OpenGL. Now there's yet another API, shader language, and graphics pipeline that you need to support if you're building a graphical application and not leveraging Unity/UE4/etc.


While not the best solution, there is MoltenVK [0], which translates Vulkan calls to Metal.

0: https://github.com/KhronosGroup/MoltenVK


Yeah, but if you look at the support[1] you need to be careful and not depend on any of the APIs called out. Also you will take a perf hit regardless of how well the shim is written.

[1] https://github.com/KhronosGroup/MoltenVK/blob/master/Docs/Mo...


OpenGL is certainly old and warty but it is nothing like X11.

I think people would be ok with Apple dropping OpenGL if they supported Vulkan, but their motivation and desire to lock people into Apple-only APIs is so obvious and transparent... Why would you support it?

At least Microsoft works with third parties and acknowledges the existence of some standards. Apple are always like "Here is the Apple way. I mean, the way. Because there is only Apple. Apple is the whole world and nothing else exists."


Sure, but what platforms ship with Vulkan support?

* Vulkan was a recommended API on Android N+, which has ~35% device install share (from the first stats that came up on Google). However, the CDD doesn't require Vulkan to be present.

* On Linux it's up to the distribution to include it, and it isn't currently required by any standard open source package. If your card isn't AMD or Nvidia, you don't have an option for Vulkan support AFAIK.

* On Windows it is something installed by the video card driver as part of their game support, and not available across all cards.

* On Mac/iOS it is built on top of Metal and available as a free third-party library for your app bundles.

I'm sure people would prefer Apple lead the pack and embrace a new standard before the rest of the industry, but it doesn't really buy them anything with their Metal investment to be first adopter, and it is very Anti-Apple to want to cede that sort of control. Especially when nobody else is doing it.


Which makes sense from a business point of view. Obviously that doesn’t always align with a hacker mindset but there is a reason Apple is worth almost a trillion dollars.


Exactly, Apple has 1B devices that can run Metal on their GPUs, why shouldn't they write their own library?


Metal is used by more than just games. Apple needed a robust, scalable library for its GPUs, across 4 different platforms, that could handle more than just graphics. It is now the heart of Core ML, used for machine learning, as well as a few other things.


> OpenGL is steaming ancient garbage

Not really by any standard. It is about as good as other APIs of a similar level of abstraction (certainly no fair comparison with X11), and crucially it is widely supported. OS X's support for OpenGL is very poor (they are behind on standards by more than half a decade, despite shipping hardware that supports all the new stuff) because Apple has been hostile to it, just as they are not interested in enabling Vulkan in any way (though that hasn't stopped the industry from routing around Apple's tremendous narcissism).


I like OpenGL probably as much as the next developer, probably more. I’ve long been an advocate. But I still think it’s a bunch of steaming ancient garbage, the same way OpenSSL is. Apple stopped bundling OpenSSL and the message was, “if you want it that badly, bring your own.” That turned out to be a good move, and the people who stuck with OpenSSL paid the price.

OpenGL will continue to be available for a long time. But the idea that you would want to develop new applications for it… well, the mismatch between the OpenGL API and the way modern GPUs work is just too big. You can paper over the small problems with a bunch of small changes but at this point I think it’s extremely fair to say, “you want OpenGL, be prepared to bring your own implementation.” This is a good thing, we are getting OpenGL implementations on top of e.g. Vulkan, Metal, DX12 that are going to be more consistent and easy to target than all the differences between vendor OpenGL implementations.

It’s extra work for developers but I believe it’s better for long term platform health. It just makes more sense for OpenGL to be a fat library, and it makes things easier if you can target one version of the library instead of five. Again, this is basically what happened with OpenSSL.


> But I still think it’s a bunch of steaming ancient garbage

Compared to what? Direct3D? I don't think it is fair or reasonable to compare OpenGL to libraries which exist on a completely different level of abstraction. The last straw for OpenSSL was in the implementation, not the API design, so I don't really understand the comparison.


OpenGL is absolutely inferior to DirectX. If for no other reason than it is impossible to figure out the correct, modern way to do things in OpenGL.

The only saving grace is that OpenGL is kinda sorta cross-platform. If your platform provides drivers that are worth a shit, which has been a big question.


> OpenGL is absolutely inferior to DirectX. If for no other reason than it is impossible to figure out the correct, modern way to do things in OpenGL.

Well, I guess I would just say I disagree. I am relatively new to graphics programming, and the right thing was the first thing I found, every time. The most confusing part was reading the warnings, at the beginning of articles from the transitional period, not to use compatibility contexts.

> The only saving grace is that OpenGL is kinda sorta cross-platform. If your platform provides drivers that are worth a shit, which has been a big question.

This bit I largely agree with, but even if it were the worst it's ever been (and it's not), it would still be better than implementing your application twice or thrice. I have been lucky enough to be on Mesa (primarily) for the last eight or so years, and driver quality has been of minimal concern to me.


Ironically, Microsoft has had a non-standard API for higher-performance networking for quite a while, AKA I/O Completion Ports. It's just that they built it right by making it work with existing socket handles rather than fragmenting the platform with a new API.


Microsoft really nailed it with IOCPs. They added them on pretty seamlessly to existing APIs, from file ops to sockets, and made them fit into existing workflows without developers having to waste mental cycles. They've screwed up a lot of things, but man, they got that right.


IOCP is a pretty generic async I/O API, not specific to networking - it is similar to epoll on Linux and kqueue on BSDs and macOS. Network.framework is an API which gives you more direct access to, and better support for, TLS, TCP, and UDP.


> [IOCP] is similar to epoll on linux and kqueue on BSDs and MacOS

Similar, but the differences matter. kqueue is readiness-oriented. It tells you that the driver is ready to queue more writes. But you still need to call fsync and friends to confirm that your data has actually been written out. There's no non-blocking fsync. Servers like Redis run an extra thread just to call fsync, and eat all the performance problems that entails.

In contrast, IOCP is completion-oriented. There's no need for any blocking fsync calls in order to find out when data has been written. It's way more granular - the OS can reorder writes. And it doesn't suffer from fsync's awful non-local error handling problems.

Honestly I'm surprised there's no equivalent for Linux / BSD. (I mean, we have AIO, but it's really not good enough.) IOCP is a fantastic API for high-performance databases and servers. It enables faster and simpler server code. We should collectively get on it.
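
To illustrate the readiness side of that distinction, here's roughly what it looks like on Darwin today with a DispatchSource (which is layered over kqueue). This is a sketch, assuming `fd` is an already-connected, non-blocking socket and that the caller keeps the returned source alive:

  import Dispatch
  import Darwin

  // Readiness-oriented I/O: the source fires when the descriptor is readable,
  // but the read() itself (and any fsync-style durability work) is still on us.
  func makeReadabilityWatcher(for fd: Int32) -> DispatchSourceRead {
      let source = DispatchSource.makeReadSource(fileDescriptor: fd, queue: .main)
      source.setEventHandler {
          var buffer = [UInt8](repeating: 0, count: 4096)
          let bytesRead = read(fd, &buffer, buffer.count)
          if bytesRead > 0 {
              // handle buffer[0..<bytesRead]
          }
      }
      source.resume()
      return source
  }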


Yeah, unix usually is based on reactor pattern async I/O, while IOCP is proactor pattern.

You can build a proactor pattern on top of a reactor pattern, but that implementation needs to provide quite a bit - more coordination of pending memory/state and probably needing back pressure support (which I never really understood if Microsoft supplied or if you had to track that yourself).

I know Microsoft used to have several IOCP-related patents; I believe I remember one for event scheduling for multi-threaded locality (preferring to dispatch a completion to the same thread that made the original request).


Write flushing is not a good example.

You still need FlushFileBuffers even with IOCP if files are opened in buffered mode (which is the default). While you will know when a write was completed, an explicit flush is still needed to make sure it's written through. So it's pretty much exactly the same case as with fsync(). Alternatively, you can use write-through mode, but that kills performance, especially with a random write pattern.

IOCP is basically a better/cleaner abstracted epoll() that also doubles as a manager of a worker thread pool.


Exactly as you say. I'm not into Apple products from a development perspective at this moment in time, but I did watch the video. My first thought was: I like it a lot, can I have it in every language?

The framework looks really good. They mention at the end of the video that some of the things might not be very well polished but I'm sure it's all going to get there in the end.


> on-top of sockets

That would have been my guess as well, but looking at the slides it appears to actually have a separate "user space networking" path (p105-108 in the PDF).

So not entirely sure it's built on top, at least not API-wise. Of course, there are sockets underneath in the OS.


Correct. On iOS/tvOS, it looks like Network.framework (and NSURLSession) is using the transport stack in-process, not layered on top of BSD sockets.

On macOS, I believe they're still using the traditional in-kernel transport stack for everything. They want to move to user-space for macOS though, this is why Network Kernel Extensions (NKEs) are deprecated (which started last year). I wouldn't be surprised if macOS uses the user-space stack starting next year and NKEs are fully gone (this also corresponds with 32-bit processes being gone)


> this is why Network Kernel Extensions (NKEs) are deprecated (which started last year). I wouldn't be surprised if macOS uses the user-space stack starting next year and NKEs are fully gone (this also corresponds with 32-bit processes being gone)

AHA, so this is why I had to hit the Google cache for that documentation. I wanted to write an NKE as a side-project but they just blackholed all the docs, this is the first I'm seeing about deprecation.


Apple’s complete removal of old docs/videos is really unfortunate, although in this case it’s already deprecated and I think will be fully gone quite soon. Check out the Advances in Networking sessions from last year:

https://developer.apple.com/videos/play/wwdc2017/707/

https://developer.apple.com/videos/play/wwdc2017/709


It is limited to UDP for the moment (at least AFAICT). I would be very surprised if they ever offer user-space TCP. It's a much different beast than UDP to handle right in user-space.

Also, the 30% less overhead is underwhelming. I would have expected a much bigger improvement than that. That being said, if their measurement includes the encryption layer then the I/O benefit may be overshadowed by the encryption overhead.

Still, pretty cool stuff to make publicly available.


It definitely offers TCP as well. I had an Apple engineer confirm to me that the kernel still provides sanity-checking of the packets and implied that the kernel will still terminate a TCP connection if the process crashes, both of which only make sense if the user-space networking layer is handling TCP.


This implementation looks a lot like HW TCP offload engines where the kernel handles session creation and termination and the HW takes care of most of the state machine. Apple then must have found some way to hide the handling of ancillary tasks from the user in a way that does not cripple the protocol. They may have added hooks in the main application loop but my guess is that they are running separate threads to avoid having the application hanging the main thread in a non-returning loop and prevent these ancillary tasks from running (some are time sensitive). This means that their TCP user-space session management is most likely multi-threaded, which has a detrimental impact on performance due to the use of locks and the consequential cache pollution.


In the WWDC session on this, they demonstrated a simple app that sent uncompressed video frames captured from the camera over the network (I think using TCP, but I don't recall for sure) using BSD sockets and using Network.framework and reported 30% less overhead with Network.framework.

Which is to say, you're saying "detrimental impact on performance" and yet this seems to be a significant win over BSD sockets.


> I think using TCP, but I don't recall for sure

It was UDP.

> Which is to say, you're saying "detrimental impact on performance" and yet this seems to be a significant win over BSD sockets.

The point I was trying to drive home is that, while 30% overhead reduction compared to BSD socket is nothing to laugh at, a fully user-space UDP/TCP network stack combined with memory-mapped buffer sharing usually gives performance improvement measured in the "x" and not in the "%".

Now, there is not much in terms of experimental protocol in their material, so it's very hard to tell. You mention uncompressed video, so they could be sending humongous frames, which I doubt, as no one would ever send raw frames over the network (that would be several MB per frame for a 720p front camera). But if that is the case then the data copy operation becomes the predominant bottleneck and getting rid of one may justify the improvement. But that is not a realistic scenario.

The more realistic scenario is that they sent compressed delta-frames across the network (H264 or HEIF), which would then considerably reduce the transferred payload size. In that scenario, data copy is not the predominant overhead anymore and 30% overhead reduction is underwhelming, telling me that they still are calling expensive operations like syscalls/uIPC on the critical data path.


According to what they said in the session, they were asking for video frames from the camera and sending them, completely un-interpreted, over the network. The idea being the receiving device could take these frames and handle them exactly the same way as they would handle data coming from the local camera.


According to their slides from WWDC last year, it also offers TCP in userspace.


If you are referring to "Advances in Networking, part 1" [0], the material used is a little thin to draw any conclusion regarding their user-space TCP/IP design.

[0] https://devstreaming-cdn.apple.com/videos/wwdc/2017/707h2gkb...


That is the talk I was referring to -- and I do agree the material is thin. But here's what I was basing it off of: the userspace networking slide has the "TCP / IPv6" block in userspace. While it's true this could be a broad block (it doesn't include UDP for example, which we know is in there), I doubt they would have explicitly put TCP if it was not true.


> I doubt they would have explicitly put TCP if it was not true

I am not doubting they have some sort of user-space TCP implementation. I am trying to understand how much of TCP they moved out of the kernel. From what I can gather they still have enough of it in the kernel such that the related syscall/uIPC overhead does not allow more than the 30% performance improvement announced.


It seems they've written their own socket code, at least from the video. It's not like BSD sockets were handed down from on high; they could certainly do with an update to make them easier.


Am I the only one whose first reaction to this is "Oh, great, yet another manner in which my next application won't be cross-platform?"

My second reaction being: "Oh, well, I'll just make it a web app instead of a native app."

edit Rephrased the last sentence.


1st thing: to the people here asking if they are going to deprecate BSD sockets: no way that is going to happen. We are talking about a 30+ year old API deeply integrated into the kernel, unix and userspace. OpenGL/CL is a very, very different case. It's always been a third-party thing and doesn't have the same level of usage. Not even close.

2. Not everything is a webapp or HTTP. You may absolutely need to use the raw network without any encapsulation. This possibility must simply not die. It's true that most people will not need to use these APIs and will use an HTTP webapp.

On the API itself: I think it is a really good idea to make a new generic TCP/UDP library, with everything we use today included, and as a first-class citizen from the first minute of the design. The particular API they present looks good and it is very well thought out. It mostly integrates async TCP/UDP/TLS seamlessly and supports changing interfaces on the fly. This alone is a big improvement. It does this while still providing raw access to the output/input net queues. You can even access the low-level details and parameters if you want or need to.
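
To make that concrete, here's a hedged sketch of the kind of knobs I mean, using the Swift names as I understand them (the host, port, and the multipath handover choice are just examples):

  import Network

  // One parameters object covers transport, TLS and interface behaviour.
  let params = NWParameters.tls
  params.multipathServiceType = .handover   // let the connection migrate between Wi-Fi and cellular
  // params.prohibitedInterfaceTypes = [.cellular]   // or keep it off cellular entirely

  let conn = NWConnection(host: "example.com", port: 443, using: params)

  // "Changing interfaces on the fly" shows up as path/viability callbacks.
  conn.viabilityUpdateHandler = { viable in
      print(viable ? "path is viable" : "no usable path right now")
  }
  conn.betterPathUpdateHandler = { better in
      if better { print("a better path became available") }
  }

  conn.start(queue: .global())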

However, this is not the first modern network API. Qt (Qt Network) has a good network layer, for one. There are simpler alternatives: libuv, libevent, etc. These are more specialized and don't handle all that Network.framework offers.


> 1st thing: to the people here asking if they are going to deprecate BSD sockets: no way that is going to happen. We are talking about a 30+ year old API deeply integrated into the kernel, unix and userspace. OpenGL/CL is a very, very different case. It's always been a third-party thing and doesn't have the same level of usage. Not even close.

I agree that removing sockets would break large chunks of the ecosystem. On the other hand, after OpenGL's removal, it's not clear to me that Apple cares anymore about anything outside of Cocoa.

> 2. Not everything is a webapp or HTTP. You may absolutely need to use the raw network without any encapsulation. This possibility must simply not die. It's true that most people will not need to use these APIs and will use an HTTP webapp.

Of course. Nearly 100% of my code is not webapp/http. Which is why I'm annoyed when I see (or in this case, fear) cross-platform APIs disappear.

> On the API itself. I think it is a really good idea to make a new generic tcp/udp library. [...]

I have no problem with a new, better, API. But I would certainly be more enthusiastic if any effort was presented to make it cross-platform and, hopefully, open-source and/or standardized.


> But I would certainly be more enthusiastic if any effort was presented to make it cross-platform and, hopefully, open-source and/or standardized.

They opensourced libdispatch[1] and ported it to linux, so there's hope they might do the same with Network.framework. It would be time consuming, but it shouldn't be hard for someone in the opensource community to write a linux / BSD implementation of this stuff on top of posix sockets & libdispatch.

I suspect the reason they didn't opensource it is that it looks like it was designed mostly with phone apps in mind, not servers. And that's a shame - it looks like it would be a great addition to the Swift server story, especially given it does user-space networking out of the box. I still prefer Swift's ergonomics over Rust's for application development, and this would help immensely. (Rust is steadily closing that gap though, what with native wasm support and async/await on their way.)

[1] https://github.com/apple/swift-corelibs-libdispatch


I agree. I'm not saying this is the solution; what I am saying is that it's not a bad API. A cross-platform, open source version would be great, absolutely. But as others have said, those APIs usually come out of de facto standards that people adopt. Adopting an Apple library, likely with patent and copyright issues, is probably not even an option. But maybe it triggers new ideas and reactions from people on other OSes.

Implementing this API (or similar functionality) on Linux on top of libuv/libevent and GnuTLS + multipath TCP shouldn't be much work. Well, maybe multipath TCP is problematic, I don't know the details there.


OpenGL was not removed. It was deprecated. These are very different things.


No cross platform APIs have disappeared! The 'worst' thing that's happened was that a barely supported and never updated API was officially deprecated.


Note to readers: it was barely supported and never updated on Apple products only, at Apple's discretion. If they supported it properly like they were "supposed to"[0], it wouldn't be barely supported and never updated :)

[0] "Supposed to" is my opinion on OpenGL. At the time it was the only cross-platform graphics API, and with Valve pushing a DirectX to OpenGL converter and building all their games with OpenGL support, we were finally going to get to a standard interface to graphics hardware. I guess that ship has sailed now. At least there will be abstractions that compile to the various platforms.


Not everyone cares about cross-platform; some developers would rather specialize in a given platform.

You can keep using sockets if that is your thing.


> Not everyone cares about cross-platform; some developers would rather specialize in a given platform.

True.

> You can keep using sockets if that is your thing.

For the moment, yes. However, as you know, earlier this week Apple officially deprecated OpenGL and OpenCL, a few years after introducing a competing (and admittedly superior) API. It is my understanding that Apple has not attempted to get this replacement API standardized, nor to offer it on any other platform. While it's hard to deduce from such a small sample, this precedent does suggest that the days of sockets are numbered.

This would make the life of some of my colleagues quite a bit more complicated, and it would, in time, simply get rid of the macOS version of a number of existing cross-platform applications whose authors have neither the time nor the energy to rewrite the network layer.


Every effort to get rid of plain unsafe C APIs is a plus in my book.

Apple has enough business with developers that care about macOS experience, not plain ports from other platforms.

When I buy a computer with a specific OS, I want the experience provided by the OS APIs.


I understand your point. I also dislike a lot these plain unsafe C APIs.

However, I am a developer. When I write an application, I only have so much time to spend on ports. Anything that requires me to have different designs for different platforms is bad for me. So, in the ideal case, we'll end up with a few high-level cross-platform abstractions (which may actually share Network.framework's API, for all I know) on top of networking, and everybody will remain happy. In the worst case, we'll end up with bits of duct tape, or simply with fewer macOS apps.

Also, even as a macOS user, most of the applications I use daily are cross-platform: Firefox, Thunderbird, VSCode, my various compilers & interpreters & debuggers, VLC, Gitter, Open Office, Terminal, games, etc. In fact, looking at my recent applications, the only single-platform applications I seem to be using are Keychain, SimpleComic, Instruments (the only one in the list to particularly integrate with the OS), Notes, Calendar. I sometimes use Pages and Keynote, both of which have nice UX, but the niceties don't strike me as particularly OS-integrated.

As I mentioned above, everything that makes it harder for developers to add macOS to their list of targets encourages them to not port their app to my current OS of choice. I'm thinking of, well, many in the list above, including games.

This is, I imagine, part of the reason why so many developers left single-platform development for web development. At least, that's my reaction.


> This is, I imagine, part of the reason why so many developers left single-platform development for web development. At least, that's my reaction.

Spot on. You even gave an example of this: VSCode. If Microsoft is doing it, you can bet loads of other people are.


> I sometimes use Pages and Keynote, both of which have nice UX, but the niceties don't strike me as particularly OS-integrated.

Well, what are the niceties you see? I'd generally say that Pages and Keynote's integration with macOS is one of their strengths.


Yes, precisely. I'm sure this varies from person to person, but the applications that have the most staying power in my tool belt are by far those that are well-executed, first-class citizens of my platform of choice. Cross-platform apps written by developers who are more concerned with an easy one-size-fits-all solution nearly always fall by the wayside.

I chose the platform I use for a reason. Why should I be ok with apps that undermine that by taking the lowest common denominator approach to porting?


Interesting. As a counterpoint, if I look at the toolkit I'm using right now on my MacBook, it's:

Chrome

Firefox

Hipchat

Vim

Git

So, not one single-platform tool in the bunch. I'd even go so far to say that, aside from my terminal emulator and app install tool, I don't use any single-platform tools in my day to day job.

> Why should I be ok with apps that undermine that by taking the lowest common denominator approach to porting?

Because they're, in many cases, as good if not better than the native-only alternatives?


I guess it depends on what types of apps one tends to use and what work you do. While I make frequent use of the terminal, I live in UI apps all day, so bad ports tend to stick out more prominently. If I lived in vim or emacs I could see it not mattering as much.


Do you use iTerm for opening vim?


Clutching at straws.


Which of the apis provided by the socket layer do you find to be unsafe?


All that require manually keeping track of pointers, structure and buffer sizes.


That's why you have wrappers in the language of your choice that will handle this for you.


I agree. I extend the same to other software like databases, etc. I have seen so many projects over-abstract, and that turns out to be useless when migrating or supporting an additional backend with the same abstracted solution.


The people who tend to care about high performance on a specific platform are game developers, and we all know Macs don't have that market. And with OpenGL deprecated, they probably aren't going to get it either. People on the web are always saying language performance doesn't matter, but when these ultra-specific libraries come up it suddenly becomes different. Still, I'd say server performance is _way_ more important than how fast my Notepad app runs...


Well, also web browser developers, authors of A/V software, etc.

Do you want Safari to be the only web browser on macOS, Quicktime to be the only video player, etc? Not saying that it will happen, but with the disappearance of OpenGL and if sockets eventually disappear, too, it will become more difficult for independent web browsers (e.g. Firefox, Servo), independent video players (e.g. VLC, MPlayer), etc. to create or maintain a macOS version.


While there’s truth to this, there’s no real discussion here about deprecating or removing BSD sockets. UNIX, POSIX and OpenGL are very, very different things to discuss, and Apple deprecating one doesn’t speak to any real expectation of deprecating anything else.

At some point it may happen, but there’s simply no writing on the wall for it right now.


Fair enough.

But I'll remain at least a bit worried, if you don't mind :)


OpenCL didn't seem to be succeeding, so that's not entirely surprising. It is disappointing that they've chosen to deprecate OpenGL and ignore Vulkan in favor of Metal, though. Although I don't know much about this particular area, and I couldn't tell you about the merits of Vulkan vs Metal. It's hard to tell whether there's any good technical reason for their decisions or if it's purely just business, in that they don't want to contribute their resources to something that will benefit Android.


Admittedly, I am also not well versed in this area. My guess is that, because Metal is proprietary, they can include intimate details of their hardware platform to squeeze out the most performance. And they probably would not feel comfortable having that same level of detail in Vulkan.


They also started making/using (and deploying?) Metal before Vulkan was announced/available.

Doesn’t mean they couldn’t have changed course, but I’m sure it was a factor.


And they may still yet change course. At this point it’s hard to say.


> Not everyone cares about cross-platform; some developers would rather specialize in a given platform.

Yeah. This is exactly what people thought in the early 2000s of IE6. It was the de facto 'standard' web browser. Whole enterprises were built on the assumption that no other browser matters.

But the world around changed. IE6 became obsolete. Then insecure. People kept running it to use the old apps that would not work on modern browsers. But eventually, IE6 had to be buried.

Thousands of apps containing billions of lines of code had to be re-written at an astronomical cost. Just because, for a short while, we once thought specializing on one platform was a sustainable long-term strategy.


You are claiming a false dichotomy. Specialization doesn't mean future-deprecation. Every single cross-platform app is written to an abstraction. There is no implicit assumption that all abstractions are good. Whenever I hear cross-platform code I think slow bloated web-apps or slow bloated java apps with a funky UI.

Websites written for IE6 can still be used today with minor changes. The problem is activex and other crappy plugin technologies. If people had just targeted Win32, those apps would still be working today - but probably still end up requiring a rewrite because of modern security requirements.


Which is why those devs now specialize in Chrome instead.


Sad but largely true.

On the other hand, it is easier to install Chrome on your machine, if you need a Chrome-locked webapp, than to install macOS on the same machine. In fact, the only platform on which you cannot install a Chromium-based Chrome is iOS, and that's a political choice by Apple.

edit Rephrased entirely.


Blocking all third party code (and most first party code) from creating executable pages is a pretty strong security decision, but yes it has political ramifications.


You can also code in a language with good cross platform network APIs like Go or use a wrapper library like boost::asio or libuv. The latter are likely to add support for this eventually if there is a strong benefit to doing so on MacOS.

I'm ambivalent about platform-specific solutions but I welcome any attempt to obsolete socket() and select()/poll() and friends. I absolutely loathe that API.


I couldn't care less about cross-platform, honestly. And yeah, most of the mobile market uses Android, but the average Android user doesn't download apps and doesn't spend money on apps, so... they're outside my target.


Wot. Citation needed? I may not be an "android user (average)" but I download apps every day (paid and free) and have spent far more money than I'd like to admit on the Play Store.


And I'm now wondering how long it'll be until Apple deprecates Sockets.


> yet another manner in which my next application won't be cross-platform

I skimmed the video and it really looks like sockets++: once you have the connection the rest is up to you. It doesn't seem to break anything on the wire. Unless you're referring to compilation - but we can assume that Libuv will soon abstract this platform away (just like it does with Windows IOCP).


If you're writing it in Swift, you're probably doing native apps anyway. So there isn't much reason to not use the native things the platform provides.


At our company (one of the largest companies in the US), all of our employee-related tools are iOS devices. So cross-platform is unnecessary. It's not like we standardized on Palm or Windows Phone; Apple is not going anywhere and neither are we.


Well, Apple has demonstrated more than once that they are willing to U-turn and entirely drop core technologies. By contrast, Linuxen and Microsoft try very hard to avoid breaking applications.

So, while Apple is not going anywhere, whichever technologies you're using to build your iOS apps might.


Apple hasn't given any examples of removing APIs without a deprecation lasting at least a few years... on iOS.

If it's deprecated on macOS, it's going to be around for a lot longer. They can certainly make things hellish for users trying to install your kext, but the APIs will still be there.


Microsoft intentionally broke compatibility with every major release of Windows Phone though, and then ultimately killed the entire platform?


Ah? I wasn't aware of that. Still, that doesn't mean that Apple won't :)


Kill the entire platform?


This thread confuses and saddens me. What is with the absolutist compulsion that every API and framework has to be cross-platform? Cross-platform APIs and frameworks must, by necessity, paper over differences, exclude the possibility of leveraging a particular platform's strengths, and target the lowest (i.e., worst) common denominator. Sometimes that's a good tradeoff! But all the time? To the point where we have to attack OS vendors for having the nerve to, oh gosh, add a feature?

Taken to the extreme, this makes no sense. What's the point of even having different operating systems if there's only one acceptable API? Do you guys think operating systems can never be allowed to introduce a feature and corresponding API call?


I think it's inexperience speaking, which is also why it's very dogmatic. If you've seen how standards develop, they're usually taking many ideas like this that have become popular and tweaking the specs to smooth out the differences.

The POSIX API is often called Berkeley sockets because a vendor, BSD, released their API and it was widely adopted. ANSI C cobbled together existing C compilers into a standard, and POSIX itself was trying to tame a huge mess of Unices.

And even if the standards don't fully converge, you have standard libraries to plaster over the differences. Everyone has to contend with weird Windows path conventions and such, to the point where it's hardly even notable.


I'm old enough to remember the POSIX meetings (I was sitting a few feet away from one of the conference rooms while the executives were meeting - and had some input into the spec.) One of the goals was to figure out what functionality was stable and mature - and that would be part of the standard. In contrast, some functionality was expected to evolve and become a differentiator between vendors. Those functions, even if we had an existing implementation, would not be part of the spec. We could really use an equivalent standard right now - it has become too hard to practice our skills. (Also reminds me about when I decided to learn UNIX and C - I was tired of being in the "assembly language of the week" club. Relearning and reimplementing gets old.)


Well, I for one have lived a time when all platforms were radically different, and it was really really hard to share any code between Linux, Windows and System (for youngsters: that's the name of the previous OS for Macs).

We now live in a time where it's much easier to share code, thanks to technologies such as Qt, Java, .Net + Xamarin, game engines, web applications, etc. Any sign that we could be returning to the bad old days is, well, not a good sign.

Now, I agree with you that the real bad sign here is not Network.framework, it's the deprecation of OpenGL/OpenCL, apparently without any attempt to move toward the open standard Vulkan. Now Apple introduces a new technology designed to replace an existing API, and people's built-in pattern-matching (including mine) sees it as a sign that Apple might be testing the water before removing a well-known, cross-platform API. That's the kind of behavior that the previous King of the Hill kept using during the bad old days, and Apple seems to have borrowed more than one page from that era's Microsoft, so why not this one?


See also the Network.framework-based netcat implementation [1]. Only a couple hundred lines of code, complete with TLS and mDNS support.

[1] https://developer.apple.com/documentation/network/implementi...
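
Not the actual sample, but a rough idea of why it comes out so short: a listener plus a Bonjour advertisement is only a handful of lines (the service name/type and port below are made up):

  import Network

  // Accept TCP connections and advertise the service over Bonjour.
  let listener = try! NWListener(using: .tcp, on: 2323)
  listener.service = NWListener.Service(name: "nwcat-demo", type: "_demo._tcp")

  listener.newConnectionHandler = { connection in
      connection.start(queue: .main)
      // Echo a single read back; a real tool would loop on receive.
      connection.receive(minimumIncompleteLength: 1, maximumLength: 64 * 1024) { data, _, _, _ in
          if let data = data {
              connection.send(content: data, completion: .contentProcessed({ _ in }))
          }
      }
  }

  listener.start(queue: .main)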


I wonder how this can/will affect Go’s standard net package [0]. Will it be able to take advantage of this new Network framework (on macOS)? Or is the framework too high level for the low level primitives net exposes, and will need a new separate API?

Also, personally, I suspect this is another hint of an upcoming transition to ARM64-based Macs. They will only implement these new Apple-specific APIs, not the legacy/deprecated ones like OpenGL.

[0] https://godoc.org/net


Raw sockets have been discouraged on iOS for years. The sockets API doesn't do things like waking up the radio or activating the on-demand VPN; it performs poorly when the device is multihomed; and from the developer's perspective there is a lot more code that needs to be written to support simple things like both IPv4 and IPv6 (with happy eyeballs etc.). The writing has been on the wall for years. I will not be surprised if a few years down the road Apple completely removes the sockets API from iOS (though I'd be surprised if they do it on macOS). I'd say good riddance to this outdated API. Your traditional desktops and laptops can continue to use this legacy API for your traditional and cross-platform apps, but iOS, as a mostly consumption platform, has no need for it. Personally I'm glad that Apple is using the closed nature of the iOS platform to delete legacy baggage and experiment with newer, saner APIs that meet the needs of the twenty-first century.

I'd also say there's very little reason as a typical developer to write socket code; just use a library provided by the platform or your language. When's the last time you used the raw sockets API?


> Raw sockets

This is an established term for IP-level sockets, of SOCK_RAW protocol type. Probably not what you meant.

http://man7.org/linux/man-pages/man7/raw.7.html


Yes I mean using the sockets API instead of a higher level API.


There are certain things that the raw sockets API allows for that the standard CoreFoundation/Foundation API doesn't. It's pretty rare, but there are some legitimate use cases where this can be necessary.


This is a nice high-level API for (more) reliably handling networking, taking a fair amount of load off developers having to implement it themselves. Of course it would be nice to have an API like that available as cross-platform library.


The door is open for someone to write this cross platform...


I do hope that this will inspire other OSes / devs to make similar open APIs.


What about ZMQ?


ZMQ is not a 'neutral carrier' and can't interoperate with things not implementing the protocol. It's great for the intended use case, but it's not a general-purpose replacement for the Berkeley sockets API.


Maybe, but it works really well for most applications, and it's available in almost every language you can use.


I agree - it's a great tool and useful for a lot of things. However, it's not useful as a general purpose network stack, which is what Network.framework is.


I saw it has a C API; if someone made bindings, that would be nice.


Why does Apple have to make everything proprietary? It's kinda annoying.



Why does Apple like money?


Swift? FoundationDB?


Like any other OS vendor you mean.


You mean "like one and only one other OS vendor"


Like Microsoft, IBM, Oracle, Sony, Nintendo, Google, ARM, Green Hills, PTC, Unisys, DDC-I, MicroEJ, Blackberry, SAFERTOS, ....


To be fair, when it comes to sockets, pretty much every OS' interface is identical, all taken from BSD.


Most OSes also have platform-specific extensions on top of standard APIs. This is in the same category. They will not (and cannot) deprecate BSD sockets. POSIX and BSD sockets are totally different from OpenGL/CL. It's just an additional API for those who want maximum control.


I was replying to "Why does Apple have to make everything proprietary? It's kinda annoying."

And related to sockets: yeah, those OSes do provide some kind of POSIX-like compatibility, but even then some advanced configurations are only possible via the OS-specific APIs.


Microsoft has open sourced a ton of their code in the last few years



I'm still waiting for OpenTransport to take off...


Haha, nice. I immediately thought of OpenTransport when I saw the headline. I wonder how many developers actually wrote code directly against that API.


Outside of Apple? Probably not many directly. I do remember them sending out developer kits for it though.


So will BSD sockets follow OpenGL's path to deprecation?


As long as Apple wants to keep macOS POSIX-compliant, the sockets remain.

[1] https://images.apple.com/media/us/osx/2012/docs/OSX_for_UNIX...


From your link:

  * Open source UNIX foundation 
  ** Support for multiple CPU and GPU cores via Grand Central Dispatch and OpenCL
  * Comprehensive UNIX user environment 
  ** Standards-based graphics built on PDF (Quartz), OpenGL, and H.264 (QuickTime)

Your 2011/2012 document doesn't give much assurance about any future POSIX compliance.


Unrelated, but there's a whole lot of information about someone named Ernest Prabhakar there…


Doubtful, there's no reason to throw it out since Mach is still UNIX-y.

This is really there for Swift/Obj-C developers who want to use sockets but want something easier to grok than BSD sockets or NSStream, especially when you need to deal with TLS (Secure Transport, as simplified as it is, is still more difficult to use than saying "open this socket with TLS please"). BSD sockets are also ugly as sin to use in Swift because of rampant use of pointers (it's C, after all) - so there's that too.
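
To illustrate the pointer point, this is roughly what a plain blocking IPv4 connect looks like from Swift through the BSD API (the address and port are placeholders), versus a one-liner NWConnection:

  import Darwin

  let fd = socket(AF_INET, SOCK_STREAM, 0)

  var addr = sockaddr_in()
  addr.sin_len = UInt8(MemoryLayout<sockaddr_in>.size)
  addr.sin_family = sa_family_t(AF_INET)
  addr.sin_port = in_port_t(443).bigEndian
  inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr)   // placeholder address

  // The pointer dance: rebind sockaddr_in to sockaddr just to call connect().
  let result = withUnsafePointer(to: &addr) { ptr in
      ptr.withMemoryRebound(to: sockaddr.self, capacity: 1) { saPtr in
          connect(fd, saPtr, socklen_t(MemoryLayout<sockaddr_in>.size))
      }
  }
  if result != 0 {
      perror("connect")
  }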


Network.framework isn’t just easier to use, it performs better and is generally safer too.

What this means is that while BSD sockets will continue to work for the foreseeable future, native cocoa apps and cross platform apps with specialized ports will have a leg up in performance, stability, and safety over dumb ports using BSD sockets.

In other words, you’re free to keep using BSD sockets at your own detriment.


For domain sockets specifically, there is this Swift library which does various heavy lifting with the C APIs: https://github.com/IBM-Swift/BlueSocket


There are many big name apps on the app store that are intended to be as cross-platform as possible (reducing platform specific code), and as backwards compatible as possible (4 latest releases of iOS). For these reasons, they are built on top of sockets. I supremely doubt Apple will tell these companies to pound sand; they are much more important to them than those affected by the OpenGL / OpenCL deprecation.


Not today.

We'll see in 4 years.


I love this! These are great ideas. Plans for more languages?


Not open source, don't care. Open standards matter.


What’s the point of this comment?

It’s an Apple library for an Apple platform, to make things easier for people using Swift/Objective-C.

Some people care about that.

Just because it's not an open standard it shouldn't be discussed?


Unfortunately it seems that many people cannot distinguish between the programming model of sockets and the protocols that programming model exposes.

From what I can tell, Network.framework is a different API still talking TCP, UDP and so forth underneath.
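
For example, this is still plain UDP on the wire, just reached through a different API (the host and port are placeholders):

  import Network

  let udp = NWConnection(host: "192.0.2.1", port: 9999, using: .udp)
  udp.start(queue: .main)
  udp.send(content: "ping".data(using: .utf8),
           completion: .contentProcessed({ error in
      if let error = error { print("send error: \(error)") }
  }))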



typo: freed from the strictures of the current sockets API

https://en.wikipedia.org/wiki/Stricture


I do not understand what incentive there is for anyone to use this - after Apple's deprecation of a standard they themselves conceived only 9 years ago [1]. Berkeley sockets might not be the conceptually best approach, but their support is damn near universal.

[1] - https://www.khronos.org/opencl/


There are thousands of developers who make the choice to write apps for only one platform. They now have a better networking library. What don't you get about that? Not everyone chooses to write cross-platform. For people who don't write cross-platform apps, why would they ever want to use sockets instead of this much easier to use API?


The cross-platform bit was a little disingenuous of me. What I was really trying to say was that nine years from now, Apple could deprecate this at the drop of a hat. Sockets, on the other hand, are likely not going anywhere, and also have the benefit of existing everywhere, as they have for a long time.


You can use BSD sockets through the regular POSIX I/O API (read, write, select, etc.). I wonder if the same holds for Network.framework sockets and libdispatch I/O? Are they composable?


Network.framework seems to depend heavily on state notifications, which the POSIX API doesn't support.


I think you misunderstood my comment.

I was asking whether Network.framework sockets can be used via functions like “dispatch_read”, analogously to how you can plug BSD sockets into POSIX I/O functions as file descriptors.


Skeleton crew on OSX but they have time to reinvent sockets. Apple, 2018.


What makes you think nobody is working on macOS? I really feel that macOS got more focus than iOS this year.


Let's see how long until they deprecate BSD sockets.


Disingenuous. A better way to say it is that their particular socket implementation was inferior to their new proprietary API. There is nothing about the socket paradigm as such that gives it an inherent performance disadvantage.


I disagree. As discussed in the talk, Network.framework is built on top of their userspace networking stack. This eliminates the kernel-userspace context switch and gets rid of one packet copy operation. These are known problems in the socket paradigm, at least as it exists in Linux and every BSD.


Well, for as long as the driver runs in ring 0, you have context switching. The water is wet; there are no tricks around that, even if you are Apple.


Where do they say their network stack is userspace? I watched the video except for the last 10 minutes, and I missed it.


The userspace networking portion of the talk starts at 46:00.

To clarify, the whole networking stack isn't userspace. They still have the classical BSD networking stack. I think the userspace networking stack is used automatically by system libraries like URLSessions and this new Network.framework library. They introduced their userspace networking stack during last year's WWDC: https://developer.apple.com/videos/play/wwdc2017/707


Ah, good. Thank you.


Page 105 of their slides.


It's only for datagrams, apparently.


My understanding is that TCP is also partially user-space, but with some kernel management (such as closing TCP connections when processes terminate).


There are some scenarios where sockets are slower because of the movement across the userland-kernel boundary. Even with all of the tricks that have been developed over the years, it can still be much, much slower than some of the tools that have come out. Intel released DPDK, and Windows Server 2016 had a similar thing called PacketDirect.

It's an interesting read. I'm not finished with the video, so I don't know if this is the way that Apple went.



