I find this surprising, as GCD does insulate you from that low-level stuff. When you need to work with mutable data, just create a dispatch queue, and only ever access the data by dispatching a function to the queue. Both Swift and Objective-C have friendly syntax for anonymous functions that makes this lightweight and easy.
I think I would have fewer hang-ups if the author had just come out and said "I wanted to try using Go for multi-threaded code with Swift" instead of trying to make GCD sound so confusing.
That's been the case since it was released – all Swift C / ObjC interop happens at the "module" level. You write a Clang module map (if you're not using a system library which already has one), point the map at the relevant header(s), then you can import that module into your Swift code.
That said, not everything is capable of being imported (variadics, complex macros, VLA structs, and a few others).
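A minimal sketch of the module map described above, with hypothetical names (`MyGoLib`, `libmygo.h`, `mygo`):

```
// module.modulemap
module MyGoLib {
    header "libmygo.h"   // the C header your library exposes
    link "mygo"          // links libmygo.a
    export *
}
```

Once the compiler is pointed at this map, `import MyGoLib` works from Swift code, subject to the importability caveats above.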
It seems like there's a better way to go about this than my method. I was motivated to do this not because I don't like GCD (I still don't) but because I really like Go and IPFS and didn't think that iOS should be left out of the party.
But, looking at it again, I still don't think GCD is all that great. It seems to have been intentionally designed to be different from the way so many other systems handle the issue - or at least, different enough that it's still relatively discomfiting to attempt to port something away from iOS, and not just to it ...
Cocoa is incredibly fragile about the main thread, so you need to be super careful about what runs on which queue. If you add KVO/bindings into the mix, it takes an extraordinary level of paranoia^Wdiligence.
This basically just boils down to "don't touch the UI off the main thread". There are some exceptions with CoreAnimation, but other than that the main thread checker will yell at you if you do something wrong.
Anywhere we're doing something specific, we make liberal use of `dispatchPrecondition(condition: .onQueue(mySerialQueue))` or `assert(Thread.isMainThread)`.
Even in places where we are doing multithreaded UI rendering (mostly CATiledLayer), it's become a non-issue. It's a night-and-day difference compared to 5+ years ago.
That basically applies to all UI frameworks. At least, I'm not aware of one that provides good support for accessing it from another thread.
There is obviously a reason for it, which is that UI frameworks are incredibly stateful, and trying to manipulate lots of state from multiple threads at once rarely works out well.
Not sure if Haiku has any improvements regarding it.
Having the whole OS be so multithreading-heavy was a novelty for the large majority of developers, and it was quite easy to make mistakes.
GCD is super great and I use it a lot (...if you had to extend AsyncTask recently: my condolences), but it doesn't safeguard you from race conditions, dealing with locks and all that fun stuff.
Channels seem like a simple thread-safe stream for the most part, which you can get with Rx.
I don't see how this technique is comparable to Electron. The article does not describe anything related to a cross-platform user interface, which is what Electron addresses. You can write non-UI logic that is cross-platform in half a dozen mainstream languages and another dozen less popular ones. That's not a big deal. It's the UI component that is harder to achieve, and that is what Electron offers.
All you need is an Objective-C bridging header where you do an #import "name_of_header.h" for every header. After that, the headers are visible to all of your Swift code. It's no different from mixing Objective-C and Swift, except here you're mixing your language of choice, compiled to callable C functions inside a static library.
To recap: drag the .h and .a files into your Xcode project the same way you would .swift files. Add a BridgingHeader.h file and fill it with #import "name_of_header.h" statements. Lastly, the project needs to know you're using a bridging header; that's done in the project target's Build Settings tab, where "Objective-C Bridging Header" needs to be set to the filename you chose for your bridging header.
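A concrete sketch of that bridging header (the header names are placeholders for whatever your library generates):

```objc
// BridgingHeader.h
// Build Settings → "Objective-C Bridging Header" must point at this file.
#import "name_of_header.h"
#import "another_header.h"   // hypothetical second header
```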
This is not unique to calling Go in Swift btw - any language that can be called from C, can be called from Objective-C, and therefore Swift. One thing to be aware of is memory management - unless you're passing things by value (copying), making sense of when things can be safely deallocated across languages is non-trivial.
Not really rational.
(Hint: it's kool-aid, kid.)
I imagine we’re not going to convince each other of the technical merits, but I personally think GCD is a very nice programming model. I’m less of a fan of Go but plenty of people love it so clearly there’s something there. There’s room for both.
It’s a valid criticism of a lot of Apple tech that it’s very locked down, but given libdispatch is open source (as are the Clang changes to support block syntax in C) I don’t think that criticism is fair here.
The approach is based on the same principle: cgo as a bridge between Go and a C library. The C library is built by the Swift package manager. Blog post on the DEV community with details:
https://m.youtube.com/watch?v=R0oaOohl5jk (in French, but the slides are in English)
Go, horrible as it is, doesn't lack cultists.
This comment is not about the language per se; it is about the current goals of the people with money and weight behind the language right now. I guess the regret will come, or not, depending on whether those goals hold or change.
In our Perl code base, we have had so many issues with auto-vivification, lack of argument tracing (just pass around @_ everywhere!), callback hell in AnyEvent for concurrency, and more. Maybe if you use Moose everywhere you can get some form of sanity, but I doubt it. Engineers I have full respect for have scratched their heads just trying to dive into this Perl. What I can grant, however, is that it is able to do a lot of work (given enough machines!).
For the Go version, I know exact method signatures and variable types. Concurrency is first class. And just about anyone can read the code and figure out what is going on. We've onboarded new grads who can quickly put solid features into this already large Go codebase. We are seeing a 20x throughput improvement over the Perl version in one codebase (it requires a lot of waiting on remote servers we don't control), and 100x in another.
I can't imagine regretting the choice to write in Go for networked services running in the backend.
“The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.”
-- Rob Pike
The harsh language there is "not capable of understanding a brilliant language." I'm not aware of what was meant by a brilliant language, but I have to assume it means research languages. I would not expect fresh grads to build production-worthy code in any language, but especially not in a research language (which typically is relegated to the realm of research because it cannot be used by large teams to make good software).
I'm in the business of creating value with good software, not ivory-tower building that only a small proportion of software artisans can build and maintain.
And that's a bad thing?
When we can talk to an AI and describe something novel and it can interpret that, then I'll start to worry about software development jobs being lost to AI.
Naturally those developers are also entitled to have jobs; the question is how it actually works in practice.
Sadly, a large majority are sweatshops.
1) regex was treated like the primary way to do things - even when it wasn't necessarily called for - at the expense of readability (and Perl supported it so well)
The two combined together (and possibly the fact that it was the early days of commercial internet service) led to the idea that anything done in Perl was destined to look like "line noise" to actual SWEs.
/just how I saw it...
All that Go delivers today, over today's medium, is very similar to what Perl delivered yesteryear, on top of yesteryear's medium. And also like Perl, it was born from a bunch of old Unix systems engineers :)
Go is often praised for being simple.
...and criticized for being too simple, ignoring modern advances in programming language research.
> no compiler OMG!
The compiler and its careful tradeoff between fast compilation and fast programs is a key characteristic of Go.
> text transformation on the code itself!
Go doesn't have macros. Instead people fall back on code generation, just like Java.
Did I miss that part of history?