“Swift will be open source later this year” (apple.com)
1334 points by brbcoding on June 8, 2015 | 548 comments

Nice to see that Chris (Lattner) got his way. I chatted with him last WWDC right after the main Swift technical session, and he expressed the desire to open source it, but had no idea if he could get it through the powers that be.

Supporting the standard libraries on Linux is certainly a surprise, though.

Supporting Linux is a surprise, but I think it's a great move on their part.

Think: How many iOS apps are frontends to a server API? And how many of those APIs are running on Linux servers? Swift on Linux means ~all the code for a client-server iOS app can be written in the same language.

So here's the thing. They will release Apple Music for Android... and there's full Swift stdlib support for Linux? Could it be that the Android app is partly Swift?

I wouldn't read too much into that.

Android ships with Bionic libc, which is different from the glibc that is usually shipped in a Linux distro. There are definitely some differences between the two.

Plus the average Android app is very far from the average Linux app. If they were aiming at supporting Android, I think they would have said that instead of Linux.

The key word here is "partly" Swift. I would not put it past Apple to use an intermediary framework that hosts a Swift runtime and calls into native Android APIs. That keeps the non-UI logic in one codebase.

That's more or less what they do with iTunes, isn't it? I recall there being an incomplete library packaged with iTunes once upon a time (with stuff like a stub implementation of Grand Central Dispatch that was neither grand nor dispatching).

iTunes (used to, at least) have a lot of Mac OS 9/Carbon stuff partially ported to Windows, in fact.

> stub implementation of Grand Central Dispatch that was neither grand nor dispatching

Thanks for the chuckle, almost r/programminghumor worthy ;-)

Possibly talking about different things when you say "stdlib"?

It's great news. Hopefully there will be a node-like ecosystem for writing small services on Linux soon. With F#, Swift and C# available, hopefully the proliferation of Javascript can be slowed on the server (without people running to Go).

With the .NET Framework actually declining, Swift still being far away from the server and frameworks like Meteor, JS seems to have a really good position.

With that said I will root for Clojure + ClojureScript. One could theoretically build a framework much more advanced than Meteor, on the same code-sharing principles.

How so is .NET declining? I haven't heard this.

This is purely based on numbers, but still might be a little subjective. Nevertheless here it is: https://onedrive.live.com/view.aspx?resid=1E5AA35A965D3234%2...

Just curious, why do you say that (about Go)?

It feels much too pragmatic for my taste. Exactly the kind of thing to come out of Google, built to solve their type of problem (scalability and deployment matter far more than the code itself). Personally I don't have Google's problems (few of us do!) and I really like a good type system, with algebraic types, pattern matching, and generics.

It feels much too pragmatic for my taste.

C was unprincipled and pragmatic too, it was created with the goal of making UNIX portable. Still, C became one of the most popular and influential languages.

While I prefer ADTs/generics too, let's not forget that many people are less principled, and pragmatic factors (toolchain, ecosystem, popularity, familiarity) influence language choice just as much.

> Still, C became one of the most popular and influential languages.

That was a side effect of successful startups (Sun, SGI...) adopting UNIX as their OS.

The success of UNIX is inseparable from C and vice versa. C made UNIX one of the first portable operating systems and, by virtue of being low-level, provided performance.

The same could be said of OSes written in Algol and Mesa. They just didn't have successful startups using them.

There are good reasons that these startups weren't using OSes built on Algol and Mesa. You seem to imply that the success of C and Unix is some historical accident resulting from two startups picking Unix at random, and that their choice was not a function of the technical properties of UNIX and C. Which is, of course, nonsense.

Sun used Unix because Sun was co-founded by Bill Joy. By that time, Joy was already deeply involved in Unix (via BSD). If he hadn't found Unix and C likeable and up to the task, they would have made different choices.

SGI and Sun were just one of many catalyzers, like a lot of programming languages have catalyzers.

You just confirmed what I said.

The startups that have chosen UNIX, did so because the owners were part of the American UNIX university culture.

A different background would have meant a very different history in mainstream OSes and their respective system programming languages.

In Europe C had very little meaning until most enterprises started to replace their mainframes by UNIX servers from those companies.

Having had exposure to both C and Pascal at around the same time, I vastly preferred working in C.

I do think there's something intrinsically good about it compared to other contemporary systems languages - I don't think it's just that it tagged along with Unix.

I use Go a lot for backend stuff, and I agree with you. There is a lot of stuff you can do with interfaces to ameliorate the lack of generics--but you still end up missing generics and algebraic types.

> It feels much too pragmatic for my taste

It doesn't feel pragmatic at all. It relies on users to do the compiler's job (type assertions), because "You don't need that with Go"™...

It feels "pragmatic" in that Go follows "YAGNI" almost to a fault. It makes the argument that repeating yourself can sometimes be a better choice than more abstraction to increase code clarity. Some disagree with that, which is why it's nice to see other languages taking on similar areas that Go tackles but with differing choices in that area.

If you write Go code, you will realize you actually won't hit that problem often at all.

Right now the annoyance I hit more often is the inability to map a []Foo over an attribute .Name (string) to get a []string. I do that in Ruby all the time, and with Go it really sucks to have to write a for loop that appends to a new slice just to collect the values.

  people.map(&:name) => ["Joe", "John"]

  a := make([]string, 0)
  for _, p := range people {
    a = append(a, p.Name)
  }
It is not pragmatic at all; it is a language for mine workers who are too stressed to think.

Hah, I precisely use Go when I have to think deeply, i.e. about a difficult to solve problem.

When the problem is boring, use a fun & challenging language. When the problem is fun and challenging, use a boring language.

javascript is awesome

It'll be interesting to see if they support @objc on Linux; the Apple runtime is difficult to support because it requires the dynamic linker to notify libobjc when an image is loaded.

The Objective-C support on Linux is already great, the compiler handles it well. The problem is the Foundation library isn't ported so you're left using old, unsupported relics from the OpenSTEP project.

CoreFoundation has a MakefileLinux[1] for versions since 635 (corresponding to 10.7, if I'm not mistaken) — anybody know the status of that? It obviously relies on Clang (as it uses various extensions), but does it build, is it useful?

[1]: http://www.opensource.apple.com/source/CF/CF-1151.16/Makefil...

It's not all of CF, and some of the ObjC glue is missing, particularly in the last few drops. Apportable put most of it back in, though.

What language is that link written in? It looks like bash sort of but not really?

It's a Makefile[1]. They are quite simple really. Rules are defined like this:

  target: dependencies
      command

The command is run by a shell (e.g. bash!) when the target is called to be run. (In a real Makefile, each command line must be indented with a tab character.) For example...

  all: hello world
      echo "!"

  hello:
      echo "Hello"

  world:
      echo "World"

Now if you run "make", you will see each command being run. By default, if you call "make" with no arguments, the "all" target will be run. You can also call "make world" for example, to have it only run the "world" target. As you can see, first the dependencies of the target are called, then the command of the target is run.

It's often used by C/C++ projects in order to manage dependencies, but can be used for anything really.

Here's a short tutorial you can walk through if you're interested in how it works and why it's useful: http://mrbook.org/blog/tutorials/make/

1. https://en.wikipedia.org/wiki/Makefile

wow thanks for breaking that down. years of playing and prodding on linux and this helped me a lot. funny and funky what slips through the cracks. kudos friend

It's a Makefile.. for.. Make.

I feel like this reply could have been much more helpful and less snarky. Some people are still learning - we all were, at one point.

everyone is always learning (hopefully) ;D

Some people aren't learning to be helpful or less snarky though.... :-)

It's always just a SMOP, but my hunch is that Swift is way too integrated with the Apple runtime to not use it. I wouldn't be surprised if they have limited support for ObjC in their Swift Linux port, but happy to be proven wrong. (I started porting the Apple runtime to Linux last year but ran into the aforementioned linker issue. A way around it is to change the Apple runtime ABI on Linux of course.)

Is Apple's dialect supported, or only the NeXT version? (Which, TBF, has absolutely nothing wrong with it and is a great language; but it's not what people are writing these days.)

Objective-C 2.0 via Clang and the GNUStep ObjC2 runtime supports blocks, GCD, fast enumeration, declared properties, introspection.

I've been wondering for a while now why no one builds an Obj C backend for their apps:


Being able to deploy on Linux might be what it takes!

I long dreamed of using Objective-C to build server apps. The compiler support has been there for years (GCC, later Clang), but there's no standard library. All the stuff you would want from Foundation is missing: strings, sockets, file I/O, encoding support, threads, etc. It turns out you'd have to reimplement a lot from scratch. There's GNUstep, but last I looked at it, it was stagnant, and also very much geared towards GUI apps.

Swift on the backend would be even better, I think. Between all the languages that have evolved lately (including Go and Rust), Swift has struck me as the one that feels the most like my ideal language.

The Étoile OS folks have been working on top of GNUStep for a while: http://etoileos.com/

Here's a recent thing that came out of that group, which I'm interested to see in a server-side project: http://coreobject.org/

The best Web development framework of all time was WebObjects, which started at NeXT. It was basically Cocoa for the web. It was wonderful. When Apple bought NeXT they continued it for a while and ported it to Java to try to make it more cross-platform. That didn't really take: it was a commercial product, and despite Apple lowering the price dramatically many times and loosening the licensing restrictions, this was the point at which open source solutions like Ruby on Rails, despite being terrible in comparison, really started to take off because they were free.

There was a great community around WebObjects and it's a shame that Apple didn't open source it in a timely fashion-- they could have changed things quite a bit.

It is still used in heavy production; for instance, the iTunes Store is a WebObjects application, as is, I believe, the App Store.

At the time, Sun was working together with NeXT to bring Objective-C to Solaris (before Oak became Java), and got their base ideas for J2EE from WebObjects.

Agree. Feels like a reasonable language to get things done and still readable. Much cleaner in design than Go, and let's not even discuss JS.

It's exactly because of server support. I'd LOVE to write Obj-C on the backend; libdispatch and the GCD syntax alone make it worthwhile, not to mention ARC and the ability to write pure C inline (not in an unsafe block or the like).

The fact that I don't use OS X has been a barrier to getting better at iOS development. Hackintoshing has proven quite elusive, and the VMware and VirtualBox USB layers in Linux don't convince virtualized OS X enough to transfer apps over to my iDevices. I'm not convinced you can get to high quality by testing strictly on emulators.

If anyone has an old mac that can run modern xCode (you probably know what this constitutes more than me) and wants to donate it to a dedicated open-source developer for completely unspecified future projects, feel free to email me at (my handle on hn)@(googles email service). thanks

Xcode is written Xcode, not xCode. Thank you and good night. ps. if you have any friends who work in Apple retail, twice a year or so they have a big clear out of some of the internal use machines. You can pick up fairly good machines for very very cheap. They had 2009 white MacBooks for 170 bucks or so the last time.

The cheapest official retail option is the Mac Mini for $499. It's not a laptop but it will run the latest Xcode. Might be just the thing you need for that first app that makes you a bazillionaire.

This is a great tip. My main laptop is a 2009 white MacBook with an SSD I cannibalised from another old laptop. The only thing I miss is Civilization V, everything else is fine.

Doesn't Civ V run on Mac as well ?

Yes, but probably not on a 2009 MacBook.

> Hackintoshing has proven to be quite elusive

I've been using a hackintosh since 2008. It's literally never been easier to get one up and running.

So many people report success, but half a dozen attempts, with 20 years of computing experience to back me up, have yielded nothing but abysmal failure.

I don't know what kind of magical book of voodoo spells these people are using.

I wonder if the web is just biased towards success because generally the people who haven't succeeded have nothing to say.

If so, it would be nice to get insight on the whole picture here somehow

Skyline OS X is a great resource for getting a solid and simple Hackintosh install. I'd recommend trying the guides there.


It's not hard. Apple computers are still using commodity hardware - that is, you can buy very similar parts off the shelf to build your own system. The closer you get, the easier it is, but there are forums revolving around tweaks and updates to make just about anything work. Even AMD CPUs, which Apple has never used.

It _does_ take some tinkering, though, and if you aren't comfortable with that (or if your time is valuable enough), you should invest in the real deal. :)

If you have the right hardware it can work as well as a real Mac.

But some hardware is just not compatible and that's it, nothing you can do about it.

I have an old white MacBook running SL, but it won't run modern Xcode. It's great for doing things like Python dev, though.

So we can start writing articles about isomorphic Swift!

...I'll show myself out.

Apple needs to get the rest of OS X onto the server. They could build their own servers again (unlikely), or at least license OS X Server to Dell/HP/IBM/VMWare/etc..

Why when the aftermarket will do it for them?


OS X Server is really a shell of the former product. It used to be a great small business / design studio server, but it has lost most of its use-case and I really question its use outside of a few Apple Device Management scenarios.

Almost all the documentation and best practice expects Macs to be tied to an Active Directory / MS environment for management.

As a typical Web / LAMP host, OS X really is not that performant. Unix tools execute faster on a RHEL/CentOS box than on an OS X box (even if the OS X box has higher specs). MySQL is particularly bad if it hasn't been tuned, and most of the literature seems to indicate that OS X (or more specifically Mach) doesn't have the level of optimisation needed for server workloads.

That being said, if you're using the OS X frameworks, there can be some great value and exceptional performance (just look at the startup using Mac Pros as image manipulation servers).

I think Apple, who did do servers at one time, have found the consumer side of things more profitable. By not doing servers they also seem less hostile to server-vendor partnerships, and I believe that is more profitable for Apple than going into servers themselves.

Now, with the move towards bare-bones Docker-style VMs in which the app is a service on a server, maybe some form of runtime that enables such applications would be useful. Maybe.

But if they went into the server market, the trust built through partnerships with server-focused vendors would diminish. Though VMware would probably be just as happy, if not more so, and that would be about it. IBM, given the arrangements they have now, works with them too well to upset, I feel.

Back when they did servers they did it for two things, render farms and studio networking.

Render farms have largely been supplanted by Linux, and studio networking can either be handled by a Mini on the shelf or a Windows server in the closet.

Very true, and probably more so when Jobs was somewhat into computer-rendered animation. That, plus attention to a good working audio driver setup and APIs, has helped keep Apple in the audio DAW industry. As you say, much has changed, and I can see why they shifted focus. Like many, though, I do wish they at least had a server flavour, especially given it would not take much change from what is already there. But a consumer messing up and a server messing up are vastly different in the support costs of making sure everything is 100% right, and there are fewer customers.

Still, they do like consumerising things, and who knows: a personal home iCloud that sits in your house would perhaps be the most likely server offering, if any, since it targets consumers. If any route could happen, it's maybe that one.

Don't they already offer that via their Airport range of routers? I seem to recall they offer file storage/backup, perhaps even with a iCloud link.

I really wish the AirPort Extreme routers would expose an iTunes Home Sharing service, so that those of us with laptops and way too much music/video could stream to AirPort Express/Apple TV/iTunes Home Sharing without having to have a machine dedicated to the task.

I know you can add discs via the USB port, and with a hub can have a fair few, and can network share and back up to them. But I'm not fully an Apple user; I did look but found nothing to enable sync with iCloud beyond a desktop option, though I may have missed something. I did find this: https://support.apple.com/en-us/HT204618 which pertains to remote desktop and related functionality done via your iCloud account. So maybe, though you still need to access it via iCloud, which, if it's just authorisation, would be OK if you had a fallback option.

I agree, with the type of growth Swift has shown in the past, there is going to be a big demand for Swift developers now. Swift could possibly rule the mobile/web. I have already added it on my #TODO list :)

I doubt Swift will rule the web. Javascript has yet to be dethroned by any other language coming before it, after all.

If you ask me, no chance to rule the web. JavaScript has been the standard for a very long time now. Because JS is not a flawless language, dozens of JS precompilers have been created by now, so the competition is pretty rough. Swift tries to be functional and provides you a way to use immutable data structures, but it is neither functional nor opinionated enough to force immutability. With that said, I personally don't think it brings something new to the already existing web ecosystem. It sure is a nicer alternative to Obj-C though.

When I was referring to web, I mainly considered the web-backend on web-servers. JavaScript is the standard for frontend, no doubt.

I'm going to double down on my bet that Apple will position Swift as the lingua franca to replace javascript. Front end and backend.

Good luck with that. One of the reasons JS is the lingua franca is that every browser comes with a JS runtime, all you need to get started is a plaintext text editor and all you need to publish an "app" is cheap/free web space.

Mark my words: Swift won't replace JS, not even PHP.

EDIT: Also, what's the point of writing Swift code in Linux if you aren't developing for iOS? As a non-iOS developer nothing at all compels me to learn Swift. It doesn't bring anything to the table that JS, Python, Ruby etc don't already do better. Except iOS support. HN tends to forget that just because iOS is the default in the US it's not internationally. Apple is a luxury brand, not a commodity one.

The point on having Swift on other platforms means you can write software for things other than iOS/OSX. It's a fun, modern language that brings A LOT to the table that js/py/rb don't (type checking and the ability to distribute binaries, for starters).

But programmers don't want better languages[0]. Programmers mostly just want what they already have. You can't overcome that kind of resistance just by saying "look, here's a language that's more fun and modern".

Programmers say they want better languages, but good luck trying to convince them to switch to a new language just by its merit. The only tangible benefit of Swift is that it can be used instead of Objective-C for iOS -- which is important for iOS because Objective-C poses an even bigger hurdle by virtue of its unfamiliar syntax. Swift wins on iOS because it competes with a language nobody wanted to learn to begin with.

Even as far as familiarity goes, the only thing Swift looks familiar to (based on first impressions) is Ruby, except it uses a more familiar C-like syntax (i.e. braces). While Ruby programmers are extremely visible (especially in startup/valley crowds) there aren't that many of them -- not to mention that some have already moved on to Rust or JS.

0: https://www.youtube.com/watch?v=JxAXlJEmNMg

I'd counter this

> But programmers don't want better languages

With TypeScript and/or ES6(7)+Babel. These have a pretty healthy userbase between them and there's a lot of excitement around ES6 which leads me to think that developers do want better languages.

Also... I want a better language. I'm not sure how you'll convince me otherwise and I don't think I'm alone in this sentiment.

Do they want a "different" language... maybe, maybe not. I suppose my comment about Apple "positioning" (and I'm careful to use that word) Swift as a replacement to Javascript can make a lot of sense to Apple and people who develop for iOS and OS X.

Firstly, Apple doesn't need buy-in from other browser vendors. They can supply a Swift runtime with Safari (desktop and mobile), and I think it would be no more difficult to write a Swift-to-ES5 transpiler than an ES6/7-to-ES5 transpiler, meaning that web devs can be agnostic about which browser their web app runs on and demote Javascript to simply being a build task.

Secondly - Swift is going to have a lot of developer support. Because native apps are increasingly dependent on a corresponding Web API, I think it's reasonable to expect people to be happy about sharing code between the front and back ends of their apps. I think Node's biggest advantage over any other server-side language is that I can use Node packages in both places.

Thirdly - I don't think Apple really cares whether "web developers" have an issue with this. They care about providing tools and solutions for people who develop for and use their platforms. If they can make the case that Swift is going to be "better" than Javascript by some metric we don't know right now, then that's what they'll do. Everyone else can go cry in a corner.

Also, Down-votes? Really? It's not the most radical idea in the world and I fail to see how someone could take offence from it.

PS: Apologies, but I don't have time to watch that video right now, so I hope I haven't got the wrong end of the stick.

I agree, in part.

Programmers do want better languages, but they are extremely allergic to friction.

Parts of ES6/7 are relatively low friction thanks to Babel. But even so moving the majority of JS devs to modern JS is a very slow and long process.

Swift may make a dent if Swift-to-JS becomes a thing, but compile-to-JS languages are mostly a failed experiment (see the lack of success of "serious" languages like Dart or the decline of CoffeeScript). The advantage of Babel over other to-JS compilers is that you're just compiling JS to JS and it carries the promise that one day you won't need the compilation step at all.

The "JavaScript is the ASM of the Web" idea, while enticing, has time and again failed to come true. I don't think Swift can succeed where others have failed, even if it would allow iOS developers to write web apps.

At the risk of having to swallow my words, I don't think Swift-in-the-browser poses any risk to JS. I also don't think Swift-on-the-server will have a major impact, although I can imagine iOS-heavy shops wanting to use Swift when developing server APIs for their apps.

I agree that Apple will carry on regardless. Apple is all about controlling their ecosystems, so I wouldn't be surprised if they try to pose Swift as an alternative to JS (much like their unilateral CSS extensions back in the day).

About the video: basically Douglas Crockford (of JSON fame and JSLint infamy) argues that history has proven that developers (as a whole) favour similarity over "betterness" when it comes to the success of programming languages. They're more likely to pick something that is nearly exactly like what they already know than something that requires them to adjust their mental model, even if it is superior in nearly every way. The entire series is worth a watch IMO, and helps appreciating why and how JS got to the point where it is today.

Following your logic, Javascript just started growing on the server-side due to a number of frontend devs "just wanting what they already have". There's a huge number of mobile devs out there that have to, one way or another, write server-side code. Guess what'd be their language of choice then (when the Apple-backed open-source Swift is out)?

There's vastly more people who write JavaScript than people who write iOS apps in Swift.

I agree that server-side Swift may become a thing for iOS developers writing their own server APIs, but I'm only inclined to believe that it will at best become yet another alternative alongside Node, Ruby, Python and PHP. After all, Swift will only be a logical choice if you're already using Swift as the primary language -- i.e. only if you're writing native iOS apps.

For the "Swift in the browser" narrative it's also important to remember that the web is not just JavaScript. Even if you can abstract away DOM manipulations as in React, you still have to be aware of HTML and CSS and how everything comes together. Replacing JS with Swift only provides another layer of indirection, you still have to be a web developer to write web apps.

Considering how previous attempts to pretend you're not actually writing JS worked out (Ruby developers using CoffeeScript, Java developers using Dart or GWT, Python developers using PyJS, etc) I don't think JS in the browser is going anywhere, no matter how much some people would wish it.

Too little too late. Apple could have got my attention if they had done this from the start, but at this point I find it hard to get excited about this. Half the reason I find myself drawn to a new language is the culture and community surrounding it. You might think this seems silly at first glance, they're programming languages, not fraternities. But hear me out.

Golang is a pragmatic crowd. Go into #go-nuts on freenode, you'll notice that discussion rarely deviates from solving problems. Many links to the go playground. Go, from the beginning, shaped their community in this way. Consciously? Who knows. But the community turned out the way it did and I believe that's a result of their start.

Haskell is a computer scientist crowd. People write in haskell because they enjoy the functional paradigm and haskell is a purely functional language. Discussion on freenode? Very dense. People are expected to grasp topics easily and the community initially gives everyone the benefit of the doubt that they are capable of understanding it until they ask for more help.

When I look at Swift, the only thought I have is that it is the new iOS language. I don't believe they'll be able to deviate from that. I don't see myself making iOS apps, so I don't see myself ever writing a single line of the language. When I first looked at Swift, my impression was that it was a weird and somewhat-functional language that had a few unique things. Maybe I would have taken an interest in it. But at this point, I don't want to have to wade through a bunch of iOS-specific questions to get what I need. I don't want to have to deal with being a beta tester for their Linux implementations. I don't want to use the language when I'm probably going to be using it in a vastly different way than the rest of the community is. My interests are going to be second class.

It's not necessarily set in stone, but I think it is. In a world where there's a new language every day, I just don't see why I'd go with swift.

Hey, look me up in a year and explain why you were wrong.

Go has been out for 6 years and has less than 10,000 questions on Stack Overflow: http://stackoverflow.com/questions/tagged/go

Swift has 37,000 questions in its first year: http://stackoverflow.com/questions/tagged/swift

I'm a fan of Go. I built my websites in it and I've written a few small apps.

However, you really are overlooking how much of a difference the bigger Swift community will be.

All bets are off if Google officially supports Go on Android.

Anyway, 1,000,000 Swift developers will change everything.

I think part of the parent comment's point was that Swift does/will have a lot of users. Right now those 10,000 Go questions cover a lot of server related issues; the 37,000 Swift questions are almost completely iOS questions.

If I started writing an HTTP server in Swift the day it comes out I won't find much help but I'll have a mountain of unrelated answers to filter. Essentially Swift's adoption for iOS apps doesn't help and might hurt efforts to use it elsewhere.

Look at Objective-C: extremely common for writing software for Apple; almost non-existent elsewhere, despite the fact that it has never been limited to Apple.

Objective-C never caught on outside of the Apple/NeXT ecosystem because its biggest advantages were in the application frameworks, not the language itself. By the time iOS made Objective-C popular there were entrenched alternatives in the C++, Java, and .NET ecosystems. Swift is facing the same challenges, but against a newer generation of competitors that are much less established. It's probably still an uphill battle for Swift to gain outside adoption, but it's not climbing a cliff.

Not to mention Swift the language itself, which is basically Rust with an arguably easier memory-management model (all ARC) and Scala-like pragmatism (an OOP model built in). Coming from ObjC, developers are showing a high level of interest in Swift. I might look it up when it comes out for Linux!

I have carefully compared the two languages and I do not see more similarity than between Go and Swift or any other language. You have yourself cited the different memory management models, and this is the main characteristic of Rust. The sole common feature is the use of LLVM.

Why would I ever care about Go for Android when I can write apps in Kotlin? Kotlin is fully interoperable with Java. I don't have much experience with Swift, but Kotlin is so nice. It has the best features from numerous languages such as Ruby, C#, etc. It's being made by JetBrains, who make the core of Android Studio, so it will have support. Kotlin is also nearly as fast at runtime as Java, with a tiny 200kb runtime. Almost all the magic happens at build time, so your build time will be a little longer. Kotlin also works with any existing Java library, even annotation processing libs.

Again, why would any Android developer choose Swift? Kotlin also runs fine on the server and it even compiles to Javascript.

If I were a startup doing an iOS, Android, and a backend I would do either Go or Rust on the backend. You will be able to distribute a native lib with your Android or iOS that shares network and model logic. Go and Rust both are planning on supporting cross compilation to iOS and Android.
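As a sketch of what that shared layer could look like: model and validation logic in plain Go, compiled as a C-callable library for both platforms (assuming the c-shared build mode both projects have been working toward; the `User` type and `ValidateUser` function are invented for illustration):

```go
package main

// Sketch of a shared model/network-logic layer in Go. The idea is to
// build it as a C-callable library for iOS/Android, e.g.:
//
//     go build -buildmode=c-shared -o libshared.so
//
// This file keeps a main() so the sketch also runs standalone.

import (
	"encoding/json"
	"fmt"
)

// User is a hypothetical shared model type.
type User struct {
	Name  string `json:"name"`
	Email string `json:"email"`
}

// ValidateUser is hypothetical shared logic: parse a JSON payload and
// report whether it is a well-formed user record. Both the Swift and
// the Java/Kotlin sides would call this instead of duplicating it.
func ValidateUser(payload string) bool {
	var u User
	if err := json.Unmarshal([]byte(payload), &u); err != nil {
		return false
	}
	return u.Name != "" && u.Email != ""
}

func main() {
	fmt.Println(ValidateUser(`{"name":"Ada","email":"ada@example.com"}`)) // true
	fmt.Println(ValidateUser(`{"name":""}`))                              // false
}
```

The view layers stay native; only the plumbing underneath them is shared through the C interface.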

That would leave you with Kotlin/Groovy/Clojure/Scala for the view layer of your Android app if you choose not to use Java. Which is a good idea, since there is no indication from Google that Java 8 will be supported, which means no lambdas unless you use an alternative JVM language. On iOS you write your view layer in Swift. Both apps use the shared binary library.

I found this comparison between Swift and Kotlin interesting, and the point that it would make switching between Android and iOS development easier:


Kotlin is positively awesome, but when I tried it (admittedly last year) it didn't seem ready for production yet. It all worked well as long as stuff was kept simple; then I used Realm.io, wrote some unit tests, and got all sorts of NoClassDefFoundError exceptions, dexmaker errors, etc. Long story short, I couldn't fix it. As I said, it's been a while. I hope things have been improving and continue to improve. The language itself is superb.

It's pretty damn close to 1.0. They just released M12, which adds support for interop with Java libs that use annotation processing, such as Dagger.

With the just released support for annotation processing Kotlin is now working with e.g. Realm.

I've just been playing with it some in the last couple of weeks. I get the impression that they're still iterating pretty rapidly on it - at least partly because the official practice repo uses trait all over the place, which they apparently deprecated and renamed to interface.

I will say that it looks very promising so far, and it seems to have the best of C# and Ruby (assuming you're sticking with compile-time type checking), plus a few more tricks besides. I'm not that up on JVM languages, though, so I'm not sure it's definitively better than any of the other choices.

If you use Scala for your view layer, why would you even want to use Go for your back end?

Very true. I am starting to not see the point of Go. It's about the same performance as the JVM languages, and the JVM is just as easy to deploy as a Go binary. On the JVM you get to choose between Clojure, Scala, Ruby, Groovy, Kotlin, and more. Personally, I plan to stick with JVM languages plus Rust.

Here is how to see the point of Go:

Open the Rosetta Code web site. Choose any algorithm you are familiar with. Put the Go version and the Java version side by side. Compare the number of words and lines.

Compare the Maven hell with how Go solves the same problems.

I bet my Clojure or Groovy version will be simpler. I agree Scala is super complex :). Maven does suck. Gradle wraps it and does not suck. Probably the best build tool I have ever seen.

So, a more succinct Java, that can't leverage the JVM ecosystem? Why is that better than Scala, which can also be more succinct than Java, but can leverage the JVM ecosystem?

Go is enormously simpler than Scala.

(I also don't agree that the deployment story for the JVM and Go are similarly complex, but that's a different argument).

> Why would I ever care about Go for Android when I can write apps in Kotlin

Power consumption.

Since Android Java is compiled to native code at installation time, I don't see any benefit to using Go in terms of power consumption.

I guarantee you could write a more power efficient app using the Android SDK versus Go in the NDK.

The numbers of results there vary wildly upon refreshing the page repeatedly. I'm getting ranges of 39k-79k for Go, 12k-42k for Swift, and 3k-9k for Rust. Also, in the absence of knowledge of what precisely is being measured (e.g. does it include forks?), we should probably only interpret these numbers as order-of-magnitude comparisons.

Swift repos will pass Go within 12 months. I think most iOS developers were holding off on Swift. Heck, Xcode and Swift only became usable earlier this year.

Most people aren't writing open source iOS apps.

No but there are a lot of components.

I think that doesn't support your hypothesis: 31k in one year compared to 70k in 6 years implies a lot more momentum for Swift than Go.

It seems to me that Dart would be a more obvious choice should Google choose to push a new application development language for the Android platform. It's closer to JavaScript and has more GUI/visually-oriented libraries. Plus, it's a no-brainer to box such apps in a HTML5 container for iOS/Windows.

> Anyway, 1,000,000 Swift developers will change everything.

All of these Objective-C developers changed nothing for the non-iOS crowd. Why would Swift be different?

Linux support.

GCC has supported Objective-C since at least version 2.0, released in 1992

(based on http://www.informatica.co.cr/linux/research/1992/0222.htm)

This is very, very true. I remember trying to look into Objective-C years and years ago and trying to get somewhere on my Linux box, but due to the lack of standard GUI libraries etc. it seemed a non-starter.

Of course, all bets are off once Go becomes a first class citizen on Android.

This cannot happen soon enough.

I'm not saying this because of anything to do with Apple's Swift announcement. I'm saying this because it'll be a huge benefit to Android.

> I'm saying this because it'll be a huge benefit to Android.

How so?

The language has a weaker type system than Java, doesn't support exceptions and requires error checking every other line, and has very poor tool support (because the compiler was not designed with IDEs in mind).
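For readers who haven't used Go, the "error checking every other line" point refers to its convention of returning explicit error values instead of throwing exceptions; a minimal sketch of the pattern (the `parsePort` function is invented for illustration):

```go
package main

// Sketch of the "if err != nil" pattern being criticized: every
// fallible call returns an explicit error value that the caller must
// check, with no exceptions to propagate automatically.

import (
	"errors"
	"fmt"
	"strconv"
)

// parsePort is an invented example: parse and range-check a port number.
func parsePort(s string) (int, error) {
	n, err := strconv.Atoi(s)
	if err != nil {
		return 0, fmt.Errorf("not a number: %v", err)
	}
	if n < 1 || n > 65535 {
		return 0, errors.New("port out of range")
	}
	return n, nil
}

func main() {
	for _, in := range []string{"8080", "http", "99999"} {
		p, err := parsePort(in)
		if err != nil { // the same check repeats at every call site
			fmt.Println(in, "->", err)
			continue
		}
		fmt.Println(in, "->", p)
	}
}
```

Whether this is a flaw or a feature is exactly the disagreement in this thread: nothing propagates automatically, but every failure path is visible at the call site.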

Except for maybe adding a few developers who refused to write code on Android because of Java, I really don't see what Go would bring to Android.

You shouldn't be getting down-voted. The idea of anyone rewriting the entire Android Framework in Go when you can use better languages like Kotlin, Scala, Groovy, Clojure is asinine. Go will compile to Android architectures so you could do shared code in Go between iOS and Android like a C++ lib. That makes sense.

> That would leave you with Kotlin/Groovy/Clojure/Scala for the view layer of your Android app if you choose not use Java

> when you can use better languages like Kotlin, Scala, Groovy, Clojure is asinine

You seem to be piggybacking an aspiring JVM language (Kotlin) on top of three other alternative ones that have already made it (Scala,Groovy,Clojure). Because that trick's already been done before with them (i.e. Groovy piggybacked on the quality of Scala and Clojure), it makes your list of examples of alternative JVM languages seem disingenuous.

Sorry about that. They all have tools that make them work on Android. Kotlin is a reasonable option, imo. It doesn't affect app startup, and its runtime size is so much smaller compared to the others. Clojure on Android adds 3 seconds to startup time, for example. I haven't used Scala, so I am not sure how nice it is for Android.

Did anyone say anything about rewriting the entire Android Framework? Nope. It was about Go becoming a first class citizen on Android. That doesn't mean it can't coexist with Java for someone who needs a fancy IDE.

It already IS. You can compile Android NDK libraries with Go today. It isn't that useful, because the NDK isn't hooked up to the Android framework, which means you don't get lifecycle events. There is a whole Android framework, written in Java over the past decade or so, that abstracts this.

Type system strength does not correlate with language adoption. Otherwise nobody would use C, Python, PHP, JavaScript, Perl, or Ruby; C++, C#, and Java would be outsiders; and almost everybody would use Ada, Haskell, or OCaml.

I agree with your last sentence, however.

Unless all those devs with apps on Android want to port all their code?

While that would be nice, every indication is saying that Google has no interest in this right now.

>Too little too late. Apple could have got my attention if they had done this from the start, but at this point I find it hard to get excited about this.

That's just, like, your opinion, man.

If Swift is Open Source it's gonna be a huge thing, for two reasons: (a) it already attracts millions of developers because it's the suggested language to develop iOS apps in, and (b) it's a nice modern language that plays in a very hot niche (with languages like Go, Scala, Rust etc).

>When I look at swift, the only thought I have is that it is the new iOS language.

OTOH, tons of people and HN and elsewhere expressed their liking for the language and how they wished it was open sourced so they can use it elsewhere too.

It seems to me that Microsoft and Apple are -- quite consciously -- in a war to own the next ubiquitous language (the precursors being Javascript, PHP, Java, C++, C, Fortran, and COBOL). This is the language that Comp Sci students will spend their time in, that kids and hobbyists will tinker with, and so on. It's a huge deal -- the stakes are enormous.

C# was in good shape and got better with Microsoft's recent cross-platform moves; Swift has started way behind, but is doing very, very well.

Luckily for us, the two languages are not worlds apart. Crossing from one to the other won't be a terrible thing -- so lean back and enjoy your popcorn.

Very true. I wonder if they'll both earn the same? C# seems well paid here in the UK.

It'll be interesting to see how many people make that transition to server-side development. I mean, we now have Node for server-side work after JS became ubiquitous because of web browsers.

And Swift doesn't have the stigma of being a "bad" language when compared to JS.

> And Swift doesn't have the stigma of being a "bad" language when compared to JS.

Except Swift has the stigma of being a "dumbed down" Objective-C.

are you trying to compare JS using Node.js with Swift?

> it already attracts millions of developers because it's the suggested language to develop iOS apps in

Which I pointed out isn't always a good thing. You know how Rails kind of overshadows Ruby because so much use of Ruby is Rails-related? Well, I think Swift will face the exact same fate. I'm not interested in developing iOS frontends or iOS backends in the language. I firmly believe the Web will win, and I believe it's going to happen sooner than most people do.

And I firmly believe that Google, Apple, and Microsoft are scared shitless of that idea. Why do you guys think we've seen such a recent renaissance in programming language development and availability? From the years 2000 -> 2010, what did these companies put out? Microsoft had .NET, just as they have since the dawn of time. As far as I can google, it seems that Google put out absolutely nothing. Apple revived Objective-C. Absolutely no progression or innovation whatsoever.

Cut to the last 5 years. Microsoft is dabbling in functional programming in the form of F#...in spite of the fact that their developers are in enterprise and presumably have no interest in this voodoo functional programming. Just in these past few months, they released Visual Studio and .NET on Unix! Meanwhile Apple's making Swift, a language that is simple enough to be used by hipster Apple developers who make iOS apps but also complex enough to have some decent functional elements. Google develops Golang and Dart? They decided to go with Java for android, but now they're interested in spending money developing languages with names on them like Rob Pike and Ken Freakin' Thompson?

Ballmer has been quoted as calling linux "cancer". Now Microsoft is hosting linux VMs for cheap and paying people to develop tools for unix users. When apple launched the iPhone in 2007, they had only 8.1% of the PC market, and yet they opted to risk the success of the iPhone in exchange for maintaining a closed ecosystem and require iOS developers own a mac instead of providing development tools for Windows. Now, when that risk turns out to have paid off and they have a dedicated community of developers, they're suddenly deciding multiplatform support for Swift is important?

Web is winning.

What happens when we can make our apps with ReactJS and cut Google out of their android adsense money because it's just a webview? What happens when we can make our apps with ReactJS and suddenly the iPhone advantage is nil because every app is available on every device and they look exactly the same on every device? What happens when we interact with computers solely through the web and mobile, and Windows continues to lose PC marketshare and maintain their pathetic 2.5% with Windows Phone?

This is embrace, extend, extinguish at its purest. Microsoft wants you developing in their languages using their tools because they know that .NET isn't really licensed under free licenses, and since that means they control where applications run best, they hope they can outperform the web and get developers on their platform. It's certainly a stretch, but they're scared.

Apple is operating on the same principle, but theirs is actually permissively licensed. It makes no difference; you can be sure that Swift will be most convenient for iOS and OS X developers. It's a first-class product for Apple devices, but they say they want the community to help them provide support for other devices? Have fun relying on that.

I don't have Google totally figured out. Their goal is probably to bring Golang to android. They're investing in providing a very unified Android experience enhanced strongly by the power of Google Now and their google services, just look at recent android releases. If I can write a backend in golang, host it on google services, and write the client side in go as well, it's a very attractive environment. To be honest, when it comes to Google, I have more faith that they can, sometimes, act altruistically. I don't think Go was an altruistic action, but look at how they give back to open source via GSoC. No one else does anything close. The only big name I think is better than Google is Mozilla.

Speaking of Mozilla, why do you think they're developing Rust and Servo, with lots of experimental rendering techniques? They're betting on web and at being the best at it.

TL;DR: They're giving you new languages and development environments because they are holding on for dear life, and I'm not willing to accept being charmed into closed development environments. The state of tech is a direct result of truly free software that is dictated by the community and the community only. Accept nothing less.

"Web is winning."

If you mean "everything but HTML-based web sites", that's accurate. If you mean the actual Web sites, I really don't see it. The Web is huge, so isn't exactly dying, but it certainly isn't winning.

Native mobile is "winning", in terms of being usually a far superior UX, and by sheer numerical demand. Native still often (thankfully) uses architectural elements of the web (URIs and HTTP) and the UX of Hyperlinking. But the HTML web as we know it hasn't kept up well with the native experience. With the massive investments in IoT being laid down along with VR/AR experiences coming out of Facebook, Google, and Microsoft, this trend will likely only continue, unless someone releases a new innovative hypermedia format & client that catches on.

"They're giving you new languages and development environments because they are holding on for dear life, and I'm not willing to accept being charmed into closed development environments."

I applaud your commitment to openness, truly, but I suggest you need to look at the global market a little harder. These companies are not holding on for dear life. They're growing in power. I watched the WWDC Keynote. The crowd just about had kittens when they announced Swift will be open sourced. Twitter nearly exploded. These are anecdotes, yes, but the numbers are backing Swift's meteoric rise.

Apple has a vested interest to promote Swift far and wide, perhaps even going so far as Sun did with Java, with the hindsight of their mistakes. Whereas Google could have done this with Golang but seem to be keeping a lid on it for unknown reasons. Best theory I have is that Apple is a product company and Google is a technology company. Golang is making good headway as a server-side language mainly by several companies that aren't Google.

> Native mobile is "winning", in terms of being usually a far superior UX, and by sheer numerical demand.


I could type a URL to anything here, and you could view it.

If instead I hide my content behind an app, I can't deep link to it. The likelihood of you installing the app to see the content is way lower. You need space on your mobile device to install my app, potentially a password entered to install it, and it needs to be compatible with your device. You need to approve my permissions. The content of my app isn't indexed by any search engine. You can't bookmark where you were.

I'm not at all convinced that native mobile has "far superior UX." Sure, aspects of it, no question.

And sheer numerical demand? Again, I'm not so sure. Actually, I'm quite sure you're wrong.

Why would I prefer to type a URL? I can just as easily get a URL for the app in the app store and have it right there. Having the app stored on my device is an advantage, unless you like your web apps downloading in full even on a crappy mobile connection. And you really want the content of your app indexed by an external search engine? I like being able to control the permissions my app has. Or does your web app not ask for permission to use your location, camera, or microphone? I don't need to bookmark where I was in the app: I open it and I am there at once.

You don't want to type URLs, especially on mobile, but the fact that all the items on the home page of HN are links to web pages and not to screens inside apps should hint at a definitive advantage of the web over apps, and at why having a web site is mandatory while having an app is optional.

If the web and apps were in the same competition, I'd say the Web can't lose, but I don't believe they compete. They serve different purposes with some overlap (think Google Docs, the web site and the apps).

> Why would I prefer to type a URL?

You can just click on URLs if I type them. Maybe if you go to your local library, someone can give you a demonstration of how the internet works.

> I can just as easily get an URL for the app in ap store and have it right there.

You're ignoring the deep-linking I'm talking about. Not just the app (like cnn.com), but to actual CONTENT on it, like some specific story.

> Having app stored on my device is an advantage, unless you like your web apps download in full even on crappy mobile connection.

Web Apps can store most of their content in your browser, now. So additional views are not downloading the full app.

> And you really want the content of your app indexed by external search engine?

Yeah, I really want content I produced to be indexed by search engines, so users can actually find it. Not 100% of the time, sure, but I'm in control of that. Explain to me how I can possibly get a search engine to index my content if it IS in an app?

> I don't need to bookmark where I were in the app: I open it and I am there at once.

If you want multiple bookmarks, then no, what you're saying isn't remotely true. If you want to share a bookmark, no, this isn't possible.

Or are you just being argumentative for the sake of being argumentative?

It depends what the app is for. If it's for viewing content over HTTP then yes a native app probably is a dumb idea (unless the platform has wildly different browser capabilities, like on Android - you never know what you're going to get so have to write a dumb website).

However, if the app is for something useful (and not just a content viewer) then native is far superior.

Lets say I'm searching for a house.

If I go to redfin.com on my phone vs the native Redfin app, there is a distinct difference in UX. The native app far surpasses the mobile site in terms of performance and usability.

Now this is one example, but the same can be said for many sites whose functionality isn't just displaying static content.

I feel like people are ignoring my fundamental point -

There are things the web can do that native currently sucks at. Sometimes those things are really important to me.

I'm not saying web is always better than native. Far from it. But I am saying web is sometimes way better than native. Even on mobile.

I am a huge proponent of the web architecture, I just think HTML is dying from the politicization of the standards process and resulting slow pace of innovation.

"If instead hide my content behind an app, I can't deep link to it. "

This is not true. I enter a URL into my iPhone's Safari, and it deep links into the app associated with the domain. We just watched a WWDC keynote that showed plenty of deep linking between apps, from Siri, etc.

"The likelihood of you installing the app to see the content is way lower..."

This is also contrary to the data - people love apps; they download and use apps like mad. Over $100 billion in revenue is expected from mobile apps this year. That's 3x the Hollywood box office, 10x book publishing, and bigger than PC software. The only thing it doesn't eclipse yet is packaged/server software.

Installing and using apps is proving to be less effort than using the web on a mobile device: I have to do it once, vs. dealing with a slow web browser every time I use the website.

This isn't to say people don't use the web on their phones, it's that if an app is available, people will pick the app.

"Potentially a password entered to install it... You need to approve my permissions"

Same on any website I have to sign up with via form or OAuth2.

And if an app has in-app purchases, I do none of that (just sign in on the store or use my thumbprint).

"The content of my app isn't searched in any search engine. You can't bookmark where you were."

That's interesting, most of the apps I use are searchable in the app, or on the Web. That doesn't mean I'm using a web browser to do most of the interactions.

Again, I don't think the Web is going away, just that HTML is becoming the new command prompt. That might change if something awesome comes out via open hypermedia. Doesn't look like it though... given how rich experiences are getting and how political the HTML standards process is.

"And sheer numerical demand? Again, I'm not so sure. Actually, I'm quite sure you're wrong."

By what measure? I mean, I don't claim infallibility here. I'm just saying that (a) mobile smartphones are quickly becoming the most common device on the planet, more common than a toothbrush, car, or toilet; (b) all of these support apps; (c) almost every company with a service or content has an app; (d) almost every user will prefer an app over a website; and (e) thus almost all the money spent on consumer software is moving to native mobile. I'm following the money and the usage figures.

> We just watched a WWDC keynote that showed plenty of deep linking between apps, from Siri, etc.

True, apps are beginning to figure this out. But in general, I can easily share a link to any web page. Sharing a deep link into some content in some app is nowhere near a "solved problem."

> This is also contrary to the data - people love apps, download and use apps like mad.

Really? If I share a link to a CNN or HN article, you're less likely to view that? I don't think you're remotely correct. It depends on the kinds of content, I guess. I'm just saying there are cases where web wins huge.

> Same on any website I have to sign up with via form or OAuth2.

Yes, each of us is inventing situations where the other is wrong. But I think you agree with my fundamental premise there are times when the web utterly destroys apps.

> I'm following the money and the usage figures.

I'm sitting here typing to you on ycombinator.com. After spending time on reddit.com. facebook.com. plus.google.com. cnn.com. Yup, there are app versions of those, but I sit at my desk for 8-10 hours a day, and my desktop experience is WAY better than my mobile experience...

I'm not saying web is better, I'm saying web is better at some things, still.

"Really? If I share a link to a CNN or HN article, you're less likely to view that? I don't think you're remotely correct. It depends on the kinds of content, I guess. I'm just saying there are cases where web wins huge."

I didn't say I would be less likely to view the link if it was a website, nor would most people avoid it. The web architecture (linking) is alive and well.

I'm saying that many websites have an app link at the top of their content once I click through, and if I found myself visiting that site often I would click "Get" and grab it.

Then future links would not open my browser, they'd open the app.

"I'm sitting here typing to you on ycombinator.com. After spending time on reddit.com. facebook.com. plus.google.com. cnn.com."

Right - my habit for many years as well. But IME most people use app versions of those sites on a mobile device (except HN - and the UX suffers for it)

"Yup, there are apps versions of those, but I sit at my desk for 8-10 hours a day, and my desktop experience is WAY better than my mobile experience..."

Here we fundamentally disagree. The mobile experience FAR exceeds the desktop experience for most casual computing tasks. I use my phone and iPad way more than my laptop for reading and replying; I use my laptop more for coding and system administration.

> This is also contrary to the data - people love apps, download and use apps like mad.

I just think you and I are talking about way different things...

If I look at the HN main page right now, I see these URLs:
I could easily see myself spending 15 minutes reading EACH of those webpages.

THERE'S NO FRICKIN WAY I would install apps from each of those places, in order to read their one article on HN this morning. NO WAY.

Right. The web was originally meant for, and is still intended for, HTML document sharing. Thankfully, HN is a basic, sensible site that doesn't cause cancer of the eye. Let most web developers loose on it and it would.

That said, as the parent suggested, when it comes to opening my Gmail via Safari or the app, like everything beyond a simple document, I'm going to pick the app every time.

...except when I'm on desktop.

GMail via web, every time.

That will change soon with Windows 10. If you're on Linux you have no choice. And if you personally had to maintain Gmail on the web, you couldn't do it yourself; enormous time is put into an application such as Gmail. So suggesting you'd build Gmail on the web vs an app is disingenuous. It's quite an edge case, for many reasons.

If I had to personally maintain the GMail app, I couldn't do it myself, either. I don't see how this is remotely relevant.

I'm not suggesting I'd build GMail one way or another. I'm saying that since GMail web is available to me on my desktop, where I spend the majority of my day, that's how I prefer to use it. It's nice to also have an app, but that's a much smaller share of my time. And I think reading and typing in GMail is way nicer on my desktop.

Sure, it's an edge case, but so is Reddit, so is HN, so is G+, so is Facebook, so is CNN, so is Github, so is StackOverflow, so is xkcd, so is news.google.com...

I spend way more time on those sites on my desktop than I ever would in their corresponding apps.

Add all those edge cases up, not to mention all the websites those things link to, and it's a HUGE part of my day. Apps? Sure. I like em.

And sure, when I'm on mobile, any given content is probably nicer in an app. But sharing, mixing, quickly browsing... web wins hand-down for me.

I wouldn't even know how to take a page from my CNN app (unless they give me a web URL) and discuss it on my Reddit app. Do you?

Again you're back to what we've already established the web is ideal for: its original intended purpose, simple document sharing. It's been doing this since the early '90s.

While sharing info on Wikipedia is great, where the web fails is competing with native apps. Other than sharing HTML text documents (and there are projects working to replace that too), it doesn't get chosen over native apps.

Also, I'd still say that you'd come a lot closer to being able to maintain a Gmail-style app on iOS than you ever could on the web, as a one-man operation. Plenty of people do personally maintain native email apps alone on mobile. But unless you go with basic HTML or little to no features, I don't think one person can keep a web email client up to date for all browsers, all the time.

The reason you prefer using web apps on the desktop is simply because that's where you are all day: at a desk. If you were in anything but a desk job, it'd be the complete opposite. It's not an issue of superiority; it's a matter of convenience. For most people, that convenience is flipped and the experience is equal or better. Which is why native apps are winning.

Why do I get to target iOS only with my email client on the one hand, but I have to support "all browsers" on the other? Stop stacking the deck, ok?

> If you were in anything but a desk job, it'd be the complete opposite.

...and if I were a banana, I'd probably prefer sunshine. I'm saying there are cases for me where the web is way better, even on mobile.

You can't prove to me that's not true, because it is. It's like you're trying to talk me out of observational data on my own life experience.

I just said iOS as an example. I didn't mean to make it appear I was stacking the deck, you could use Xamarin and target all platforms if you wanted. That's still easier than supporting all browsers over time. All browsers over time with a complex app like Gmail? Forget it.

But the issue I was trying to convey is that you're an edge case and in an increasingly marginal pool as time goes on. Most people aren't sitting at the computer all day, and even if they do (we'll say so for sake of argument), they stay off personal email at work (a good idea). And they use their phone yet still. That's what I do. I'm at a desk all day as well, and never login to personal stuff on my work network/machine. That's where iOS gets its use.

For many of those, they're not apps, they're websites. Apps are for repeat use of something more complicated than reading documents. Usually websites with a lot of JavaScript are better off as apps.

LinkedIn I have an app, and it's usually better than the website except for some features they maddeningly haven't carried over. Dice too. The rest I would use a website.

This isn't contradictory to my point that apps are the future of how we interact with the web. In effect, apps are about a proliferation of user agents, rather than a single hugely popular type of user agent (the browser) plus a handful of busy singular agents (the crawler). I am not saying the Web is going away; I'm saying HTML browsers are becoming relatively less important. They're not going away, they're just not where the money and growth is.

Sorry, but you are too deep in your illusion. Your TL;DR is funny. How will you then explain the dozens of JS frameworks coming out every day? The web holding on for dear life? Please stop talking about things you know nothing about. Or at least give native development an honest try and see for yourself that the web is not winning anything, and web apps are just a huge pile of mess. Or, for a start, just try to make an app with ReactJS that looks the same on every device. Good luck.

PHP has a huge developer base, and Hack is wonderful. The OSS ecosystem for it hasn't really developed in a significant way. So I'm not nearly as optimistic as you are that Swift will do better.

Have you tried Go, Scala, or Rust? All of those languages are in leagues of their own; how can you compare Swift with them?

In my experience, in the wide world of programming languages Rust and Swift are really not very far apart. Swift's designers even cited Rust as an influence.

> Swift's designers even cited Rust as an influence.

Specifically, http://nondot.org/sabre/

     > Of course, it also greatly benefited from the
     > experiences hard-won by many other languages in the
     > field, drawing ideas from Objective-C, Rust, Haskell,
     > Ruby, Python, C#, CLU, and far too many others to list.

...you can say they looked at <add your favourite language here> as an influence.

> Golang is a pragmatic crowd

Why do people keep on saying that? Would people say JavaScript is for the pragmatic crowd because JSFiddle exists? No.

Go is a badly designed language, period. The type system forces devs to write runtime type assertions which should be the job of the compiler, via parametric types, if Go's designers knew a thing or two about types. The fact that Go dismisses 30+ years of type theory isn't pragmatism. It's ignorance.

Nobody would call PHP pragmatic, yet even PHP is more expressive.
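To make the complaint concrete, here is a minimal sketch (my own illustration, not from the thread; the `Stack` type is invented) of the runtime type assertion that a type-erased container forces on callers in pre-generics Go:

```go
package main

import "fmt"

// Stack is "generic" only by erasing element types to interface{};
// the compiler cannot check what callers actually put in or take out.
type Stack struct {
	items []interface{}
}

func (s *Stack) Push(v interface{}) { s.items = append(s.items, v) }

func (s *Stack) Pop() interface{} {
	n := len(s.items)
	v := s.items[n-1]
	s.items = s.items[:n-1]
	return v
}

func main() {
	var s Stack
	s.Push(42)

	// The type assertion happens at runtime; a mistake here is a
	// panic (or a false `ok`), not a compile error.
	n, ok := s.Pop().(int)
	fmt.Println(n, ok) // 42 true
}
```

With parametric types, `Pop` could return a concrete element type and the assertion would disappear, which is exactly the commenter's point.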

I had the same opinion as you until I started some code with it.

The truth is that I have never been so productive writing code. The friction between what is in your head and the code is almost zero.

It is true that I ended up writing a few functions that existed only because of the lack of generics, the type system, and other stuff being "built-in". But in the end, I think it happened an order of magnitude less often than I was expecting. I was wrong.

Your comment is interesting, it has made me want to look at the language, so thanks.

But I think that the "friction from what is in your head to code" probably betrays that you weren't settled in a previous language? I only say this because I have zero friction writing C++, and have to think mildly backwards when it comes to C# or Java, and definitely Objective-C. Even PHP is like simplified C++ to me, so I write everything like it is a dumbed-down dialect of C++.

I suppose it is just what some people find comfortable.

> But I think that the "friction from what is in your head to code" probably betrays that you weren't settled in a previous language?

I had the same feeling as the user you're answering. And I have used quite a bunch of languages and been quite productive with them (C, C++, PHP, Python, Java, Ada), but I've never felt as little "friction" with any of them as with Go.

I mean, that language is so dumb that you don't have to think about the right way to do stuff. It's obvious. Boring, sometimes verbose, but obvious. It doesn't stand in the way between me and the problem.

I was following Rust very enthusiastically. But at some point the language started getting in the way of the problem I was solving.

I am not only talking about the strict compile-time checks. I am talking about the extra thinking about the ten ways you could design something and how it should be done in "proper" Rust.

And the friction is not only in the language, but also in the standard library. A language that comes with XML/JSON marshal/unmarshal plus a quite decent HTTP client/server framework covers most of what I need to do.

I am sure that if Rust had included http/xml/json in the standard library it would be far more popular. I understand the voices that say "it does not belong there" or "the language is not targeted at that". Fine. But then I found myself having not much use for the language, even though I loved the idea/theory behind it, and I could not get comfortable thinking in it.

That's really good that it's so dumb that you don't need to jump through hoops to think of the way to do it. Clever! Just code and go!

It is not unicorns either.

* Transforming a collection means writing boring for loops instead of one-liners.

* Interfaces/duck typing are great. But hey, an []int does not fit into an []interface{}.

* Structs with fields are encouraged, but there is no uniform access principle, and that sucks.

* etc.

But definitely, my own "science" confirms that it turned out to be less of a problem (for my own usage) compared to better languages with more friction.

Yep. I've been bothered by the fact that putting an []int into an []interface{} doesn't work, too (although the reason is obvious).
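For readers unfamiliar with Go, the reason and the resulting boilerplate look roughly like this (the helper name `toInterfaceSlice` is my own invention):

```go
package main

import "fmt"

// A []int stores raw ints contiguously, while a []interface{} stores
// (type, value) pairs, so Go refuses the direct assignment and you
// copy element by element instead.
func toInterfaceSlice(ints []int) []interface{} {
	out := make([]interface{}, len(ints))
	for i, v := range ints {
		out[i] = v
	}
	return out
}

func main() {
	nums := []int{1, 2, 3}
	// var xs []interface{} = nums // compile error: cannot use nums (type []int)
	xs := toInterfaceSlice(nums)
	fmt.Println(len(xs), xs[0], xs[2]) // 3 1 3
}
```

Every element type needs its own copy of this loop, which is exactly the "boring for loops instead of one-liners" complaint above.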

[I'm a Rust user, and not particularly fond of Go, but...]

Not having great static typing doesn't make it not pragmatic.

Given Go's purpose, it is extremely pragmatic.

One of the reasons Google uses Go is because their employees are often fresh out of college and might not be familiar with the depths of type systems.

If you look at most professional Java/C++ code (starting to see this with Rust too), the features of the language are used to their full extent to create really convoluted APIs (to an outsider not familiar with the language). So, to work on it, one must first become familiar with the intricacies of these languages (this is not as easy as picking up a book; you need experience), and then with how the language is used in context, which is another time-consuming thing.

Instead, Go trades off some higher-level features for compilation speed and ease of learning.

After programming in Go for a week I was able to pick up any Go code -- ranging from the standard library to various other applications -- and know what was going on. I cannot say the same about Java or C++ or Rust; generics/templates/virtual/friend/advanced type-system gymnastics are all used to great extent in the code for a standard library or some large application. One may be able to use the language for one's own stuff after a week of learning and hacking, but one will need much more time before being well suited to contribute significantly to existing codebases.

A course I took recently was taught by a PL aficionado. He knew Rust, Nim, and Go, along with many older languages. Go, Rust, and Erlang, among others, would have worked beautifully for the project topic (concurrent stuff). He made everyone write in Go because he wanted a uniform language for project evaluation, and because Go is something students could reasonably pick up in a week without constraints on their background.


At the same time, the Go community is pretty awesome. There is a "go way" of doing things, and while it is pretty rigid, that doesn't make it not pragmatic. From my (admittedly limited) discussions with them, they do seem to think practically/pragmatically. There are good reasons for wanting to use Go. These folks know those boundaries, and think practically within them.

Though there might be much to say about the different implementations of generics, I've found them easy to understand and use in the languages I've tried. No type theory required. It has been more like "Oh, so that's the name for that thing/aspect, neat".

If I don't have a problem with that, those coveted master race Google developers should have absolutely no problem with it.

It's not that generics are hard to grok, it's that they get used in a hard to grok way. Wrapping your head around such things gets more and more hard as the app complexity grows.

The base point is that languages like Go let you think of the program at runtime, only. Types are a runtime concept, everything is a runtime concept. Structuring APIs to use compile time safety requires a whole other kind of thinking, one which they may not expect to be uniformly available.

Ultimately it's a choice of whether you want the programmer to think about data and flow (runtime) or metadata (types, compile time). Once you're used to both, the difference is blurred. For newbies, the concept of using compile-time safety can be daunting.

Right, they don't have to think about compile time. Instead they sometimes have to think about post-compile time[1], i.e. code generation. Now you don't have to think about those weird bracketed capital letters. Just take care to check where you have comments[2] that happen to include directives to some external tool.

With how Go programming seems to work for some people, I can easily imagine a system with commands that do code generation as a hook for other commands, or by "listening to the file system" for changes in certain files. That could get out of hand pretty easily. And yes, one can say "anything can be taken too far", just as you said about generics. The difference might be between having well-encapsulated, well-behaved mechanisms that do most of what you normally need, as opposed to a "simple" system that just pushes the problem to another level, in more ad-hoc ways.

[1] I honestly don't know if the code generation typically is pre-, post-compile or a mix of those two. It's too complex for me.

[2] I thought single-line comments were supposed to be a safe haven...
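For context, the comment directives being referred to look like this. The example is mine; `stringer` is a real generator from golang.org/x/tools, and the directive is inert unless someone actually runs `go generate`:

```go
package main

import "fmt"

// The ordinary-looking comment below is actually a directive:
// `go generate` scans source files for //go:generate lines and runs
// the named command (here, stringer would emit a Color.String method
// into a generated file).
//go:generate stringer -type=Color

type Color int

const (
	Red Color = iota
	Green
	Blue
)

func main() {
	// Until the generator runs, the constants are just ints with no
	// generated String method of their own.
	fmt.Println(int(Red), int(Green), int(Blue)) // 0 1 2
}
```

This is the "safe haven" complaint: a plain `//` comment can silently carry build-tooling semantics.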

Not everyone uses codegen or other tools. You can write generic-ish code in Go which is just runtime checked via vtable pointers (interface objects).

And of course, the loss of static checking means that one has to write more tests for mundane things. But again, designing tests requires "runtime thinking", designing statically checked APIs requires "compiletime thinking".

Well, variance can be hard for many to grok. The combination of generics, mutation, and subtyping is problematic, complicating nice features like type inference. I can see why Pike and company left it out, though I think it's quite feasible to fix those problems.

> Too little too late.

This is a toxic attitude. It suggests that if people don't get something right on the first try they shouldn't try to improve it.

"Better late than never" is true in many cases, and this is one of them.

Also, Apple tends to figure out what they have and try to get it right before releasing it widely. They don't want to throw things out there and then cause a lot of forks and language wars. The idea that it should be open source from the beginning or it's worthless is kinda dogmatic. Apple knows that the most core technologies need to be open source, which is why Darwin is there, WebKit, etc. And that is why Apple's work on LLVM is open source, as are its extensions to Objective-C and of course Swift.

Swift was always going to be open source, but Apple launches OS products differently-- it gets them to 1.0 so they are strong enough to stand on their own.

Golang is a pragmatic crowd? Weird, because it seems like an ideologically driven language.

Their attitude towards exception handling and generics has never struck me as being overly pragmatic.

I really like Go and have written quite a bit of it. But go into #go-nuts and ask about unsafe, just as a counterpoint.

Chris is indeed an amazing and humble guy. I had the same opportunity to chat with him (by email) about this topic and he was so kind to answer all my questions (where he could).

I felt the same way at first in regards to Linux support. But honestly the platforms are similar enough that a Linux fork was probably inevitable once open sourced.

Avoiding this "fracture" could have been a powerful argument for it, along with maintaining control (the Node/io.js situation could not have inspired confidence, though from what I know that was not due to interoperability).

Tim Cook didn't seem as enthusiastic announcing the open-sourcing compared to other announcements.

I disagree--Craig F announced it, and he was clearly moved.

I remember a thread on HN [1] from a few months ago talking about how Apple was never going to do this. I'm so glad they were able to pull it off! Good for you, Apple!

[1] https://news.ycombinator.com/item?id=8488808

> a few months ago talking about how apple was never going to do this

I would have definitely participated in the hate if I had seen that thread; all I can say now is: bravo, Apple! More of this, please!

Perhaps there is a lesson here.

yup. more hate (bad press / pressure), better behavior from huge companies

If you think that's what happened then you know nothing about how big companies operate.

Or Apple saw that and decided to change its mind.

Sometimes companies change their minds this way, like how Microsoft changed its mind on the banning of DVDs for Xbox. They never "officially" said they would do that before the Xbox One announcement, but it was strongly "rumored" they would. I'm sure that's what they intended, but the outrage was too big to let it be.

Apple is never going to send me a free MacBook Pro and Thunderbolt display. Never.

Could you send me one instead?

Well, not with that attitude. It's funny because right about now they are giving out free hardware: to Apple Design Award winners.

... Yes, Apple makes most of its decisions on open sourcing stuff based on reading HN. Of course.

They could have also been influenced by MS open sourcing .NET - it's difficult to compete if you don't keep up.

They haven't pulled it off yet. They've just finally stated that they intend to do so, not actually done it yet. Prior to this, they hadn't even said it was on the table. So, progress, but they still haven't pulled it off.

I would argue that the biggest challenge of open-sourcing Swift was getting the go-ahead, not the actual mechanics of doing it. So from that point of view, they have pulled off the hardest part.

Yeah, but calling it in the keynote is a pretty hard thing to back off from. I don't think they will. Hopefully it'll stay on schedule though!

The same was said about FaceTime... http://www.fiercedeveloper.com/story/facetime-open-standard-...

Sadly, this never came to fruition, supposedly due to legal complications. Since they own Swift and the underlying compiler infrastructure, this may be simpler to pull off.

The "legal complications" that prevented the open-sourcing of FaceTime were apparently because they lost a lawsuit [0]. Then they had to switch FaceTime to use Apple's servers for signaling/connecting instead of peer-to-peer [1].

[0]: http://www.bbc.com/news/technology-20236114 [1]: http://www.reddit.com/r/apple/comments/1xuzif/what_ever_happ...

FaceTime was my first thought, too. An awesome promise that failed to deliver. It'll be really interesting to see if a video chat format ever becomes a standard, or if we're all forced to keep accounts with multiple vendors for interoperability (skype, hangouts, facetime, etc).

> It'll be really interesting to see if a video chat format ever becomes a standard, or if we're all forced to keep accounts with multiple vendors for interoperability (skype, hangouts, facetime, etc).

I have a feeling that's what Firefox Hello will eventually become (if it gains traction).

As it stands, I can already post a link in this thread and anyone who visits it from any WebRTC-enabled browser (including mobile devices!) can immediately start a video chat with me[0].

That's not federated (yet), but if it catches on I see no reason why it won't be.

[0] If I weren't in the "quiet room" in my coworking space right now I'd try it as an interesting experiment in the HN community. :)

A lot of this was really blocked by Google when they stopped interoperating over XMPP. XMPP was meant to cross company borders and allow video/media to be built on top of it (with something like RTP and H.264).

WebRTC works in both Firefox and Chrome, and is standardized.

WebRTC services are still, for the most part, isolated silos. There is no effort made to federate or interoperate between services. For that, you want something like SIP or XMPP/Jingle.

While true, my point was more that there /are/ standardized ways of doing that, and there is no barrier to apple working on and/or with standards bodies to discuss any concerns they have with WebRTC, SIP, etc.

That's not what OP was asking. WebRTC is a standard in the same way HTTP is a standard: it defines the API surface and transport for a particular set of features within the browser.

Signaling is intentionally missing from the WebRTC spec, and that is the "interesting" part here: without open and interoperable signaling, you're just preserving the status quo of proprietary video chat services such as FaceTime, Skype, Hangouts, etc. Just like HTTP enables proprietary services such as Facebook, Twitter, Google Plus, etc.

Matrix looks very promising. Check them out at matrix.org

My understanding is that Steve Jobs made that up on the spot, and of course he could get away with that because he was Steve Jobs (I can't provide a citation, unfortunately--think I heard it from Gruber or the ATP guys on a podcast).

Tim Cook probably wouldn't do something so impulsive based on temperament, and his direct reports would probably not risk it. So I suspect this is a considered announcement.

They said it will happen by the end of 2015.

They said they would make facetime an open standard.

Still waiting.

Are you sure this is why Apple went radio silent? Even if Apple was somehow prevented from publishing the standard (I don't see how a patent would prevent them from doing so, but whatever), that doesn't explain why they couldn't just say so.

I am not sure--definitely don't have any real knowledge about this issue. I was just looking for any info about the facetime issue and that came up in the search results.

Apple loses Facetime patent lawsuit.


Let's hope this waiting has not caused your Facetime to be Faceache!

Which is still not "pulling it off", which is my point.

I think the "pulling it off" here is coercing their megacorp organization into tolerating the idea of open-sourcing something they poured so much time and money into developing. This is also why people have been celebrating Microsoft open sourcing things recently.

"By the end of the year" perhaps suggests that Microsoft got out in front and Apple's hand has been forced into changing their roadmap. I suspect that Apple will struggle to support cross-platform development beyond tossing Swift over the transom. Supporting diverse execution environments is not historically their core competence.

It also suggests that Swift 2.0 isn't out yet, and they don't want to deal with open-sourcing it until then, or have to go through lawyers regardless.

If you followed Chris Lattner on the dev forums, he always gave the impression that they wanted it open source from the start but had bigger fish to fry. They reimplemented large parts of the compiler multiple times after finding bugs in the existing language specs. Don't read into this that Microsoft forced anything; for one, we'll never know, and this is at best conjecture.

As for supporting diverse execution environments, I'd argue LLVM/Clang/WebKit prove otherwise. Granted, they're not "supporting" them in the sense of selling support, but I'm not sure exactly who would meet your criteria right now.

I think this is where companies like Xamarin and JetBrains can pick up the slack, as they have with the Microsoft stack. As long as it's (legitimately) open-sourced, of course.

Xamarin and Jetbrains live in the enterprise market. It's hard to see Swift quickly gaining traction in that space.

Which is not a very interesting thing to gripe over. There's nothing wrong with being excited about the announcement; why don't we stop with the pedantry?

There is nothing wrong with being excited about the announcement, but 1) that isn't what the headline indicated (it's since been changed, but at the time it was something very close to "Apple has open-sourced Swift!"), and 2) the comment I was replying to said Apple had "done it," which seemed to be responding to the inaccurate headline.

I think most people involved in software would realize that the distinction between plan and implementation is extremely important.

Surely any reasonable person would recognize that it looks like Swift will be open-sourced, barring something unexpected. You're nitpicking at someone's choice of words and not adding anything valuable to the conversation.

I sincerely do not think "They pulled it off" versus "It looks like they will do this sometime in the next six months" is nitpicking somebody's choice of words. The difference between the two is not a minor nuance, it's a large practical difference, and I don't think people would necessarily understand the latter meaning from the former.

When I first saw this thread, I certainly thought Apple had actually open-sourced Swift, as both the headline and the comment I was replying to said so. Then I looked at the linked page and saw it had not happened, so I corrected this materially important piece of information.

The headline and the direct quote from Apple's website say "Swift will be open source later this year". I don't think anyone in this thread is trying to say anything to the contrary. Certainly the decision to open source Swift has been "pulled off"; perhaps that's what LesZedCB was referring to. Who knows? Who bloody cares? It's very much not important.

As I already said, that was not the title when I posted my comment. At the time it was something very much like "Swift is now open-source!" Because of worthless nit-picks like mine, the title was later changed to be accurate.

Yeesh, ok I didn't realize the title was changed

> Which is still not "pulling it off"

Honestly if you're that nervous and scared of what might happen in the future there's probably nothing Apple (and most other companies) can possibly do to comfort you. Probably better for you to avoid the potential pain and skip this one.

I will surely be downvoted for speaking so off the cuff, but I haven't really enjoyed Swift so far. How has the general developer reception to the language been, not just with respect to Obj-C, but also to Java or any other Turing-complete language?

Swift as a language is pretty great, however, the tooling and obj-c interop make it a pain in the ass.

It's hard to realize most of the performance benefits when everything you're interacting with requires objc_msgSend or uses NSArray / NSDictionary.

Also, it's far less "scripty" than Obj-C. The type system in Swift really leaves something to be desired in terms of typing out types. The point of Obj-C was kind of to avoid writing the kinds of apps where a great type system would really shine; Swift lets you build those kinds of apps, but in my opinion most of the time we shouldn't be building them.

Your project probably doesn't need 1,000 developers who need solid interfaces and type checking to make sure that everything is going according to the UML diagram. It probably needs 2 or 3 developers who talk to each other, add asserts to their code, and a type system that's a little forgiving.

Obj-C is a language that has everything you really, really need; it left out the one thing you kinda wanted in exchange for leaving out the thousand and one little things that everyone else wanted too. Take exceptions: sure, they're there, but they're not idiomatic, and when you program without them you realize what a crappy idea they were in practice. In day-to-day coding, NSError is 1000x better than exceptions.

> The point of Obj-C was kind of to avoid writing the kinds of apps where a great type system would really shine, swift lets you build those kind of apps, but in my opinion most of the time we shouldn't be building them.

Can you clarify this? What is the kind of apps where a "type system would really shine"? How can you avoid writing them?

Type systems really shine in apps that you maintain. You avoid writing them by writing shovelware that intends to extract the maximum amount of money in a short term and be replaced by something else in the long term.

I don't think that's really what Obj-C was designed for.

Apps that have a lot of YAGNI features.

eg. For some reason your app has the ability to use Postgres instead of SQLite, even though you always use SQLite.

eg. For some reason your app supports Postgres/MYSQL/SQL Server because every enterprise you sell to wants to run their preferred database instead of the one that works best with your product.

In short 'enterprise' apps.

I'm not trying to be contrarian, but I fail to see what great type systems (or lack of them) have to do with YAGNI or writing poorly architected enterprise apps.

Are you saying the main application of type systems is writing overengineered enterprise apps? Do you think type systems have no place in well-architected and/or minimalist apps?

Can you elaborate more?

I'm saying that those with a propensity to build such systems invariably choose strongly typed languages; in those languages that is the culture, and the language encourages it.

E.g. rewriting base classes to use IWritableStream instead of looking up the write method and calling it.

You seem to be confusing Java-like interfaces and design patterns with types. That's not representative of what great type systems can do for you.

I'm confusing types and 'Java-like interfaces and design patterns' to the same degree that you're confusing swift's type system with 'what great type systems can do for you.'

The type system of Swift is much closer to Java's than to Haskell's (or whatever language you think has a "great" type system).

For example in F# if you just use one method the type system will infer that you need an object that has that method. (especially if that method is an operator)

E.g. let add a b = a + b will work on any pair of objects that have the + operator defined. That is a good and useful type system; what Swift has just makes you type more for little benefit.

E.g. making types is easy, reflecting on methods is hard. (It should be noted that Swift effectively doesn't support reflection.)

You mistake me: I wasn't talking about Swift, a language I'm barely familiar with. I was puzzled by your more general assertion about "the kind of apps where a great type system would really shine". So far you haven't explained what you meant by this; you seemed to be complaining about enterprise software, but a lot of enterprise software isn't built with programming languages featuring great or even particularly modern type systems.

I'm not sure what reflection has to do with this. I'm sure you're aware there are "great type systems" which do not use reflection.

This seems to conflate type systems with interfaces -- while not really having a clear idea what the latter serve either...

No, it's that there's a limited amount of space to communicate my ideas and I chose brevity over clarity. I also use the common definition of terms over academic preciseness because I'm trying broad thoughts on a forum rather than a formal dissertation on type systems.

What I personally dislike about Swift is the change in mental context when dealing with the combination of C/C++ and Swift code. Plenty of nontrivial apps use significant amounts of C and/or C++ libraries in some way.

The switch when dealing with a combination of C/C++ and Objective-C source was not really a problem. But trying to interface a C++ library into a Swift application is less than a fun experience.

Personally, while the language is fresh and seems pretty good (it has some great additions), I still prefer Objective-C. I guess I'm waiting for better integration with existing libraries.

Do you have any notes on this experience posted somewhere publicly? I'd love to see an overview of the issues between Swift and C++ libraries. (Boost? Not Boost? The nuances of Swift's type system?)

You can't communicate between Swift and C++ right now... it's on their bucket list though :)

Swift's popularity has undergone a meteoric rise for a young language according to TIOBE [http://www.tiobe.com/index.php/content/paperinfo/tpci/index....]

But that's not surprising given its ecosystem; as others have mentioned, a language's success has less to do with its theoretical benefits and more to do with what environments it gives the developer access to. JavaScript is the case in point; I think few people would argue it is a well-designed language, but if you want to do web development you need at least a basic understanding of it, so it maintains brutal popularity.

The shocking thing I think isn't how fast Swift has grown, but how fast Objective-C has fallen. The stats from that site don't appear to show that Swift has made up for that.

The overall combination of Swift and Objective-C in those numbers make me believe Apple's decision to make Swift open source is less about goodwill, and more about stopping the bleeding.

Given the context it's used in, this is hardly surprising.

You'd probably get more upvotes if you didn't start out by complaining about hypothetical downvotes.

I've seen the opposite be true most of the time (and I have showdead on)

He is stating what he believes will be true. I didn't read a complaint anywhere. In fact, I think he was implying how sure he was about his following statements, as he was ready to post them in spite of the (expected) downvotes.

expected things are, by their very nature, hypothetical until they come true or are proven false.

I think it's terrible. Parsing a JSON response into a Swift object takes a full day to figure out, when it's literally just JSON.parse(response) in any other language. You can't pass immutable structures (structs) in NSNotifications, and I can't figure out why. It's confusing, poorly documented, and a complete chore to use.

React Native is a godsend.

>Parsing a JSON response into a Swift object takes a full-day to figure out when it's literally just JSON.parse(response) in any other language.

Something which has nothing to do with the language.

Here's how you do it with a popular Swift lib:

    let json = JSON(data: response)
    if let userName = json[0]["user"]["name"].string {
        // ...
    }

Eh, the awful state of their respective standard libraries seems to come up quite a bit when people are critiquing OCaml or D.

What on earth are you talking about? Parsing a JSON response in Swift is basically the same as it is in Objective-C:

  let parsed: AnyObject? = NSJSONSerialization.JSONObjectWithData(data,
    options: NSJSONReadingOptions.AllowFragments,
    error: nil)

This is truly not meant to be a snarky comment, but why is it so overtly verbose? I left Java years ago because it seemed to be making my fingers arthritic. It doesn't look like much fun. Sure, IDEs do autocomplete, but it still requires parsing by the human eye.

I'm sure you get used to it. Every language looks horrible the first time, then you learn to live with it and maybe love it, but on first glance that looks pretty awful.

Objective-C tends to err on the side of readability. Considering that source code is read many more times than it is written it's something I personally agree with. But yeah it's something you do get used to.

Java's problem is that C# exists and is a more expressive language.

But this isn't readable at all.

If you compare it to JSON.parse(x) or fromJSON x, it's just awful.

The idea that more verbose == more readable is so annoying. It seems especially egregious in languages like Ada, which (to the uninitiated) look like this to me:

    okay here comes a function definition get ready
      function MyFun() begin
        return 1
      end begin
      end MyFun
    end function okay we are done

Verbosity is part of the design philosophy behind Objective-C.

How is React "native" again? I'm not sure I understand how something not written in Swift or Obj-C is "native". This is only 30% snark; I actually don't know the answer. React is from the same people that thought HTML5 was a good idea for a mobile application, right? I'm not bashing React; I'm only curious how it's considered native. Does it have official support from the iOS APIs? Can you integrate Objective-C libraries with React? I just don't get it.

I also don't get how a JSON parse takes half a day. If you know the language it shouldn't take that much time. If you don't know the language then a complaint wouldn't be intellectually honest would it?

I don't think the parent was saying React is native. He's referring to a project called React Native, which provides native rendering (e.g., <Button> binds to a native button implemented in ObjC) in a PhoneGap/Cordova-style wrapper that lets you implement bridged access to native code (to talk to Apple-specific APIs or to implement performance-sensitive code, for example). In such an app, you would be able to get a native "look and feel" despite the application being written mostly in JS.

As far as I understand, it uses a JavaScript runtime on its own thread to pass "instructions" to native components dynamically. To answer your questions: yes, yes, and yes/no.

React's philosophy is "learn once, write everywhere." If you know React.js, then writing React Native is just as easy, as it is exactly the same (clear and concise) but with different APIs available. The instructions are written in JavaScript; the rendering can be done in the DOM, Objective-C, Java, etc.

Would it be intellectually honest to place a 30% snark if you don't know the language?

They mean "React Native"[0], not React itself :)

[0] http://reactnative.com

Have you tried SwiftyJSON?

It's better than Objective-C, so I like it. But if iOS were open to every language, I doubt people would be paying much attention to it.

Yeah, it's only possible to write iOS apps with C, C++, C#, Ruby, JavaScript, Python, Lua...

Not with first-class support it isn't.

So? Is there any platform that has first class support for more than 1-2 languages?

Unix. The 'native' language there is C, but there are dozens of languages that are well-supported. They can interoperate through OS-level abstractions like pipes and via C APIs at the process level. The average Unix desktop or server is probably running code written in 5-10 languages, more or less transparently to the user.

First, you said with first-class support from the vendor. Linux doesn't offer that for any language.

Second, that's not an app development platform. Gnome has C (and its own unholy stack on top), KDE has C++.

You can code for OS X/iOS in all kinds of languages too, if you mean at the OS and not at the app development level.

I'm afraid you have me confused with another commenter regarding the 'first-class support' issue.

If "platform" means "app development platform" and that means "full-featured toolkit for the development of GUI applications that run in an integrated environment", then I would agree both that Gnome and KDE fit that bill and that they have limited language support (in my experience the only language worth using with those toolkits other than the respective C and C++ is Python).

> They can interoperate through OS-level abstractions like pipes and via C APIs at the process level.

And you can't do this in OSX or iOS?

> The average Unix desktop or server is probably running code written in 5-10 languages, more or less transparently to the user

The average iPhone is probably running code written in just as many (if not more) languages - C, C++, Obj-C, Swift, and JavaScript are a given before you even look at third-party stuff. And, modulo sandboxing, you can do all the Unix stuff too.

Sandboxing is a pretty big modulo.

I actually don't have any opinion on how many languages have 'first-class' support on iOS. My understanding is that it's somewhat more complicated to create natural-seeming bindings from other languages to Objective-C APIs than it would be for C APIs, and that there are restrictions in the type of applications you can distribute using other languages (no downloadable code outside the App Store). Whether that makes those languages not 'first-class' on the platform is up to interpretation.

But the person I was responding to was specifically asking if any platform had first-class support for more than 1-2 languages. I don't know how to determine 'first-class' as a matter of principle, but I assumed that the multiple decades of polyglot software development on Unix environments would count. (OS X would count there, by the way.)

Sandboxing affects obj-c and swift code too.

I enjoyed it well enough. It didn't feel quite mature (refactoring tools, error handling, etc.) and optionals are still a little frustrating, but it definitely feels like it has potential.

I love it. It's a gateway drug to functional programming, and "doing" functional programming, in turn, helps me write more expressive Swift code. It's still rough around the edges, but it's a beautiful language that's a joy to use.

> I haven't really enjoyed swift so far

I haven't either, but I haven't used it a lot so I can't give a final verdict on it. But I'm the kind of person that enjoys C++ (well, especially after C++11, feels like a new language and I'm slowly becoming a C++ fanboy), and I guess all those new system languages like Swift and Rust are supposed to fill the void for people that don't like the existing system languages available, so maybe it's not for me.

I find it far less frustrating to use than Java, as it actually has decent generics.
