Swift: Google’s Bet on Differentiable Programming (tryolabs.com)
675 points by BerislavLopac 3 months ago | 539 comments



In all honesty, this sounds to me like a whole lot of BS and hype. The name is pretentious, the quotes are ridiculous ("Deep Learning est mort. Vive Differentiable Programming."; "there will be a need for the creation of a whole new set of new tools, such as a new Git, new IDEs, and of course new programming languages"). Maybe I am just ignorant and fail to grasp the importance of such capabilities in a machine learning context, but I have to say, the grandstanding is a bit grating.

Who knows, perhaps this will become the greatest thing since Lisp... What do I know.


A particularly salty developer I knew ages and ages ago once said (apocryphally, apparently) that there was an old Inuit proverb, “everyone likes the smell of their own farts.”

Google is still taking themselves very seriously while everyone else is starting to get bored.

The problem with being 25 is that you have about 8 years ahead of you before you figure out how full of shit everyone is in their twenties, and maybe another 8 before you figure out that everyone is full of shit and stop worrying quite so much about it.


>The problem with being 25 is that you have about 8 years ahead of you before you figure out how full of shit everyone is in their twenties, and maybe another 8 before you figure out that everyone is full of shit and stop worrying quite so much about it.

Boy there is some truth right here. It wasn't until long after I graduated undergrad that I realized just how much bullshit is out there. Even in the science world! When I started actually reading the methodology of studies with impressive sounding conclusions, I realized that easily 30-60% were just garbage. The specific journal really, really matters. I'd say 90% of science journalism targeting laymen is just absolute bullshit.

I started actually chasing down wikipedia citations and OMG they are bad!! Half are broken links, a large fraction don't support the conclusions they're being used for, and a massive fraction are really dubious sources.

I realized that so many people I respected are so full of shit.

I realized that so many of MY OWN OPINIONS were bullshit. And they STILL are. I hold so few opinions that are genuinely well-reasoned and substantiated. They are so shallow.

Yet, this is just how the world works. Human intuition is a hell of a drug. A lot of the people I respect tend to be right, but for all the wrong reasons. It's SOOOO rare to find people that can REALLY back up their thinking on something.


There's safety in numbers. In my experience people in general are pretty smart; there are lots of wolves in sheep's clothing.

Those wolves were the ones buying N95 masks in January, or buying Flonase (an OTC glucocorticoid) 'just in case'.

If the alternative non-BS opinion/belief is unpopular (i.e., that coronavirus is serious), it's easier and safer to just check out; tend your own garden.


It was the 16th century Inuit philosopher Erasmus:

http://www.artandpopularculture.com/Suus_cuique_crepitus_ben...


Steve Jobs had a slightly more optimistic way to say it: “Everything around you that you call life was made up by people that were no smarter than you.”


A version of that saying I like is: “success is like a fart; only your own smells good.”


Who in the world thinks their own farts smell good? Yeesh.


Maybe you just don't like success!


Sturgeon's law applies both spatially and temporally.


This doesn't seem to be crap, but it does seem to be hype.

You can use CasADi to do automatic differentiation in C++, Python and MATLAB today:

https://web.casadi.org/

Tight integration with the language may be beneficial in making it simpler to write, but it's not like you can't do this already in other languages. Baking it into the language might be useful to make it more popular. Nobody should be doing the chain rule by hand in the 21st century.
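For reference, a rough sketch of what that looks like through CasADi's Python interface (the function and values here are made up for illustration, and the exact API is from memory):

    import casadi as ca

    x = ca.MX.sym("x", 2)                # symbolic 2-vector
    f = ca.sin(x[0]) * x[1] ** 2         # an arbitrary expression
    J = ca.jacobian(f, x)                # derivatives built automatically
    J_fn = ca.Function("J", [x], [J])    # wrap into a callable function
    print(J_fn([1.0, 2.0]))              # evaluate the Jacobian at a point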


"Point of view is worth 80 IQ points."

Make that ±80…


>The name is pretentious,

If we already have useful phrases like "embedded programming", "numerical programming", "systems programming", or "CRUD programming", etc., I'm not seeing the pretentiousness of "differentiable programming". If you program embedded chips, it's often called "embedded programming"; likewise, if you write programs where differentials are a first-class syntax concept, I'm not seeing the harm in calling it "differentiable programming" -- because that basically describes the specialization.

>the quotes are ridiculous ("Deep Learning est mort. Vive Differentiable Programming.";

FYI, it's a joking type of rhetorical technique called a "snowclone". Previous comment about a similar phrase: https://news.ycombinator.com/item?id=11455219


I feel like calling it a snowclone is a stretch.

The whole point is you're supposed to have the same "something" on both sides (X is dead, long live X), to indicate it's not a totally new thing but a significant shift in how it's done.

The most well known one:

> The King is dead. Long live the King!

If you change one side, you’re removing the tongue-in-cheek nature of it, and it does sound pretty pretentious.


Not really. The sentence with the King was used to mean that the new King immediately took up the role.

> Le Roi (Louis ##) est mort. Vive le Roi (Louis ## + 1) !

Using the sentence with Deep Learning and Differential Learning just suggests that Differential Learning is the heir/successor/evolution of Deep Learning. It does not imply that they are the same thing.

As a French person used to the saying, Le Cun probably meant that.


It does, actually. What matters is that there is a king, not who is the king.

The saying works because, as you say, it suggests a successor, but the successor has to use the same title, because what people want is a new king, so that nothing changes and they can live as they did before the king was dead, not a revolution with a civil war tainted in blood.

If you do voluntarily change the title, it's because you think the new one will be better, which is pretentious.


> Using the sentence with Deep Learning and Differential Learning just shows how Differential Learning is an evolution. It does not imply that they are the same thing.

... where did I imply it means they're the same thing?

From my comment:

> indicate it’s not a totally new thing but a significant shift in how it’s done

You could say, an evolution?

-

A snowclone is a statement in a certain form. The relevant form that English speakers use (I'm not a French person, and this is an English article) is "X is dead, long live X", where both are X.

That's where the "joking" the above comment is referring to comes from: it sounds "nonsensical" if you take it literally.

If you change one X to Y, suddenly there's no tongue-in-cheek aspect, you're just saying "that thing sucks, this is the new hotness".

I suspect the author just missed that nuance or got caught up in their excitement, but the whole point of a snowclone is it has a formula, and by customizing the variable parts of that formula, you add a new subtle meaning or tint to the statement.


They’re kinda fucked because differential analysis was coined long ago to describe a set of techniques for attacking bad cryptography.

Differential programming would be less flashy but may be confusing.

I wouldn't actually be interested in this topic much except for the top-level comment complaining about them wanting a new version control system for this, and now I'm a bit curious what they're on about this time, so will probably get sucked in.


"differential" and "differentiable" are different words. Both are being used correctly. Is the problem only that the two words look kind of similar? That seems like an impractical requirement.


Yea, cryptocurrencies will never be known as crypto. The term was coined long ago as a shorthand for cryptography.


God I hate the trend of using a prefix on its own as a stand-in for '<prefix><thing>'. I think it's a symptom of politicians trying to sound stupid to avoid sounding elitist.

What do you think will have more impact on the economy, crypto or cyber?


I think using the shortened version is perfectly valid, provided the context is correct.

If you are talking to someone about cryptocurrency, referring to it as crypto later in the conversation in context is perfectly valid and doesn't lessen the meaning.

I do however agree with you that outside of its context these shortened names are horrible and effectively buzzwords.


Why would a language where it is possible to manipulate certain functions to get their derivatives require a new revision control system or IDE?

And speaking of Lisp - wasn't symbolic differentiation a fairly common thing in Lisp? (basically as a neat example of what you can do once your code is easy to manipulate as data).


Symbolic differentiation is a fairly common exercise but it is inefficient (the size of the derivative grows exponentially in the size of the original expression). "Automatic differentiation" is the term for the class of algorithms usually used in practice, which are more efficient while still being exact (for whatever "exact" means when you're using floating point :-)
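A minimal sketch of the distinction, in plain Python with no libraries: forward-mode automatic differentiation carries a derivative alongside each intermediate value (dual numbers), so the cost stays proportional to the original computation instead of manipulating an ever-growing symbolic expression. (Illustrative only, not any library's API.)

    from dataclasses import dataclass
    import math

    @dataclass
    class Dual:
        val: float   # value of the expression
        der: float   # derivative with respect to the input

        def __add__(self, other):
            return Dual(self.val + other.val, self.der + other.der)

        def __mul__(self, other):
            # product rule, applied numerically one step at a time
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)

    def sin(x: Dual) -> Dual:
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)

    # d/dx [ x * sin(x) + x ] at x = 2.0
    x = Dual(2.0, 1.0)          # seed the input's derivative with 1
    y = sin(x) * x + x
    print(y.val, y.der)         # value and exact derivative, no expression swell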


AD still explodes for “interesting” derivatives: efficiently computing the adjoint of the Jacobian is NP-complete. And, naturally, the Jacobian is what you want when doing machine learning. There are papers from the mid-'90s discussing the difficulties in adding AD Jacobian operators to programming languages to support neural networks. This article is just rehashing 25-year-old problems.


Correction: Finding the optimal algorithm (minimal number of operations) for computing a Jacobian is NP-complete, but evaluating it in a multiple of the cost of a forward evaluation is standard.

Also, many optimizers that are popular in ML only need gradients (in which case the Jacobian is just the gradient vector). Second order methods are important in applications with ill-conditioning (such as bundle adjustment or large-scale GPR), but they have lots of exploitable structure/sparsity. The situation is not nearly as dire as you suggest.


Yep; I was imprecise!


Around 2000 I was accidentally inventing a Bloom filter variant (to this day I don’t know how I missed the Google papers at the time) for doing a large set intersection test between two machines.

Somehow, I ended up with a calculus equation for determining the right number of bits per entry and rounds to do to winnow the lists, for any given pair of machines where machine A found n entries and machine B found m. But I couldn’t solve it. Then I discovered that even though I did poorly at calculus, I still remembered more than anyone else on the team, and then couldn’t find help from any other engineer in the building either.

Eventually I located a QA person who used to TA calculus. She informed me that my equation probably could not be solved by hand. I gave it another day or so and then gave up. If I couldn’t do it by hand I wasn’t going to be able to write a heuristic for it anyway.

For years, this would be the longest period in my programming career where I didn’t touch a computer. I just sat with pen and paper pounding away at it and getting nowhere. And that’s also the last time I knowingly touched calculus at work.

(although you might argue some of my data vis discussions amount to determining whether we show either the sum or the rate of change of a trend line to explain it better. The S curve that shows up so often in project progress charts is just the integral of a normal distribution, after all)


The Jacobian is used all the time, but where do you end up needing the adjoint?


What's a link to that paper?



Thanks - I didn't think it was literally doing symbolic differentiation (I don't work in the area so literally had no idea) - but the basic idea that you apply some process to your code to get some other code doesn't sound that surprising to anyone who has used lisp (and I used to write tools in lisp to write numerical engineering simulations - admittedly a long time ago)


> the size of the derivative grows exponentially in the size of the original expression.

Only if you treat it as a tree, not a DAG.

edit: Sorry no, it's still linear, even for a tree.


exp(x) ?


A quick Google revealed DVC (Data Version Control): https://dvc.org/

Smashing your ML models into git or other text-based VCS probably isn't the best way to do it


It's not clear how that's different from git's LFS.


It is different. DVC is a serverless management tool that helps you organize and link to your storage backends, and move data from those backends to your workspace. Git LFS requires a dedicated server, and you can store data only on that server, instead of moving data between a multitude of storage backends (like Google Drive, S3, GCP, local drive).


To be fair they do have a page giving a comparison with git LFS and other related approaches to storing large files:

https://dvc.org/doc/understanding-dvc/related-technologies#g...

Mind you - DVC seems to be a platform on top of git rather than a replacement for git. So I'd argue that it's not really a new revision control system.


For the IDE part they have written some cool debuggers that help you understand the differentiation part and catch bugs in it. But I'm not sure why you couldn't just use the debugger, instead of a whole new IDE, much less why you would need a new RCS.


You can refer to a talk about Differentiable Programming here: https://www.youtube.com/watch?v=Sv3d0k7wWHk. The talk is for Julia, although the principles are general. If you skip to https://youtu.be/Sv3d0k7wWHk?t=2877 you can see a question asked: why would you need this.

In essence there are cases outside the well-developed uses (CNN, LSTM etc.), such as Neural ODEs, where you need to mix different tools (ODE solvers and neural networks) and the ability to do Differentiable Programming is helpful; otherwise it is harder to get gradients.

The way I can see it being useful is that it helps speed up development work so we can explore more architectures, again Neural ODEs being a great example.


Is it very different from probabilistic programming (a term that is both older and easier to understand)?

Erik Meijer gave two great talks on the concept.

https://m.youtube.com/watch?v=NKeHrApPWlo

https://m.youtube.com/watch?v=13eYMhuvmXE


Yes, these are two very different things.

Differential programming is about building software that is differentiable end-to-end, so that optimal solutions can be calculated with gradient descent.

Probabilistic programming (which is a bit more vague) is about specifying probabilistic models in an elegant and consistent way (which can then be used for training and inference).

So, you can build some kinds of probabilistic programs with differential programming languages, but not vice versa.


You're right, it sounds like BS because it's BS.

Swift was railroaded into Google by Chris Lattner, who has since left Google, and S4TF is on death watch. No one is really using it and it hasn't delivered anything useful in 2.5 years.


Does it really matter that not many people use it? Apple's carve-out of Objective-C from the broader C ecosystem spanned something like 25 years.

Sure, the 90's were a rough period for them, but I think a series of failed OS strategies and technical debt are more responsible for that than just what language they used.

You could argue that, despite their ambitions of Swift scaling from scripting all the way to writing an entire OS, it might never grow substantially outside Apple, but there's also the teaching aspect to think about.

"Objective C without C" removes a whole class of problems people have in just getting code to run, and I'll bet it shapes their mind in how they think about what to be concerned about in their code v what's just noise.


Sometimes things take a little longer to develop. I don't know who will create it, but from my perspective, the need for a statically typed "differentiable" language is extremely high and C++ is not it.


> the need for a statically typed "differentiable" language is extremely high

This is not what Google has found, actually. Teams who wanted to use this for research found that a static language is not flexible enough when they want to generate graphs at runtime. This is apparently pretty common these days, and obviously Python allows it, especially with JAX, which traces code for autodiff.
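For reference, this is roughly the JAX style being referred to: you write ordinary Python and `jax.grad` traces it to build the derivative (the function and shapes below are made up for illustration):

    import jax
    import jax.numpy as jnp

    def loss(w, x, y):
        # an arbitrary differentiable program: prediction plus squared error
        pred = jnp.tanh(x @ w)
        return jnp.mean((pred - y) ** 2)

    grad_loss = jax.grad(loss)    # traces `loss` and returns its gradient function
    w = jnp.zeros((3,))
    x = jnp.ones((5, 3))
    y = jnp.ones((5,))
    print(grad_loss(w, x, y))     # gradient with respect to w, shape (3,)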


> This is not what Google has found, actually.

Or has at least found that existing solutions for statically typed "differentiable" programming are ineffective, and I'd agree.

But having some way to check types/properties of the tensors you are operating on would really help to make sure you don't get one hidden dimension accidentally swapped with another or something. Some of these problems are silent and need something other than dynamic runtime checking to find them, even if it's just a bolt-on type checker for Python.

There are a lot of issues with our current approach of just using memory and indexed dimensions. [0]

[0]: http://nlp.seas.harvard.edu/NamedTensor
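A concrete (hypothetical) example of the kind of silent shape failure meant here, in plain NumPy:

    import numpy as np

    targets = np.random.randn(32)      # shape (32,)
    preds = np.random.randn(32, 1)     # shape (32, 1), e.g. model output kept 2-D

    # Intended: per-example residuals of shape (32,). Instead, broadcasting
    # silently produces a (32, 32) matrix, and the "loss" below is a
    # plausible-looking but wrong number rather than an error.
    residuals = preds - targets
    print(residuals.shape)             # (32, 32)
    print((residuals ** 2).mean())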


Flexible enough or just... former statisticians have enough on their plate without learning programming, so let's use the simplest popular language in existence?


AIUI they hired another huge contributor to LLVM/Clang/Swift to work on it so I'm not so sure it's on death watch.


The article gives specific and reasonable motivations for why you might want a new language for machine learning. There are already new tools. Like Pytorch. If you have never used Jupyter notebooks instead of an IDE, give it a try for a few weeks. It was the biggest boost I've seen to my coding productivity in literally decades. A new Git? I don't quite get that one. But considering the author arguably already got three out of his four claims right, maybe there is some reasoning behind that Git claim too.


> If you have never used Jupyter notebooks instead of an IDE, give it a try for a few weeks. It was the biggest boost I've seen to my coding productivity in literally decades

Really? I seem to shoot myself in the foot a lot with jupyter notebook. I can't count the number of times my snippet was not working and it was because I was reusing some variable name from some other cell that no longer exists. The amount of bugs I get in a notebook is ridiculous. Of course, I'm probably using it wrong


If you can't do a "restart kernel and run all cells" without errors, your notebook is not in a good state. But somehow people don't seem to do this regularly and then complain that notebooks are terrible, when it's their own process that is shooting them in the foot.


Imo people’s love of Jupyter notebooks is another one of those “this is My Thing and I love it despite the flaws” situations.

Jupyter notebooks are painful to read, they allow you to do silly stuff all too easily, they're nightmarish to debug, don't play well with git, and almost every. Single. One. I've ever seen my teammates write eschewed almost every software engineering principle possible.

You're not using them wrong, they shepherd you to working very "fast and loose" and that's a knife edge that you have to hope gets you to your destination before everything falls apart at the seams.


> don’t play well with git

I agree, and this is why I built -

- ReviewNB - Code review tool for Jupyter notebooks (think rich diffs and commenting on notebook cells)

- GitPlus - A JupyterLab extension to push commits & create GitHub pull requests from JupyterLab.

[1] https://www.reviewnb.com/

[2] https://github.com/ReviewNB/jupyterlab-gitplus


Instead of building things to make Notebooks play nicely with git, why not relegate notebooks to explicitly local exploratory work and when something needs to be deployed have it be turned into proper scripts/programs?


That's on your teammates, not the technology. Like any code, if you want it to be readable and robust you have to spend the time cleaning up and refactoring it. Lots of notebooks are easy to read and run reliably.


Did you happen to see a thread on here a week or so ago about "It's not what programming languages let you do, it's what they shepherd you to do"? (Strongly paraphrased there)

That’s my issue with Jupyter notebooks, between them and Python they implicitly encourage you to take all kinds of shortcuts and hacks.

Yes, it’s on my teammates for writing poor code, but it’s on those tools for encouraging that behaviour. It’s like the C vs Rust debate right: yes people should write secure code, and free their memory properly and not write code that has data races in it, but in the majority of cases, they don’t.


I didn't see that thread. Based on my experience, I don't really buy the premise. I'm not saying different languages can't somewhat nudge you a tiny bit towards better practices. Or simply not allow you to do certain things, which isn't really shepherding, is it? But the vast majority of great engineering I have seen is mostly about the team and the many decisions they have to make in the course of one day. Which quickly adds up.

Quality engineering mostly comes from people, not languages. It is about your own personal values, and then the values of the team you are on. If there were a magic bullet programming language that guided everyone away from poor code and it did not have tradeoffs like a hugely steep learning curve (hi Haskell) then you would see businesses quickly moving in that direction. Such a mythical language would offer a clear competitive advantage to any company who adopted it.

What you are looking at really is not good vs. bad, but tradeoffs. A language that allows you to take shortcuts and use hacks sounds like it could get you to your destination quicker sometimes. That's really valuable if your goal is to run many throw-away experiments before you land on a solution that is worth spending time on improving the code.


Data analysis notebooks are annoying to read because you're forced to choose between:

a) ugly, unlabelled plots

b) including tons of uninteresting code that labels the axes (etc)

c) putting the plotting code into a separate module.

There are some extensions that do help with this, but extensions also kinda defeat the whole purpose of a notebook.


Yeah really, but I'm a very experienced dev so what I'm getting from it is likely very different from your experience. Consider looking into git or some other version control. If you are deleting stuff that breaks your code, you want to be able to go back to a working version, or at least look at the code from the last working version so you can see how you broke it.


I could say the same about Python, honestly. Nonexistent variables, incorrectly typed variables, everything-is-function-scoped variables.


Hi, author here! The git thing is regarding model versioning. Managing a ton of very large and slightly different binary blobs is not git's strong point imo.

There are a ton of tools trying to fill this void, and they usually provide things like the comparison of different metrics between model versions, which git doesn't provide.


Jupyter notebooks as an idea go back to REPL workflows in Lisp Machines, commercial Common Lisp IDEs, Smalltalk, and Mathematica, actually.


Maybe versioning for models?


Deep learning is the new Object Oriented Programming and Service Oriented Architecture


What about SPA and serverless?


Differentiable programming allows one to specify any parametrized function and use gradient-based optimization to fit it to an objective.
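As a minimal sketch of that idea (here in Python with JAX, with a made-up toy model and data, not tied to any framework in the article):

    import jax
    import jax.numpy as jnp

    def model(params, x):
        w, b = params                  # any differentiable parametrized function
        return jnp.tanh(w * x + b)

    def objective(params, x, y):
        return jnp.mean((model(params, x) - y) ** 2)

    x = jnp.linspace(-1.0, 1.0, 50)
    y = jnp.tanh(3.0 * x + 0.5)        # synthetic target
    params = (jnp.array(0.1), jnp.array(0.0))

    grad_fn = jax.grad(objective)      # gradient with respect to the parameters
    for _ in range(500):               # plain gradient descent
        grads = grad_fn(params, x, y)
        params = tuple(p - 0.1 * g for p, g in zip(params, grads))

    print(params)                      # should drift toward (3.0, 0.5)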

There is definitely some need for an EDSL of some sort, but I think a general method is pretty useless. Being able to arbitrarily come up with automatic jacobians for a function isn't really language specific, and usually much better results are obtained using manually calculated jacobians. By starting from scratch you lose all the language theory poured into all the pre-existing languages.

I'm sure there'll be a nice haskell version that works in a much simpler manner. Here's a good start: https://github.com/hasktorch/hasktorch/blob/master/examples/...

I think it's pretty trivial to generalize and extend it beyond multilinear functions.


Be hard on the article, but easier on the concept - I think there is a lot of potential for differential programming w/ Swift, but this article is not a good advocate.


> need for the creation of a whole new set of new tools, such as a new Git, new IDEs, and of course new programming languages

The greatest possible barrier to adoption.


how do you get formatting to work in hn?


There are just a few simple rules, see https://news.ycombinator.com/formatdoc

I would add that verbatim text is sometimes hard to read because it doesn't wrap long lines and small screens require the reader to scroll horizontally, so try not to use it for large/wide blocks of text.

Also, bullet lists are usually written as separate, ordinary paragraphs for each item with an asterisk or dash as the paragraph's first character.


appreciated!


Reading through the top-level comments, they are all a form of surface-level aversion to the unfamiliar. It really highlights that many industry trends are based on hot takes and not any sort of deep analysis. This explains why JavaScript and C++ will remain entrenched favorites despite their technical flaws.

For those who actually spent time with Swift, and realize its value and potential, consider yourself lucky that large portions of the industry have an ill-informed aversion to it. That creates opportunity that can be taken advantage of in the next 5 years. Developers who invest in Swift early can become market leaders, or run circles around teams struggling with the slowness of Python or the over-complexity of C++.

Top three comments paraphrased:

> 1) "huh? Foundation?"

but you have no qualms with `if __name__ == "__main__"` ?

> 2) "The name is pretentious"

is that an example of the well-substantiated and deep technical analysis HN is famous for?

> 3) Swift is "a bit verbose and heavy handed" so Google FAILED by not making yet another language.


You're completely misinterpreting (I'm the author of (1)).

C++ and JavaScript are languages of professional software engineers, and there are many, many more such languages with various pros and cons.

Python has been the de facto standard in scientific/data/academic programming for decades. The only other language you could say rivals it would be MATLAB, which is even more simplistic.

My point is that simplicity and clarity matter to people who don't care that much about programming and are completely unfocused on it; they are just using it to get data for unrelated research.

'if __name__ == "__main__"' is not in the example code, nor is it a required part of a Python program, so I'm not really sure what your point is here.


> Python has been the defacto standard in scientific/data/academic programming for decades

In my experience (genomics) this is simply not true. Python has caught on over the last 5 or so years, but prior to that Perl was the de facto language for genetic analysis. It's still quite heavily used. Perl is not a paragon of simplicity and clarity.


I was in academic compsci/AI from 2001-2017 and it was entirely C++ and Python in my department, except for one old-school professor who used Delphi.


Haha, there is always one :)

I feel like trying out various languages/frameworks would affect compsci labs a lot less than other fields, since the students probably have some foundational knowledge of languages and have already learned a few before getting there. Might be easier for them to pick up new ones.


At my University AI and ML were taught using Java. It was more handy to both teachers and students since most other courses used Java.


It may not be true for a niche field. It is true broadly for computation in academia.


I don't find this response convincing because:

(a) While I'm being honest that my observations are based on the fields I have experience in, there is no such justification for the claim that "It is true broadly for computation in academia" in your comment.

(b) Interpreting "niche" as "small" (especially given your "true broadly" claim): Computational genetics is huge in terms of funding dollars and number of researchers.


Have you honestly not heard of R, which had more mainstream data science mindshare than Python as recently as 5 years ago?


I have; my impression when doing an applied math degree more than 10 years ago was that Python was by far more prevalent than R. I know through my wife that Python is much more prevalent in bioinformatics and bioengineering too.

Doesn't really change my argument though: R is also a slow but simple language that is popular among academics but not professional software engineers. My whole point is that Swift is never going to be popular with academics because the syntax isn't simple enough.


>MATLAB which is even more simplistic [than Python].

The person you are replying to didn't call Python simplistic (and it certainly is not simple IMHO), they called it slow.


Python has only been around for 30 years? That's 3 decades, and maybe only during part of the last one has Python gained traction in science and data.


Hasn't been my experience. I was programming mostly in Python in 2007 in applied math, oceanography, and physics classes. It had already been established in those fields (at least at my university) in 2007, so it's been at least 15 years.


"IT is the only industry more fashion-driven than the fashion industry." ~RMS (or somebody else, I can't be bothered to look it up right now.)

It's a problem.


Larry Ellison said that.


Cheers


>4)"Google hired Chris Lattner and he forced Swift down their throat."

Does anyone force anything on Google? This seems to express little confidence in the competence of Google and their people. Perhaps Google chose Swift and brought Lattner in for his obvious expertise.


The biggest drawbacks of Swift are the legacy relationships it has to the Apple ecosystem.

Yet Swift is open source, and Apple and the community can fork it if they so choose. This is great news for me personally as an iOS developer and an ML noob who doesn't want to write Python. I can't comment on Julia because I have no experience with it, but I applaud the efforts to build the Swift ecosystem to challenge Python.

I think a lot of the criticisms so far are that it's early days for Swift in ML, and that's one point the author is emphasizing.


Just look at how successful Objective-C has been outside NeXT and Apple during the last 30 years.


I have spent 2 years of my life with Swift and I would say that I have a very well informed aversion to the language.

Some ideas in the language or parts of its core concepts are really good. First class optionals and sum types, keyword arguments, etc., I liked all of those.

Unfortunately, by and large, Swift is lipstick on a pig. I have never used any other language that regularly gave me type errors that were WRONG. Nowhere else have I seen the error message "expression is too complex to be type-checked", especially when all you do is concatenate a bunch of strings. No other mainstream language has such shoddy Linux support (it has an official binary that works on Ubuntu... but not even a .deb package; parts of Foundation are unimplemented on Linux, others behave differently than on macOS; the installation breaks system paths, making it effectively impossible to install Python afterwards[1]). Not to mention, Swift claims to be memory-safe but this all flies out of the window once you're in a multithreaded environment (for example, lazy variables are not thread-safe).

In addition, I regularly visited the Swift forums. The community is totally deluded and instead of facing the real problems of the language (and the tooling), it continues to bikeshed minor syntactic "improvements" (if they even are improvements) just so the code reads "more beautifully", for whatever that is supposed to mean.

But the worst thing is how the community, including your post, thinks Swift (and Apple) is this godsend, the language to end all language wars, and that everyone will eventually adopt it en masse. Even if Swift were a good language, that would be ridiculous. There was even a thread on that forum called "crowdfunding world domination". It has since become awfully quiet...

[1]: https://bugs.swift.org/browse/SR-10344


>Developers who invest in Swift early can become market leaders, or run circles around teams struggling with slowness of python, over-complexity of c++.

While other people will do the sensible thing and learn Rust, because it runs circles around Swift, offers many paradigms, and can be used in almost any industry, operating system and product, not just for developing apps for Apple's ecosystem.

Swift will take over the world when Apple takes over the world, which it is safe to assume will never happen.

I am not saying at all that it is bad to learn Swift and use Swift, but have correct expectations about it.


Google is working on multiplatform support for Swift, and Apple seems to be on board.


I think you misunderstood the criticism about the name "differentiable programming" and the idea that building a gradient operator into a language is somehow a breakthrough that warrants the label "software 2.0".

This is not really about Swift. Swift seems to have been chosen because its creator was there when they picked the language, even though he has since left.


I think my point stands that the criticism on this thread is mostly a surface level reaction and hung up on meaningless slogans like "software 2.0" or "breakthrough".

Your use of the word "seems" is very apt here.

Have you considered that Google might have hired Lattner precisely because he is the founder of LLVM and Swift, and they hoped to leverage his organizational skills to jump-start next-generation tooling? We know Google is heavily invested in LLVM and C++, but dissatisfied with the direction C++ is heading [0]. They are also designing custom hardware like TPUs that isn't supported well by any current language. To me it seems like they are thinking a generation or two ahead with their tooling while the outside observers can't imagine anything beyond 80s-era language design.

[0] https://www.infoworld.com/article/3535795/c-plus-plus-propos...


I'm a deep learning researcher. I have an 8 GPU server, and today I'm experimenting with deformable convolutions. Can you tell me why I should consider switching from Pytorch to Swift? Are there model implementations available in Swift and not available in Pytorch? Are these implementations significantly faster on 8 GPUs? Is it easier to implement complicated models in Swift than in Pytorch (after I spend a couple of months learning Swift)? Are you sure Google will not stop pushing "deep learning in Swift" after a year or two?

If the answer to all these questions is "No", why should I care about this "new generation tooling"?

EDIT: and I'm not really attached to Pytorch either. In the last 8 years I switched from cuda-convnet to Caffe, to Theano, to Tensorflow, to Pytorch, and now I'm curious about Jax. I have also written cuda kernels, and vectorized multithreaded neural network code in plain C (Cilk+ and AVX intrinsics) when it made sense to do so.


I've taken Chris Lattner / Jeremy Howard's lessons on Swift for TensorFlow [0][1]. I'll try to paraphrase their answers to your questions:

There aren't major benefits to using Swift4TensorFlow yet. But (most likely) there will be within the next year or two. You'll be able to do low level research (e.g. deformable convolutions) in a high level language (Swift), rather than needing to write CUDA, or waiting for PyTorch to write it for you.

[0] https://course.fast.ai/videos/?lesson=13 [1] https://course.fast.ai/videos/?lesson=14


> You'll be able to do low level research (e.g. deformable convolutions) in a high level language (Swift), rather than needing to write CUDA

Not sure I understand - will Swift automatically generate efficient GPU kernels for these low level ops, or will it be making calls to CuDNN, etc?


The first one. At least as of last year, Swift4TensorFlow's goal is to go from Swift -> XLA/MLIR -> GPU kernels.


Sounds great! I just looked at https://github.com/tensorflow/swift - where can I find a convolution operation written in Swift?


You can't. It won't be available for at least a year I'm guessing.

Even then I'm not sure what granularity MLIR will allow.

On the other hand, you can do it in Julia today. There is a high-level kernel compiler and array abstractions, but you could also write lower-level code in pure Julia as well. Check out the JuliaGPU GitHub org.


If it's not ready I don't see much sense in discussing it. Google betting on it does not inspire much confidence either. Google managed to screw up TensorFlow so badly that no one I know uses it anymore. So if this Swift project is going to be tied to TF in any way, it's not a good sign.

As for Julia, I like it. Other than the fact that it counts from 1 (that is just wrong!). However, I'm not sure it's got what it'd take to become a Python killer. I feel like it needs a big push to become successful in the long run. For example, if Nvidia and/or AMD decide to adopt it as the official language for GPU programming. Something crazy like that.

Personally, I'm interested in GPU accelerated Numpy with autodiff built in. Because I find pure Numpy incredibly sexy. So basically something like ChainerX or Jax. Chainer is dead, so that leaves Jax as the main Pytorch challenger.


I was looking around for a language to write my own versions of convolution layers or LSTMs or various other ideas I have. I thought I would have to learn C++ and CUDA, which from what I hear would take a lot of time. Is this difficult in Julia if I go through some courses and learn the basics of the language?

This would really give me some incentive to learn the language.


You could just use LoopVectorization on the CPU side. It's been shown to match well-tuned C++ BLAS implementations, for example with the pure Julia Gaius.jl (https://github.com/MasonProtter/Gaius.jl), so you can follow that as an example for getting BLAS-speed CPU side kernels. For the GPU side, there's CUDAnative.jl and KernelAbstractions.jl, and indeed benchmarks from NVIDIA show that it at least rivals directly writing CUDA (https://devblogs.nvidia.com/gpu-computing-julia-programming-...), so you won't be missing anything just by learning Julia and sticking to using just Julia for researching new kernel implementations.


In that benchmark, was Julia tested against CuDNN accelerated neural network CUDA code? If not, is it possible (and beneficial) to call CuDNN functions from Julia?


That wasn't a benchmark with CuDNN since it was a benchmark about writing such kernels. However, Julia libraries call into optimized kernels whenever they exist, and things like NNlib.jl (the backbone of Flux.jl) and Knet.jl expose operations like `conv` that dispatch on CuArrays to automatically use CuDNN.


I'm not telling you to switch. I don't think the S4TF team is telling you to switch anytime soon. At best you might want to be aware and curious about why Google is investing in a statically typed language with built-in differentiation, as opposed to Python.

Those who are interested in machine learning tooling or library development may see an opportunity to join early, especially when people have such an irrational, unfounded bias against a language, as evidenced by the hot takes in this thread. My personal opinion, which I don't want to force on anyone, is that Swift as a technology is underestimated outside of Apple and Google.


Please read the article. It answers your question pretty straightforwardly as "no, it's not ready yet."

But it also gives reasons why it shows signs of promise.

So you should get involved if you are interested in contributing to and experimenting with a promising new technology, but not if you're just trying to accomplish your current task most efficiently.


Google hopes you will be using their SaaS platform to do ML, not just use your own server. This is one of the reasons they push hard to develop these tools.


When it's cheaper for 24/7 training jobs than buying equivalent hardware from Nvidia - sure, why not.


You should probably just read the article before aggressively rejecting a premise it is not suggesting.


Your point doesn't stand because what you said was a defensive reaction to what you thought was criticism of Swift.

I think you have bought into the Kool-Aid pretty hard here. Everything you are saying is a hopeful assumption about the future.


> To me it seems like they are thinking a generation or two ahead with their tooling while the outside observers can't imagine anything beyond 80s era language design.

Given the ML and Modula-3 influences in Swift, and the Xerox PARC work on Mesa/Cedar, it looks like quite 80s-era language design to me.


Swift inherits some APIs from Objective-C.

You have to use something like CFAbsoluteTimeGetCurrent, while even in something not very modern like C# you would use DateTime.Now.


You realize that any modern C++ developer can program basically the same stuff, in the same number of lines, modulo 4 lines of #includes?

    #include <numeric>
    #include <iostream>
    #include <chrono>
    #include <vector>

    // Times 15 repetitions of filling a 3000-element vector and summing it.
    int main(int argc, char** argv) {
      for (int i = 0; i < 15; i++) {
        std::vector<int> result;
        auto start = std::chrono::system_clock::now();
        for (int j = 0; j < 3000; j++) {
          result.push_back(i);
        }
        auto sum = std::accumulate(result.begin(), result.end(), 0);
        auto end = std::chrono::system_clock::now();
        // (end - start).count() is the elapsed time in system_clock ticks.
        std::cout << (end - start).count() << " " << sum << std::endl;
      }
    }
I think you underestimate what multiple decades of coding 40 hrs a week gives you in terms of development speed.


you may have replied to the wrong thread.

But I'd like to point out that while Google has some of the top C++ experts working for them, is heavily involved in the C++ standardization and compiler-writing process, and in 2016 claimed to have 2 billion lines of C++ running their infrastructure...

... and yet they don't suffer from familiarity bias or the sunk cost fallacy I hear in your comment.

Instead, Google C++ developers are sounding an alarm over the future direction of the language and its crippling complexity:

http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p213...


Google is also known for having brain-dead guidelines for C++ that go against community best practices.

Just like with Go, their monorepo and internal tooling distort their understanding of how everyone else actually uses C++.


Google really messed up here; they had an unprecedented opportunity to create a new language for numeric computation and rally the scientific community around that. They hired Chris Lattner and he basically steamrolled that dream by forking Swift.

I don't see people running over here to write numerical libraries like you see in Julia; that's largely because of the crowd around Swift. The language is also a bit verbose and heavy-handed for what data scientists would prefer. Lattner was too close to Swift to understand this. The blame really falls on Google project management.


Google creating a new language? To shut it down in 3 years? Swift will be around in 15 years. It would be Microsoft's F#, or Google's Dart all over again. It's a monumental task to create a language that people want to use, and an even bigger task to create tools around it (IDEs, language specifications, cross-platform frameworks, package management).

I know this is a hot take but... I doubt Google has the capability, to be frank. They created Dart and Go (a.k.a. generics ain't necessary). They created TensorFlow 1, which is totally different from TensorFlow 2.

Swift may not be the best, but Swift is starting to become such a large part of Apple that it will have backing no matter the internal politics.

The language is not where the battle will be, it will be the tooling.


I would disagree. First, Go has been tremendously successful and yes, will have generics soon. The ML community really needs a better language than Python; the current alternatives (Julia, Nim, R) are alright but seem to miss the mark in this arena. I see few data scientists excited about Swift; it's too heavy-handed and deeply embedded in the Apple iOS community.

People are searching for a better language in this space and it's something that often needs corporate backing. Google is aware of this problem and hired Chris Lattner to fix it; it's just a bit of unfortunate oversight. I guess we'll keep using Python for now.


Julia is, I think, perhaps more focused on ML and data analysis, but Nim has some neat tricks up its sleeve too:

https://github.com/mratsim/Arraymancer


Nim is a language that has good performance, and I had a good experience porting an enterprise Python application to Nim (for performance gain). For a new user the risk obviously is the newness of Nim, but the Nim team was very helpful and prompt whenever I posted a question. It's a very complete and surprisingly issue-free language.

Hopefully Arraymancer will help increase its reach; I wish the implementors all the best.


I like Nim quite a bit, I would put it and Julia as the best contenders at the moment.


I don't know that the ML community necessarily _needs_ a better language than Python for scripting ML model training. Python is decent for scripting, and a lot of people are pretty happy with it. Model training scripts are pretty short anyway, so whatever language you write them in, it's just a few function calls. Most of the work is in cleaning and feature engineering the data up front.

Perhaps a more interesting question is whether the ML community needs a better language than C++ for _implementing_ ML packages. TensorFlow, PyTorch, CNTK, ONNX, all this stuff is implemented in C++ with Python bindings and wrappers. If there was a better language for implementing the learning routines, could it help narrow the divide between the software engineers who build the tools, and the data scientists who use them?


I think the ML community really needs a better language than Python, but not because of the ML part; that works really well. It's because of the data engineering part (which is 80-90% of most projects), where Python really struggles for being slow and not having true parallelism (multiprocessing is suboptimal).

That said, I love Python as a language, but if it doesn't fix its issues, in the (very) long run it's inevitable the data science community will move to a better solution. Python 4 should focus 100% on JIT compilation.


I've found it generally best to push as much of that data prep work down to the database layer as you possibly can. For small/medium datasets that usually means doing it in SQL, for larger data it may mean using Hadoop/Spark tools to scale horizontally.

I really try to take advantage of the database to avoid ever having to munge very large CSVs in pandas. So like 80-90% of my work is done in query languages in a database, and the remaining 10-20% is in Python (or sometimes R) once my data is cooked down to a small enough size to easily fit in local RAM. If the data is still too big, I will just sample it.


Is this tangential advice, or an argument that the current tools are good enough?


It's an argument that Python being slow / single-threaded isn't the biggest problem with Python in data engineering. The biggest problem is the need to process data that doesn't fit in RAM on any single machine. So you need on-disk data structures and algorithms that can process them efficiently. If your strategy for data engineering is to load whole CSV files into RAM, replacing Python with a faster language will raise your vertical scaling limit a bit, but beyond a certain scale it won't help anymore and you'll have to switch to a distributed processing model anyway.


Yep, this is the key; it's not really the data science end, it's the engineering piece.


Can you get things done in Python/C++? Sure, but the two-language problem is a well-known issue, and Python has a number of problems. People certainly want a better option, and Google investing as much as they did validates that notion.


Yes, so to me, the key question is not whether Swift can replace Python's role, but whether it can replace C++'s role, and thereby also making Python's role unnecessary and solving the two-language problem in the process.


I think we can all agree that C++ is a dragon that needs to be slain here. Swift could potentially get close to that for most of the needs, but I still wouldn't bet data scientists would write Swift.


As a data scientist, most of my projects have been individual--I'm generally the only person writing and reading my code. No one tells me which language I have to use. Python and R are the most popular, and I use either one depending on which has better packages for the task at hand. I don't use Julia because I don't see enough of a benefit to switching at this point. But I really don't care, they're just tools, and I will use any language, Julia, Swift, whatever, if I see enough of a benefit to learning it. I would just take a day or two and learn enough of it to write my scripts in it.

So I think that's the good news--because of the more independent nature of the work, you generally can win data scientists over to a new language one at a time, you don't necessarily need to win over an entire organization at once.

Getting a company or a large open-source project to switch from C++ to Swift or Rust or whatever, seems much harder.


Ideally they'd get behind a strict subset of typed Python that could be compiled the same way that Cython is. Numba, the PyTorch JIT, and JAX already handle a decent chunk of the language.
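As a rough illustration of that compile-a-typed-subset approach, here is a sketch using Numba (the function itself is made up for illustration):

    import numpy as np
    from numba import njit

    @njit  # compiles this restricted-Python function to machine code
    def pairwise_sq_dist(x):
        n, d = x.shape
        out = np.empty((n, n))
        for i in range(n):
            for j in range(n):
                s = 0.0
                for k in range(d):
                    diff = x[i, k] - x[j, k]
                    s += diff * diff
                out[i, j] = s
        return out

    x = np.random.randn(200, 16)
    print(pairwise_sq_dist(x).shape)   # explicit loops run at native speed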


Just like RPython.


RPython is not intended for humans to write programs in; it's for implementing interpreters. If you're after a faster Python, you should use PyPy, not RPython.

Numba gives you JIT compilation annotations for parallel vector operations--it's a little bit like OpenMP for Python, in a way.


I just look forward to having a proper JIT as part of regular Python, as PyPy still seems to be an underdog, and JIT research for dynamic languages on GraalVM and OpenJ9 seems more focused on Ruby, hence I kind of hope that Julia puts some pressure on the ecosystem.


What is deficient about Julia? I've been heavily tempted to try it out, but if there are problems I may invest my precious time elsewhere.


For what it's worth, I think learning Julia would be a fantastic investment. I think it made me a much better programmer because it substantially lowers the barrier between 'developers' and 'users'.

I also don't think I could ever go back to using a language that doesn't have multiple dispatch, and I don't think any language out there has a comparable out-of-the-box REPL experience.


Common Lisp.


Julia is really nice; the only problems are 1) its focus on being used from the REPL. While you can run a .jl script from the CLI, it feels wrong because of 2) its lack of a good AOT compilation/runtime/static binary option. Its JIT compiler is really good, but you pay a price on startup and first runs, hence Julians usually just have long-running REPLs they don't close. 3) The ecosystem is still immature; there are some amazing parts but also a lot of empty or poor parts still.


Community and libraries.

If I'm a person who wants to do some data science or whatever and I have very little software background, I want there to be libraries that do basically everything I ever want to do, and those libraries need to be very easy to support. I want to be able to Google every single error message I ever see, no matter how trivial or language-specific, and find a reasonable explanation. I also want the environment to work more or less out of the box (admittedly, Python has botched this one since so many machines now have a Python 2 and a Python 3 install).


Julia punches well above its weight in libraries, especially for data science, and has the best online community I've ever been a part of. Googling an error in Julia definitely won't give you nearly as many Stack Overflow hits, but the community Discourse, Slack and Zulip channels are amazingly responsive and helpful.

I think a big advantage of Julia is that it has an unusually high ratio of domain experts to newbies, and those domain experts are very helpful, caring people. It's quite easy to get tailored, detailed, personalized help from someone.

This advantage will probably fade as the community grows, but at least for now, it's fantastic.


Eh, I wrote it for a couple of years. My tl;dr: multiple dispatch is odd, types are too shallow, the JIT is really slow to boot, and the tooling is poor.


I've a lot more experience with Julia than any other language (and am a huge fan/am heavily invested). My #2 is R, which has a much more basic type system than Julia.

So -- as I don't have experience with languages with much richer type systems like Rust or Haskell -- it's hard to imagine what's missing, or conceive of tools other than a hammer. Mind elaborating (or pointing me to a post or article explaining the point)?


I found multiple dispatch to be odd at first, but after adapting my mindset a bit I really like it. It makes it really easy to just drop your functions specialized for your types into existing libraries, for example. It's a win for code reuse.

What do you mean by "types are too shallow"?

Yes, jit can be slow to boot, but I think this is an area they're going to be focusing on.

"the tooling is poor" Not sure I agree here. I think it's great that I can easily see various stages from LLVM IR to x86 asm of a function if I want to.


What version did you last use?

Boot times still aren't ideal, but I find it takes about 0.1 seconds to launch a Julia REPL now. First time to plot is still a bit painful due to JIT overhead, but that's coming down very aggressively (there will be a big improvement in 1.5, the next release, with differential compilation levels for different modules), and we now have PackageCompiler.jl for bundling packages into your sysimage so they don't need to be recompiled every time you restart Julia.

I also think the tooling is quite strong; we have an amazingly powerful type system, and I would classify discovering multiple dispatch as a religious experience.


> I see few data scientists excited about Swift, its too heavy handed and deeply embedded in the Apple iOS community.

Which is unfortunate, because it would probably be the best language if it were controlled by a non-profit foundation like Python. As it stands it's basically unusable.


Why do you think Swift would be the best language? I am doing a lot of C#, and so far have not seen anything in Swift that would make it feel better. In fact, at this moment even Java is making leaps forward, so it will quickly catch up on syntax.

And C# and Java have the benefit of a JIT VM by default, meaning you only build once for all platforms, unless you need AOT for whatever rare reason (which they also have).


I'd say the culture is very, very different. Java/C#-heads are in love with OOP and create layers upon layers everywhere, hiding as much state and as many methods as they can (you can't use this operator, you'll shoot yourself in the foot!) and rarely writing pure functions. It's just a long way from how math works.

Not saying it wouldn't work, it definitely would, but I think I'd rather switch profession than deal with Maven and Eclipse in 2020.

Swift culture is more about having immutable structs that are in turn extended via extensions, and heavy use of copy-on-write when mutable structs are needed. It's a small difference but it's there.


I fail to see how culture is related to the language.

You have a weird notion of mutable-by-default in either Java or .NET. The former is notorious for the builder pattern for that exact reason. Does Swift have special syntax for copy + update like F#: { someStruct with X = 10 }?

Never had problems with Maven. How is Swift different?

People have not been using Eclipse much for a while. There is IntelliJ IDEA for Java and Resharper for C#.


I might be wrong, but as I've understood it the builder pattern is mostly used as a solution to keep mutable state from being accidentally shared. Which is duct-taping around the complexity instead of removing it.

In Swift the copy on write happens as an implementation detail: https://stackoverflow.com/questions/43486408/does-swift-copy...

I don't really know why but the coding patterns (what I call culture) that are popular for each language are very, very different even when they can support the same feature-set.


My understanding is C#'s behavior around structs is identical, except it is not called "copy-on-write". From what I see this behavior is identical to copy-always from the language spec's standpoint. Whether an actual copy is created is down to the code generator, but the semantics are the same.


Yeah, I really shouldn't have lumped C# and Java together, sorry about that.


I've been recently wondering why something like Javascript/Typescript could not grow into the role.

* Ubiquitous.

* Not owned by a single corporation.

* Fairly performant runtime characteristics, with multiple implementations.

* Optional typing for quick explorations.

* Quite pleasant to use in its 2020 incarnation.

* The community has a proven process, tooling and track record of incrementally improving a language and its ecosystem.

Please don't shoot, I'm interested in constructive criticism.


Because JS wasn't designed for scientific computing like R and Julia are. Best case scenario is that you reimplement all the libraries Python has, but then you're just replacing Python with another generic scripting language instead of a language built for that purpose. Why would data scientists bother switching to JS when Python already has those libraries, and Julia and R have better numeric and data analysis support baked in?

And if Python, Julia and R don't cut it, then there's no reason to think another scripting language would. Instead you'd be looking at a statically typed and compiled language with excellent support for parallelism.


JavaScript is a mess of a language in general. But even if that was not true, it is definitely not designed for numerical computation, nor symbolic manipulation, nor high performance, nor reliability.

Going from Python to JavaScript is a step backwards, not forward.


A big non starter in my opinion is that JavaScript doesn't even have proper numerical types. Everything is a float.


Feels like there's too much impedance mismatch, as integers aren't a first-class citizen in JS. You can use array buffers, but... I imagine you would want precise control over numerical representations everywhere to fully auto-differentiate code.


In addition to your list here, I am drawn by ES6 on the backend. This is not javascript of the early days.

On Differentiable Programming, you might be aware of a tensorflow counterpart in javascript:

https://www.tensorflow.org/js

I tested this to a certain extent and it's not a toy. It's a well-thought-out product from a very talented team, and it has the ease of coding that we love about javascript. It can run in browsers!

That being said, we should note the strengths of a statically compiled language with the ease of installation and deployment of Go, Rust, Nim, etc. for enterprise-scale numerical computing.


I have similar thoughts. I think that Typescript could allow for a lot of the "bolt-on" type checking that I find appealing with static languages, and most of these are just interfaces to the same C/C++ framework, so there's no reason you couldn't create TypeScript bindings.


I have to agree. Google suffers from a really bad case of ADHD when it comes to creating anything for consumption by outside developers...there is a long list and Dart and GWT are just two that stand out because there are large codebases out there that suffered because of these decisions.

Frankly, I'm surprised that Go has made it this far - I mean, it's a great language, I get it, but Google is fickle when it comes to things like this.


Go succeeded because it was largely ignored by Google. Dart was the darling language which flopped miserably and has only recently become relevant.


Dart's relevancy is heavily dependent on Flutter's success.

Seeing how the Chrome team pushes for PWAs, the Android team woke up and is now delivering Jetpack Compose, and we still don't know what is going to happen with Fuchsia, the question remains.


If you know that Go was designed as a migration path away from Python, it's a bit easier to understand why it's gotten so much support from Google.


What makes you think Dart and Go have been unsuccessful?

Both are relatively young languages with rapidly growing adoption. Dart was on a downward trend but has seen a rejuvenation in the last few years thanks to Flutter.


I was looking into Dart the other day because I guess Google is insisting on using it for Flutter and… it's someone in 2011's idea of a better JavaScript, except it's worse than modern ES2020 JavaScript? Why would anyone prefer Dart to modern JS/TypeScript? It's just not as good.


Dart has gotten better, but agreed that the overall developer experience with TypeScript is far ahead - especially with vscode.


I just was looking at their code samples on the website, and I was really unimpressed with it. Why learn a new language if there’s nothing distinctive or better about it, you know? It’s just a better ES5 and a worse TypeScript.


Only if you need to ship something with Flutter, the last attempt to rescue Dart.

However it still remains to be seen how long they will fund the team.

Chrome team cares about PWAs and Android team about Jetpack Compose, and eventually having it compatible with Kotlin/Native (for iOS).

So it is still a big question mark why bother with Flutter, especially when it still lacks several usable, production-ready plugins for native features.


I wonder if using Swift will seem like a good idea in hindsight. AFAIK NVIDIA has dropped CUDA support on Mac, and outside Mac and a bit of Linux, how much support does Swift have?

Even though the article talks about the "why not Julia", which is the highest comment at time of typing... choosing a cross-compatible language would have kept more people interested in the long run. Why should I as a Windows user want to learn a language just to use it with Tensorflow, when I'm not sure if such language support and other tooling will generally come to Windows?


Yeah, it's a very closed ecosystem that is heavily Apple-centric. The Apple war with Nvidia is ridiculous; I guess they are just betting on TPUs or AMD to make up the difference. It is also strange that it's so tied to Tensorflow rather than being just a better numeric language. Julia is definitely a better option, but I still feel we can do better than that.


Swift is only 6 years old and is growing rapidly on Linux. Despite IBM bowing out, its server frameworks (Vapor and Kitura) continue their development. There are Swift compilers for many platforms.


A compiler alone doesn't make an eco-system.


I don't understand why Swift for Windows has not been updated for 2 years. I guess no one, especially Apple, cares about Swift becoming a general purpose language. For that reason alone, I'm skipping over it, although I hear interesting things about it.

https://swiftforwindows.github.io/


A quick search on the Swift forums brings up an announcement that the Swift team is going to support the Windows port, which is already mostly finished and will be available in Swift 5.3 [0] and above [1].

[0] https://swift.org/blog/5-3-release-process

[1] https://forums.swift.org/t/on-the-road-to-swift-6/32862


"Saleem Abdulrasool is the release manager for the Windows platform (@compnerd), is a prolific contributor to the Swift project and the primary instigator behind the port of Swift to Windows." Saleem's github, https://github.com/compnerd, lists swift-win32 repo, which is a "a thin wrapper over the Win32 APIs for graphics on Windows." So it's one person wrapping Win32. Not too promising yet, but it's early and there's room for Windows programmers to get involved.


Incorrect. That GitHub repository isn't the Swift port for Windows.

This is the actual port which has the CI and Installer for Swift on Windows: [0]

[0] https://github.com/compnerd/swift-build/releases


I took that text from "5-3-release-process". I'm not talking about Swift compiling on Windows, I'm talking about the GUI situation, but I'll install it and hopefully be pleasantly surprised with a full-featured GUI SDK. But don't get me wrong, a supported compiler and std lib for Windows from Apple is a fantastic start.


The next release of swift (5.3) is scheduled to have Windows support (along with more Linux distros) https://swift.org/blog/5-3-release-process/


I don't agree that Windows support affects Swift being general purpose or not.

If a Windows dev can target other platforms using Linux subsystem or containers, the only downside becomes an inability to target Windows desktops and servers, which are not hugely important targets outside of enterprise IT.


If enterprise IT is excluded I wonder what "general purpose" even means.


Enterprise IT is a user/customer category, not a purpose. And I didn't mean it to include apps. I meant that Swift is not suitable for IT depts that must use Windows servers, which is not that common anymore.


There are some very common purposes that are closely associated with enterprise IT. Writing line of business applications for Windows environments (server and/or desktop) is one of them.

But there is of course a sense in which Swift is a general purpose language as opposed to something like SQL if that's what you mean.

Unfortunately, right now Swift is not (yet) a pragmatic choice for anything other than iOS/macOS apps.


> Writing line of business applications for Windows environments (server and/or desktop) is one of them.

I disagree that this is common. Most people are on web + browser + Office these days.


So where are the Swift bindings for Oracle, Informix, SQL Server, DB2?


Web includes a very large number of Windows Server environments, either on premises or in the cloud. You really underestimate how dominant Windows is in many enterprises.


Agreed. It would have been better for them to back Julia as it was already a language for numerical computing. And it has an LLVM backend. With Google's backing Julia could be way ahead of where it is now.


I am now growing into thinking Swift may be a very good language, but it may also happen to not be the best language for anything. And I am not sure if that is a good thing or a bad thing.


Yeah, I don't hate the language; I sort of see it as a better Java. I just don't think it's right for the ML/data science community.


But the question is whether it's worth using it over Kotlin (which is also a better Java) - Swift is hamstrung because the primary implementation of its standard libraries is limited to a single vendor's platform. If you try to run it on Linux/Windows, you're forever going to be fighting compatibility quirks stemming from the fact that the main libraries only target the Apple implementation of dependencies. It's a similar situation to what Mono users were stuck in.


+100 to this.

Kotlin is a language by a small company that grew purely out of the world's sheer love for it. Even Google had to throw in the towel and officially support Kotlin. Flutter is a reflection of "creating a new language" - Dart.

Kotlin doesn't have the heavy-handed lock-in of Swift. Wouldn't Swift fundamentally be handicapped by Apple's stewardship? What if Lattner wants to add new constructs to the language?


Google had no other option given that:

1 - Many IntelliJ plugins are now written in Kotlin (Android Studio)

2 - They really screwed up with Android Java dialect and people were looking for alternatives

3 - They need an exit story for when the hammer of justice finally falls down on how they screwed up Sun


Kotlin is an interesting option, especially now that it compiles to LLVM. One of the issues I see is the mind space these languages occupy. Kotlin is deeply linked to the Android community, which oddly feels like baggage.

Julia has done a great job at marketing itself as the language built for modern numerical computing by numerical computing people. They have effectively recruited a lot of the scientific community to build libraries for it. I think the language is flawed in some deep ways, but there is a lot to learn from how they positioned themselves.


How is Julia flawed?

I like Kotlin, but the garbage collector isn't really meant for numerical computing I'd guess, and I doubt having to think about LLVM and JVM and JS at the same time is going to work out well for it when it needs such a heavy, heavy focus on performance.


From the language selection doc[1]:

"Java / C# / Scala (and other OOP languages with pervasive dynamic dispatch): These languages share most of the static analysis problems as Python: their primary abstraction features (classes and interfaces) are built on highly dynamic constructs, which means that static analysis of Tensor operations depends on "best effort" techniques like alias analysis and class hierarchy analysis. Further, because they are pervasively reference-based, it is difficult to reliably disambiguate pointer aliases.

As with Python, it is possible that our approaches could work for this class of languages, but such a system would either force model developers to use very low-abstraction APIs (e.g. all code must be in a final class) or the system would rely on heuristic-based static analysis techniques that work in some cases but not others."

[1] https://github.com/tensorflow/swift/blob/master/docs/WhySwif...


Their justification for picking Swift over Julia rings a bit false, unless one reduces it to "we're familiar with Swift, and that's why".

They can't argue for Swift over Julia due to community size, given that Julia is far more portable, and more familiar to users in the scientific domain. 'Similarity of syntax to Python' is another very subjective 'advantage' of Swift: Later in the same document they mention "mainstream syntax" - that is, Swift having a different syntax from python - as an advantage.

I wonder whether they just decided on the language in advance, which is totally fine, but we could do without the unconvincing self-justification.


Julia may be portable, but it doesn't run well at all on smaller embedded devices like a Pi or Nano, for example, and its compiler will be an issue on most mobile devices outside of terminal emulators.


So where are the Swift compilers for smaller embedded devices like a Pi or Nano?


Non-existent to my knowledge.


They seem to be saying, they could have picked Julia, but were just more familiar with Swift:

> and picked Swift over Julia because Swift has a much larger community, is syntactically closer to Python, and because we were more familiar with its internal implementation details - which allowed us to implement a prototype much faster.

I think it's very debatable to claim Swift is more similar to Python syntactically, as Julia looks more like a dynamic language to the user. Also, Julia is closer to languages like Matlab and R, which many mathematical and scientific programmers are coming from.

Swift has a much larger community, but it's not clear how big the overlap is between iOS app developers and Machine Learning developers. It probably would make deploying models on iOS devices easier, however.


Exactly. While I think that Swift is a better language than Kotlin for a couple of reasons, I'd much rather actually use Kotlin for a project.


Don't forget C# and Scala are better Javas as well.


That's where Python has been for a while. It's a good place to sit.


Well yes, and precisely because the place for 2nd best continues to belong to Python in many areas, which is the problem. It is not clear whether Swift is trying to compete with the best or the 2nd best. It seems to me to have the wrong set of trade-offs.


"The second best tool for your project!"


Well, if that’s true for every project...


IMO the first three lines of the program basically explain why academics and data programmers are never going to use Swift:

Python:

  import time
  for it in range(15):
     start = time.time()
Swift:

  import Foundation
  for it in 0..<15 {
     let start = CFAbsoluteTimeGetCurrent()
This is why people like Python:

- import time: clearly we are importing a 'time' library and then we clearly see where we use it two lines later

- range(15): clearly this is referring to a range of numbers up to 15

- start = time.time(): doesnt need any explanation

This is why academics and non-software engineers will never use Swift:

- import Foundation: huh? Foundation?

- for it in 0..<15 {: okay, not bad, I'm guessing '..<' creates a range of numbers?

- let start = CFAbsoluteTimeGetCurrent(): okay i guess we need to prepend variables with 'let'? TimeGetCurrent makes sense but wtf is CFAbsolute? Also where does this function even come from? (probably Foundation? but how to know that without a specially-configured IDE?)

EDIT: Yes everyone, I understand the difference between exclusive and inclusive ranges. The point is that some people (maybe most data programmers?) don't care. The index variable you assign it to will index into an array of length 15 the way you would expect. Also in this example the actual value of 'it' doesn't even matter, the only purpose of range(15) is to do something 15 times.


> - range(15): clearly this is referring to a range of numbers up to 15

Is it? Does that range start at 0 or 1 or some other value? Does it include 15 or exclude it?

> - start = time.time(): doesnt need any explanation

Doesn't it? Is that UTC or local time? Or maybe it's CPU ticks? Or maybe the time since the start of the program?

You've basically just demonstrated the assumptions you're used to, not any kind of objective evaluation of the code's understandability.


In the Python version, you can mostly understand someone else’s code right off the bat, even as a mostly non-technical reader.

The details (e.g. that list indexes and ranges start at 0 by default and are half-open) are consistent and predictable after just a tiny bit of experience.


"Understanding" entails knowledge of the semantics. This means understanding the meaning of syntax, and the behaviour of the functions being called. This is true of both code fragments presented, so if you find one more intuitive than the other, that's your bias and not necessarily some objective feature of the syntax and semantics.

Maybe most people share your bias, and so it could qualify as a reasonable definition of "intuitive for most humans", but there's little robust evidence of that.


The first (and most important) level of “understanding” is understanding the intended meaning. Programming, like natural language, is a form of communication. The easier it is for someone to go from glancing at the code to understanding what it is intended to do, the more legible.

I claim that it is easier to achieve this level of understanding in Python than most other programming languages. (And not just me: this has been found to be true in a handful of academic studies of novice programmers, and is a belief widely shared by many programming teachers.)

Using words that the reader is already familiar with, sticking to a few simple patterns, and designing APIs which behave predictably and consistently makes communication much more fluent for non-experts.

There are deeper levels of understanding, e.g. “have carefully examined the implementation of every subroutine and every bit of syntax used in the code snippet, and have meditated for decades on the subtleties of their semantics”, but while helpful in reading, writing, and debugging code, these are not the standard we should use for judging how legible it is.


I also agree that the Swift version is clearer, but only because it reads very much as "this is what is happening, go look up what you don't know".

Disclaimer: I use Swift, but I have also used Python.


Languages that dump crap into the namespace on import are doing the wrong thing. Python has from x import * and every style guide says to never use it. Swift has a lot of other nice features, but the import thing is really a bungle. It is worse for everyone, beginners and experienced users alike. It is even bad for IDE users because you can't type imported-thing.<tab> to get the autocomplete for just the import. You're stuck with the whole universe of stuff jamming up your autocomplete.


`import func Foundation.CFAbsoluteTimeGetCurrent` imports just that function.


I don't necessarily think Swift is more clear, just that the original argument was unjustified in claiming Python was "clearly" superior.


>Is it? Does that range start at 0 or 1 or some other value? Does it include 15 or exclude it?

This is like reading a novel that says, "And then Jill began to count." and then asking the same questions. A non-technical reader does not need to know these details. The smaller details are not required to grok the bigger picture.

>Doesn't it? Is that UTC or local time? Or maybe it's CPU ticks? Or maybe the time since the start of the program?

When is the last time someone asked you, "Know what time it is?" and you responded with, "Is that UTC or local time?" Same thing, these details do not and should not matter to a non-technical reader.

Keep in mind, the audience is non-software engineers, ranging from people who barely know how to code to people who do not know how to code but still need to be able to read at least some code.


> Does that range start at 0 or 1 or some other value?

What does range mean? Is it English? Attacking every single possible element of a language is not compelling. The de facto standard is 0-indexing. The exceptions index at 1.

> - start = time.time(): doesnt need any explanation

People often oversimplify the concept of time. However, on average, the cognitive load for Python is lower than for most languages. Certainly lower than for Swift. In one case I would look up what time.time actually did, and in the case of Swift, I would throw that code away and work in another language with less nonsensical functions, like PHP. /s


> The de facto standard is 0-indexing. The exceptions index at 1.

"Defacto standards" are meaningless. The semantics of any procedure call are completely opaque from just looking at an API let alone a code snippet, especially the API of a dynamically typed language, and doubly-so if that language supports monkey patching.

So the original post's dismissive argument claiming one sequence of syntactic sugar and one set of procedure calls is clearer than another is just nonsense, particularly for such a trivial example.

> However, on average, the cognitive load for Python is lower than most.

Maybe it is. That's an empirical question that can't be answered by any arugment I've seen in this HN thread.


> "De facto standards" are meaningless.

No they aren't. A mismatch between what is expected and what actually happens, within a specific context, contributes to cognitive load. "Intuitive" is a soft term with a basis in reality. The only language (Quorum) that has made an effort to do such analyses was largely ignored. Usability in languages exists, with or without the numbers you wish for. Swift is less usable than some languages and more usable than others.


The use of `let` to declare immutable values is well-established in programming languages. Academics have no problem with this (and, indeed, prefer it -- at least, everybody I've talked to about it in the PL research community seems to prefer it). The same or a similar form is used in Scala, OCaml, JavaScript, Lisp, Scheme, etc. Some of these languages provide mutable contrasting forms, such as `var`. Tracking mutability so explicitly allows for more advanced static analyses.

Using `..<` and `...` is pretty simple to figure out from context. The former produces an exclusively-bounded interval on the right, while the latter is an inclusive interval. This is functionality that more languages could stand to adopt, in my opinion.

I agree that the names themselves are not very transparent. However, they become less opaque as you learn the Swift ecosystem. Admittedly, this makes them not as immediately user-friendly as Python's simple names, but it's not as though they're some gigantic obstacle that's impossible to overcome.

Personally, I like Swift a lot (even though I never use it). It has a syntax that has improved on languages like Java and Python, it's generally fast, it's statically typed, and it has a large community. The fact that implicit nullable types are discouraged directly by the syntax is phenomenal, and the way Swift embraces a lot of functional programming capabilities is also great. If it weren't so tied to Apple hardware, I would likely recommend it as a first language for a lot of people. (I know that it runs on non-Apple hardware, but my understanding is that support has been somewhat limited in that regard, though it's getting better.)
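
To make the nullability point concrete, here is a tiny sketch (the greet function is made up, not from any real API):

  // A plain String can never be nil; only String? can.
  func greet(_ name: String?) -> String {
      // The compiler forces an unwrap before the value can be used.
      if let name = name {
          return "Hello, \(name)"
      }
      return "Hello, stranger"
  }

  print(greet("Ada"))   // Hello, Ada
  print(greet(nil))     // Hello, stranger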


> However, they become less opaque as you learn the Swift ecosystem

IMO that's essentially the problem. Most people* don't want to have to learn the ecosystem of a language because it's not their focus.

The other issue is that when you start googling for information about the Swift ecosystem, you're not going to find anything relevant to academic, mathematical, or data-science programming. All the information you will find will be very specific to enterprise-grade iOS and macOS development, which will be a huge turn-off to most people in this community.

EDIT: *academics


Writing off a language/syntax/toolset because you couldn’t be bothered doing < 5 minutes of searching to figure out something that will probably yield net-benefits in the future is an incredibly myopic thing to do in my opinion.


> you're not going to find anything relevant to academic, mathematical, or data-science programming

Yet.

The question is whether Google and other Swift enthusiasts can change that over time.


Like you said: people in PL research. They specifically work on researching programming languages. But that is just a tiny fraction of the academic world.


> The use of `let` to declare immutable values is well-established in programming languages.

Javascript, Swift, and VBA have let.

C, C++, Java, C#, PHP, Python, Go don't have it.

I'm also willing to bet that if you haven't studied math in English let is a non-obvious keyword.


As a 10 year old child in the early 80s:

    10 LET A$ = "Hello world"
    20 PRINT A$
    30 GOTO 10


Are academics born with Python knowledge? You still need to learn that range(10) is exclusive of the number ten, and that 'time' itself is not a function. Julia, for example, is much further from 'natural language' programming and seems quite popular.

It's more important that the language can accurately and succinctly represent the mental model for the task at hand, and the whole point of this article is that Swift can offer a syntax that is _more_ aligned with the domain of ML while offering superior performance and unlocking fast development of the primitives.


Julia is similar to matlab by design, which makes it easier for science and engineering folks who are already familiar with it.

I think functional programming advocates underrate the simplicity of procedural languages. Programming is not math; algorithms are taught and described as a series of steps, which translate directly to simple languages like Fortran or Python.

I think ML is great, but I’m skeptical if it is a big win for scientific computing.


Are algorithms and theory behind them not math themselves?


They are proven with math, but their implementation in code certainly isn't. If it were that simple, we would be using languages like Coq and TLA+ for writing software. But we usually don't, because math does not cleanly translate into usable programs; it needs a human to distill it into the necessary steps the computer must follow.


No, really, they are math themselves. Algorithms have nothing to do with implementation. The algorithms in the whole CLRS book are written in pseudocode. By your logic, Turing machines and many other models of computation are not math. Just because something is imperative doesn't mean it's not mathematics.


This is a pretty pedantic definition.

Plenty of excellent programmers are not mathematicians. How would that work if programming were just math? That’s like saying physics is just math while ignoring all of the experimental parts that have nothing to do with math.


Range is a concept from mathematics, so an academic should know it regardless if they know Python or not.

Most of the concepts in Python come from academics and mathematics, so it's an easy transition. I don't think math has a time concept in a straightforward way, so time is an edge case in Python.


Have you ever come across a bug where range(10) doesn't get to 10? Even if it is assumed knowledge, it doesn't seem to me to even approach the level of assumed knowledge of time coming from a 'Foundation' library rather than... you know... a time library.


CFAbsoluteTimeGetCurrent is a long deprecated API, so I'm not sure where that's coming from.

A current and more readable way of expressing this would be

  let start = Date().timeIntervalSinceReferenceDate
If you don't need exact seconds right away, you can simplify further to just:

  let start = Date()
which is easily as simple as the Python example.


This is not the same thing.

CACurrentMediaTime() / CFAbsoluteTimeGetCurrent() are first of all not deprecated (just check CFDate.h / CABase.h) but return a time interval since system boot so they are guaranteed to be increasing. It's just a fp64 representation of mach_absolute_time() without needing to worry about the time base vs seconds.

Date() / NSDate returns a wall clock time, which is less accurate and not guaranteed to increase uniformly (ie adjusting to time server, user changes time etc)


Oops, you're right on the deprecation point. CFAbsoluteTimeGetCurrent is not itself deprecated but every method associated with it is [1].

Also CFAbsoluteTimeGetCurrent explicitly calls out that it isn't guaranteed to only increase. CACurrentMediaTime is monotonic though.

CFAbsoluteTimeGetCurrent also returns seconds since 2001 and is not monotonic, so there's really no reason to use it instead of Date().timeIntervalSinceReferenceDate. The most idiomatic equivalent to the Python time method is definitely some usage of Date(), as time in Python doesn't have monotonic guarantees either.

[1] https://developer.apple.com/documentation/corefoundation/154...


> CFAbsoluteTimeGetCurrent is not itself deprecated but every method associated with it is.

Because they deal with calendar-related stuff that is better accessed through Date.


”CACurrentMediaTime() / CFAbsoluteTimeGetCurrent() […] are guaranteed to be increasing.”

That is true for CACurrentMediaTime, but that time stops when the system sleeps (https://developer.apple.com/documentation/quartzcore/1395996... says it calls mach_absolute_time, and https://developer.apple.co/documentation/driverkit/3438076-m... says ” Returns current value of a clock that increments monotonically in tick units (starting at an arbitrary point), this clock does not increment while the system is asleep.”)

Also (https://developer.apple.com/documentation/corefoundation/154...):

Repeated calls to this function do not guarantee monotonically increasing results. The system time may decrease due to synchronization with external time references or due to an explicit user change of the clock.


Python's time.time() call is also going to be affected by system time changes and thus not guaranteed to increase uniformly. So Date() in Swift and time.time() in Python are the same in that regard.


Correct, the appropriate function is time.monotonic().


Python's time() call is returning the unix epoch wall clock time. Newbies (and most engineers TBH) are not going to know the subtleties and reasons why you'd use a monotonic clock or to even think of using one or another.

So for this comparison, it is better to use Date().
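
Assuming that framing, a minimal timing sketch might look like this (with DispatchTime as one monotonic option if you really need it):

  import Foundation
  import Dispatch

  // Wall-clock timing, closest to Python's time.time():
  let start = Date()
  // ... work ...
  let elapsed = Date().timeIntervalSince(start)   // seconds, as a Double

  // If a monotonic clock really is needed, DispatchTime is one option:
  let t0 = DispatchTime.now()
  // ... work ...
  let nanos = DispatchTime.now().uptimeNanoseconds - t0.uptimeNanoseconds
  print(elapsed, Double(nanos) / 1e9)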


CFAbsoluteTimeGetCurrent() returns wall clock time, as far as I can tell the exact same thing as -[NSDate timeIntervalSinceReferenceDate].

https://developer.apple.com/documentation/corefoundation/154...

https://developer.apple.com/documentation/foundation/nsdate/...


"long deprecated" as in 20 years long; the CF APIs exist mostly for compatibility with Mac OS 9. The only time you really would need to use those functions nowdays is for interfacing with a few system services on Apple platforms like low-level graphics APIs and whatnot.


CoreFoundation is in no way deprecated.


You're right; I quoted the parent comment even though "deprecated" was not an accurate word choice here, sorry. CF is not deprecated because it is needed on occasion when programming for Apple platforms, but the APIs are largely obsolete.


What's amusing is that -[NSDate timeIntervalSinceReferenceDate] is actually the older of the two, going back to NeXTStep's Foundation introduced with EOF 1994 and later made standard with OPENSTEP 4.0.


0..<15 is a range of numbers from 0, up to but not including 15. 0...15 is the corresponding range including 15.

I find this notation slightly clearer than the python version. It took me some time to remember whether range(15) includes 15 or not.
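
Side by side, the two Swift range operators look like this:

  for i in 0..<3 { print(i) }   // prints 0, 1, 2 (half-open, 3 excluded)
  for i in 0...3 { print(i) }   // prints 0, 1, 2, 3 (closed, 3 included)
  // Python's range(3) corresponds to the half-open form: 0, 1, 2.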


Also, does it start with 0 or 1 (or -3023 for that matter)? As a programmer you would assume of course it starts at 0, but since this thread talks about "non-programmer" academic types I think it's worth mentioning. What if I want a range of 1-15, or 20-50, can I still use range()? I can't tell from the Python example but I can tell exactly what I would need to change with the Swift one to make it work exactly how I'd want.


Very true, and this is especially important in data science, where the majority of languages other than Python are 1-indexed (Matlab, Julia, R, Fortran).


CS student here so not much of a highly valued input.

Once I knew these two facts, it didn't add much confusion.

1. Indexing starts from 0

2. Thus, range(x) can be thought of as "from 0 up to one before x"; here x would be 15.

And I learned this pretty early and did not get confused later on.


You learned it for one language. Now imagine that you're working with a handful of languages regularly, some of which have 1-based indexing, some 0-based, some of which may have closed ranges, others half-open ranges.

If you're anything like me, you'll end up spending quite a bit of time looking up the documentation to the range operator to remind yourself how this week's language works again.


You're kind of proving you've never used Swift before. The real problem with Swift has nothing to do with the syntax or API. It has to do with the segmentation in documentation, training materials, best practices, and design patterns. The journey from 0 to "best way to do x" is awful with Swift compared to other languages. It's pretty damn telling that the best way to learn iOS dev is still piecing together random Ray Wenderlich blogs (written across different versions of Swift!).


The Swift manual is pretty good actually, and the documentation around Foundation is pretty complete, although a bit sparse, but yeah... UIKit and other libraries used for creating iOS applications are really not very well documented. The last few years I've been copying code from WWDC demos to learn about new stuff. I tried to learn how to capture RAW data when the API was just out, so there was no Stack Overflow answer yet. It was hard as hell.

But anyway that's not a Swift problem. Swift itself is pretty easy to get into.


Yeah, it is UI-bound (though you'd think that's where a lot of the documentation would be!). I'd also say JSON handling and other data handling aspects are poorly documented, would you agree?


I agree Codable is one of the worst ways of dealing with JSON, apart from manually parsing through dictionaries. I mean, It Just Works on a very clean and consistent API, but if people start to mix snake_case with PascalCase, return "1", or any other garbage people write when the only thing they have to care about is JS clients, then you're typing a lot of unreadable boilerplate.

Since we have custom attributes now, I will soon investigate whether there's a nice framework around that can make it work a bit like the usual JSON libraries in, for example, C#.
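
For the snake_case part at least, JSONDecoder's built-in key strategy covers it; a minimal sketch (the User type and its fields are made up):

  import Foundation

  struct User: Codable {
      let firstName: String   // decoded from "first_name"
      let signupCount: Int    // decoded from "signup_count"
  }

  let json = #"{"first_name": "Ada", "signup_count": 3}"#.data(using: .utf8)!

  let decoder = JSONDecoder()
  decoder.keyDecodingStrategy = .convertFromSnakeCase
  do {
      let user = try decoder.decode(User.self, from: json)
      print(user.firstName, user.signupCount)   // Ada 3
  } catch {
      print("decoding failed:", error)
  }

The mixed-type garbage (numbers as strings and so on) still needs a custom init(from:), though.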


> - start = time.time(): doesnt need any explanation

Python's no saint when it comes to time stuff either. I had some code using time.strptime() to parse time strings. It worked fine. Then I needed to handle data from a source that included time zone information (e.g., "+0800") on the strings. I added '%z' to the format string in the correct place--and strptime() ignored it.

Turns out that if you want time zones to work, you need the strptime() in datetime, not the one in time.

BTW, there is both a time.time() and a datetime.time(), so even that line that needs no explanation might still cause some confusion.


Python isn’t alone in that. Designing a date&time library is so tricky that, even after having seen zillions of languages fail and then replace their date&time library with a better one, most new languages still can’t do it right the first time.

I think the main reason is that people keep thinking that a simple API is possible, and that more complex stuff can initially be ignored, or moved to a separate corner of the library.

The problem isn’t simple, though. You have “what time is it?” vs “what time is it here?”, “how much time did this take?” cannot be computed from two answers to “what time is it?”, different calendars, different ideas about when there were leap years, border changes can mean that ‘here’ was in a different time zone a year ago, depending on where you are in a country, etc.

I guess we need the equivalent of ICU for dates and times. We have the time zone database, but that isn’t enough.


I use Python a few times a month for some simple scripting usually. Every time I have to look up how to use `range()` correctly, usually because I forgot if it's inclusive or exclusive. Academics that are used to Matlab or Julia will also have to look up if it starts at 0 or 1.

Furthermore, it's obvious what `time()` does in this context, but if I was writing this code I would _absolutely_ have to look up the correct time function to use for timing a function.


Maybe, but by that metric nobody would ever be using Java.


That's exactly my point. The only people who use Java are professional software engineers mostly working at very large companies with teams in the hundreds. Almost nobody in academia uses Java.


None of them use Python either. A lot just use MATLAB.


> None of them [academics] use Python either.

Here’s a paper by Travis Oliphant describing SciPy that has >2500 citations. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C33&q=pyt...

In many fields of science Python is already the dominant language, in others (like neuroscience), the writing is on the wall for Matlab. Approximately all the momentum, new packages, and new student training in systems neuroscience that I’ve seen in the last 5 years is in Python.


I apologize :-/ I should have been clearer. What I was referring to was a somewhat broader context than just Data Science or ML. Many of the Engineering and Math PhDs I work with typically use MATLAB or Mathematica.


It really depends heavily on the engineering field. I do work in optical/physical engineering (photonics, nonlinear optics, quantum computing) and essentially operations research (optimization theory) and almost everything we use is Python (as a C + CUDA wrapper/HDL-type thing) and Julia (which I'm trying to introduce for code reusability, even if it is only marginally slower than the former).

At least in my university, most people really do use Python + C and Julia for many, many cases and MATLAB and such are used mostly in mechanical and civil engineering, some aero-astro (though a ton of people still use Python and C for embedded controllers), and Geophysics/Geophysical engineering (but, thanks to ML, people are switching over to Python as well).

I think even these fields are slowly switching to open versions of computing languages, I will say :)


Yeah, I know what you mean. I'm in mechanical engineering (controls) and the vast majority of them still use MATLAB, but they are slowly moving towards more open computing languages. I can only consider this a great thing! :)

The issue I see is with the undergraduate curriculum in many Universities. This is where I see the legacy use of MATLAB is really hurting the future generation of students. Many still don't know proper programming fundamentals because MATLAB really isn't set up to be a good starting point for programming in general. To me, MATLAB is a great tool IF you know how to program already.


Oh yeah, it’s a killer I’m not going to lie. I have the same problem with some classes here (though I haven’t taken one in years) and it’s quite frustrating since students are forced to pay for mediocre software in order to essentially do what a normal calculator can do anyways (at least at the undergrad level).


I work in a massive research institution with a lot of medical doctors. They almost all use R if they can program. I try to encourage the use of Python to help them slowly pick up better programming fundamentals so they don't miss out on whatever the next wave is in a decade. Learning R doesn't teach you much about other languages, but IMO learning Python can help you move between languages.


> Many of the Engineering and Math PhDs I work with typically use MATLAB or Mathematica.

Yes, and the government still needs COBOL programmers.

Going forward, I believe Python has far more momentum than either MATLAB or Mathematica. I think far more MATLAB and Mathematica users will learn Python than the other way around in the future, and far more new scientific programmers will learn Python than either of those.


I really believe so too! I just hope that goes downstream to the undergrads in the fields too.


MATLAB's foothold in academia is due to legacy familiarity, cheap (but not free) academic licensing, a nice IDE, and good toolboxes for certain niches (Simulink, Control Toolbox). I used MATLAB for 12 years in academia and would consider myself an advanced user.

However, when I left academia (engineering) 8 years ago, its use was already declining in graduate level research, and right before I left most professors had already switched their undergrad instructional materials to using Python and Scilab. I observed this happening at many other institutions as well. Anecdotally, this trend started maybe 10 years ago in North America, and is happening at various rates around the world.

I'm in industry now and MATLAB usage has declined precipitously due to exorbitant licensing costs and just a poor fit for productionization in a modern software stack. Most have switched to Python or some other language. My perception is that MATLAB has become something of a niche language outside of academia -- it's akin to what SPSS/Minitab are in statistics.


I'm not denying any of this and agree with your analysis about MATLABs use. I'm just saying that it's still used a lot more than people on Hacker News like to think.

The University I work at still teaches MATLAB to new engineering students.


Oh I understand, I was more responding to your original statement "None of them use Python either. A lot just use MATLAB" which would be an unusual state of affairs in this day and age, though I have no doubt it is true in your specific situation. It's just that your experience seems circumscribed and uncommon in academia today (well insofar as I can tell -- I don't know the culture at every university).


...nobody in academia uses python? I would strongly disagree. The whole point of this Swift library is to provide an alternative to PyTorch which is clearly very popular in the community.


Being in academia myself, I have to disagree as well. Academia has its own languages and tools it prefers. They have just recently started warming up to Python.


MATLAB is used by a lot of engineers, Mathematica is used by mathematicians/physics theorists, Python is used widely by a lot of different fields.


MATLAB and Mathematica are the primary tools I see used by people at my uni. People are just starting to warm up to Python.


In bioinformatics / computational biology, Python is absolutely ubiquitous.

Same for any other field that uses ML extensively.


Academics and data programmers are not known for using Java.


Data engineers do use Java a lot though, with Hadoop, Kafka, GIS libraries, Hive, etc.


CS academics definitely use Java.


Well Java is not intended to replace python for TF.


Seeing 0..<15 without knowing the language, I think: hmm, a range from 0 to 14.


That's exactly what it is, and so is python's range(15):

    >>> list(range(15))
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
The important thing with syntax is to avoid the illusion of understanding. That's when the language user is confident that the syntax means one thing when it actually means something else. If the user is not sure what something means, they'll look it up in docs or maybe write a few toy examples to make sure it does what they think it does. Python's range() is ambiguous enough that I did this when I was learning the language. I was pretty sure it would create a range from 0 to 14, but I wanted to make sure it wasn't inclusive (0-15).

Examples of the illusion of understanding abound. These aren't all true for everyone, and HN users have been writing software long enough to have internalized many of them, but every language has them:

- Single equals as assignment. Almost every newbie gets bitten by it. They see "=" and are confident that it means compare (especially if it's in a conditional).

- x ^ y means xor, not "raise x to the power of y"

- "if (a < b < c)" does not do what newbies think it does.

- JavaScript's this.

Sometimes syntax can make sense on its own, but create the illusion of understanding when combined with another bit of syntax. eg: Python newbies will write things like "if a == b or c" thinking that it will be true if a is equal to either b or c.

The illusion of understanding is the cause of some of the most frustrating troubleshooting sessions. It's the thing that causes newbies to say, "Fuck this. I'm going to do something else with my life."


>Examples of the illusion of understanding abound. These aren't all true for everyone, and HN users have been writing software long enough to have internalized many of them, but every language has them

>The illusion of understanding is the cause of some of the most frustrating troubleshooting sessions. It's the thing that causes newbies to say, "Fuck this. I'm going to do something else with my life."

About 14 years ago (give or take up to 4 years) I read about a study done at a prestigious CS university, where entering CS students were given some tests at the beginning of the course to see who was OK with arbitrary logic and syntax and who was not. IIRC about 40% of the class would get hung up on "but why" and "it doesn't make sense" and would end up failing, while the ones who were able to cope with the arbitrary nature of things would graduate and the others would end up dropping out or changing studies.

About every couple of months I wish I could find that damn paper / study again.

NOTE: my memories of this study might also have been faded by the years, so...


Are you thinking of The Camel has Two Humps[1]? I don't think it was ever published in a journal and the author later retracted it.[2]

It seems like the conclusions of the study were overstated, but the general idea is correct: Those who apply rules consistently tend to do better at programming than those who don't. This is true even if their initial rules are incorrect, as they simply have to learn the new rules. They don't have to learn the skill of consistently applying rules.

1. http://www.eis.mdx.ac.uk/research/PhDArea/saeed/paper1.pdf

2. http://www.eis.mdx.ac.uk/staffpages/r_bornat/papers/camel_hu...


I guess it is; it certainly seems like it, although my memory was that their main claim was that the ability to handle arbitrary mental models (not completely logical ones) was the differentiator between those who succeeded and those who didn't.

And embarrassingly this thing I've gone around believing for the last 14 years isn't so.


I think where JavaScript's this is concerned, it's different than the others - the others are just little tricky bits of syntax, while this is its own little problematic area of deep knowledge in the language.

It's more like saying people don't understand all of how pointers work in C.


To be honest my thought is "what the fuck is this? It's probably 0-14, with steps of 1, but I have no idea what I would change to get steps of 0.1."
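
For what it's worth, in Swift one way to get non-unit steps is stride; a quick sketch:

  for x in stride(from: 0.0, to: 1.5, by: 0.1) {
      print(x)   // roughly 0.0, 0.1, ... up to but not including 1.5 (modulo fp noise)
  }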


I haven't read OP yet, but I don't see what the issue is here. I honestly think what you see as issues is perhaps due to a lack of exposure to a wider range of languages?

CF clearly is a prefix for a library. I'll take an educated guess it means Core Foundation? Pretty common pattern of naming things, with +/- to be certain. And once you've seen it, it is just there, and you know precisely what it means. So 10 minutes of your life to learn what CFxxx() means.

Let. I like lets. Some don't. Surely we can coexist?

x..y is also not unique to Swift. It has a nice mathematical look to it, is more concise.

Btw, is that 'range' in Python inclusive or exclusive? It isn't clear from the notation. Must I read the language spec to figure that out? .. /g


> CF clearly is a prefix for a library. I'll take an educated guess it means Core Foundation?

It does, and the prefix is only there because C's namespacing isn't great.


`let` is not a terribly hard keyword to understand, especially if you've had exposure to functional programming. Most academics I knew actually started out programming the functional way rather than OO. So I'm not sure I agree 100% with what you're saying.


It's not terribly difficult to understand `let` if you have a background in math, given that nearly every formal proof defines variables with `let x:=`


I think only academics with a background in CS will typically be familiar with functional programming.

For everyone else, they will have used something simpler like Excel, C, R, Python.


The "Foundation" and "CFAbsoluteTimeGetCurrent" are very easily fixable surface level details.

"range(15)" vs "0..<15" could go either way.

"let" vs "var" in Swift is indeed something that adds verbosity relative to Python, and adds some cognitive load with the benefit of better correctness checking from the compiler. Very much a static vs dynamic typing thing. That's where you'll see the real friction in Swift adoption for developers less invested in getting the best possible performance.


"Python is slow" argument just shows complete ignorance about the subject. (and there may be good arguments for not using python)

First of all, if you are doing "for i in range(N)" then you are already doing it wrong; for ML and data analytics you should be using NumPy's np.arange(). NumPy's arange doesn't even run in "Python", it's implemented in C. So it may even be faster than Swift's '..<'. Let me know when you can use Swift with Spark.


This is actually one of the most frustrating parts about using Python. You can't write normal Python code that performs well. Instead you have to use the NumPy DSL, which I often find unintuitive and which too often results in me needing to consult Stack Overflow. This is very frustrating because I know how I want to solve the problem, but the limitations of the language prevent me from taking the path of least resistance and just writing nested loops.


My point is that the benchmark is deceiving; again, if you are doing data analytics or ML, then you are already using numpy/pandas/scipy, so that's not a valid argument.


But it is. A good compiler could unroll my loop and rewrite it with the appropriate vector ops. But that isn’t possible with just python right now.


The way a range is defined in Swift looks scary to a programmer but immediately looks very natural as soon as you imagine you've forgotten programming and only know math.

time.time() (as well as datetime.datetime.now() and other stuff like that) always looked extremely ugly to me. I would feel better writing CFAbsoluteTimeGetCurrent() - it seems tidier and makes much more sense once you calm down and actually read it.


Python is great for scripting or rapid prototyping because of this, but I can definitely understand why someone would want a more literal language like Swift. Even in your example you can glean more information from the Swift code.


Been writing Swift for years, this is a very weak argument :)


”start = time.time(): doesnt need any explanation”

So, is that evaluated when that statement is run, when the value is first read (lazy evaluation, as in Haskell), or every time 'start' gets read? For example, Scala has all three:

  val start = time.time()
(evaluates it once, immediately),

  lazy val start = time.time()
(evaluates it once, at first use), and

  def start = time.time()
(creates a parameterless function that evaluates time.time() every time it is called)


A lot of academics write C

