Learn Python 3 the Hard Way (learnpythonthehardway.org)
354 points by jscholes on Feb 24, 2017 | 180 comments



This guy has a very strange way of promoting his work.

He first said that Py3 sucks and that he can't teach it [0]... and now he is teaching it ;-)

[0] https://learnpythonthehardway.org/book/nopython3.html


Which turned into a big internet fight, and he changed his mind.

It's good to see people willing to change the way they do things, especially when that means publicly contradicting their former selves and with such an intense personality. It's oddly inspiring.


Also, when he wrote it, it was kind of true. A lot of things hadn't been ported to Python 3 yet and using it was asking for pain. Now we are finally in the opposite situation, where many things are ending support for Python 2 and the easier route is Python 3.


He wrote his rant a few months ago, which was well after every major library switched to Python 3.

Dunno if you mean his previous book or the rant where he said, among other things, that Python 3 wasn't Turing complete.


He updated it a few months ago, but he's had versions of this for quite some time. Yes, maintaining it in late 2016 is somewhat stubborn, but it made a lot of sense in the early years of Python 3.


I have a difficult time believing that we're talking about the same rant, because the one I'm talking about would not have made any sense even in the early years of Python 3.

Oh, I see it's been updated with a sober disclaimer now, where "In the previous version I trolled people" and "I even had a note after the gag saying it was a gag, but everyone is too stupid to read that note even when they do elaborate responses to my writing."

I read his writing very carefully. This note didn't exist.


For those of you now breaking out the Internet Archive to prove the note existed:

> "Yes, that is kind of funny way of saying that there's no reason why Python 2 and Python 3 can't coexist other than the Python project's incompetence and arrogance. Obviously it's theoretically possible to run Python 2 in Python 3, but until they do it then they have decided to say that Python 3 cannot run one other Turing complete language so logically Python 3 is not Turing complete."

Even though he says it's a "funny" way to say something, his explanation of said "joke" is not a correct description of what Turing completeness is. There is no "gag" here; Zed didn't understand it, and when people on the internet corrected him, he decided to play it off as a gag.

Edit: Just to make it explicit for those of you reading who don't get it, "Turing completeness" doesn't mean "I can run a program written in X in Y's runtime." It means that a language is capable of expressing all operations a Turing machine can do, which means that you can re-write a program in X to a program in Y if both X and Y are Turing complete. You can obviously re-write a Python 2 program to be a Python 3 program, so both of those languages are Turing complete.
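A trivial made-up example of that last point (mine, not from Zed's essay); the Python 2 original is in the comments, and the rewrite is purely mechanical:

    # Python 2 version:
    #     print "doubled:", map(lambda x: x * 2, range(5))
    # The same program rewritten for Python 3:
    print("doubled:", [x * 2 for x in range(5)])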

The property he describes later, of the JVM and the CLR supporting multiple languages, has absolutely nothing to do with Turing completeness. Lisp and Javascript are both Turing complete languages, but the fact that you can't run Lisp on Node.js doesn't mean you can't re-write a Lisp program in Javascript. The fact that he's equating being able to run many languages on a single runtime with Turing completeness means he doesn't understand what he's talking about.


>> Obviously it's theoretically possible to run Python 2 in Python 3, but until they do it then they have decided to say that Python 3 cannot run one other Turing complete language so logically Python 3 is not Turing complete."

> "Turing completeness" doesn't mean "I can run a program written in X in Y's runtime."

That's not quite what he meant. He meant that Turing completeness of both languages implies that an interpreter for Python 2 can be written in Python 3, and that if the Python maintainers say it isn't possible, etc. etc.

Which is technically true, except that it is completely obvious that the performance of Python 2 under that arrangement would so totally suck that no sane developer would ever take that approach (and I can believe that the Python maintainers just said "that won't work" or some such), and that is what he apparently didn't understand (or is now claiming he pretended not to understand for comedic effect).


> "Turing completeness" doesn't mean "I can run a program written in X in Y's runtime."

It's not the definition of Turing Complete, but it is a provable property of Turing-Completeness. If Y is Turing-Complete, then you can use Y to write an interpreter for X. Then the issue is reduced to arguing about what the word "in" means in the phrase "run Python 2 in Python 3."


Actually it's reduced to the words "in Python 3" because the conflation between the Python 3.x language and the CPython 3.x runtime is something that is prevalent throughout the document.

Just because the maintainers of CPython 3 haven't included a Python 2 interpreter doesn't mean that Python 3 is not Turing complete. Their choice not to do that has nothing to do with the Turing completeness of anything.


> Just because the maintainers of CPython 3 haven't included a Python 2 interpreter doesn't mean that Python 3 is not Turing complete. Their choice not to do that has nothing to do with the Turing completeness of anything.

Which is precisely the point that the original version of the essay was making. The joke was on those insisting that it couldn't possibly work, and the punchline was telling them "oh, so what you're saying is that your language is not Turing complete?"

Ok, maybe that doesn't follow the format for a classic joke, but it's certainly a humorous and sarcastic remark.


Except that the original statement isn't really that you can't write a python2 interpreter in python3 (in fact, I'm quite sure those exist somewhere on GitHub), but that there wasn't some kind of transparent interop between py2 and py3.

That is, why can't the cPython3 implementation of the python3 language also execute python2 code? To which I think the simplest response is "why can't the gcc implementation of the C++ language spec link arm-compiled C++ and x86-compiled C++?"


> Except that the original statement isn't really that you can't write a python2 interpreter in python3.

That's why Z thinks of this as a joke, not a straight-faced assertion.


I don't know what this statement is supposed to mean, could you rephrase or elaborate?


> but the fact that you can't run Lisp on Node.js

But you can write a Lisp interpreter in Node.js, because they are both Turing complete. In fact, I believe I've seen tutorials written up on how to write a Lisp in Javascript.


Let me clarify further.

The fact that Node.js, the software package that some people use to execute Javascript, does not include in its default distribution a way to execute Lisp programs, says absolutely nothing about the Turing completeness of the Javascript programming language.

What those of you repeating this interpreter thing are missing is that there's a four-term fallacy being employed here. Zed is trying to make an argument of the form:

If A, then B. If B, then C. Not C, therefore not B.

But in reality his argument is:

If A, then B. If B, then C. Not D, therefore not B.


No, the argument being made was:

If A, then B.

If B, then C.

Some people insist that not C, therefore they are unknowingly implying not B.

But you have several comments dismantling a strawman version of the real argument, so it looks like you really didn't understand Z's intention.

His comment about people "failing to [in this case, understand] that note even when they do elaborate responses to [his] writing" looks dead-on.


And who are these "some people"? I don't see citations here identifying anybody.

Pro-Tip: They're straw men. Even if you find examples of people asserting that after the fact, the unidentified parties asserting "Python 3 cannot run Python 2" in the way that Zed claims he's responding to are only mentioned in this rant to make Zed look smarter than he is.


You don't need real people to build up a joke.



Agreed, it's not a problem of theoretical possibility. It's just that the maintainers' time can be better spent improving Python 3 than maintaining a Python 2 compatible runtime.


> You can obviously re-write a Python 2 program to be a Python 3 program, so both of those languages are Turing complete.

This is true, but the converse isn't true. E.g. Python and brainfuck are both Turing complete, but Python has interfaces around a hell of a lot more syscalls than Brainfuck, which only has operations for reading and writing stdio. You can't write a Python implementation in brainfuck; it is impossible.


No, that's just more work, because you have to write the required syscall interfaces for Brainfuck. There's nothing about Brainfuck that makes that impossible.

There's a difference between "impossible" and "really difficult giant waste of time" here when it comes to discussing Turing completeness.


Actually, how would you do a syscall in Brainfuck? Its only way to interact with the outside world is through "input byte" and "output byte", and that does not give you the ability to do arbitrary syscalls. As far as I can tell, arbitrary syscalls are impossible to do in Brainfuck (which is why that one person made a fork of Brainfuck with added syscall support).

Of course, syscalls are not related to Turing-completeness in any way. Turing-completeness means that the language can express any computation which is possible to do on a Turing machine. Turing machine computations can not have any side effects, and syscalls are a kind of side effect. Therefore, having syscalls is not necessary for a language to be Turing-complete.


> I read his writing very carefully. This note didn't exist.

If only there were a machine with which we could go way back:

https://web.archive.org/web/20161123042252/https://learnpyth...

Yes, that is kind of funny way of saying that there's no reason why Python 2 and Python 3 can't coexist other than the Python project's incompetence and arrogance. Obviously it's theoretically possible to run Python 2 in Python 3, but until they do it then they have decided to say that Python 3 cannot run one other Turing complete language so logically Python 3 is not Turing complete. I should also mention that as stupid as that sounds, actual Python project developers have told me this, so it's their position that their own language is not Turing complete.


If you check hacker news and reddit threads from the original posting, that note wasn't on the article when it was originally posted. It was only added later, but apparently still before anyone archived it.

Edit: I'm mistaken, however I'll say what I said then again:

The note does nothing to show that he actually understands the statements he's making. The note gets the definition of Turing completeness wrong too.


If you have a link proving that, I would love to see it. I completely believe you that it was added later, as it's my own recollection that there was no note.

But regardless, it doesn't change the fact that Zed wrote as if he didn't know what he was talking about.


You didn't read it carefully enough. It did exist.

It read:

"Yes, that is kind of funny way of saying that there's no reason why Python 2 and Python 3 can't coexist other than the Python project's incompetence and arrogance. Obviously it's theoretically possible to run Python 2 in Python 3, but until they do it then they have decided to say that Python 3 cannot run one other Turing complete language so logically Python 3 is not Turing complete. I should also mention that as stupid as that sounds, actual Python project developers have told me this, so it's their position that their own language is not Turing complete."

See: http://web.archive.org/web/20161123042252/https://learnpytho...


That was sarcasm aimed at the supposed impossibility of running Python 2 code in the Python 3 interpreter.


I'm sure that defense is just as valid as "we were hacked and someone wrote all of those dumb things we said in exactly our writing style."


I think Zed was in the majority when it came to Py3. He kind of comes off as a jerk due to his personality, but I really like his approach and he always makes me think. I might not agree with him but he certainly is not boring. Look at Pandas, which is an R-like library: Wes wrote his book in Py2 and did not have any desire for Py3.

I moved from Python mostly to using R, Racket and Haxe for my other side projects. This infighting just left a bad taste in my mouth and I started seeing greener pastures, which for once in my life were in fact greener. I still love Python but I don't use it much due to community drama.


> Wes wrote his book in Py2 and did not have any desire for Py3

I find it hard to believe you're citing a book that was published in 2012[1], when it made sense to still target Python 2, as relevant to today's argument. The new version is updated for Python 3.5[2].

[1] http://shop.oreilly.com/product/0636920023784.do

[2] https://www.safaribooksonline.com/library/view/python-for-da...


It's like there is no sense of time on the Internet! That was 4 years ago. People change their minds. That's a good thing.

I agree with you totally.


I am totally confused. Zed Shaw wrote in 2010 how he didn't like Python 3 and his book wasn't going to show Python 3. Then he edited the note in November 2016 restating his position again.

Sometime between November 2016 to today he decided to release a Python 3 version.

My point is that the original idea, that Python 3 was not good practice, was the majority view in 2010, and that again in 2012 a major library was released with Python 2 as its main target and the majority of the community was fine with the Python 2 focus.

Why does my timeline make no sense?

I like Zed Shaw as a member of the Python community and I was defending his stance on sticking with Python 2 (I personally supported Python 3 since its announcement).

So what is it you agree and disagree with totally?????


Don't blame the community for that. Blame Guido and the core development team. The community at large wasn't rallying for breaking changes to the language. GvR wanted that; it's "his" language after all. Of course some of the community are more conservative and were willing to fall in line more quickly to goosestep with the core team on Python3. So the tension stems from the Python3 folks having extreme disdain for all the Python2 users and codebases.

I agree with you though; it's what got me looking towards other tech like Node, Go, Elixir. I can look past it all except for the fact they got 'unicode by default' wrong. They should simply do what Go does: everything is a bytestring with the assumed encoding being UTF8. What they have today is ridiculous and needs to be changed before I could embrace Python3+.


There were definitely people wanting unicode support, which was the breaking feature.

They supported Python 2 for years and years after the release of Python 3; I don't know how you can even mention the word goosestep in this context.


I can't understand what's going on with the Python community for folks to even make statements like yours. It has to be an influx of new Python developers who started on 3.

Python2 has unicode support. Python2 already had (better) async IO with Gevent (Tornado is also there). There's just nothing there with Python3 except what I can only describe as propaganda from the Python Software Foundation that has led to outright ignorance with many users.

Some people wanted "unicode strings by default". Which Python3 does have, but they even got that wrong. The general consensus on how to handle this correctly is to make everything a bytestring and assume the encoding as being UTF8 by default. Then like Python2, it just works.


Sorry, why do you say Python 2 async IO is better? Python 3.5-3.6's new native asyncio looks great: https://maketips.net/tip/146/parallel-execution-of-asyncio-f...


> The general consensus on how to handle this correctly is to make everything a bytestring and assume the encoding as being UTF8 by default.

Whose "general consensus"? It's certainly not the approach adopted by the vast majority of mainstream general-purpose programming languages out there.


You mean most of the mainstream, general-purpose languages in use today, which were initially designed 20-50 years ago? And those which don't actively look for opportunities to shoot themselves in the face with sidestepping changes that break the language? Then no, not that consensus.

But if you were to find any sensible designer of new language with skyrocketing, runaway popularity- such as Rob Pike, they'll tell you differently. Or even Guido van Rossum, if he were to be honest with you. While Pike's colleagues at Bell Labs like Dennis Ritchie may not have designed C this way for obvious reasons, they did design Go that way.


So now it's the consensus of "sensible designers of new languages". Where "sensible" is very subjective, and I have a feeling that your definition of it would basically presuppose agreeing with your conceptual view of strings, begging the question.

Aside from Go, can you give any other examples? Swift is obviously in the "new language" category (more so than Go, anyway), and yet it didn't go down that path.


Well, do your research and come to your own conclusions. Most people are going to agree that UTF8 is the way to go. You can advocate something else, since you seem to take offense at my opposition to Python3's Microsoft-oriented implementation.

If you know anything about Swift, it was designed with a primary goal of being a smooth transition from, and interop with, ObjC, so like other legacy implementations (such as CPython3) it had sacrifices that limited how forward-looking it could be.


I'm not at all opposed to UTF-8 as internal encoding for strings. But that's completely different and orthogonal to what you're talking about, which is whether strings and byte arrays should be one and the same thing semantically, and represented by the same type, or by compatible types that are used interchangeably.

I want my strings to be strings - that is, an object for which it is guaranteed that enumerating codepoints always succeeds (and, consequently, any operation that requires valid codepoints, like e.g. case-insensitive comparison, also succeeds). This is not the case for "just assume a bytearray is UTF-8 if used in string context", which is why I consider this model broken on the abstraction level. It's kinda similar to languages that don't distinguish between strings and numbers, and interpret any numeric-looking string as number in the appropriate context.

FWIW, the string implementation in Python 3 supports UTF-8 representation. And it could probably be changed to use that as the primary canonical representation, only generating other ones if and when they're requested.


A default UTF8 string-type has to be allowed to be used interchangeably with bytestrings since ASCII is a valid subset. Your string type shouldn't be spellchecking nor checking for complete sentences either. What comes in, comes in. Validate it elsewhere.

Thus Go's strings don't potentially fail your desire for a guarantee any more than anything else would assuming UTF8. They're unicode-by-default, which was the whole point of Python3, but Go has it too in a more elegant way. That's the beauty of UTF8 by default: you can pass it valid UTF8, or ASCII since it's a subset, and the context in which it's being received is up to you. If you're expecting bytes it works; if you're expecting unicode codepoints that works. There's no reason to get your hands dirty with encodings unless you need to decode UTF16 etc. first. If there is still a concern about data validation, that's up to you, not your string type, to throw an exception.


> A default UTF8 string-type has to be allowed to be used interchangeably with bytestrings since ASCII is a valid subset.

This only works in one direction. Sure, any valid UTF-8 is a bytestring. But not every bytestring is valid UTF-8. "Use interchangeably" implies the ability to substitute in both directions, which violates LSP.

> What comes in, comes in. Validate it elsewhere.

I have a problem with that. It's explicitly against the fail-fast design philosophy, whereby invalid input should be detected and acted upon as early as possible. First, because failing early helps identify the true origin of the error. And second because there's a category of subtle bugs where invalid input can be combined or processed in ways that make it valid-but-nonsensical, and as a result there are no reported errors at all, just quiet incorrect behavior.

Any language that has Unicode strings can handle ASCII just fine, since ASCII is a strict subset of UTF-8 - that doesn't require the encoding of the strings to be UTF-8. For languages that use a different encoding, it would mean that ASCII gets re-encoded into whatever the language uses, but this is largely an implementation detail.

Of course, if you're reading a source that is not necessarily in any Unicode encoding (UTF-8 or otherwise), and that may be non-string data, and you just need to pass the data through - well then, that's exactly what bytestrings are there for. The fact that you cannot easily mix them with strings (again, even if they're UTF-8-encoded) is a benefit in this case, because such mixing only makes sense if the bytestring is itself properly encoded. If it's not, you just silently get garbage. Using two different types at the input/output boundary makes it clear what assumptions can be made about every particular bit of input.
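To put the one-direction point in concrete Python 3 terms, here's a minimal sketch of my own (not from anyone's post above):

    # Every ASCII/UTF-8 byte string decodes cleanly to text:
    print(b'hello'.decode('utf-8'))        # -> hello

    # But not every byte string is valid UTF-8, so the reverse substitution can fail:
    try:
        b'caf\xe9'.decode('utf-8')         # 0xe9 is Latin-1 'é', not valid UTF-8 here
    except UnicodeDecodeError as err:
        print('fails fast:', err)

    # And Python 3 refuses to mix the two types silently:
    try:
        'abc' + b'def'
    except TypeError as err:
        print('no implicit mixing:', err)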


I understand your position. This is a longstanding debate within the PL community, as you know. I've considered that stance before, but I have to say thanks for stating it so well, because it's given me pause. I can't say I agree but you're not "wrong". I agree with the fail-fast concept but disagree that's the place to do it. A strict fail-fast diehard probably shouldn't even be using Python, IMO. I don't have anything else useful to add because this is simply an engineering design philosophy difference. We both think our own conclusions are better of course, but I appreciate you detailing yours. Good chat, upvoted.


I agree that this is a difference caused by different basic premises, and irreconcilable without surrendering one of those premises.

And yes, you're right that Python is probably not a good example of ideologically pure fail-fast in general, simply because of its dynamic typing - so that part of the argument is not really strong in that particular context.

(Side note: don't you find it amusing that between the two languages that we're discussing here, the one that is statically typed - Go - chose to be more relaxed about its strings in the type system, while the one that is dynamically typed - Python - chose to be more strict?)

The key takeaway, I think, is that there is no general consensus on this. As you say, "this is a longstanding debate within the PL community" (and not just the topics we touched upon, but the more broad design of strings - there's also the Ruby take on it, for example, where string is bytes + encoding).

My broader take, encoding aside, is that strings are actually not sufficiently high-level in most modern languages (including Python). I really want to get away from the notion of string as a container of anything, even Unicode code points, and instead treat it as an opaque representation of text that has various views exposing things like code points, glyphs, bytes in various encodings etc. But none of those things should be promoted as the primary view of what a string is - and, consequently, you shouldn't be able to write things like `s[i]` or `for ch in s`.

The reason is that I find that the ability to index either bytes or codepoints, while useful, has this unfortunate trait that it often gives right results on limited inputs (e.g. ASCII only, or UCS2 only, or no combining characters) while being wrong in general. When it's accessible via shorthand "default" syntax that doesn't make it explicit, people use it because that's what they notice first, and without thinking about what their mode of access implies. Then they throw inputs that are common for them (e.g. ASCII for Americans, Latin-1 for Western Europeans), observe that it works correctly, and conclude that it's correct (even though it's not).

If they are, instead, forced to explicitly spell out access mode - like `s.get_code_point(i)` and `for ch in s.code_points()` - they have to at least stop and think what a "code point" even is, how it's different from various other explicit options that they have there (like `s.get_glyph(i)`), and which one is more suitable to the task at hand.

And if we assume that all strings are sequences of bytes in UTF-8, the same point would also apply to those bytes - i.e. I'd still expect `s[i]` to not work, and having to write something like `s.get_byte(i)` or `for b in s.bytes()` - for all the same reasons.
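For what it's worth, here's a rough Python sketch of that idea; every name in it is hypothetical, it's just to show what "explicit views, no default indexing" might look like:

    class Text:
        """Opaque text value: no __getitem__ or __iter__, only explicit views."""

        def __init__(self, s: str):
            self._s = s

        def code_points(self):
            # Explicit view over the Unicode code points.
            return iter(self._s)

        def get_code_point(self, i: int) -> str:
            return self._s[i]

        def utf8_bytes(self):
            # Explicit view over the UTF-8 encoding of the text.
            return iter(self._s.encode('utf-8'))

    t = Text('héllo')
    print(list(t.code_points()))   # ['h', 'é', 'l', 'l', 'o']
    print(list(t.utf8_bytes()))    # [104, 195, 169, 108, 108, 111]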


It's worth noting that I think Python's success and simplicity came from its ease of use and flexibility. It should have Go's strings; you have a point there. The type annotations are really jumping the shark too; it's just no longer the language that once made so much sense.

On the consensus on bytestrings assumed as UTF8: there's only one new language without legacy baggage that has skyrocketing popularity, and it has bytestrings assumed as UTF8. Everyone I've surveyed says that's no coincidence, including members of the Python core dev team. So that's where I'm seeing consensus on the string issue, while Python3 has struggled and folks are rightly irritated. Because some were irresponsible, everyone has to pay; that's not really how I viewed Python's strengths prior to 3.x. Zed said it best, but it really was one of the best examples of how dynamic typing can be fun.



I normally wouldn't respond to any comment that is merely a link to someone else's thoughts without something original of your own, because it means you likely don't know what you're talking about and are merely attempting to speak through someone else because you think you agree. So I will respond not for your benefit but for anyone else who is new to Python and may come across this.

I've read that before and the author is ignorant. He's parroting GvR & the CPython core development team's line that unicode strings are codepoints. Sure, but he's arguing with himself, and note the argument is Python2 vs 3. That narrow focus is what results in his tunnel vision. As a result of the argument as he frames it, Python3 is not better than Python2 in string handling, it's merely different. One favors POSIX (Linux): Python2. One favors the Windows way of doing things: Python3.

There is an outright better way to handle strings. It's what Google did with Go. How do we know it's better? Well, it is because it makes more sense on technical merits and members of the CPython core dev team have admitted that if Python3 were designed today they would go down this path. But during the initial Python3000 talks this option was not as obvious. Bad timing or poor implementation choices. Take your pick, given the runaway feature-soup that Python3 has become I'd assume both.

So like all tech, let Python3 live or die on its technical merits. That's exactly what the PSF has been afraid of, so we have the 2020 date which is nothing more than a political stunt among others. Python3 is merely different, it favors one usecase over another, but did not outright make Python better. To break a language for technical churn is and was a terrible idea.


You're right, I do lean towards agreeing with the author of the blog post. However, I wasn't (and am still not) in any way certain, and didn't want to be one of those asses you see on the internet who turn everything into a religious war. So I just put the information out there because I (in my ignorance) thought it was useful information from which intelligent people could draw their own conclusions.

Honestly, I don't care a great deal about string handling in Python and just wanted to inject (what I thought was) more information into the discussion. I'm kinda regretting that now. Lesson learned: steer clear and leave it to the experts.

I'm curious, how does Go handle strings?


Well, I'm not an expert, but in the interest of full disclosure, Guido and the CPython core dev team aren't either. They hold a myriad of excuses for their decisions, and they're all highly suspect to even a casual observer who doesn't just drink the koolaid. In the end, they'll just tell you they maintain CPython so only their opinion matters. Fair, but they're still wrong. Python3 is controversial for good reasons.[0] It's not lazy folks or whatever ad hominem is out there today. I couldn't tell your intentions given the lack of information included with your post.

Go handles strings by folding what Python3 splits into byte-strings and unicode-strings into one type. This lets you write code that generally doesn't force you to think about encodings very often, which you shouldn't have to, as UTF8 is the one true encoding.[1] Nor do you have to litter your code with encode/decode, or receive exceptions from strings (see Zed's post on some of that) where there weren't any previously. Python3 solved the emojibake problems of mixing unicode and bytes that some developers created for themselves in Python2, but did so by forcing the burden onto every single Python3 developer, and breaking everyone's code, while simultaneously refusing to engineer an interpreter that could run both CPython2 and CPython3 bytecode. Which is possible; the Hotspot JVM and .NET CLR prove it. Shifting additional burdens to the developer in situations where it's necessary makes sense. It wasn't necessary here, both because of Python's general abstraction level and because Go showed it can be solved elegantly: strings are just bytes and they're assumed to be UTF8-encoded. Everyone wins. Only Windows-specific implementations like the original .NET CLR shouldn't be UTF-8 by default, for both internal and external representation. Only a diehard Windows-centric person would disagree, or someone with a legacy implementation (Java, C#, Python3, etc.). The CPython3 maintainers fully admit they're leaning towards the Windows way of handling strings.

As you know, handling text/bytes is fairly critical and fundamental. For Python3 to get this wrong with such a late stage breaking change with no effort to make up for it with a unified bytecode VM is unfortunate. Add in the feature soup and the whole thing is a mess.

[0]http://learning-python.com/books/python-changes-2014-plus.ht...

[1]http://utf8everywhere.org/


Also, it's worth noting that the creators of Go _invented_ UTF-8.


Minor nit: s/emojibake/mojibake/


Pandas is a python library but it is based on the concept of a Data Frame from R.


Yes, it is a good thing!! I am glad he turned back from his previous decision. His book is nice.


For a detailed breakdown of why he was wrong with the "Case against Python 3", see here:

https://eev.ee/blog/2016/11/23/a-rebuttal-for-python-3/


I can't believe they actually have to debunk this

"Learning Python 3 is not in your best interest" REALLY? Learn Python 3 and you can work your way around Python 2 code if needed (except for Unicode problems which are tough even for people accustomed with Python 2 and which is a sore thumb that needs to be fixed hence Python 3)

Yes, I thought Python 2 was good and Python 3 was irrelevant. That was 5 years ago though.

Reading stuff like "The fact that you can’t run Python 2 and Python 3 at the same time"? Really? Fake news anyone?

It is possible and several people do that. Period


> This guy has a very stranger way to promote his job.

Making and promoting a controversial rant, then a couple months later making news by reversing his position? That's pretty much Zed's m.o....


If you try to ignore your bias against the person, it's actually admirable that someone can change their mind on a controversial topic instead of digging heels in stubbornly.


My first thought when reading this. I still remember this guy's crusade against python 3.


He is just dedicated to doing things the "Hard Way"


This comment is so erroneous. The fact that it's the top comment really demonstrates how toxic HN has become as a community.


>I've standardized on this version of Python because it has a new improved string formatting system that is easier to use than the previous 4 (or 3, I forget, there were many)

Wait, is this seriously the only advantage you see to Python 3, Zed?


Maybe in the context of the book it is a major reason.

Strings are ubiquitous in programming. I'd argue they matter even more for first-time programmers. Zed wanted something that would be easy to use.

(ASCII) string formatting is more important to beginners than unicode support, new-style classes by default, iterators in range()/enumerate()/etc., the new division operator, etc.

I haven't recommended his book to beginners because it was stuck in Python 2. But I have to agree that Python 2 had some stuff that was more "newbie-friendly" than Python 3
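For anyone weighing "easier to use than the previous 4 (or 3)", the main options look roughly like this side by side (the last one needs 3.6+):

    name, price = 'coffee', 2.5

    print('%s costs $%.2f' % (name, price))          # printf-style
    print('{} costs ${:.2f}'.format(name, price))    # str.format()
    print(f'{name} costs ${price:.2f}')              # f-string, Python 3.6+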


  > But I have to agree that Python 2 had some stuff that was more "newbie-friendly" than Python 3
What stuff, out of interest?

Also I suspect that a huge proportion of newcomers got very confused as to why 3/2 == 1.


To take one example

  >>> range(10)
  [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
So that's what range() does!

  >>> range(10)
  range(0, 10)
Uhhh...

I love iterators, but they're not an obvious concept at first.
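(The usual escape hatch when teaching is to wrap it in list() to get the Python 2 behaviour back:)

  >>> list(range(10))
  [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]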


    print thing    # Works in Python 2

    print(thing)   # the parentheses are optional in Python 2

    print(thing)   # the parentheses are required in Python 3
A lot of the problems I've encountered in the transition are a result of the huge number of simple examples available to newcomers which are written in the Python 2 syntax.


Saying the parentheses are optional in Python 2 is kind of inaccurate. "print(thing)" is just "print (thing)", and "(thing)" is just an unnecessarily parenthesized version of "thing". So basically it's just redundant syntax that happens to make it look like a function.

If you try to do something like "print(thing1, thing2)" in Python 2, you'll find that "(thing1, thing2)" is interpreted as a tuple, so Python prints it as a tuple (instead of printing thing1 and thing2 individually).
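Side by side (which interpreter is noted in the comments):

    >>> print("a", "b")    # Python 2: the parens just build a tuple
    ('a', 'b')
    >>> print("a", "b")    # Python 3: two arguments to the print() function
    a b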


As someone new to Python, this has been confusing. The process involves learning from some blog post, then spending time refactoring because it's old.


Yeah, it's an unfortunate added frustration for beginners, but one you can quickly overcome, once you're made aware.


Yes. But that is easy to explain. "It always rounds down"

Off the top of my head come the range/enumerate/zip returns. They are really useful and needed to teach "pythonic Python". But it's hard to explain that they don't return a list. "It returns an iterator, that is lazy, and you need to call 'list()' on it to use it as a list..." You get my point.

You fall to a "just do like I told you. You'll get it later." that plagues a lot of other languages when being taught to newbies.

But I never said that Python 3 is less "newbie-friendly" than Python 2...


> Yes. But that is easy to explain. "It always rounds down"

The explanation is easy, but it isn't simple.

With respect, "It always rounds down" is a terrible explanation. It explains the apparent behavior of the / operator in Python 2, but it does not really explain why. A better explanation is that Python is both dynamically- and strongly-typed and that / does integer division on integers and "true" division on floats. For the newbie who understands these concepts, this is a far more useful explanation and metaphor; and for the newbie who doesn't, he has to learn somewhere, and frankly the / operator is not a bad example.
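Concretely, the same expressions in both versions:

    >>> 3 / 2      # Python 2: int / int does floor division
    1
    >>> 3.0 / 2    # Python 2: a float operand gives "true" division
    1.5
    >>> 3 / 2      # Python 3: / is always true division
    1.5
    >>> 3 // 2     # Python 3 (and 2): // is explicit floor division
    1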


> / does integer division on integers and "true" division on floats.

Floats don't really get true division either:

https://docs.python.org/3/tutorial/floatingpoint.html

And as it happens, I once explained a related problem with round() to Zed Shaw on Twitter:

https://twitter.com/zedshaw/status/777780480737357824
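The two classic examples of what that page (and the round() issue) boil down to:

    >>> 0.1 + 0.2
    0.30000000000000004
    >>> round(2.675, 2)    # 2.675 has no exact binary representation, so this isn't 2.68
    2.67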


This is one reason why I go out of my way to avoid floats and instead use Decimal whenever I need to support non-integral numbers.

(and from playing around with Perl 6, I love how decimal literals are of type Rat instead of Num; I wish Python decided to do something similar in Python 3)

Edit in response to Zed's question, in 3.6 I always just do:

    print(f'{foo:.2f}')
Works with Decimals and floats.
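For comparison, the Decimal route avoids those binary-float surprises at the cost of spelling the values out as strings:

    >>> from decimal import Decimal
    >>> Decimal('0.1') + Decimal('0.2')
    Decimal('0.3')
    >>> Decimal('2.675').quantize(Decimal('0.01'))
    Decimal('2.68')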


Yeah, I'm pretty sure he'll turn up his nose at that solution as well.

For purposes of teaching the language to a beginner, he wants literals being rounded or divided to "just work" in a way that is unsurprising to those beginners, and there isn't any way to achieve that unless the default type of a number with a decimal point is Decimal rather than float (which isn't at all likely to happen).

The closest we are likely to get to that ideal is by introducing a Decimal literal like 0d2.125, riffing off the Hex and Octal literals (eg. 0xDEADB33F, 0o76510).

P.S., unless I am mistaken, you're just truncating the float, not actually rounding it


Slight correction: it always rounds toward zero. -3/2 == -2 in Python 2.


That's away from zero.


I am s-m-r-t smart! Sigh. Yes, it's rounding down. I feel a little vindicated that at least three people saw that and said, "oh, he's right. Have my upvote!"


Don't get me wrong, I wasn't having a go at you. It's just that I learnt 3 when I already had quite a handle on 2, and I'm not very good at putting myself in the shoes of a beginner.


No problem. I didn't think you were.

And I understand that it is hard for some to jump back to a newbie perspective. Some people look for performance in a programming language. Others for a thriving ecosystem. I look for usability and elegance. So I tend to look at programming languages that way.


Notwithstanding Zed, or any particular feature, the reason to move to Python 3 is that 2 is EOL in 2020. http://legacy.python.org/dev/peps/pep-0373/

Unless any fork actually gains traction, Python 3 will be Python in 2020.


Not Zed, but for me, it's the sole reason I switched from 2 to 3.

I've been writing Python for over a decade, I dragged my heels on Python 3 for a long time, and as soon as PEP 498 came out, I decided then and there that all new personal projects I start will use Python 3, and I'll try to use 3 if I can in new work projects.

Not only that, but I've fallen for f-strings so hard that all my newer personal projects specifically require 3.6.
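For anyone who hasn't tried them, the draw is that arbitrary expressions and format specs go straight into the literal:

    >>> user, balance = 'alice', 1234.5
    >>> f'{user}: ${balance:,.2f}'
    'alice: $1,234.50'
    >>> f'{balance * 2:.1f}'
    '2469.0'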


It was sane unicode support that did it for me. Unicode headaches are way down since I switched. LPTHW was my favorite intro to Python, but I was on the verge of not recommending it any more because of Python 2. This is a welcome change.


It's so strange for me to read opinions like that about Python3's string handling. Are you a newer programmer; did you start with LPTHW? You said your 'unicode headaches' went down when you switched to Python3, but that LPTHW was your favorite intro, which suggests you might not have written much Python2 and calls into question the number of headaches that Python2 caused you.

Their unicode implementation is suboptimal at best; it doesn't need to be so complex. They need to go back to the simplicity of Python2's strings but modernize it: make everything a bytestring and assume the encoding is UTF8. That's what everyone else is standardizing on.


I think he hated the old python3 string formatting and that was why he didn't cover python3 previously. The new version is good enough that he's willing to make the switch now.


To Zed's credit, he'd have been within his rights to be annoyed if he'd rewritten LPTHW without f-strings prior to their introduction. It's a little bit of a vindication of his perspective.


If this is anything like his Learn C book, I'll pass. Was a waste of money as he came off incredibly arrogant and fumbled through it.


At least the original LPtHW was quite a fundamentally different book from his Learn C book. The C book was aimed at people who knew at least the basics of programming in some other language and wanted to learn C, while the Python book was aimed more at people who had never programmed at all.



Introductory example 1: Plotting a White Noise Process

> Suppose we want to simulate and plot the white noise process ε_0, ε_1, …, ε_T, where each draw ε_t is independent standard normal


Thanks for posting this :)


Is that for quants? Seems to be good. Thanks for the link.


No kidding. Thorough, too...the PDF for the Python lectures is 862 pages!


That's great. In fact I submitted it here at https://news.ycombinator.com/item?id=13724295


Zed lost his credibility. It's hard to take him seriously after what he's said in the past. Besides, the book is rather light on content.


It's actually harmful to point newbies towards his work. There are much better resources that have been proven in practice to work better.


Such as?


Well, you can go with the actual Python tutorial, for one. Or the literally dozens of intro-to-Python books put out by No Starch, O'Reilly, Packt, Wiley, etc. You can also go with some of the classics like Swaroop's Byte of Python, or any of the college lecture notes. I hesitate to suggest this, but it's worth saying: the book that Zed shit on to launch his little franchise is still actually usable (especially just to read the code examples) and was a much much much better book in terms of pedagogy than his tedious, arrogant offerings.


Yea...flipped through Zed's book. It seemed alright, but Python Programming for the Absolute Beginner is an amazing piece of work. Crystal clear, and you get to build fun little text games.



For those of us who are late to the party, what did Zed say in the past?


Related: I love this book's teaching style, and found it very much resonated with me - starting from zero, step by step, iteratively learning and building on previous lessons.

Can anyone suggest something similar for learning Scala?


The canonical source for learning Scala from scratch (at least, when I was learning a couple of years ago) is the Coursera course by Martin Odersky, one of the language's founders. It appears that the course has grown somewhat, but you can still find part 1 here: https://www.coursera.org/learn/progfun1


I took the course - unfortunately, and to be honest, it didn't have the approachability and "fun" of Zed's books. To me it felt very dry and academic, and it quickly became a chore to complete rather than a joy. I need the version aimed at the "C- python programmer" / newbie rather than the "eager CS student" or "experienced Java dev".


  >> Coursera course by Martin Odersky
Brilliant course.

Unfortunately, the next one in the series is not nearly as good.


The books I've looked at have tended to be too dry so I learned on my own, but there are some good conference talks out there.

https://www.findlectures.com/?p=1&class1=Technology&category...


I found Scala for Data Science by Pascal Bugnion to be great when I was learning Scala.

Granted, this was for a Big Data class at the university level that was covering Spark and Scala mostly. However, I remember the book to be very clear and digestible.


Atomic Scala is what I hear recommended for starting from scratch - I can't truly speak for it myself as I knew Scala before it came out, but I liked what I saw of it.


Thanks - will check that out.



You'll either love or hate his style, but one thing you should note is that popular current production distributions are pegged to older versions (e.g. Ubuntu 16.04 has Python 3.5, Debian Jessie has 3.4).

If you're a purist and don't want to maintain your own build of Python (or trust an external PPA), I'd aim at 3.5.

That said, this is about learning... But you'll need to learn not to use f-strings if you want to use it on a current-generation LTS distribution.


pyenv is great for running whatever Python version you like: https://github.com/yyuu/pyenv

I'm a strong believer in de-coupling base system and application deployment.

And if you're a CentOS user, you can get newer Python releases from https://www.softwarecollections.org/en/scls/?search=python (an official Red Hat project)


3.6 is in the repos for yakkety as a beta, and upcoming release on zesty. Unfortunately it will likely not be the default Py 3.x until the "aardvark" release.


I'd advise against using 9m support distributions in production though. It's a super pain in the arse to have to do major upgrades that often.


9m?


Sorry. Nine months. That's all the non-LTS versions get.


Python 3.x is the present and future of the language


A language I use daily and love, but one that I fear will end up a dead end long-term. So much is going the way of speed now.


I doubt that. Most of the cited competitors to python are either seriously flawed (e.g. go) or aren't really competing on the same turf (e.g. rust/haskell).


Speaking as someone who really loves Haskell -- let's be real, it's not serious competition against Python or Rust, same turf or not. It's a fantastic language for numerical computing and its applications, and for compilers/parsers, but it's never going to be as easy to use for the layman as an imperative language.

As someone who's used Go and who has some mixed feelings about it, I'm curious -- what do you find flawed in it?


>what do you find flawed in it?

* Package management

* The weird approach to exception handling (which leads to overly verbose code)

* Lack of generics (which leads to overly verbose and somewhat hacky code)

* Vastly smaller 3rd party ecosystem

I think it works well for a very small subset of sysops-style problems (small servers, small command line tools, etc.) especially since it easily compiles into one small, portable binary, but outside of that it flounders.


Thanks for following up! Those are some of my gripes with the language too. The vendoring system/would-be package manager especially has caused me hours of headaches, and I'm glad to hear I'm not the only one who considers it broken.


Go is not flawed (I mean, of course it has issues), but it's not the same thing/area


> Go is not flawed

For most technologies (with some notorious exceptions) that boils down to a matter of opinion and intended use cases.


Well, Python is a portable layer over scientific code written in C/C++/Fortran, and a better bash. The reason projects like Pandas, Tensorflow etc. use it is because it is extremely easy to write script-like code in it that does the heavy lifting in the aforementioned languages. If you had to implement Pandas (more specifically DataFrames) in pure Python it would be extremely slow. In other words, Python is an excellent glue language. I quite often see people implement complete systems in pure Python and then realise it is too slow or too complex to be maintained. Go at least is fast and has proper concurrency, while Rust and Haskell are properly typed and supposedly safe. I am not sure if any of these are competitors of Python though.


Agreed, but with things like Crystal coming out, you can still use it as elegant glue, but it will be stupid fast too. One language for everything.


numpy on pypy may well solve that issue for python though.


PyPy isn't very fast, but I guess that plus calling out to the Fortran numpy libraries should be fine for numerical work, but now you have to worry about a JIT, a big library, a runtime... etc. With something like Crystal, Nim, or Red you get a single small binary at the end of the day and nothing else. I guess to each his own, but I honestly feel like we've overcomplicated things.


PyPy is as fast as or faster than C/C++ in some applications/benchmarks.

And with Python (or interpreted langs in general) you gain a REPL, a shorter write -> compile -> test loop, and generally speaking a higher rate of development.

I'd agree though, I do want some way to deploy a "python binary" that isn't essentially a virtualenv.


Oh yea I know all about the power of a REPL. My friends with their massively bloated IDEs can't understand my desire for a simple read-eval-print-loop. I gain simplicity and instant satisfaction. Some languages like F# heavily promote both. I didn't think PyPy could be that fast. How on earth could it ever compete with C?


Jits are magic man.

Like I said, it depends on the application. For numerical-heavy code, where you're going to be using numpy hooking into MPI or LAPACK, C or similar code will win, it takes more advantage of things.

For more complex code, say a webserver, pypy can compete and win, because pypy gets the advantage of, after the jit warms up, essentially profile guided optimization, which means that pypy can, assuming there's some kind of loop (there always is), optimize the hot code paths, and the surrounding code to abuse hot code paths, and it can optimize the hot code paths really really well, better than gcc could normally, because it has additional information (pypy knows how the code is being used at runtime, whereas gcc only knows what information is available to the compiler: types and directives really).


I'm not sure I get that. Care to elaborate, and what use cases you have for it?


OO I'd love to, but you'll have to take it with a grain of salt :)

Python is wildly popular now, and it is great for small scripts and interactive data analysis, but I don't see any real long-term strengths. It is not growing and is very slow in comparison to most languages. There are soooo many interesting projects out there. Perl6 is very immature from an implementation perspective, but is just as easy to use as Python and has waaay more power and flexibility under the hood. The OO model is more advanced, the semantics are clean and elegant, you can do real FP, easy async based off of C#, grammar-based programming... etc. Python can do a lot of that with modules, but having it all in one consistent package is nice.

On another note, there are sooo many languages that, although maybe not as flexible as Perl6, run blazingly fast. Think of Crystal (Ruby syntax with native speed and no dependencies), Dlang, Go, Rust, Red (a fast and dynamic language in a 1.3 MB runtime... you can even access a systems-language DSL... the best GUI system I've ever seen). So basically we have new languages that have the advantage of an easy syntax like Python, but are waaay faster and not mostly locked in place (Guido doesn't like feature bloat). There is also Elixir... etc.

Python isn't dead, but I don't think a JIT is enough at this stage and I'm mostly moving on. Python's hold on scientists is being challenged by Julia, which will be just as easy to use but, once again, much faster. Let's not forget that C# is actively maintained by a large and very well funded team at Microsoft and is constantly adding new features like LINQ and now things from F#, which is a great language on its own. I don't take Haskell seriously. Note that I think it is very advanced and can be quite fast, but it is also very hard for most people to understand things like Monads (just monoids in the category of endofunctors... wtf) or currying... etc.

I expect Python to remain around for a very long time (you can still find Cobol at banks and I rely on Fortran every day), but I really hope it is not still in the top 5 on the Tiobe in 10 years, unless Python 4 is a rewrite from scratch to have a few more features and run on LLVM... oh wait, we already have Nim, which is basically Python syntax transpiled to C and then compiled with gcc or clang to get ~40 years of optimizations.

My uses for python are data analysis, helper scripts, command-line apps, glue on Linux servers, glue on windows where it acts as a better CMD (I've tried powershell, but I don't gain much and often lose in speed as 8/10 ways to do something work, but are dog slow).


"All the features" is why perl is considered a write-only language. It's great for personal projects but I have yet to see any big successful perl projects because the onboarding time of learning a given perl project's chosen features is so cumbersome.


Is that serious? Lots of massive Perl projects out there, like Booking.com. If you search you can find an article on how they manage their 1-million-plus-line code base in production. It can definitely get unwieldy I bet, but you can write really good code too. I don't think having extra features makes a language bad as long as it isn't all bolted on and the community has some discipline.


A large commercial project is not the same thing. You can use whatever hideous language you want for that because you are literally throwing money at your developers to make them understand and work on the code base.

The bar is large open source projects, which indicates that it's very easy for a drive by developer to isolate a bug and fix it or implement a feature.


Perl5 by itself is a pretty large open source project, as are all the 25,000+ CPAN modules, but you're correct that I'd rather not deal with the Perl5 VM itself, as that is probably very arcane (although I've certainly never tried). That could just be because the language has 30 years of open source patches on it. Perl6, on the other hand, was designed to be stupidly easy to get into fixing the core and VM. Instead of being written in C, it is written in a limited Perl subset, "NQP" or "Not Quite Perl", that is much easier to read. They've had several blog posts about tracing an error back to the source and fixing it. I'm not sure if P6 will continue steaming ahead into a true production language, but I really hope it does as it is so nice.


PG was talking about Viaweb and dead languages in an interview I won't spend 10 minutes looking for, but he said something like:

Latin is a dead language and no one speaks it. Sure it is helpful for understanding other languages, but I never got that comparison because the computer always speaks it.

Certainly an imperfect paraphrase, but the crux is that you can always code Python, or Lisp, etc.


For those who want an alternative there is the community continuation of Python 2.8, called Tauthon:

https://www.naftaliharris.com/blog/why-making-python-2.8/


Those that go down this path will then find out they have gained nothing, and only lost time, by not updating their systems.


And many (like me) who have painfully moved to Python 3 will find that they have gained nothing important there either, other than avoiding the "end of life" blackmail.


> avoiding the "end of life" blackmail

People usually think that costs nothing, until they get bitten by it.


If you are a complete beginner in python, I would rather suggest this for python3 - www.openbookproject.net/thinkcs/python/english3e


I would basically avoid this simply from reading his idea of what makes good educational content: tests, repetition, memorization and boring predetermined projects. If you know anything about educational theory, you will quickly discover that constructivist education, or constructionism as Seymour Papert coined it, is a far more effective and motivational way to learn computer science and coding. There are basic understandings of computing which go beyond any specific language, e.g. loops or logic. You are not doing yourself any favors by memorizing the entire syntax of a language. That understanding develops while you do personal and engaging coding projects.


I tell people to `ipython; %magic`. For all of python's faults, that REPL is the gold standard and makes for a nice way to learn the language.


Python3 folks act so daft about his Turing-complete comments. ZS knows what Turing complete is and means. He was clearly trolling, saying in essence, "if the CPython team can't create interop between CPython2 and 3 code then they really botched the job.. Python3 bytecode is Turing incomplete. What a fail."

And then people attempt to troll him by saying he doesn't know what Turing complete is. Perhaps they really don't understand what they're even responding to; that's how it appears. Or it's just a willful attack because they "like Python3". Unfortunately, with Python3 this is what the Python community has been slowly devolving into: what comes off as a bunch of trolls/kids or ridiculous adults.


1. I'd really like to see proof that the people he was trolling actually existed, and if they did exist, a coherent explanation of why beginners to Python needed to be exposed to the arguments of said people through trolling.

2. Zed can troll people, and that's totally OK, but the "like Python 3" people can't troll him or counter-troll him, because it's turning the Python community into trollville. I'm guessing sort of like how Rails is a ghetto.


> I'd really like to see proof that the people he was trolling actually existed

See the user of this parent comment.


I have never asserted that Python 3 can never run Python 2. The claim is that Zed trolled some imaginary people who said this.


https://learnpythonthehardway.org/book/nopython3.html

> This document serves as a collection of reasons why beginners should avoid Python 3 as of November 22nd, 2016.

This is the first thing I get when searching "Learn python3 hard way" in Google. Maybe it's time to update this page :)

I didn't enjoy the python2 book too much, but I'll give this one a try.


Knew it was coming, but still kinda annoying right in the middle:

https://josharcher.uk/categories/lpthw/


The hard way is still the easy way though. Python is simple to learn :)


In your opinion, which one is better: Learn Python the Hard Way or Dive Into Python 3?

http://www.diveintopython3.net


A lot of people dislike Zed Shaw's books (he's the author of the Learn ___ the Hard Way series) for various reasons, and he has posted some things online that seriously call into question his credibility (namely, claiming that Python 3 is not Turing complete).

Dive into Python 3 is a good book. I used it to supplement my CS classes in early college. There are some other good book suggestions on the sidebar of https://www.reddit.com/r/Python/ . In particular, I've heard good things about "Automate the Boring Stuff with Python".


It's a great book that still has relevance, amazingly. There's also the fact that he feels like it's necessary to shit on anyone who disagrees with him... or anyone at all, actually. The trolling thing is lame and a total waste of people's time, like, for example, when he literally ate something in the middle of his lecture at PyCon, as if he somehow couldn't find the time to eat a candy bar in the elevator. I literally worked PyCon, and I can guarantee you that speakers, and pretty much anyone else, can find the two minutes to choke down an orange or whatever before they go up to speak. It's his persona, which he uses to get notoriety, which he uses to get work.


Learn Python The Hard Way was the first programming book I ever read. Beats the hell out of Codecademy and all that other shit.


Is this complete or a work in progress? Because if you try to buy it, you can still only buy the 3rd edition (with a free upgrade to the 4th edition).

So, to be clear: is the online version complete, with only the PDF and videos missing, or is even the online version still a work in progress?


Does anyone have any advice on where to go after beginner/intermediate Python? I can hack together a program with the functionality I want, but frequently it's pretty verbose and the architecture gets creaky as the complexity increases...


I'd suggest http://www.obeythetestinggoat.com/ if you're looking to use Python for web development.


For a beginner who wants to learn Python, which one is better — 2 or 3?


3. When you need to work with a Python 2 codebase, the learning curve won't be big at all, whereas if you start with 2 and then move to 3 you will have to unlearn a bunch of stuff, which is just a waste of time. Start with 3 and then learn just the differences in 2 when/if you need them.


Agreed. Just as a for-instance: in Python 2, the parentheses on a print call are optional. In Python 3, they're not. Going from 3 to 2 you probably won't even notice. Going from 2 to 3... to this day I often fail to type those parentheses.
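A minimal illustration of that difference:

    # Python 2: both forms work
    print "hello"      # print is a statement
    print("hello")     # parentheses just group a single value

    # Python 3: only the function-call form is valid
    print("hello")     # fine
    print "hello"      # SyntaxError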


3. 2 is officially End of Life in 2020. http://legacy.python.org/dev/peps/pep-0373/


3, but when you find some snippet on the web that you want to try and it doesn't work, try not to forget that it may have been written in Python 2 syntax.
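A few of the usual suspects that give away a Python 2 snippet, besides the print statement (illustrative only, not exhaustive):

    3 / 2                  # 1 in Python 2 (integer division), 1.5 in Python 3
    raw_input("name? ")    # Python 2 only; renamed to input() in Python 3
    d = {"a": 1}
    d.iteritems()          # Python 2 only; use d.items() in Python 3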


Definitely 3.


If you want to learn a language efficiently, work on a project with it. I went through Learn Python the Hard Way pretty fast but my retention wasn't that great.


Definitely 3 and almost certainly not from this book.


Define "better".


Learn Go instead.


Learn Go. It depends what you want to do with said skills, but JS ES6 would be worth it too in the web space.

I wouldn't recommend learning Python at this point, with the 2/3 split and uncertainty. If you must learn Python, I'd learn 2.7. The reasoning is that almost all the jobs are still Python 2. There's just so much code out there based on it, and you're almost certainly going to have to use it anyway. Don't listen to the haters; it's what will pay your bills. Move to Python 3 when/if your job does. They'll probably move to Go instead.

For hobby work, not remotely considering a job? Then learn Elixir.


For a beginner trying to decide between Python 2 and 3, 'learn Go' doesn't seem like useful advice. One good thing about Python is the gobs of educational material available, much of it free. If the person is looking to make this a career, they'll end up learning a bunch of languages as a matter of course anyway.


With Python3 not likely to be one of them.

I answered his question on 2 vs 3 if you read what I said.


Would a tutorial like this really be worse if it used realistic situations?


I'm not sure I would recommend learning Python to anyone these days. Go is just so much nicer and almost as easy, and also avoids the inevitable GIL & speed problems.


If you are doing AI or machine learning, it's all Python. Other languages have some support, yes, but Python is it. Alexa does not support Go.

> https://www.tensorflow.org/


You mean the API that you are calling is in Python. I think the heavy lifting is mostly C/C++.


Not to mention that Jupyter (formerly IPython) is a gorgeous way to work through a dataset while describing your process to your peers.


Unfortunately this is true. For one thing, Go doesn't have a REPL, which is a major problem for data science/AI/scientific programming. And it's very committed to the old-style procedural/imperative paradigm.

NumPy is carrying Python, and its competitors are thin. There's Julia, but that's not really ambitious enough compared with NumPy to be worth moving to. There's Scala, though then you need to love the JVM. Lua had promise as a nice and tight, pragmatic and fast language, but it seems to have missed its window of opportunity. What else? I'd genuinely like to know.

Ideally I'd like something with a functional programming flavour that is also data-parallel, almost R-style, by default, so it can easily target the GPU. I took a look at Futhark but it seems too experimental, as does the OCaml/SPOC combo. I am taking a serious look at the probabilistic programming area with Edward or Stan, but then we're back to Python front ends... Actually, I did look at Swift, which offers C-class speed with an official REPL, but an Apple-supported language may not attract the science/ML crowd. Meanwhile, Rust is REPL-less for now...


This whole thread.

"This post is an interesting approach to B. I have looked in to A and B, but if you could explain why you chose A or B that would be helpful"

> C



