
Yup, at this point there is no reason not to use Python 3 for new projects. What I'm actually finding is that some companies with their codebase in Python 2 have started migrating to Golang rather than Python 3, because of the performance benefits.

I'm kind of sad about this, but I think that Golang will eventually replace Python as the go-to back-end server language (maybe even data processing).




As much as I like Go in general, there are two pythonic things that, IMO, really should go into the next major revision.

1: sets; in this day and age, not having sets as native, stdlib-provided data types is simply unacceptable.[ß]

2: The ease of python's "in" keyword; being able to test for a key in a map (or set!) without any indirection is crucial.

ß: granted, IIRC python moved sets into native stdlib types only somewhere in 2.x but better late than never


Sets don't need to be native in languages that are both fast and have a proper type system. For example, Rust and OCaml have a standard set type in their stdlibs, but it's not built in. Of course, Go's lack of generics makes it impossible to implement a decent general-purpose set type in it.


I wouldn't hold up OCaml as an example of how to structure your standard library; the lack of standardisation in its standard library is one of the remaining weak points of an otherwise excellent language. For those who don't know, there are multiple standard libraries for OCaml, each with different trade-offs, leading to unnecessary fragmentation.


Agreed, some things are still missing (in particular my pet peeve, iterators; also unicode strings). But for what it provides, OCaml's stdlib is reasonably well designed and performs well.


Set literals are nifty. If they're in stdlib rather than built-in, they can't be literal syntax.


Unless the language has macros.


Touché


> granted, IIRC python moved sets into native stdlib types only somewhere in 2.x but better late than never

Built-in set objects were added in Python 2.4:

https://docs.python.org/2/whatsnew/2.4.html#pep-218-built-in...


Sets, if you mean a collection that contains at most one of each element, are easy. Use a map with struct{}{} values. If ordering is important, it's ~30 lines of simple code to encase a slice and a map in a struct and define your desired interface.

Testing for a key's existence is easy as well. Accessing a map can return two values: the first is the value itself, and the second is that "exists" boolean you're asking for. It's true when the key is present in the map and false if not. Often you'll see the second value omitted, but when you need to test for existence, it's there for you to use.
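
A minimal sketch of both ideas (the StringSet name and its methods are mine, not from any stdlib or library):

  package main

  import "fmt"

  // StringSet is the map-as-set idiom: the keys carry membership,
  // and the zero-byte struct{} values cost nothing.
  type StringSet map[string]struct{}

  func (s StringSet) Add(v string)           { s[v] = struct{}{} }
  func (s StringSet) Contains(v string) bool { _, ok := s[v]; return ok }

  func main() {
      s := StringSet{}
      s.Add("go")

      // The two-value ("comma ok") form of a map access: the second
      // result is true exactly when the key is present.
      if _, ok := s["go"]; ok {
          fmt.Println("found it")
      }
      fmt.Println(s.Contains("rust")) // false
  }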


Testing membership is the easy part of sets. What about subset, superset, union, intersection, difference? So many problems are trivial thanks to sets.

I'd sooner give up regex support in python than the set type; that is how useful they are.
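
To be fair, each of those is only a few lines over the map-as-set idiom; the problem is that every Go codebase has to re-derive them. A sketch of the boilerplate (building on a StringSet map type as in the parent comment; again, none of this ships in the stdlib):

  type StringSet map[string]struct{}

  // Union returns everything in a or b.
  func (a StringSet) Union(b StringSet) StringSet {
      out := StringSet{}
      for k := range a {
          out[k] = struct{}{}
      }
      for k := range b {
          out[k] = struct{}{}
      }
      return out
  }

  // Difference returns everything in a that is not in b.
  func (a StringSet) Difference(b StringSet) StringSet {
      out := StringSet{}
      for k := range a {
          if _, ok := b[k]; !ok {
              out[k] = struct{}{}
          }
      }
      return out
  }

  // SubsetOf reports whether every element of a is also in b.
  func (a StringSet) SubsetOf(b StringSet) bool {
      for k := range a {
          if _, ok := b[k]; !ok {
              return false
          }
      }
      return true
  }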


This repository was named to emphasise the comedy of its existence :) https://github.com/frou/poor-mans-generics/


Ouch!

  type Strings map[string]stdext.Unit
  […]
  func (a Strings) Intersection(b Strings) Strings {
  	result := NewStrings()
  	for as, _ := range a {
  		for bs, _ := range b {
  			if as == bs {
  				result.Add(as)
  				break
  			}
  		}
  	}
  	return result
  }



Ick, good point. Since the fake set is a map, that should use key lookup?


Yes. Basically:

  for k := range s1 {
    if s2.Contains(k) {
      result.Add(k)
    }
  }


Yes, the boolean group operations make many otherwise non-trivial problems remarkably easy. That's why I consider sets such a crucial feature.


Could you give some examples please?


Sure, let's take a really simple one.

You have N groups of something (persons, sales units, cars, tools, whatever). Find the values that are present in all of them. This problem comes up all the time, in some form.

With sets it's trivial. Take the first set, and calculate a cascading intersection with every other set. At the end you're left with only the values that appeared in every one. In very clear python the code might look like this:

    common = ALL_SETS[0]
    for _set in ALL_SETS[1:]:
      common = common.intersection(_set)
    return common

Now, you can argue that under the hood that's still going to do N expensive iterations, and you'd be right. This is where proper optimisations come in: when provided by a robust stdlib, the set operations can be implemented with bitmaps.[0, 1] These are not something you'd see in a casual implementation.

Btw, in the above code example I have deliberately omitted a trivial optimisation. Let's leave that as an exercise for the reader. :)

0: http://roaringbitmap.org/about/

1: https://en.wikipedia.org/wiki/Bit_array
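
To make the bitmap idea from [0, 1] concrete, here is a toy sketch in Go: membership is one bit per element of a fixed integer universe. Real libraries like Roaring are far more sophisticated; this is just the core trick.

  // Bitmap is a toy set over integer IDs in [0, 64*len).
  type Bitmap []uint64

  func (a Bitmap) Add(id int)           { a[id/64] |= 1 << uint(id%64) }
  func (a Bitmap) Contains(id int) bool { return a[id/64]&(1<<uint(id%64)) != 0 }

  // Intersect ANDs the words together: 64 membership tests per
  // machine instruction, which is why bitmap-backed sets are fast.
  func (a Bitmap) Intersect(b Bitmap) Bitmap {
      n := len(a)
      if len(b) < n {
          n = len(b)
      }
      out := make(Bitmap, n)
      for i := range out {
          out[i] = a[i] & b[i]
      }
      return out
  }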


Examples like this really make me miss fold.

    import functools
    return functools.reduce(lambda x, y: x & y, ALL_SETS)


If sets are a good match for your problem (and I agree, they tend to be for many common ones), you definitely should use them in Go. Just write the fucking code.

Even if you use the most obvious, naive code to implement fundamental set operations in Go, the resulting program is probably still faster than whatever you're going to write in Python.

In the absolute worst case, you need to generate an extra copy to use with some other type. Mildly annoying for sure, but easily dealt with.


If a hundred people need to write that code anew a thousand times, somebody's going to screw it up. Widely used and heavily tested code is how we get a more powerful foundation to build on.

If the answer is to generate the Go, then we should be having human beings focus on the more powerful language controlling that generator.


Yes, you can hack sets into almost any modern language, but it mostly involves writing annoying-looking boilerplate or pulling in external libraries.

I think parent is looking for a native set type (which I agree with). I don't know if it has a place in golang, but I always sigh and roll my eyes when I have to implement a set in most languages.


I'm currently using Go (and Python - we did go from Python 2 to Go, but later continued using Python 3) at work, but for all of the hoopla about its concurrency, the reality is I'd rather have Rust's superior runtime safety and metaprogramming. A few things have impeded Rust adoption at work:

- Learning curve. Rust is not as bad to learn as C++, but any language with strong metaprogramming and unfamiliar or relatively new language concepts is bound to present stumbling blocks.

- Ecosystem. The Go ecosystem by and large is awesome. Rust's is getting up there, though. It'd be really cool if Mozilla's work on Servo meant there would be pure-Rust HTML rendering and JavaScript interpreter code! So far, I'm impressed with minimal but powerful libraries like Iron.

- Maturity. Rust itself is pretty good, even nightly feels stable, but the developer tools are still shaping up. Until recently, rustup crashed on my Win10. Things are a lot better today, especially with the Rust Language Server.

- Marketing. Rust has a ways to go on its marketing. This has improved recently as well and I'm glad. Some underestimate the importance of having good PR for a programming language - people need to be really confident to adopt a programming language for critical components, and Go is in a place where many would trust it just because of its reputation. Also good for convincing your coworkers to give it a shot.

That being said, while Go makes concurrency easy and generally acts as a great, simple language to develop services in, I really feel as though Rust's potential is enormous. The 'simplicity debt' of Go can be taxing sometimes.

I still love Go, though. I am just struggling to figure out which tool to use for which jobs. Though there are large areas of overlap, there are definitely things that feel nicer in each than in the other.


I kind of like the idea of Rust, but I don't really have much belief in complex programming languages. C++ only has a large following because it piggybacked on C, which people already knew, and then it gradually grew in complexity.

It is quite a different challenge to get people to use an entirely new language with high complexity.

Companies are going to be negative towards adopting a language with a steep learning curve for its employees. That represents too much risk and money.

I think Swift is a much better alternative for those who want something like Rust. It's much easier to learn, and you still get native code and strict typing without the need for a garbage collector.

People also know they can trust it in the sense that they know Apple supports it and all Apple development is switching to it. That takes away a big risk factor for companies.


I don't think that's why C++ is popular. I think that C++ starting as "C with Classes" is what got it off the ground, but if it were just that it wouldn't be around today. Similarly, Objective-C doesn't enjoy much adoption outside of its creator's domain, despite the obviously more elegant integration of object-oriented programming.

I think, rather, C++'s brand of metaprogramming ended up being a big deal. C++ template metaprogramming and now constexpr allow for doing a lot of work at compile time, allowing higher-level constructs to compile away to virtually nothing.

That it's compatible with C definitely helps still though, since it means you can still use C libraries and APIs, including the APIs of most operating systems.

In C++ there were programming styles that were "memory safe," relying on a combination of stack-based lifetimes and 'smart' pointers. While in practice most C++ code is very far from memory safe, I'd say this alone vastly improved software quality over C for big projects.

Rust represents an opportunity to do all the things C++ did right in a fresh language that is actually memory safe and doesn't suffer from C++'s syntax. To me it's no surprise it was developed by the same people who write browsers: these are people who "get" it when it comes to huge, mission-critical codebases. The complexity of Rust isn't a bug, it's a feature.

Go's simplicity is its biggest strength and weakness, in my opinion. Code generation isn't cutting it as a substitute for good low-cost abstractions. Don't you hate ending up in scenarios where you wish things could be typesafe, but you realize that to get there you'd need to write a codegen? E.g. there's a typesafe GraphQL library for Rust, but the equivalent library in Go will drop you back to interface{} constantly.
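
Concretely, the kind of thing I mean (a made-up response shape, not any particular library's API):

  package main

  import "fmt"

  func main() {
      // With interface{}-typed results, every field access is a
      // runtime type assertion the compiler can't check for you.
      data := map[string]interface{}{
          "user": map[string]interface{}{"name": "ada"},
      }
      user, ok := data["user"].(map[string]interface{})
      if !ok {
          fmt.Println("unexpected shape, discovered only at runtime")
          return
      }
      name, _ := user["name"].(string)
      fmt.Println(name) // "ada"
  }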

I would implore people to try to look past the instinct that simpler is always better. There's a balance to strike that yields the best of 'simple' and 'featureful', and I personally believe Rust is closer to it than Go or C++.

I will still be coming back to Go and look forward to Go 2.0, but it'll be hard to look past some of the shortcomings when I'm writing performance critical code and wish I had more safety and control.

(Sidenote: I have a performance-critical piece of code at my work written in Go, and it ends up using a ton of memory as a consequence of the GC. Go's low-latency GC is cool, but there are definitely trade-offs.)

(Sidenote 2: I wasn't enticed by Swift enough to give it a try, so I have no comments to make on Swift. I'm sure it's fine. It doesn't seem to be what I'm looking for.)


For network glue, Go is a wonderful and super useful language. Not just in how efficient it is, or how easy FFI calls are, but also in how easy it is to deploy.

I don't see Go replacing anything that isn't a lightweight server or a really small data processing script, though. Anything involving TensorFlow or NumPy of any notable complexity will probably remain Python, AFAICT.


Well, I wouldn't call Kubernetes less complex, and it's still written in Go. I think stuff like that is where Go shines.


The key question is how many lines of code Kubernetes would have if Go were an expressive language, but I don't have big hopes for a hypothetical Go 2.0.


But at the same time, Python is the fastest-growing language (https://stackoverflow.blog/2017/09/06/incredible-growth-pyth...) because of data science and AI, and I don't see Go replacing Python in these domains anytime soon.


However, Python is being attacked by new languages in all the areas where it is used; Go attacks just one of them.

Julia is a much better language for data science than Python. It was essentially designed for it, while for Python it was an afterthought.

Python has a head start today with a large selection of packages and mindshare, but as we've seen with Go, I think that will start eroding in the future.

Python simply can't keep up with the development pace and possibilities offered by Julia. For data science and AI packages in Python to have acceptable performance, they need to be written in C/C++, and this means slow development. Julia has high enough performance that all these packages can be developed in Julia itself. And this is what people in this area are observing: package development is moving much faster in the Julia community than in Python's.

You can also combine these packages in ways that are simply impossible to do in Python. The need to interface with C/C++ creates a lot of artificial restrictions. E.g. processing lots of data fast requires big NumPy buffers with plain data types.

In Julia you can, e.g., create an image buffer made up of color objects and process it fast.

Unlike Go, Julia offers the ability to transition gradually from Python, since Python packages can be used from Julia.


To me, Julia feels clunky as all hell. The other thing is that Julia doesn't help for deep learning, where something like 99.99% of the computation time is actually CUDA kernels.

I don't know; I actually really like the optional typing of Julia in theory. I don't know why I find the language so unpleasant.


Agreed. I think the reason is that the programming mindset for data science is different from that of normal software engineering. In data science, you write a program, find the answer, write your paper, and throw the program away. Python fits this usage pattern very well.


It is still not installed by default on Macs and that is a sufficient reason not to use it for me. It is far better to be able to tell users they can simply download and run your app without external dependencies. It is also important that downloads be small (i.e. I’m not embedding an entire Python 3 interpreter into my download if the preinstalled Python 2 works as well as it needs to).


> at this point there is no reason to not use Python 3 for new projects

Sure there is: Python 2 isn't going to change on me again. Python 3 might.


Why on earth do people downvote that? I want a language that isn't going to change. Every single other language community seems to understand that, but something about it pisses off Python people.


You're claiming to know the future of Python better than "Python people" know the future of Python.

Python has changed incompatibly once in 26 years, and never plans to do it again, yet you're making it sound like an ongoing pattern. What language are you going to run to that has that kind of track record?


> Python has changed incompatibly once in 26 years

That's not true. Every change breaks some behavior, even if it's just no longer throwing an exception where one used to be thrown. Things are still being deprecated and removed in new releases.

On top of that, as new syntax gets added in 3.7+, libraries may become incompatible with the system-provided installation. That won't happen with 2.7 libraries.


POSIX sh?


Have fun with that?


Real 'fun' to program in: with strict POSIX shell, functions cannot even have local variables.


30 years later, Scheme is still Scheme. A different report has come out, and some compilers choose to implement it. But the language hasn't changed. See my point?


I particularly don't see your point when it comes to Scheme. The Scheme I learned in 2002 (MIT Scheme) is all but abandoned. I was one of the people hired to rewrite MIT's course code from MIT Scheme to DrScheme, which was definitely not compatible. DrScheme itself has been superseded by Racket, except at that point MIT changed all their CS courses and rewrote everything again in Python.

What Scheme implementation do you use that has never broken compatibility and continues to be maintained?


Of course, code being data in Scheme, you could easily write a program that processes Scheme code and transforms it to a different syntax. I would assume this would be harder with Python code.


Not particularly. Like any sane programming language, Python has an AST. That's code as data. For porting, it doesn't particularly matter to be able to treat code as data while running that code.

There are tools that use the AST to translate Python 2 to Python 3, such as python-future. (Unfortunately, one of the things the core Python devs botched about the transition is that they put an awful, incomplete, broken one of these named 2to3 in the standard Python distribution.)

Racket's approach is rather different; they just support all the different Scheme/Racket syntaxes and standard libraries PLT has ever supported, with the same runtime. This doesn't include MIT Scheme but there is a syntax for the subset of it used in SICP. I don't think they have anything that converts code between the syntaxes.

I wouldn't mind seeing Python also use that approach -- a Python 3 runtime that runs Python 2 code (and maybe borks your Unicode, which is fine because people who stay on Python 2 clearly don't care that much about Unicode, and that's their prerogative). I'm convinced the only reason no such thing exists is that the first person who makes it would become responsible for everybody's awful old Python 2 code. Maybe in 2020 someone will make it and charge money for it.


If you read the original Scheme report, you'll notice that it's full of example programs, zero of which run on modern Schemes. They're full of operators that Scheme doesn't have anymore, like BLOCK, PROGN, CATCH, ASET, -$, *$, //$, EVALUATE!UNINTERRUPTIBLY, START!PROCESS, etc.



