Python’s tug of war between beginner-friendly and advanced features (aroberge.blogspot.com)
227 points by aroberge on Feb 4, 2021 | 431 comments



I think the time is ideal for the "next Python". A new, general-purpose, beginner-friendly language focused on readability, but designed from the ground up with the techniques & features learned in the past 2 decades.

Python has added lots of features to stay current with the latest trends in programming languages, but this has come at the cost of complexity and redundancy (e.g. look at all the ways to format strings or do filesystem operations). And many of the new features such as type annotations have a compromised design due to their being added to Python late in the game. Python's "batteries included" approach of a large standard library was a great asset in the days before package managers, but these days I think it would be better to start with a much leaner stdlib and add things more stringently.
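For instance, a quick sketch of the string-formatting redundancy: the same greeting built three ways, all still supported today:

    name = "world"
    s1 = "Hello, %s!" % name        # printf-style, the oldest form
    s2 = "Hello, {}!".format(name)  # str.format, added in 2.6
    s3 = f"Hello, {name}!"          # f-strings, added in 3.6
    assert s1 == s2 == s3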


I agree with you except the last bit - I want a giant language with professionally developed official libraries. Not some junkies slapping half-baked libs together. There is so much calm and peace that comes with std libs. It's not mutually exclusive. You can have a well crafted and large std lib and still can use external libraries. Nothing is stopping you from BYOL.

I would also want some kind of typing support, à la Pydantic, built in. If for nothing else, I would like to know wtf this function is taking in as args.

Can we also run this in a browser!? One can dream.


I agree with you that it's great to have a comprehensive stdlib but I think the process should be careful and gradual. Python's stdlib has a lot of stuff that is not really best-in-class or consistent in design. I think with today's ubiquity of package managers, we could get away with a smaller stdlib initially, and take time to validate libraries with real world usage before blessing them with official status.


Most languages would probably benefit from having an “experimental for the standard lib” package that doesn’t guarantee perfection, and might get deprecated then abandoned before making it to the stdlib. But all new features go through that library first.

That experimental library could even be backed by other packages, so you could iterate on the interface while allowing other packages to continue to improve the core. But the assumption is for any clearly solved problem, these “batteries” provide one recommended way to solve the problem, and as of now it’s either the only or at least not the worst way to do it.


Language features either can, or cannot, be used.

Claiming that a language user should be prepared for certain parts of the language to disappear is a perfectly reasonable idea. But in practice it does not work, because a feature either can or cannot be used: if it can't be used, it is a waste of time; if it can be used, it must be supported.

Once people use a library, there will be pressure to support it forever and the mindshare switch is a real cost. A language bleeds users every time it drops a feature. The Python namespace is still polluted by documentation and tutorials talking about how to do things in v2 too.


Rust seems to handle it ok. Features can spend a long time being nightly-only gated behind feature flags, while nevertheless seeing real world usage from people willing to brave the risk and churn.

But Python has a very different release cadence. Rust publishes nightly builds every night and a stable release every six weeks, while Python only releases once a year. I don't think it could adopt the same model.


>Claiming that a language user should be prepared for certain parts of the language to disappear is a perfectly reasonable idea. But in practice it does not work

Well, it can. That's what you get when there's no official package and you depend on third party packages that change all the time (e.g. common in the Node ecosystem, in GTK+ land, etc.), or are abandoned, etc.

Compared to that, a scheduled, anticipated process for candidate packages to include in the language (or remove eventually) is much easier.


Indeed, but from your own original comment - we have hindsight. So we can build an excellent std lib from the get go.

There are other bundles that are very popular for a reason. Conda. That's all you need to know. People want stability of dependencies, not a Wild West of pulling libs from random GitHub repos.


IMO the bundling and curation is the least interesting part of Conda. What's really nice are the build infrastructure, quality checks and proper dependency bounds (looking at you, PyPI). Something like R's CRAN already has those without bundling.


And now we have Mamba, which is just like Conda. Except it has a better-documented (and more open) tooling ecosystem. And it lacks the debilitatingly slow/unreliable dependency resolution of Conda.

https://mamba.readthedocs.io/en/latest/


Can poetry do that or that’s different?


Poetry is pretty much a Setuptools replacement, with optional features to help manage Python virtual environments.

It's not a package manager and it does not provide any environment isolation of its own. Conda provides both of those things, and quite a bit more.

They are totally different tools. Conda is more like Homebrew than Poetry.

What Poetry and Conda have in common is a not-really-bad dependency resolver.


The standard library works on pretty much every platform the language targets; a random package on a package manager might be available, with luck.


I would prefer a simple language with a giant stdlib. Every common protocol/algorithm should be in the stdlib.


This seems too hard to evolve.

I'd instead aim for a language that supported interfaces/traits/typeclasses, with a huge standard library of those, that allowed writing uniform benchmarks and unit tests against every implementation, shipped simple implementations for some of them, and made it easy to use different implementations and experiment with them.

This seems like solving just dependency injection instead of trying to solve every library ever, which seems a much more realistic problem to solve correctly, or at least in a far better way.


It's not ... possible/viable at this time.

Every common X means thousands of complex things. There are over 9000 RFCs (insert Vegeta meme). Even more popular APIs, libs, algorithms, whatevers.

Even basic support would be near impossible: a constant race between what constitutes "basic" and releasing it. (Plus upkeep/maintenance, as things continue to evolve at a rather fast pace.) And if the language enjoys at least a minimal level of success, then a lot of non-stdlib packages are likely to become standard by convention.

It's no wonder that new languages nowadays start with minimal stdlibs.

(I just noticed that there's an SMTP server in Python3! It's ... great, I guess. But there's no DNS resolver library. So either the SMTP client (and now server) packages do not handle anything related to DNSSEC / DANE / TLSA / MTA-STS / DKIM / DMARC and others, or they bundle it. Sure, maybe it's enough to outsource these to the upstream SMTP server and the SMTPd package is just for "testing" ... but, this just shows how arbitrary "commonness" is.)


I think a lot of the benefits can be achieved by making more sophisticated types a first class citizen in the language. So for example things like date/time, intervals, and geometries are just another type. And you use normal operators to interact with those types and invoke algorithms.
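Python's own stdlib already gestures at this with datetime; a small sketch of what operator-driven types look like there:

    from datetime import date, timedelta

    deadline = date(2021, 3, 1)
    today = date(2021, 2, 4)
    print((deadline - today).days)     # 25: subtraction yields a timedelta
    print(today + timedelta(weeks=2))  # 2021-02-18: ordinary operators drive the calendar math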


Visual Basic .NET is your perfect beginner language with a rich built in framework and standard library. Ironically, it likely fell for the same reason discussed in this article: It got a lot more complex over time.


Fortran then :-)


I would also like pip and packaging "fixed", which from my standpoint as a Python software user is "broken". The three tools I rely on are in a constant cycle of breaking each other.

...although that might all be the fault of aws-cli. Which, by the way, doesn't install on a vanilla Ubuntu 20.04 AMI via apt, which is just... really really bad.

The AWS CLI continues the inexplicably horrible AWS interface that makes me shake my head at people lauding their might, along with their console.

The console rewrite... anyway, pip seems to be a headache, certainly not as good as gems.


I love how you can tell which parts of aws-cli were written by different teams because of how the inconsistencies are consistent within a certain domain.


I finally switched to poetry for python deps and happy ever since. It’s basically the gems approach applied to python.


For god’s sake at least let requirements install in order.


Are there scenarios where this doesn’t happen?

I had a quick look in pip and install order is predicated on requirements order https://github.com/pypa/pip/blob/c188d1f08f1a4a2c0a03e397f15...


I'm thinking ReasonML or OCaml might be able to take on this role.

I was sad to hear about the change to ReScript though. ReasonML targeting the browser and native seemed nice to me, although I do agree the documentation situation and DX wasn't ideal.


Hindley-Milner type systems are a great way to introduce typing without making beginners afraid of it. The inference makes things just work, the way a beginner may expect from the computer.


I'm not sure I care about the "run this in a browser" part (unless someone developed a WASM backend for it). And FOSS would be a requirement for me. Given those criteria, I would gladly pay a recurring subscription or donation for such a language.

For what it's worth, there are a lot of good interesting languages out there that aren't Python and have pretty good library ecosystems. Kotlin, Elixir, and Elm come to mind. But they don't exactly fit the Python niche.

What you're describing sounds to me like a modernized Common Lisp with commercial backing. Rework CLOS a bit, clean up some of the other crufty old weirdness, and give me a big selection of high-quality well-supported libraries like Numpy, FastAPI, Trio, etc.

The advantage of doing it in some Common Lisp derivative is that things like React JSX are pretty much "free" and wouldn't much seem out of place, whereas in Javascript they need a whole separate build/compilation step and are often portrayed as black magic fuckery.

edit 1: Arguably Julia could be used for a task like this, but its library ecosystem and language runtime are heavily skewed towards numerical computing and data analysis.

edit 2: Raku also sounds like a possible candidate here, but it obviously lacks widespread adoption and last I heard it fell significantly short of Perl 5 performance. I'm not sure what kind of library ecosystem it has nowadays and I doubt we're going to see big commercially-backed library development efforts for it, soon or ever. Perl 5 has a huge ecosystem of course, but it's such a nasty warty language with nasty warty tooling.


If you like Raku, and the large Perl 5 ecosystem, you might consider going the path of Raku, as it allows 99.9% of the Perl 5 ecosystem to be used as if it is Raku, by installing the Inline::Perl5 module. Once that is done, using a Perl 5 module is as easy as:

    use DBI:from<Perl5>;


I use Clojure for that reason. You can use Python libraries with it as well as Java libraries and JavaScript libraries. Its reach is just phenomenal. And indeed, its React support is amazing.


I didn't know about the Clojure-Python interop. This is the library you use for it? https://github.com/clj-python/libpython-clj



> Raku ... lacks widespread adoption

Right. It clearly doesn't have a strong enough offering at the moment to attract anyone beyond a few early adopters.

> last I heard it fell significantly short of Perl 5 performance

Likewise. I'm pretty sure Perl still roundly trounces it for a lot of operations, especially in regex processing. I think it will need to be significantly faster than Perl before it will have a chance at gaining significantly more adoption.

> I'm not sure what kind of library ecosystem it has nowadays

The raku.land directory has less than a thousand packages and many seem to be jokes, unloved, poorly tested and documented, version 0.x, or even 0.0.x. [0]

The standard library is a different ballgame. It's generally well designed and very large, cutting out the usual blizzard of folk reinventing the wheel for basics. So that's good.

> I doubt we're going to see big commercially-backed library development efforts for it, soon or ever.

I agree.

> Perl 5 has a huge ecosystem of course, but it's such a nasty warty language with nasty warty tooling.

This is where I think Raku's strength lies.

A lot of the effort that went into Raku was to build a platform that could just directly use tooling and libraries of other PLs while insulating developers from their downsides.

That way you get to use Raku tooling, syntax and semantics at the surface level with other PLs' tooling and modules and performance. Kinda like using C code in a non-C PL, but generalized to using code from any other PL in Raku.

The exemplar was supposed to be Perl, but it was also supposed to extend to many other PLs. Starting 7 years ago this became the Inline family.[1]

Inline::Perl is the most mature. Imo it's remarkable. But it was always supposed to just be the first one to get the polish to demonstrate that the approach works and works great. About a year ago the author behind it said it was nearly time to do the same for Python modules. And now I see a GSOC proposal to do that.[2]

The Inlines hide the warts (though you do have to read the modules' documentation to know what their APIs are, etc).

I think the impact of Inline Python will be huge because it takes away the complaint about reading Perl documentation and code. Instead you read Python API docs but get the benefits of using Raku.

(And, in the wings, there's Inline::Ruby, Inline::Lua, and so on.)

[0] https://raku.land/

[1] https://raku.land/?q=inline

[2] https://github.com/perl-foundation-outreach/gsoc-2021-ideas/...


Somewhere I have posted code that uses Inline::Perl, Inline::Python, Inline::Ruby, and I think maybe Inline::Go to convert Base64. (Perhaps other language interfaces as well, I don't quite remember)

While there were some quirks, it worked rather well.


>Not some junkies slapping half-baked libs together.

This is offensive. Whether the people who work hard to make and share (for free) half baked libraries are addicted to drugs is really not part of the main thrust of your argument. You could leave it out, save words, and not have distracting comments from people like me while still making your point.


The parent's words were meant to convey a certain disgust/impact.

Leaving them out would communicate the core idea, but not the emotional tone they wanted it to have.

(And it's obviously not about literal junkies, it's meant as "random people who don't know what they're doing").


Are you seriously taking this literally? I didn’t even think this deep when writing it. Honestly! Lighten up.

No one in their right mind thinks of people building free libraries on GitHub as anything but a positive thing. Come on. I am offended that you're offended by this.

Usually, I would apologize for saying something offensive, but I guess someone will always get offended on the internet. If I said "Wild Wild West" then maybe some Texan would be offended by it.

Intention matters. Smile more and be tolerant, no offense :-)


> No one in their right mind thinks of people building free libraries on GitHub as anything but a positive thing.

If you feel love, share love. xx


It's bringing love, don't let it get away! Break its legs!


Amen.


It’s 2021. The Earth is heating up, we have a cult member who believes in baby eating pedophiles in the USA government, and people are still dying from their inability to afford insulin. It’s not that name calling will be taken seriously by everyone, but thoughts like that are distractions we can no longer afford to think about. Put your energy towards ensuring there’s a future worth worrying about.


If they just don't eat, they won't need insulin anymore!


I honestly can't tell if it's sarcasm or not. If I was reading it on some comp.* newsgroup I'd say the former, but it's 2021 so I consider the possibility you might be serious - but I hope not.


I'm fairly certain, given context, he didn't mean drug users. He was using it as definition two here:

https://en.wiktionary.org/wiki/junkie


I read that as programming language (re)design junkies, not pharmaceutical junkies.


Aren't there plenty of languages that fit that description? Java, C#, TypeScript, Kotlin, Swift, et all ?


I agree about the stdlib. There was a reason that xkcd had the "import antigravity" joke. One of the reasons I ditched Perl was having to evaluate so many different Perl libraries, which would do anywhere from thirty to eighty percent of what I needed, and tended to come with some really exciting dependency chains.

I know the mantra is that the standard library is where things go to die, but I believe that to be cultural. I still have code running somewhere, probably, that uses the cgi library, which was perfectly well-suited for a Windows server where the page in question got an average of seven hits a day, never to scale much beyond that (you'll have to trust me on this; for amusement I reviewed ten years of logs for just that page to find nothing out of the ordinary, and there are reasons it would not need to "scale" in the future).

Each library and each string-formatting method and so on is a choice, and those choices impose a lot of speedbumps and cognitive friction.


Anecdotally, this approach has been replicated by Go. The stdlib in Go is surprisingly complete. I’ve seen several Go developers cite this as an advantage: being able to write many apps without adding dependencies external to the language runtime.

I definitely think they’re onto something. Dependencies are a hard problem to solve.

Modern build tools like Gradle work great at resolving deps, until they don’t, and at that point we’re in for a world of shock at just how complex dependency resolution can be in large projects.


Is there a specific advantage to running in the browser versus running at the OS level? I've always been a little confused about why our abstractions went all the way up to the browser. It seems that you could push a lot of browser functionality down the stack, simplify a lot of architectures, and remove a lot of cruft we don't need.


Deployment is easier. Upgrade is easier. Access control, restrictions, permissions; you don’t have to add a user account to namespace/sandbox a process in browser tab.

The facilities provided by a browser are almost (but not quite) a complete superset of those provided by an OS. File I/O, peripheral I/O, etc. can all be done via the browser. Additional facilities are provided too (security/sandboxing).


Right, but you could implement all of that at the OS level too, which to my understanding is largely what things like Docker are built on: changes to the kernel to allow sandboxing at the kernel level rather than in user space (which browsers could theoretically implement with syscalls now?). Wouldn't that reduce a lot of overhead? Or are browsers faster because they stay in user space and avoid context switching?


I don’t think so - OS’s have been around much longer than browsers. People have been trying (without anywhere near the success of browsers) to do cross platform deploys for almost as long as OS’s have existed.

Now a common model has arrived (DOM + JS), allowing multiple browsers to just run your app.

I’m not sure browsers are faster. (There was a thing called the Pepper API in Chrome and NPAPI or something in Firefox - I’m not familiar with the space - that allowed native code to run.)

I’m not sure browsers are generally faster than native OS binaries, but they are an easy-to-deploy-to environment that doesn’t need syscall emulation like FreeBSD can do for running Linux binaries.


Say not browsers, but Linux becomes the dominant OS on all platforms, not just embedded and server. Does that move the most targetable platform back to the OS level instead of the browser level, if we go by the metric that human interaction and ease of use are weighted more heavily than all possible machine interactions?


Yeah, probably, but there would still be the open question of distribution and update (including dependencies).


You still have to click the update button for a browser, and as far as apps on your machine go, they could just check for updates the next time the application is launched, like current web apps/sites do. What you are describing is not impossible for the OS to take back. It seems more like there just isn't momentum to push browser features down the stack into the OS, and I'm not sure why, other than that browsers already do it, so why backport it.


I think the point is more to use it instead of javascript.


What do you mean by "it"? The browser itself? In that case, everything done in the browser could theoretically be pushed down a level from user space into kernel space, potentially saving a lot of overhead, no? The only trade-off I imagine is the context switch you hit from leaving userspace, but as far as I'm aware, even browsers have to eventually hit the kernel for at least raw input/output on the network.

Why not let the OS do more and more of the complicated technical stuff, simplifying the deployment of web apps because now we don't have to know a thousand different bloated libraries, just a slowly updating set of OS APIs.


Libraries in Python are hell, and Stack Exchange can be almost pedantically useless about these things. For example, take the OpenCV bindings for Python: three separate packages provide them, all of which are imported as

   import cv2 
Yes, Stack Exchange, I know OpenCV can do this for me, but integrating that into my package dependencies is time down the drain which I am never getting back.


> There is so much calm and peace that comes with std libs.

This, and library documentation. It's probably the single biggest practical issue I have with D and Nim. They seem to have fairly robust standard libs, but when I look for something specific it always turns out the feature is missing, outdated, or badly documented.


What about Microsoft's Blazor - compiling C# down to wasm:

https://dotnet.microsoft.com/apps/aspnet/web-apps/blazor


How I dream of a future language with the power of rust, ease of python, and the ability to work in a browser.


Sounds not far from https://nim-lang.org/


Rustonscript :-)


Two pieces of advice:

- Just learn the language with all of its quirks

- If you are new, ignore the quirks until you are ready

Creating a new language sounds ideal, but there are a whole slew of caveats. Python tends to be recommended as a beginner language since it is reasonably easy to learn and offers considerable room for growth. By the time the Next Python reached that stage, we would have accumulated another two decades of techniques and features that would leave Next Python in a similar position to Python today. That assumes that people would even agree upon which language would serve as Next Python as a standard recommendation. One of the advantages of having so many people agreeing upon Python as a starting point is that it eliminates the first hurdle most novices face: figuring out where to start.

As for the batteries being included, that is also a strength of Python. Choosing libraries can be difficult; that's true for the novice, and it's true for anything but trivial projects. Lean standard libraries encourage fragmentation. We should have learned that from the shortcomings of C, yet many languages have followed in the same footsteps with similar results. And we should have learned that large standard libraries can be useful from the success of languages like Java and Python. Package managers don't change the degree of fragmentation; they only make it easier to deal with.


> Creating a new language sounds ideal

I seriously cannot think of anything I could do to exceed Python.

Does it do everything? No. But name a tool with significantly greater overall reach. Some compete, but seriously...


Python is great if you’re writing code to run on your own computer(s). But it’s terrible for packaging into executables to give to others (compared to Go), for embedding in applications (compared to Lua) and for running in the browser (compared to JS).

There’s no reason a new scripting language couldn’t excel in all three of those areas. But AFAIK none currently come close.


And small things. Like when your function wants to know if a parameter was not used when calling it: the idiom `if par == None` breaks if called with arrays. Ending scripts early with a command would also be great (for interactive programming).

Also, what is wrong with having a switch statement? (I'd like one with range cases, 1<x<10...)

Accessing dicts with a struct syntax.

'Inline' C functions.

JIT, especially for loop acceleration.
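On the dicts-with-struct-syntax wish: for what it's worth, the stdlib's types.SimpleNamespace already approximates it:

    from types import SimpleNamespace

    cfg = SimpleNamespace(**{"host": "localhost", "port": 8080})
    print(cfg.host, cfg.port)  # attribute access instead of cfg["host"]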


Quick note, the more pythonic way of checking par in this case would be just to:

  if par:
    do_stuff()
Python does some really cool, if slightly unintuitive, checks for "truthiness" where 0, [], {}, False, '' and None (could be other cases too, brain's a little foggy today) all evaluate as false. This controls for:

  if par == None:
    do_stuff()
not catching empty lists, empty dicts, etc.


How do you define the function in that case?


So, a more complete example might help:

  def myFunc(par=None):
    if par:
      pass  # do whatever is needed when the parameter exists
    # finish processing
This allows you to pass in [], {}, None, '', or omit par altogether, which will all behave as if par=None, or pass in data and use it appropriately. You can also use

  if not par:
    pass
if you want to do the inverse.


I believe the idiom to check if a parameter wasn’t set is to use an object() instance as the default value.

  _notset = object()

  def foo(x=_notset):
      if x is _notset:
          print("foo was called as foo()")


That's nice. Thanks. Where did you find it?


I think it is from somewhere in the standard library.


I don't understand what you mean by arrays.

    if par is None: ...
As for accessing dicts with a struct syntax, I prefer keeping dicts and objects different things (unlike Javascript).

And for JIT, there's numba!

Agreed on the switch statement...


Pyinstaller seems to work pretty damn well for packaging standalone binaries. Or at least it does on macOS, I haven't tried it on other platforms.


I'm using PyInstaller on Win10. It works well, although the single-file binaries get slower as I add more libraries because it starts by extracting that single file out into a temporary directory and then running it.

It's a personal project so I like to experiment, but it's also a CLI program that accepts command line arguments, runs, and then exits, so a 3-5 second delay _each_time_ it runs is pretty painful.

Luckily for me there's a 'single directory' option, which puts everything (uncompressed) into a folder. Snappy start because there's no decompression step, and while I like the idea of a single file .exe it turns out I don't really need that (I'm not copying the program from computer to computer).

So - thumbs up for PyInstaller!


Pyinstaller is nice for packaging projects with smaller libraries. Hello world is about 7MB.

Try including pandas with a few other popular libraries.

Suddenly you have 500MB blobs. Not very practical for passing around multiple projects.

I suppose that is not worse than Electron apps...


I know you are being facetious, but it is much worse than Electron apps. One advantage of JavaScript's focus on smaller libraries and aggressive tree shaking at build time is that it is pretty good at only shipping code that is actually called.

In the Python world you have to do all the work by hand. I've literally spent time copying and pasting code out of libraries and into my code just so that my build wouldn't balloon by 100MB because I wanted to call a 100-line function.


Is Python actually so poor though?

This is an extremely unscientific comparison, because I don't feel like creating custom binaries specifically to test with. But I happen to have a copy of youtube-dl on my hard drive, and it's 1.8 MB. I'm not sure whether youtube-dl creates their binaries with PyInstaller, but youtube-dl is written in Python, and the binary is a single file that can be run without a Python installation.

I also have a copy of docker-compose, which I know for a fact is created with Pyinstaller. That one clocks in at a significantly worse 10.8 MB (so perhaps Pyinstaller is the culprit and youtube-dl is doing something unique), but that's still relatively small for a considerably complex program.

Lastly, I have a binary called "toggle-switchmate3", which I created myself some years ago. I have a set of Switchmate light switches set up in my apartment, and the best way I could find to control them from a Mac was via this Node package: https://www.npmjs.com/package/node-switchmate3. NPM's dependency mess freaks me out, so I set everything up in a virtual machine, and then created a binary with "pkg", the Node equivalent of pyinstaller.

"Toggle-switchmate3" is 36.1 MB. And it's not a standalone binary—it requires a separate "binding.node" file to be in the same directory.


Is Python actually so poor though?

Perhaps not in the general case, but in many cases it absolutely is. The problem isn't python itself but libraries. While JavaScript tends towards dozens of small libraries that do only one thing, python tends towards one library that does everything. Which is really handy in most cases, but very painful when you want to package something that just uses one of those functions.

In a concrete case I had a small script that ran a particular edge detection algorithm on an image and returned the edges in geojson format. The whole script was less than 300 lines. But the built dist weighed in at several hundred MB as it pulled in the entirety of numpy, scipy, scikit-image, fiona, shapely and rasterio, despite needing only maybe a couple of percent of the functionality of each library.


I have tried PyInstaller some, on Windows, for both CLI and GUI (wxPython) apps, and what I tried worked fine, including single-file EXEs.


>> for embedding in applications

I did this before and found it really easy to integrate. The provided C API surface is really easy to integrate with. It was literally just a case of detailing my types, calling the Py runtime and then querying the results/side effects.

Having never done something like that before, i was full of trepidation - i was amazed at how easy it was.


Packing Python into single packages has been a thing since 2002.


There are a lot of things python could do to be much better. Two examples, 1) dependency management in python is a train wreck compared to most modern languages. 2) the way package paths are done for imports is far more confusing than it should be.


Agreed. I’ve always felt that the following sums up why Python is my preferred language:

Python is the second-best language for everything.


With the big, big, notable exception: targeting the browser.


To be perfectly fair, unless you count WASM or other compiles-to-JS-but-not-quite-native langs (which, outside TS, are pretty niche), this is the case with literally every language except JS.


> unless you count WASM or other compiles-to-JS-but-not-quite-native langs

As a happy user of clojurescript, or seeing the popularity and adoption rate of typescript, I do not count them out.


Not counting them out either, but outside TS, they're still quite marginal, at best.


JavaScript and its variants are 1st place for the browser (a shoo-in on a technicality) ... but like 213th for everything else.


Possibly not for long: https://www.brython.info/


Brython has been around since 2014.


> I seriously cannot think of anything I could do to exceed Python.

Just off the top of my head:

- Pattern matching

- Multi-line lambdas

- Proper variable capture in lambdas and inner functions (as was fixed with the 'let' keyword vs 'var' in recent versions of JavaScript)

- Support for cyclic imports (a bad pattern in general but necessary in some cases)

- A well-defined C API for extensions that doesn't get them too entangled in the internals of CPython. This would make it possible for other implementations to reach the level of library support that CPython enjoys.


Pattern matching is in PEP 635, created Sep 12, 2020.

Multi-line lambdas can be sort of obtained in Python by defining a progn function:

https://news.ycombinator.com/item?id=19933223
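For reference, a minimal sketch of that trick: the arguments are evaluated left to right before the call, and the helper simply returns the last one, so a lambda can chain several "statements":

    def progn(*values):
        # Arguments were already evaluated, in order; return the last result.
        return values[-1] if values else None

    log_and_double = lambda x: progn(print(f"doubling {x}"), x * 2)
    print(log_and_double(21))  # prints "doubling 21", then 42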


I would say proper variable scoping in general. It would be awesome to have let variables in python!


Variables are scoped just fine from my point of view. What do you mean by proper?


I meant to say block-scoped. Variables in Python have function or global scope, but no block scope. E.g. you can say

   for x in ...:
and then use x outside of the loop.
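A minimal demonstration of the leak:

    for x in range(3):
        pass
    print(x)  # prints 2: the loop variable is still alive after the loop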


I have no idea why you would want to declare an x in the loop, and then use it outside the loop. There are better ways of doing things.

The fact that you can do such a thing in JavaScript is exactly why JavaScript is such a mess of a language with its globally-declared and hoisted variables.

That sort of practice has never made sense, and is not a language design that Python should follow.


I'm not sure what you mean by reach, but Clojure, by one definition, has more reach. It can make use of Python libraries, Java libraries, C# libraries, JavaScript libraries, Erlang libraries, C libraries, and can run on desktop, mobile and browser.


JavaScript has greater reach. ES6 in a modern browser with native modules, and if you avoid classes and this, focus on object literals and pure functions, and pick a decent utility library, it's a very good environment with max reach.


Better main method calling, better chaining, multiple constructors, and better reflection.


> ignore the quirks until you are ready

If you are new, chances are you do not yet know what is a quirk and what is idiomatic.

It can be challenging and frustrating trying to follow guides or tutorials when they are all doing things differently - which is the "right" way? You have no idea as a beginner.


>Python's "batteries included" approach of a large standard library was a great asset in the days before package managers but these days I think it would be better to start with a much leaner stdlib and add things in more stringently.

I think that, package managers or not, the "batteries included" approach is great, because it means most of the code for useful things is already available, and it also helps third-party packages be smaller (since they can depend on things available in the standard lib).

Imagine 3 different packages, for example, each bringing its own different json parser dependency -- as opposed to all leveraging the included json package.

That's how you get no community standards but tons of packages for the same basic things in slightly different ways, each with 20%-30% adoption.

That's also how you get 1GB node_modules folders.

That's also how you have the "leftpad" situation.


I've been thinking for a while that it's about time a new (maybe jit compiled?) language hits the scene with:

- A simplified/sounder natively integrated version of Typescript's type system

- Javascript's easy notation for manipulating objects

- None of Javascript's other baggage/cruft

- Something akin to goroutines

Basically, something with the performance of Go/Java (so there would be no need to rewrite your code once it starts serving millions of people) but the elegance of Typescript.

Not sure if this is similar to your vision of a "next Python" but it feels somewhat similar.

Basically I think right now:

C -> Zig

C++ -> Rust

Java -?-> Go

Python/Javascript --> ???

With native typing we could get a language as fast as Java/Go but with the cleanliness of Typescript.


I remember a few years back when Nim was going to be the next big thing here, but unfortunately it never gained that level of traction. Still an interesting language even without bigco levels of support behind it.


> C -> Zig

If dealing with use-after-free and the lack of bounds-checked access in release code is still a fun factor.

> Java -?-> Go

Not at all, unless one misses Java 1.4 with lots of if/else boilerplate.


- But huh, if you checked that at runtime compiling in debug mode, and then you compile the exact same code... It should be OK, right?

- I think you're right. It's more like a garbage-collected C.


Big if, especially if you haven't tested all use cases.

It is the same fallacy as claiming C code is safe because everything was checked, while CVEs keep happening.


Swift would be a candidate, and is a great, progressively disclosed language: simple enough for beginners but scaling up to advanced features. Its attachment to Apple puts a lot of developers off, but there is a lot of work going into Windows and Linux support as well as WASM. One issue right now is compile times.


Typescript is not clean because it has to accommodate every crazy thing people do in raw JavaScript.

Having said that, isn’t Deno close to what you’re looking for?


I actually really like TS's type system precisely because it is insanely flexible.

I can write statically typed code that easily shapes & reshapes objects & code that fits functional or object oriented programming styles.

It's a statically typed language that feels dynamic.

Now, if I were to get a version of TS without all of JS's crutches that would be amazing.

Deno is pretty close, but it lags on performance & is bounded in potential by JS (in some aspects).


I live in hope that, one day, TS will loosen its links to JS.

Things like `Object.keys` irritate me because they don’t compose well. What would go wrong if TS started supporting `Object.prototype.keys`? Could even be implemented as a rewrite using `Object.keys`.


I’ve been thinking about this, and I never thought of it before. I would love a language that is both dynamic and static, similar to JS and TS, but that, when the types are provided, uses them to compile to fast code while keeping the ability to do dynamic scripting.


What do you think of Dart? It's still evolving, but for me it ticks all those boxes.


I’m overwhelmingly favourable towards Dart. While I picked it up for mobile (Flutter) dev, I’ve been slowly replacing a lot of scripting/backend work with it too.

It’s the ideal replacement for general purpose scripting and browser work IMO. I can’t recommend it enough.


What’s your opinion on Julia?


Scala comes very close to that.


Scala is nowhere near as approachable as Python or Typescript


Why do you think that?


At least when I used it ~2 years ago: an extremely complex type system with a huge learning curve (unless you're going to just write Java in Scala), poor documentation, and a very slow compiler. That's to say nothing of the JVM-based runtime.


I got really frustrated one of these days because I reached the 20-something limit for fields in a case class and I had to change a lot of my architecture. It was a Spark job so I needed all those fields in that structure.

Not sure if that's because I'm a beginner or if it's just Spark itself, but Scala errors are impossible to decipher.


Ah yeah, that was really a stupid limit.

Time to bump Scala to version 3 so that you don't have the 22 case class field limit anymore. :)

See here: https://dotty.epfl.ch/docs/reference/dropped-features/limit2...


> unless you're going to just write Java in Scala

That is what I mean though - but I would replace "Java" with python here. Just stay away from all the complexity and write python code. It's pretty simple and some people argue it's the better way to use Scala.


Scala’s implicits, mutable/immutable collection bifurcation, infix/dot syntax interchangeability for method invocations, tendency of libraries to implement their own mini language, ...


As a beginner, you don't really have to use Scala's implicits or "infix/dot syntax".

Just like how you don't have to use type annotations, generators, decorators, context managers in python.

Same for libraries.

At some point you will stumble upon a library that forces you to understand them or you even write one yourself, but that is not something that stops you in the beginning.


It does.

Go got this right. If you want to read code and contribute to a large code base, folks will be using the features of the language, including the weird ones. Go has kept things very simple (relatively) in terms of the core language. Some things were beyond basic (error handling? no generics?) But boy did it make it easier to contribute to and understand code.

    def Hi() = "hi"
    Hi()
    Hi

Yes, those two variants both work?


> If you want to read code, contribute to a large code base

Sure, but in that case, is python really beginner friendly? I don't think so. I had the impression we were talking about starting out with the language and building the well-known todo-list to learn the basics.

Also, golang is a rather young language and it will go the way that all languages go: it will get more and more complex, because the more experienced developers want to improve their productivity with better language features. Remember the "no generics" thing? Well, now generics will be added - I predicted this would happen. And just wait, other features will follow...


It was originally. But agreed, less so now.

They've been adding some really edge-case / line-noise type features. Consider the walrus operator: you can do pretzel magic with it which makes figuring out scope, and what happened to what, difficult.

I love the old way.

    with open(…) as f:
        for line in ...

And I wish they'd just made more things iterable maybe or fixed some of the regex match stuff?
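For readers who haven't met it, the walrus operator binds a name inside an expression (3.8+), and the binding escapes the enclosing block, which is the scope complaint above:

    import re

    line = "error: code=42"
    if (m := re.search(r"code=(\d+)", line)):
        print(m.group(1))  # "42"
    print(m)  # m is still bound out here, after the if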


the JVM is an incredibly heavyweight runtime compared to CPython. I don't think we're going to find the next beginner-friendly language coming out of the most professional runtime in human history.


True, but I don't think that is what the OP was talking about. The language itself is rather independent of the runtime. Scala compiles to native code or JavaScript as well.


> incredibly heavyweight runtime

The apps may be heavyweight, but the runtime was designed to run on set-top boxes 25 years ago.


The JVM is definitely memory hungry (excluding cherry-picked examples such as set-top deployments or the Java Ring from the 90s), more so than any other language runtime referenced in this discussion.


Python honestly did not learn from many things that were already well known when it was designed.

It wasn't arcane knowledge at the time that using exceptions for decision logic, or letting loops run by assignment rather than by creating a new scope, were not the best ideas.

The following Python code contains a bug:

  list_of_adders = []
  for x in iterator:
      list_of_adders.append(lambda y: y + x)
Namely, since `x` is assigned with each iteration of the loop, rather than a new scope being created, the `x` that the closure encloses is re-assigned with it, so all the anonymous functions see the value that `x` had in the last iteration, which is almost certainly not what the programmer intended.

In fact, if x is ever re-assigned within the same scope, which is quite large since Python does not have block scope (and that is quite likely, as Python uses the same syntax for initialization and assignment), then all the closures collected in the list will thenceforth refer to whatever new value x is assigned.

These are not design choices that many other programming languages at the time made; they were bad with the knowledge of the time, and continue to be bad now. There is no justification to have loops be implemented by assignment, and not create a new scope.
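For reference, the usual workaround forces early binding through a default argument, since defaults are evaluated when the lambda is defined:

  list_of_adders = []
  for x in range(3):
      # x=x captures the current value at definition time, not a reference
      list_of_adders.append(lambda y, x=x: y + x)

  print([adder(10) for adder in list_of_adders])  # [10, 11, 12]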


The justification for function-scope instead of block-scope is that only the former really works if you have implicit variable declarations.

In turn, implicit variable declarations (using the same syntax for initialisation and assignment) are a reasonable choice for a language that aimed to be “executable pseudocode” and that supports mixing initialisation and assignment in a single statement (e.g. `a, b, self.c = foo()`).


It could easily be written as `var a, var b, self.c = foo()`.

Having the same syntax for initialization and assignment is quite questionable.


That’s not a bad suggestion, but it’s quite verbose (it would require declaring multiple variables as `var x, var y, var z` instead of `var x, y, z`).


That could easily be `var (x, y, z) =`, which is similar to what many languages use.


I'm always surprised when something throws in my face how many details we have to keep in memory to make implicit mutability work.

It's amazing that people can program in imperative languages at all, yet it feels easy. I wonder how much trouble this causes for somebody learning how to program in a modern high level language.


New languages need a distinct benefit to be encouraging enough for new people to learn them over picking something already well established. In Python's case it replaced the shell language for data administrators, and many of the quite complex languages used by data scientists.

The one big selling point a future language could have would be inherent parallelization that uses all 32+ cores of the developer's machine, while being as beginner-friendly as Python. Such a language does not exist: one that is dynamic, easy to use, and built from the ground up to always utilize all cores, all the time, without demanding that the programmer handle side effects.


Such a language _can not_ exist, because it stops fulfilling the "beginner-friendly" criteria. As soon as you deal with concurrency, things either get hard to learn or get super messy.

Maybe golang comes closest to that (and concurrency is messy in there), but I'm not sure if it fulfills the "beginner-friendly" criteria from the perspective of a python developer.


Go minus concurrency actually is a good candidate for a solid beginner language. It has quirks, but no more so than any other choice, and fewer than most, for a beginner.

However, in my experience, keeping beginners away from concurrency is difficult. A lot of Go tutorials make a hard burn for it, since it's the big feature if you're already a programmer. And I've gotten (internet) yelled at for trying to suggest that concurrency is something beginners should leave until much later.

The other problem is a lot of what beginners may want to pivot into after they are done writing their "is the number higher or lower?" first terminal app isn't great for them. It's not a great game/graphical language. GUIs mostly suck. Web is awfully hard to get into nowadays, even if you try to help them narrow their focus.


> As soon as you deal with concurrency, things either get hard to learn or get super messy.

Strong disagree. There is a language where concurrency is trivial and beginner-friendly. It is called POSIX shell with standard GNU tools. To run several independent commands in parallel, you print them and pipe to GNU parallel. If your concurrency has an arbitrary tree of dependencies, you print it in the form

    target : dependencies ; command
and then run GNU make.

Languages that make this simple stuff complicated, like Python, do so by choice, not by need.


There is a reason why no one uses shell for anything more complicated.

E.g. start two long-running commands and then close one of them depending on some later output of the other, or vice versa.

If that's not already complicated enough, we can easily add things like: execute a command A and then other commands (B, C, D) and, only once all of those are finished, run a cleanup command. But run the cleanup always, no matter whether B, C, or D fail (think of closing a database connection). And now treat this whole thing as yet another command E, that itself can be used in the same place as e.g. B or C or D before (= composition).

Shell is great for many common tasks, but I would never call it beginner-friendly, even less "trivial to use for concurrency".


I think the reason no one uses shell for anything more complicated is related to the fact that:

1. It's not generally taught in CS classes, and people just search up enough syntax to be able to accomplish their task.

2. A lot of its variable expansion/substitution is reminiscent of Perl.

3. Small mistakes can have huge impact (everyone knows how often rm -rf of unintended directories takes place). Just for the fun of it, I suggest everybody reads at least once the Unix Haters Handbook https://web.mit.edu/~simsong/www/ugh.pdf

If I understood your task correctly, it can be done nicely in the shell (after all, this is what I think the shell excels at). You'll see the unpleasant side of it when dealing with function arguments and slicing argument lists and referencing specific values. I don't guarantee it does what you want to the letter; this is just my quick interpretation of your description.

    function demo() {
      first=$1
      last="${!#}"

      $first &
      wait

      for f in "${@:2:$#-2}"; do
        $f &
      done
      wait

      $last
    }

    function first() {
      sleep 5
    }

    function pingy() {
      ping -c 8 google.com
    }

    function last() {
      echo -----------------------------
      echo done
    }

    function inception() {
      demo first pingy pingy
    }

    demo first pingy pingy inception last


> Shell is great for many common tasks, but I would never call it beginner-friendly, even less "trivial to use for concurrency".

Yet somehow beginners in my lab cannot get their hands off the shell and we have to politely guide them to other languages... But still, I insist that parallelism in the shell is very easy. How do you do that in Python, for example?

    # convert images from jpg to png, in parallel
    for i in *.jpg; do
            echo convert $i ${i/jpg/png}
    done | parallel
Regarding your "complicated" use cases, I must confess that I don't understand them nor their relevance. But for simple people like me without fancy needs, the simple concurrency offered by GNU parallel and GNU make is good enough.


Haha, please don't get me wrong! I didn't mean to say that concurrency in python is any easier - it's probably even harder than in shell.

> Regarding your "complicated" use cases, I must confess that I don't understand them nor their relevance.

They are two recent examples of mine. But even if you don't see the relevance in them: if the shell truly makes concurrency trivial, then the shell code should not look much more difficult than my English descriptions.


Parallel independent batch jobs are easy in Python though. If you can write it as map(function, iterable) you just need to import the right module and drop it in and you’re done.
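For instance, a minimal sketch using the stdlib's multiprocessing module (the square function here is a made-up stand-in for real work):

    from multiprocessing import Pool

    def square(n):
        # Stand-in for an expensive, independent batch job.
        return n * n

    if __name__ == "__main__":
        with Pool() as pool:  # defaults to one worker per CPU core
            print(pool.map(square, range(10)))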


> As soon as you deal with concurrency, things either get hard to learn or get super messy.

I'd argue that Clojure, while perhaps not the easiest language for beginners in all aspects, is both minimalistic and really fantastic for doing clean and safe concurrency.


Using a lot of execution resources is not a direct benefit from the user POV.

Making programs execute faster could be. And there using many cores is one implementation strategy among many complementary ones. But the current landscape of programming languages shows that languages focusing on execution speed and parallelism have smaller user bases.

Also, re this data interpretation of Python's killer app, this usage of Python is something fairly recent and happened after Python had already broken into mainstream usage in the industry. A lot of small and big applications have been built using it before the data and ML things got in vogue (like Youtube, Dropbox, Spotify etc at the bigger end).


Dask, in my opinion, comes decently close to better parallel computation in Python while being beginner-friendly. It is highly compatible with numpy and pandas, and can abstract away a lot of the parallel computation complexity, which makes getting up to speed easier.

However, their scheduler really needs speed ups, and they do run into weird data handling bottlenecks in my experience.


Chapel and Julia.


> replaced shell

I'd argue it replaced Perl.


Are you saying it was a replacement for a two year old language?

Perl - December 1987

Python - December 1989


Yes. You need to remember that neither language was relevant yet; I'd draw that line around Perl 5 and Python 2. Don't forget that Perl 6 spent two decades in development hell.

Not that the TIOBE index is perfect, but you can see what happened since 2001. Perl used to be a lot more popular for early dynamic websites and systems glue, but by 2011, Python had gotten a lot more popular in those roles.

https://www.tiobe.com/tiobe-index/


Zope was winning over mod_perl back in those days.


The program should leave one core for the OS.


I spent a few decades chasing "the next great language", and I regret it. I wish I had spent those hours learning about statistics or cryptography or algorithms. Things that are timeless. If you want to be a jack of all trades, master of none, by all means chase the next great thing.

I for one, think it's more likely that Python will evolve. The mathematicians and physicists, biologists, astronomers etc. that have finally gotten rid of C++/Fortran and learnt Python aren't going to change to anything with forced static typing any time soon...


Not a mathematician/physicist/etc, but having literally dived into Julia today, I think you’d be surprised. It definitely seems like “Python 2.0” for numerical computing.


Julia is probably excellent. How large are the benefits to using Julia compared to Python? Are the benefits great enough to compensate for the time spent:

- Training your entire team to use Julia instead of learning about other relevant things

- Re-writing your Python-infrastructure (or dealing with Julia to Python-interoperability, which means new employees now have to learn or know both Python and Julia)

- Replacing Jupyter and learning new, similar tools

- All employees making their favorite IDEs function well with Julia

- Figuring out how to make Julia talk to software that already has an existing Python API (Wireshark, AutoCAD, GNU Radio, ... just about everything you can think of)


Those are definitely problems for the corporate world (except for Jupyter, a Julia kernel already exists for that).

Again, I’m not an academic so take what I say with a grain of salt - but I can see Julia paying dividends very very quickly in that sphere (once you’re over the admittedly steep learning curve).


A few of these are non-problems IMO.

- Julia is relatively easy to learn for Python people (especially the numerical people who have experience with stuff like Cython)

- The interop (both ways) is pretty great with Python


The Ju of Jupyter is for Julia.


I think Python's batteries-included approach still matters a lot, because many users (myself included) have to work from locked-up workstations, with awful tech support, no local administrator privileges, and draconian policies forbidding installation of software that has not previously been vetted. For example, not long ago a colleague requested an Orange (data mining software) install. Corporate IT took more than six months to validate it and fulfill the request. To avoid hassle like that, the more you can get from just the initial install, the better.


Keep the python brand and fix these things in a new backwards incompatible python release. Call it Python 4.

I’m sure it’ll only take a couple of years for everyone to migrate


Python 3 ?

I feel like Python just went through that big shift. Doubt we need another one of those.

Also, I'd just look at Julia for some of what you're describing. It's basically going for a more focused approach with the learnings of Python in place.


Please no, the migration from python 2 to 3 was a disaster and we’re finally on the other side of it.


> I think the time is ideal for the "next Python".

We could call it Python 6.


I agree with you that Python now features some redundancy. Yet I do not agree with you that there is 'complexity', going by your examples.

Notwithstanding the various ways of doing the same thing (which sometimes go against the zen of Python), as of Python 3.6 there's been a constant trend towards f-strings above all else (barring maintenance and legacy code), and generally beginners have no need for anything beyond f-strings. If they'd like to go into the old string formatting techniques, they're welcome to, but they're not that complex.

I don't understand how type annotations constitute a 'compromised design' - they've worked quite elegantly with Python's dynamic typing, they greatly assist linters and IDEs when you turn them on, and they can be disregarded when you don't need them, which is a far cry from the enforced typing of other languages.
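
For instance (a trivial sketch): annotations like these are ignored by the interpreter at runtime, but give a checker enough to work with:

    from typing import List

    def mean(values: List[float]) -> float:
        # The annotations cost nothing at runtime; a type checker
        # (mypy, an IDE) uses them to flag e.g. mean("abc").
        return sum(values) / len(values)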

And I certainly do not agree that there is a need for a "next Python". Python does a lot of things very well - its design is clear, the design patterns are methodical, the language is well-governed, the central packaging system is reliable (and not like the insane NPM and dependencies of JavaScript), there is a distinct idiom/style consistency amongst Pythonistas, and a large gathering of minds behind the language. Further, Python reads very well, and can even read like pseudocode without being pseudocode, which makes it very much suited for beginners, who can stray as far as they want into the language when they feel ready for it.

Python is a great language for beginners, so I have no idea what you're talking about.


> ... And many of the new features such as type annotations have a compromised design due to their being added to Python late in the game. ...

But they tend to get ironed out over time. Python 3.9 adds parametrized generic type hints based directly on container types, alleviating the need for a parallel type-hint type system. i.e. `dict[str, list[int]]` over `typing.Dict[str,...]`
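
A quick sketch of the difference:

    # Python 3.9+: subscript the built-in containers directly
    def tally(scores: dict[str, list[int]]) -> dict[str, int]:
        return {name: sum(vals) for name, vals in scores.items()}

    # Before 3.9: the parallel typing-module aliases
    from typing import Dict, List

    def tally_old(scores: Dict[str, List[int]]) -> Dict[str, int]:
        return {name: sum(vals) for name, vals in scores.items()}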


That’s exactly the feature that’s causing the problem in this article.


I'd love a Python 4, or one by any other name, to be like the latest Python, but cleaned up, with no regard for backwards compatibility. For example: a trimmed standard library, and much of the alternate syntaxes removed. Better tooling and official-docs support; i.e., like Rust: a modern, official package manager, a built-in linter, formatter, binary-creator etc.


I have a hunch that in five years, the global-state package manager built into Python will be a feature, because we will all be working exclusively in dev-containers.

What does it matter to you if there are features in the standard library you don't use? The full python embedded distribution is only 8MB including runtime.


I hope your hunch is wrong! I feel like containers are a symptom of situations like this Python one.


Hehe, Python 4 will probably not happen after 2->3.


I don't think any single language will be the "next Python" for everybody.

Many seem to be happy replacing Python with Go, to get better performance, parallelism and packaging support. However, for my uses I'd much rather replace it with another high-level scripting language, even if it's still slow and single-threaded.


This is what I’ve done, and it’s way easier from a packaging perspective, but the thing that stings is that Go misses out on all the data science stuff like pandas, numpy, etc.


In terms of style, Swift feels like the next Python. And it’s starting to pick up a sizable standard library.


Yeah, every time I’m playing with typing and mypy I start yearning for Swift’s type system instead.


f-strings are amazing

Can we make that the standard and get rid of the ugly str.format(UGLINESS) syntax.

Yes please, and thank you


`str.format` can’t be removed completely - it’s necessary if you want to build format strings at runtime.
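
For example (a minimal sketch with made-up templates): a template chosen at runtime can only go through `format`, because an f-string is evaluated at the point where it's written:

    # Hypothetical templates, e.g. loaded from config or a translations file
    templates = {
        "short": "{name}: {score}",
        "long": "{name} scored {score} points",
    }
    chosen = templates["short"]                 # decided at runtime
    print(chosen.format(name="Ada", score=42))  # Ada: 42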

But otherwise, yes - make f-strings the “one obvious way to do it” and use `format` only when necessary. And get rid of the old %-formatting.

Also consider making f-strings the default: https://pyfound.blogspot.com/2020/04/all-strings-become-f-st...


> Can we make that the standard and get rid of the ugly str.format(UGLINESS) syntax.

`format` is still useful, particularly when working with pre-templatized strings. It's also a (relatively) new feature; Python 2 didn't have it until 2.6.

OTOH, I wouldn't cry if Python 3.X finally removed %-formatting. `format` does everything that `%` does, with a much nicer API (especially in 3.7+).
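
For comparison (values made up), the same alignment/precision specs in all three styles:

    name, price = "tea", 2.5
    print("%-5s | %8.2f" % (name, price))          # old %-formatting
    print("{:<5} | {:>8.2f}".format(name, price))  # str.format
    print(f"{name:<5} | {price:>8.2f}")            # f-string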


I'm not a big fan of f-strings. I like to have a nice separation between the format and the values.

f-strings work well in simple cases. But once the things you want to output are somewhat larger expressions, format() works much better.

Example: instead of something like

    print(f'{id:<5}  |  {", ".join(members):<50} |  {", ".join(str(score) for score in scores):<30}')
This is much clearer IMO:

    print('{:<5}  |  {:<50} |  {:<30}'.format(
        id,
        ", ".join(members),
        ", ".join(str(score) for score in scores),
        ))
Actually, what do you mean by "ugly" and "UGLINESS" in "the ugly str.format(UGLINESS) syntax"? The UGLINESS is simply the things you want to format. Ugly is in the eye of the beholder, of course; personally I don't really see anything ugly about format. The f-string equivalent here is much uglier and much less clear/readable to me, because the formatting and the expressions are mixed, and parsing the whole thing to see what is what is more effort than it should be.


Both examples look unreadable to me. Just because you can do a lot on a single line doesn't mean you should.

    all_members = ", ".join(members)
    all_scores = ", ".join(map(str, scores))
    print(f'{id:<5} {all_members:<50} {all_scores:<30}')


> Can we make that the standard and get rid of the ugly str.format(UGLINESS) syntax.

No, please.

F-strings are useful for some things, but there's a good reason to define templates somewhere other than deep in code, for ease of editing them separately from functionality (often, completely externally to the code), and using str.format supports that.

If anything, I'd prefer r-strings as the default, with both escape handling and formatting as non-default options, but the status quo is acceptable and better than f-strings as the base.


Jokes on you, my coworker still obsessively uses % args syntax.


I can understand that, to a point. Not the so much the obsessive part, but I sometimes fall back to % formatting too since that uses the formatting syntax I'm familiar with from C and C++. The syntax used in f-strings and .format is probably better, but requires re-learning.


Please tell me you correct them in the code reviews?

Every time someone uses %args inappropriately, a baby seal dies.


f-strings are standard, what do you mean? That the "f" be omitted? I'm all for it, yet it would need a bit of adaptation for those old programs that put {} inside non-f strings.


What's wrong with having a robust stdlib? Are those extra 50 megabytes really a problem?


The issue is not the size of the stdlib but rather the process for how things get in there. IMHO Python's stdlib has a number of modules of sub-standard quality and design. It would be good to have a gradual process for inclusion in the stdlib, so that different 3rd party libraries could be vetted by the community before we bake one into the language.


I agree Python has some frustrating imperfections, but it has a pretty great ecosystem, and a new language would presumably have to start over.

I wish Python went with static typing from the offset, and I wish it had better parallelism, but with Python you can get a library for just about anything with a simple pip install, and that's worth a great deal.

Need a WebSocket library? No problem. Need a cross-platform TUI library? No problem. Python's maturity means you can do a lot of things with little hassle. Some of the less mainstream programming languages are nowhere close to Python in this regard, even after many years.


Like a statically typed, compiled Python? With a more flexible approach to OO?

That can compile to binary, javascript and other languages?

That would be https://nim-lang.org/


Relatedly, I made a flake8 linter plugin for modern Python if you really feel like being constrained:

https://pypi.org/project/flake8-simplicity/

https://github.com/wryun/flake8-simplicity/blob/master/flake...


Golang looks like that language. It would be my go-to, but why they decided to have nil pointers, I don't know. Golang without nil, and also mostly pass-by-value, with pass-by-reference only for people doing lower-level stuff, would've been good. For now, I'm staying with Python.

Golang is easy to get started with, and tooling and packaging are good. But then the code in the wild is littered with pointers in places where you wouldn't need them.


>A new, general-purpose, beginner-friendly language focused on readability

Sounds great. What made Python so beginner-friendly and readable? Most of the "let's improve Python" gang I see can't begin to answer that - by their metrics, the language must be a failure.

Beginners have the advantage of not thinking "this is the best way to do it because that's how C/C++/Java do it".


F# is that language and is rapidly gaining steam.


> Python's "batteries included" approach of a large standard library was a great asset in the days before package managers but these days I think it would be better to start with a much leaner stdlib and add things in more stringently.

With all these memory-safe languages getting popular, the NSA needs more npms, not fewer.


Could something be added to the linter, to flag language features that you don't want to use? This would allow creation of a beginner mode, where all but the most primitive features are left out. It would also be a potentially useful way for us to impose consistency on our codes as they grow in size.
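
Something like that seems doable with the stdlib alone; here's a minimal sketch (the forbidden set and the `check` function are made up for illustration) using the `ast` module:

    import ast

    # Hypothetical "beginner mode": flag constructs outside a chosen subset
    FORBIDDEN = {
        ast.Lambda: "lambda expressions",
        ast.AsyncFunctionDef: "async functions",
        ast.ListComp: "list comprehensions",
    }

    def check(source: str) -> None:
        for node in ast.walk(ast.parse(source)):
            label = FORBIDDEN.get(type(node))
            if label:
                print(f"line {node.lineno}: {label} are off-limits here")

    check("squares = [x * x for x in range(10)]")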


So a Kotlin for Python?


Yes, but the other way around :)

Kotlin solved the syntax and language warts on top of a good runtime.

For python the syntax is (mostly) good but the runtime and package management needs a rehaul.


> mostly

Bolted-on classes and type annotations and the lack of real lambdas aren't awesome, and the lack of proper variable scoping is error-prone.

I agree that the runtime is a mess, but more in naming inconsistencies and how it's organized than functionality.


Kotlin solved Google's Android problem, on the JVM it is just another radar noise in the set of guest languages.


Kotlin was developed by JetBrains, not Google, as a 'better Java'.


So what? #KotlinFirst is being pushed by Google.

Android is the only place where Kotlin actually matters.


Do you need to use all the features of a language all the time? I don't think you do.

A language with a consistent design can have layers, and beginners can just do with the external bits without getting too deep initially, but with enough space to grow.

IMHO Python is a good language in that regard.


Rinse and repeat. Three hundred years later, we're at the original python again. Invented on the USS Enterprise.

"To boldly create programming languages no man has created before."


Julia in many ways was designed as the next Python. Their user base is growing steadily, and they may displace Python in many applications a decade from now.


Learn Common Lisp and you'll find "the techniques & features learned in the past 2 decades" notion laughable. Or sad.


Why does nobody talk about drawbacks of common lisp?


Because in this thread it would be off topic?


Ruby 3.0 is a great time to get back into Ruby :)


Totally. What Rust is to C.


I thought that article was going to be about the newer stuff like decorators, "async", and unchecked type hints. But no.

The Rust compiler is doing a reasonably good job on error messages. It's especially good at explaining that thing A being done here is incompatible with thing B way over there. It prints the source for both places, which is a rare feature.

A compiler that does global analysis can tell you more about what's wrong. Python, in its interpreter implementation, doesn't have the global info to tell you about global problems.


rustc sadly comes to a screeching halt when anything is done with generics. I seriously don't understand the obsession with naming generics "T", "X", or "Z". Single-character variables have been declared bad since the '60s; why hasn't the same happened for generic type parameters?


It's a math culture thing. Names barely matter, relationships do. All math since junior high is about f(x), not function_of(variable_entity).

It's very disturbing from the application-dev world view, where people seek longer names for reasons, but after spending time in FP/LP and math you quickly forget long names. You seek variable-count reduction most of the time also.


This comment hits the nail on the head - if you think in terms of expressions and, subsequently, expression substitution, there sometimes /is/ no good name for something.

This is /especially/ true for abstract code - functions like identity, const, map.


I understand where you're coming from. The problem is that when using multiple generic parameters, rustc outputs something like this:

`the trait 'Factory<_, _, _>' is not implemented for 'fn(Json<Auth>) -> impl std::future::Future {auth}'`

What is this error saying? Can you tell? I certainly can't


I haven't programmed in Rust... but presumably it's just looking for an instance of that trait for that type?

It seems the type here is a function from JSON to a Future?

Given Factory means Factory, then, it's saying there's no way to create an instance of that function?


Are you annoyed / confused because of <_,_,_> ?

It seems the Rust compiler team didn't write a proper error message, or I cannot find the meaning in this... usually _ means 'whatever type', but <_,_,_> is useless if all the types are the same 'whatever'.


More accurately, math is about f: X -> Y, which is almost Haskell syntax


I would contend that single letter names for generics are the ideal state. They're intentionally not given a meaningful name, because if you can give them a meaningful name, you've modelled the data wrong. IMO, if you're giving a generic a name beyond T, Z, etc. the logic probably isn't really generic, and should be a concrete implementation instead.


For single param generic types I'd agree - i.e. with List<T> it's perfectly obvious what T is. But any generic with more than one type is obviously doing different things with each one and that needs to be explained. The obvious example would be a Map<Key, Value>.

Now that I've typed that out though it kinda seems that List<Value> would be clearer, if not for the years of getting used to List<T> being the standard.

You shouldn't be talking about the meaning of T from the consumer's perspective, but you should be describing it at the level of abstraction that your generic code is operating at.
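
The same trade-off shows up in Python's typing module; a sketch (class names invented):

    from typing import TypeVar, Generic

    # Single-letter convention
    K = TypeVar("K")
    V = TypeVar("V")

    class Map(Generic[K, V]): ...

    # Descriptive alternative for multi-parameter generics
    Key = TypeVar("Key")
    Value = TypeVar("Value")

    class VerboseMap(Generic[Key, Value]): ...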


To touch on the Map example, I do think Map<K,V> is just as clear and saves keystrokes, so that's how I'd write it. The "meaning" of K,V doesn't come from their names, it comes from Map.

I suppose clearer is a question of semantics, but to me there's not any more information conveyed by writing List<Value> vs List<T> - we already know T is a "Value" (because it has to be, due to List).

The data structure itself should provide the needed context - if the base data structure is so complex that the meaning of its generic parameters isn't obvious, that may be a case where naming them makes sense... but to me it's also a code smell of square-peg-in-round-hole.


I think it mainly depends on your target audience. The reality is that, the first time they saw Map<K,V>, everyone had to stop and think about what it was. Then after the first time or two it's immediately clear. Same with i for looping.

So if your code targets beginners, especially ESL people, then the verbosity has value to make that first time as easy as possible. Then it becomes noisy and weighs you down.

That's an "easy" case, but in my experience, it happens all the time (though it might be especially true in C++). Should I use that "advanced"/uncommon language feature? Do I use it, but document what it does because I expect most people that will read the code not to know about it? If I'm in a part of the code base where the C embedded engineers contribute frequently, I'm going to err on the side of verbosity/clarity. On a code base with experienced C++ Dev, less so.


Why would Map<K, V> save any keystrokes? You should not be implementing it yourself anyway, and when using it, you replace K and V with concrete types.

> I suppose clearer is a question of semantics, but to me there's not any more information conveyed by writing List<Value> vs List<T> - we already know T is a "Value" (because it has to be, due to List).

Yes, I agree. However, think about functions taking 3 or more generic parameters. There's a much larger need for better information in that context


I'm usually in favor of conciseness for one or two type parameters (much less so for any more than that), but if you do go full-words, “Value” is not much more descriptive than “T”, like writing “argument” instead of “x” for not much benefit. “Element” would be more descriptive of the role of the type parameter relative to the List type.


I'm a fan of terseness when writing code myself, but I very much dislike terseness when it's sparsely documented code found in a library.


This is really well worded. I agree wholeheartedly.


For the same reason we still use i,j,k for loops, x and y for continuous variables, and n,m for integers. We all know what they mean, and needless verbosity can make things less readable. It’s why people really seem to like the throwaway variable ‘_’.

   for outer_index over array1
      for inner_index over array1[outer_index]
        print array1[outer_index][inner_index]
Bleh.


Because there's often only one or two type parameters in scope, and then disambiguating them isn't too hard. It can still get confusing, especially in unfamiliar code, but it's not on the same level as ordinary identifiers.


In my experience it's because of two things mostly. (1) Naming generics is harder than naming other variables for whatever reason. (2) Signatures with generics in them are stupidly long. Signatures with generics tend to be super unaesthetic, and oftentimes unaesthetic code is conflated (correctly or incorrectly) with bad code.


Probably because it gets harder to name things as they get more abstract and more generic? I'd be interested to see your three favorite examples. Or I guess those would be the most obnoxious examples.


Do you have an example of where single-letter type parameters lead to confusion? I haven't seen one yet.


This Rust function has badly chosen single-letter type parameters that confused someone on my team recently: https://doc.rust-lang.org/stable/std/result/enum.Result.html...

Result<T,E> uses T for a generic contained value and E for another generic contained value that should be an error. So far so good. The `map_err` method then uses F for the new error type, because it's just next alphabetically after E, and uses O for the closure it accepts as a parameter because F is taken. Usually, F is used for function arguments. F and O are not just non-descriptive; having F not be the function when there's a function argument is actively misleading. Instead of `F`, something like `NEW_E` or `NEW_ERROR` would have been much easier to understand.


Slightly off topic, but as you mentioned async: why is Python bothering with async given its use of green threads? Can't the virtues of async be achieved with fibers? Wouldn't that be a better fit for Python's model?


Please give mypy, PyCharm, or one of the other mature Python type-checking ecosystems a try. Even 10 years ago, with PyDev in Eclipse, I had insane type-checking and intellisense-like features.


It's a really minor thing, but decorators are really not a new Python feature. The PEP is from 2003; decorators existed in Python 2.


Python is a very intuitive and easy-to-read language for experienced programmers, but I wouldn't consider it a "beginner-friendly" one. It's packed to the brim with features that are hard for beginners to understand without first coming to grips with a more simple language. Those features are extremely useful for experienced programmers who want to get stuff out the door quickly and easily, but they're overwhelming for novices. Most experienced developers I work with produce what I would consider to be pretty bad Python code - not because they are bad developers but because they haven't had enough experience with Python to understand everything it comes with out-of-the-box.


I literally just finished a leetcode problem[1] to practice some Python. I found two solutions; one seems more "pythonic" to me and the other seems more clear, though also more verbose.

Clear/Verbose -

    def maxWidthOfVerticalArea(self, points: List[List[int]]) -> int:
        x_coords = sorted(x for x, _ in points)
        greatest_diff_so_far = 0

        for i in range(len(x_coords)-1):
            diff = x_coords[i+1] - x_coords[i]

            if diff > greatest_diff_so_far:
                greatest_diff_so_far = diff

        return greatest_diff_so_far
Pythonic -

    def maxWidthOfVerticalArea(self, points: List[List[int]]) -> int:
        xs = sorted(x for x, _ in points)
        return max(x2-x1 for x1, x2 in zip(xs, xs[1:]))
I'm curious which approach is bad/worse? On the one hand, the "pythonic" approach seems brief and kind of neat and I feel clever writing it. On the other hand, it is also ~20% slower per my testing, it's less obvious what it's doing, and it would be harder to make some change to the logic (or add logging/metrics) without completely rewriting it.

On the one hand, just from the reasons I wrote above, it seems to me like the first way is better and the second worse, but on the other hand, the first way is what I would write while knowing zero python features. Curious if anyone else has thoughts on this.

1 - https://leetcode.com/problems/widest-vertical-area-between-t...


A super common solution to the problem of too many ideas packed into a clever solution is just naming intermediate things. For example, naming the zip(...) is a little change that goes pretty far in making the second one more obvious:

    def maxWidthOfVerticalArea(self, points: List[List[int]]) -> int:
        xs = sorted(x for x, _ in points)
        neighboring_coordinates = zip(xs, xs[1:])
        return max(right-left for left, right in neighboring_coordinates)
or even naming the width generator:

    def maxWidthOfVerticalArea(self, points: List[List[int]]) -> int:
        xs = sorted(x for x, _ in points)
        neighboring_coordinates = zip(xs, xs[1:])
        widths = (right-left for left, right in neighboring_coordinates)
        return max(widths)
Same thing, just with more built in explanation. It's easy to go overboard, of course, but it's a useful go-to to make stuff more obvious.


Whitespace helps too! And I think using `sorted(key=)` is more Pythonic than a generator expression ;)

   from operator import itemgetter

   def maxWidthOfVerticalArea(self, points: List[List[int]]) -> int:
       xs = sorted(points, key=itemgetter(0))
       x_pairs = zip(xs, xs[1:])

       widths = (right-left for left, right in x_pairs)

       return max(widths)
NumPy also has stuff to solve this sort of thing built-in:

   from numpy import diff

   def maxWidthOfVerticalArea(self, points: List[List[int]]) -> int:
       xs = sorted(points, key=itemgetter(0))
       widths = diff(xs)

       return widths.max()


For the last bunch of years, wherever my instinct says to add white space, I instead put a comment. If you want something to be a visually separate entity, it’s the perfect place to put a couple words about that entity, and it still gets the visual separation in an editor with any syntax highlighting at all.


> And I think using `sorted(key=)` is more Pythonic than a generator expression ;)

But it doesn't do the same thing. The original code extracted the xs from the coordinate pairs, sorted; your code sorts the coordinate pairs by their x values, but doesn't extract the xs from them.


Oops! Good point! Unfortunately too late for me to edit.


Clever has its applications and its drawbacks. First off, you may not feel very clever when you need to read the code you wrote (no caffeine, or a hangover, or an incipient cold ...), or perhaps the next coder to pick up your code is not as clever.

I save clever for the real mechanics of problems, for data structures, and occasionally optimization of running time if that is a factor (it often is not as much as you would think).

I eschew clever if it seems like it is just a way to pack a lot of stuff into fewer lines of code. One of the reasons that Perl attained a reputation of "write once, read never" unmaintainable code (whether or not it was justified) is a culture that cultivates "one-liners" and packing everything down into a minimum space. Our filenames in Windows no longer have to be 8.3. We are not constrained to write our code in a few kilobytes.

I programmed in the days when you might have two kilobytes in which to program. In those days, extreme terseness was a value. It is not any longer. "Cat" and "man" and "mv" are holdovers from those days. This is not a telegram where you are charged by the word. I program for total obviousness, and sometimes that makes my code seem "dumb." I might do only one or two things per line. That's really fine.

Just as an example, I wrote this obnoxious web app in Perl as part of my job. At some point, some students were to take it over, but they didn't know Perl. I received an excited phone call from them at some point: they were able to understand and port my application to a language they were familiar with because I commented, I was not clever in packing, and I kept it to "one or two things per line."


To be fair, cat, man and mv are things you type many times per day when working in a unixy environment.


Interesting, I'm using this [0] to test and got different results. The Pythonic function is 2% slower on 3.9.1 AMD 4900HS but 17% faster on 3.9.1 E5-2660 v2 VPS and 6% faster on 3.6.9 i5-3570S bare metal.

I personally think either function is fine. The max-finding code pattern in the long function should be familiar to most. For the Pythonic function, I don't think it's less obvious enough to matter. You have a list of stuff and you want to find the max, so you wrap it in the max function. Maybe zip(xs, xs[1:]) can take a bit to recognize, but I think it's eh. For the Pythonic one, one can also write

    all_diffs = [x2-x1 for x1, x2 in zip(xs, xs[1:])]
    return max(all_diffs)
Using list is also markedly faster than the Pythonic generator for me, which is to be expected: ~16% faster than both.

[0] https://gist.github.com/squaresmile/8dd08898851a4e19359cdb30...


That is interesting. Trying your timeit code, I've come to see that maybe something was wrong about how I was estimating the performance difference. I was just using cProfile, profiling the two functions with big inputs, and then comparing the cumulative time results. When I use your timeit examples, my results (Intel Core i7-6700K CPU @ 4.00GHz) are different and closer to yours (pythonic ~1% slower). Thanks for sharing.


I just solved this problem yesterday and my code looked exactly like your second one. Even xs was the same, except I used a and b instead of x1 and x2 cause I always do that. But I actually like yours more.

This is a very personal opinion, but I do like the second one more. I find it easier to read, but maybe that's just because I'm used to that style. And maybe you're more likely to have to rewrite it for a change, but that's kind of okay since there's less code there.

On readability, here's how I'd read it out loud:

Sort all the Xes from points. Return the max of the difference between two points that are next to each other.

I find that quicker to put in my head than the equivalent parsing of the first solution, and exactly the intention of the code.


I know a lot of people have jumped in already, but no one has come up with the intermediate version I think I'd've probably written (with one variable rename 'cause that's really long):

    def maxWidthOfVerticalArea(self, points: List[List[int]]) -> int:
        x_coords = sorted(x for x, _ in points)

        max_diff = 0
        for x1, x2 in zip(x_coords, x_coords[1:]):
            max_diff = max(max_diff, x2 - x1)

        return max_diff


Possibly dumb question, but why do type hints for a leetcode problem?


I assume you meant using leetcode's editor without type checking; then yeah, there's much less benefit to using type hints there.

In my vscode setup with mypy strict, pylance basic and Python 3.9, though, it's type hints all day for me. I personally forget the argument types like 1 second after I write the function. Maybe I'm conditioned by typed Python.

Leetcode code usually has simpler data structures and no 3rd-party libraries, so it's even easier to add types.

Edit: Oh looks like Leetcode python code template has type hints


I'd honestly ask: why not?

When you paste this code, it's infinitely more readable out of its original context with those hints.


I would argue it's less readable, substantially. Type hinting was never about readability, it was about getting IDEs/tools to do type checking for you.

     def max_width_of_vertical_area(points):
        xs = sorted(x for x, _ in points)
        return max(x2-x1 for x1, x2 in zip(xs, xs[1:]))
Just seems easier...


Although I see both sides, I personally lean towards finding that type hints improve readability. And since I tend towards consistency, that ends with most/all functions being typed.

Though I spend most my time writing Swift code, which definitely influences my preference.


And then whoever reads your code has no idea what "points" is. We had readable "type annotation" 10+ years ago with comments on functions; this is just a logical extension of that. People need to know how to call your function, whether it's a statically-typed language or not.

Would you say this is "less readable substantially" than your example?

    def max_width_of_vertical_area(points):
        # type points: List of List of int
        xs = sorted(x for x, _ in points)
        return max(x2-x1 for x1, x2 in zip(xs, xs[1:]))


I would never comment that such a function takes a `List of List of Int` - I would instead describe `points` as being something like an `Iterable of Pairs of Numbers`.

Are Python type hints able to specify types like that?


With the new type hints, sure: List[List[int]] (and you can do this with arbitrary types or levels of nesting; see the fancy example at the end). But older iterations before the formal type-hint spec had varied/non-standard ways of doing it. Some machine-readable, but others more for the user (like List of List of int). Even once the type-hint spec got approved, it was backported to older versions of Python using docstrings.

My experience even before that was mostly with PyDev in Eclipse. The trick back then was identifying good spots to add type hints, after which the "IDE" would take over and propagate them through the different variables and calls using static code analysis. It was surprisingly good and helped immensely with very few actual instances of types being hinted. Another one was PyCharm, which would "save" or "cache" the types during debug/test runs and use them as type hints without you explicitly having to define them.

Fancy example:

List[Dict[str, MyCustomTypeGeneric[str]]]

Which gives you a list of dictionaries with keys of type string, holding values MyCustomTypeGeneric with generic type str.


> I would instead describe `points` as being something like an `Iterable of Pairs of Numbers`.

Yes, Iterable[Tuple[Number, Number]] is a valid type, though 2D points are probably Tuple[Real, Real] or just Complex, not Tuple[Number, Number].
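
i.e., something along these lines (a sketch):

    from numbers import Real
    from typing import Iterable, Tuple

    def max_width_of_vertical_area(points: Iterable[Tuple[Real, Real]]) -> Real:
        xs = sorted(x for x, _ in points)
        return max(b - a for a, b in zip(xs, xs[1:]))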


People? It’s a leetcode problem...


leetcode adds them automatically and I just didn't bother to take them off. leetcode also names the functions with camelCase versus whatever_this_is_called (which is the Python way).


> whatever_this_is_called

Snake case.

this-is-called “kebab case”

ThisIsCalled “Pascal Case”


I would guess the Pythonic approach is slower due to the slicing operation, which creates a new shallow copy of the original array.


I found the second much easier to read.

In particular, the for loop over the range of the length of the coords minus one is difficult to wrap my head around, and an easy place for mistakes to sneak in.

With that being said, I have had my difficulties wrapping my head around zip, and don't have difficulties anymore, which might actually be in favor of the parent's point.


FWIW, it was easier for me to follow your intent with the second example. The only clunky thing about it was the zip followed by the comprehension. It might be slightly nicer with something like:

   return max(xs[1:]-xs[:-1])
...using something like numpy. I wonder if APL/J has a successive-element operator.


J has the verbs behead[0] (for xs[1:]) and curtail[1] (for xs[:-1]), so the J oneliner for the above code would be

    <./ ([&}. - }:) xs 
[0] https://code.jsoftware.com/wiki/Vocabulary/curlyrtdot [1] https://code.jsoftware.com/wiki/Vocabulary/curlyrtco


Thanks. The "right way" to do max of the differences (with the sort) in J might be something like?

  >./-2-/\/:~ xs


If you're gonna bring numpy into it you may as well use np.diff():

    def maxWidthOfVerticalArea(self, points: List[List[int]]) -> int:
        return np.diff(sorted(x for x, _ in points)).max()


My mind just cannot visualize what zip is doing. I’ve written production code with zip and I’ve finished leetcode problems with zip. But every single time I need a piece of paper to help me through it.

I’m fine with most of the rest of Python’s functional-style iterable helpers. But something about zip just doesn’t work for me.


Zip is just "pair each x with each y". Literally:

  [1,2,3,4]
  [5,6,7,8]
  # go down each column, yielding these in order
  [1,5]
    [2,6]
      [3,7]
        [4,8]
And then you drop ones with no pair. So zipping [1] and [2,3] makes just [1,2]. The same is true if you reverse the arguments, so [1,2] and [3] => [1,3]

The example is sorta obtuse at a glance[1], but this: `zip(xs, xs[1:])` equates to this:

  [1,2,3,4]
  [2,3,4] # copy and drop the first element
  # outputs columns
  [1,2]
    [2,3]
      [3,4]
So it's a one-liner to make a list of each consecutive pair of items in a list. All `zip(x, x[1:])` calls do the same thing regardless of the ultimate use of such a construct.
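
A runnable version of the same idea (values made up):

    xs = [3, 7, 12, 20]
    pairs = list(zip(xs, xs[1:]))     # [(3, 7), (7, 12), (12, 20)]
    gaps = [b - a for a, b in pairs]  # [4, 5, 8]
    print(max(gaps))                  # 8: the widest "gap"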

But yeah, I have seen some very confusing uses of zip before sitting and thinking for a bit. TBH I think much of it is just familiarity though - after seeing a million `for (i=0;i<thing.len();i++) { ... }` I can pattern-match and figure this out easily:

  for (i=0, l=thing.len(); i<l-1; i++) {
    list.push([thing[i], thing[i+1]])
  } 
(though `i<l-1` ease may depend on font) but similarly-common functional idioms just don't slide into position as easily. Yet.

[1]: I'm guessing it's the equivalent of finding the largest "gap" between a bunch of sizes of things.


Think of a physical zipper, where the two sides with the teeth come together to interlock...

  xs = [1,2,3,4,5,6,...
  ys = [a,b,c,d,e,f,...
 
  [    (1,a),
       (2,b),
       (3,c),
       (4,d),
       (5,e),
      |6| |f|
     |7|   |g|
    |8|     |h|
    |9|     |i|
    |0|     |j|
     .       .
     .       .
     .       .
  ]

https://kezan.eu/wp-content/uploads/2016/05/Zipper_Thumb_144...


    zip(xs[:-1], xs[1:])  # explicit

    zip(xs, xs[1:]) # implicit


I guess the second approach requires you to be more familiar with Python's concise syntax capabilities while the first is a nice waterfall model that anybody can read.

Depends on your team. If you wrote stuff the second way, you would be adding friction for developers less experienced with Python, but you would be minimizing the amount of code written.

It's hard to say, which is right or wrong here but the opacity of the second approach as you mention adds unpredictability.

I honestly think Dart is a very good solution, the culmination of the various classical languages we've been using so far like Java and JavaScript, but there just doesn't seem to be any major FAANG company investing in creating something more "modern" out of Python.

I think it is a testament to Python's accessibility, and also the depth it provides for experts as well. I guess the forced indentation is what gets to some developers, but it's a minor inconvenience.


Most developers produce bad code - period.

And I think most Python developers that produce bad code do so because they try to use too many of the whiz-bang features - or overuse bad design patterns.

You can do almost anything with functions namespaced in modules - KISS.


I am learning Python right now. Python is not only non-beginner-friendly, it's also not C-family (Java, C++, JS...) friendly (assuming most developers knew some C before taking on Python). That being said, it's versatile enough for me to continue, but calling it beginner-friendly is misleading, to say the least.


> Python is not only non-beginner-friendly, it's also not C-family (Java, C++, JS...) friendly (assuming most developers knew some C before taking on Python).

Python is at least as close to “C-family” as JS; it's also probably not the case that most people learning Python knew some C first, since it's been fairly prevalent in introductory classes as a first language, while C is not particularly common as an introductory teaching language anymore.


I disagree with it being non-beginner-friendly. So many of the resources are geared towards beginners, and its docs aren't as thorough as Java's.


Python is pretty awful for beginners really. Every keyword has half a dozen meanings and invisible characters can break your program.

People say Scheme is hard, but it is by far the easiest to teach, especially compared to Python.


What language would you consider beginner-friendly?


Python is great glue. But it's huge. Packed full of features. Has all sorts of syntactic sugar. It's the C++ of scripting languages. Great for an intermediate programmer who just needs to slap something together. But at the very beginning, these can actually be drawbacks because there's so very much to learn.

I would suggest something small, that you can realistically fit all in your head. I honestly believe Pascal is still an excellent choice. For something a little less low-level and more scripting-like, Lua seems nice. The JavaScript ecosystem is awkward, and there are things to criticize (no integers?) but by itself, it's a nice small language as well.


> It's the C++ of scripting languages.

This is such a fantastic analogy. Thank you!

C++ tries to be all the paradigms, even if they're bolted on. Python is similar in that regard. Complex OO, functional, exceptions, decorators, lots of container types, iterators, generators, operator overloads... The only thing it's missing is a macro language.

Python's "batteries included" standard library stance takes it even beyond the C++ analogy in terms of sheer volume of out of the box capabilities you get.


Python has a built-in macro language. It's called Python.


Check out Hylang, a Lispish language that compiles to Python - or maybe a Lispish syntax for Python? You can write pieces of code in .hy files and import them from .py files and vice-versa.

And then you have macros, as in Lisp: macros that traverse and transform code.


But Python still has a reputation of being a simple language for beginners. I’m not sure how that reputation endures. Is it just because of the lightweight syntax?


Because Python is easier to start programming with, BASIC-like, but a large majority fails to actually read the official documentation.

If they did, they would realize Python is as powerful as Common Lisp, Ada, C++, ... with a similar doc size in pages.


It is deceptively simple. The Python learning curve looks like a snake - https://fabienmaussion.info/acinn_python_workshop/figures/xk...


Does this curve really represent everyone's experience?

I never experienced the "I hate Python" phase. I think I went straight from "So exciting" to "productive" phase with Python.


I hated Python 2 because it did the Wrong Thing with integer division (it didn't cast to floats), so my attempts at simple scripts just didn't work.
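
Concretely:

    # Python 2: 3 / 2 == 1 (ints divide with floor)
    # Python 3: / is true division; // is floor division
    print(3 / 2)   # 1.5
    print(3 // 2)  # 1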

I reencountered Python at a data science course that was somehow mandatory for my masters in ODE numerics. It has a HUGE availability of standard and third-party packages, so you can be INSANELY productive. I'd write in Brainfuck if it was the only way to have like Pytorch and giotto-tda and networkx and fastapi in the same room.


My only Python hate is how hard it is to push for JIT adoption versus other dynamic-language communities.


Back to the original question, is there any other language that doesn't look like this?


Why the requirement for a beginner to know their entire language? Writing code is fundamentally about making changes to a system with incomplete knowledge - abstraction, encapsulation etc. Working in the presence of unknown language features is another facet of the same thing.


So you don't get bogged down in details. There are many programmers out there who can glue libraries together but have no understanding of algorithms. If you want to focus on the CS side of it, remove all distractions.

My first CS professor insisted the first year be done all on the whiteboard. And this wasn't back in the dark ages. It was long after every student had a personal computer on their desk. Everything was done in a low-level pseudocode somewhere between Algol and C.

She felt that computers and actual programming languages would get in the way of learning the principles. I'm not quite that hardcore. But she was on to something. I learned more from that class than any other.


You can avoid getting bogged down by learning a limited subset of a language, though. e.g. in Python you could teach most algorithms with just conditionals and functions, and keep things pure functional using only tuples for storage. Introduce assignment, lists and loops whenever it suits you.
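
A sketch of what that restricted subset might look like - just conditionals, functions, and tuples:

    # No loops, no mutation: recursion over tuples
    def length(xs):
        return 0 if xs == () else 1 + length(xs[1:])

    print(length((10, 20, 30)))  # 3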


Coincidentally well timed - the new PEP on pattern matching would be great to include in the limited subset for teaching:

https://www.python.org/dev/peps/pep-0636/

The above tutorial is based on user input but to my mind this mechanism seems equally good for teaching recursion on data structures as you would in say, ML.
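
For instance (a sketch, Python 3.10+), the ML-style base/recursive case split falls out naturally:

    def total(xs: list) -> int:
        match xs:
            case []:            # base case: empty list
                return 0
            case [head, *tail]: # recursive case: first element + rest
                return head + total(tail)

    print(total([1, 2, 3]))  # 6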


> It's the C++ of scripting languages.

Sorry, but Perl still wins there.


Pascal is nice to teach the basics, but I would be very discouraged if I had started learning something that's so rarely used in the "real world".


That's a fair point. Pascal is no longer a commercially significant language. Still, I do run into solid little projects that are written in FreePascal from time to time. Sometimes not so little.

https://wiki.freepascal.org/Projects_using_Free_Pascal


Lua was the first language I learned where I actually started to 'get' programming. I'd tried Python and a bunch of other languages before that, but Lua was simple and barebones enough that I could focus on learning how to program instead of how the language worked.

When I tried to learn Python I got caught up in trying to learn imports and classes and other things before I actually fully understood how stuff like variables, arrays and functions worked.


Lua could benefit from a larger standard library for sure.


Don't get me wrong, I've got my issues with Lua. It's good at what it's designed to be - a small, easily embeddable scripting language that plays nice with C - but I've definitely hit some limits with Lua on larger standalone projects that make it not so suitable.

Its tiny standard library, which kind of makes it limiting, I think is a boon for learning. It forces you to remake the wheel at times. Despite Lua not being an OOP language, I learned about OOP by trying to write my own class system in Lua using metatables, hitting the limits of what I could do, and looking into languages with built-in classes that had all the 'things I was missing' in my own hackish OOP attempts in Lua.

As far as learning goes, not having the tools baked in forces you to learn how they actually work when you have to remake them. It might not be efficient for people who just want to get the job done, but for learning it's definitely helpful.

Lua's also used in a bunch of 'fun' things. It's used as a scripting language in tons of games - half the time when I look up Lua stuff these days it's all about Roblox... which I think is big with the kids or something. It's used in fantasy consoles like the PICO-8 and the TIC-80, and it can be used to script whole games with things like LOVE and the Solarus engine.

Personally, I found it to be a good language overall for inspiring self-motivated learning. There's always something fun that can be accomplished with Lua relatively easily.


I don't think there is one objectively best language for anyone to start with, but I'd suggest something with a relatively concise set of features so as not to overwhelm a novice or hand-wave away important details.

I am very happy that I started with C. It has surprisingly few features compared to most languages and it gave me a good appreciation for how memory is managed at a relatively low level. The biggest downside was that it took me a long time to learn how to make anything I thought was "cool" at first (like GUIs, graphics, sounds, or networking). Some people might struggle with that.

I rarely write C code now. It's not as practical for the kinds of things I make, and I have nitpicks with it just like any other language. But I don't think that's the point. Your first programming language is an instrument for learning first and foremost. Once you've become adept at one language, it's relatively easy to pick up more.


I'm not the person you replied to and my perspective is pretty limited overall, but when I was younger, I found Visual Basic to be the easiest thing for a beginner. However, that's as much about the IDE as it is about the language. I remember when the Visual Studio express/community editions were first coming out. Visual Basic was a relatively simple, readable language, and the IDE itself made it simple to pick-and-place GUI components on to forms and write callbacks for them, even before I understood the concept of event-driven programming. I felt really empowered, making GUI applications that previously seemed to me like they would require years of study and experience to create.


Scheme is great. It has basically no syntax to bog you down and you can focus solely on the algorithm side of programming.


I came here to say the same thing. A lot of the discussion with modern languages focus around syntax. Why not use a Lisp which has very minimal syntax and focus on solving problems instead?

Do you have an opinion about Common Lisp? I know that Scheme is more minimal and cleaner but since you recommend Scheme I want to know what your opinion about Common Lisp is.


Not the OP, but Common Lisp relates to the genesis of IDEs alongside Smalltalk; as such it has a very good developer experience, alongside native compilation, which can only be fully appreciated when using the surviving commercial offerings.


Yes, I answered the question from a very different angle (Pascal or Lua) but my motivation was also avoiding the learner from drowning in syntactic details. Scheme would be even better than that.


And good native code compilers to choose from.


Does one need to use advanced features? The interesting aspect of Python, IMHO, is the "pseudocode" nature of its syntax, which you already get at the basic level. So a beginner needs to learn only a short bit and can write friendly code that, feature by feature, can be enriched with something more advanced over time.


If anyone has any recommendations for good ways to pick up this knowledge except by attrition, I’d be interested in hearing them!


How to write idiomatic Python? This talk is often recommended: https://www.youtube.com/watch?v=wf-BqAjZb8M

