Micro Python – a lean and efficient implementation of Python 3 (python.org)
468 points by maxerickson on June 3, 2014 | hide | past | favorite | 93 comments

It was just pointed out to me that micropython starts an order of magnitude faster than both python2 and python3:

    $ time ./micropython -c 'print(1)' 
    ./micropython -c 'print(1)'  0.00s user 0.00s system 0% cpu 0.002 total
    $ time ./python2 -c 'print(1)'
    python2 -c 'print(1)'  0.01s user 0.00s system 52% cpu 0.019 total

    $ time ./python3 -c 'print(1)'
    python3 -c 'print(1)'  0.03s user 0.00s system 85% cpu 0.035 total

Yeah this is one of my longstanding annoyances with Python... the interpreter is quite slow to start up because every import generates a ton of stat() calls. Moreover, there is a tendency to use a very long PYTHONPATH, which increases the number of stats.

It's basically doing random access I/O (the slowest thing your computer can do) proportional to (large constant factor) * (num imports in program) * (length of PYTHONPATH).

When you can use it, a shebang of

#!/usr/bin/python -S

can speed things up substantially.
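A quick way to see the effect of `-S` on your own machine; this is just a sketch, and absolute timings will vary by system and Python build:

```python
import subprocess
import sys
import time

# Measure interpreter startup with and without `site` initialization.
# -S skips importing the site module, which cuts down the module search
# (and the stat() calls that go with it) at startup.
def startup_time(*extra_args):
    start = time.perf_counter()
    subprocess.run([sys.executable, *extra_args, "-c", "pass"], check=True)
    return time.perf_counter() - start

default = startup_time()
no_site = startup_time("-S")
print(f"default: {default * 1000:.1f} ms, with -S: {no_site * 1000:.1f} ms")
```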

Perl starts up an order of magnitude faster too (more like 3ms than 30ms). Ruby seems to have the same problem as Python.

What are you doing that you are noticing this routinely and finding it to be a large annoyance?

Any command-line script in Python that uses a significant codebase or a significant number of libraries. For me the main example is Mercurial -- it takes about 200ms (on my Mac) to run even a trivial command, and that's plenty long enough to get in the way.

It's about 60ms on my other laptop (Linux), and that's just at the threshold of noticeable and annoying.

Hmm, should be possible to reduce that to (num imports) + (length of pythonpath), no?

For each import statement, it has to check each directory in the path. So (num imports) * (length of path).

If you compare the startup code needed, it is clear where at least a major part of this time comes from. Micropython has close to no startup code (it just initializes some dictionaries, stores arguments, etc.). Something like Python3, on the other hand: oh my. Recently I was stepping through pretty much all of it in the debugger to figure out a problem; the day afterwards I had a cramp in my finger from hitting the 'Step' button too much :P

This is very interesting. This implementation may well address the 3 killer issues for Python on Android: start-up times, memory usage, and battery draw.

I wonder how hard it would be to get Kivy running on this.

I wonder how that would stack with PyPy3 or if there are too many changes that wouldn't port over.

PyPy has a much slower startup.

Sorry if I was unclear; what I was wondering was whether there might be a practical benefit to adapting the changes that were made in micropython to PyPy3, or if I was misunderstanding the nature of micropython's optimizations.

Bear in mind it's also statically linked, so it doesn't need to read a bunch of dynamic libraries from disk.

Actually, no, the unix version is dynamically linked (against libc, libm, etc).

> Supports almost full Python 3 syntax, including yield (compiles 99.99% of the Python 3 standard library).

What parts of Python 3 syntax are missing? Which parts of the library don't compile?

It's discussed some here:


But I guess they are still in the process of writing that/filling it out.

The biggest difference listed is probably no Unicode. Another significant point is that lots of library functions are only partially implemented.

> The biggest difference listed is probably no Unicode.

Funny, considering that was the core reason for Python3 in the first place (though I can't immediately come up with a reference).

This is sort of the design document for Python 3:


Along with this one:


(Of course there was lots and lots of thought and discussion that is not reflected in those documents)

I would characterize the Unicode/bytes change as simply the most visible and disruptive change; it didn't really stand apart as a reason, it was one of many.

From what I remember from various posts, most other changes could be done gradually on the 2.x branch. Unicode couldn't.

I may be mischaracterising it :)

That said, I suppose it's wise to base MicroPython on the "live" branch of Python (3.x), not the legacy one (2.x), even if Unicode support is too costly for them.

Unicode will come, there are plenty of nice-enough implementations for Lua that could be adapted.

> What parts of Python 3 syntax are missing? Which parts of the library don't compile?

The only things that don't compile properly are certain uses of "super()". super() without arguments is a very strange beast that captures the first argument of the function (interpreting it as the self object), and needs to infer its class.

Other than that, all the Python scripts in the Python 3 standard library will compile.
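The compiler magic behind zero-argument super() can be seen by comparing it with the explicit two-argument form, which needs no special support:

```python
class Base:
    def greet(self):
        return "base"

class Child(Base):
    def greet(self):
        # Zero-argument form: the compiler creates a hidden __class__ cell
        # and treats the function's first positional argument as self --
        # the implicit behaviour that's hard for other compilers to mimic.
        return super().greet() + "+child"

    def greet_explicit(self):
        # Equivalent explicit form: class and instance passed directly.
        return super(Child, self).greet() + "+child"

print(Child().greet())           # base+child
print(Child().greet_explicit())  # base+child
```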


I am not sure what you went for with this, but it really strikes the reader as over the top and condescending; there's no need for that in a discussion thread about a programming language.

There are people reading those pages who are thinking of engaging the programming community and behavior like this really gives a bad impression.


Yeah, I actually spent a few minutes checking out that site before asking this question and the answer is not anywhere obvious.

Slowly gaining traction it seems. Has been posted before without even getting comments (https://news.ycombinator.com/item?id=6763123) and then some comments (https://news.ycombinator.com/item?id=6996692) but already has more points now :]

The kickstarter for the controller boards got a few votes too, but didn't get much traction: https://news.ycombinator.com/item?id=6727326

Glad to see people getting on board now, it's pretty awesome. Fitting Python onto such a resource limited microcontroller is a great achievement.

WOW!! I see this as a serious competitor to Lua. I think Lua became so popular because of its small footprint and easy embeddability... Perhaps this is the first step to a Python revolution...

I expect to see a lot of interesting devices and software embedding Micro Python.

It's still well over an order of magnitude bigger than Lua. MicroPython is ~200k non-blank-non-comment lines of code (as reported by cloc), while Lua 5.1 is 12k.

Actually, most of that SLOC is in the stmhal/ port, of which ~100k is the ST library.

The true count comes from the py/ directory, for which cloc gives only 25k. And there are a lot of extra bits there, eg the inline assembler, the Thumb and X64 assembler helpers, etc.

EDIT: without the additional assemblers, cloc gives 22k. Remember that Python has ints, floats, complex and bignum, and these are all included in Micro Python. So that 22k SLOC includes complex arithmetic and a 1k bignum implementation.

    $ size /usr/bin/micropython 
       text    data     bss     dec     hex filename
     272356    1336    1216  274908   431dc /usr/bin/micropython
    $ size /usr/lib/liblua.so.5.2.3 
       text    data     bss     dec     hex filename
     195101    6408       8  201517   3132d /usr/lib/liblua.so.5.2.3

I think it would be really interesting for µPy to be opened up to run as a hosted runtime.

And then there is LuaJIT. As fast as V8, smaller than Micro Python, with a dead simple FFI.

God I wish we had Lua instead of Javascript in the browser.

and in the desktop.

It always strikes me that the parts of Python that create problems for people trying to optimize it are all things of relatively small importance. Like __new__ and __del__ semantics, and changing the stack by inspection. I wish Python3 had just let go of the slow scripting-only parts.

The problem is that often while regular users are not directly using those features, they are using libraries that do use them. As a concrete example, people writing webapps with Django probably (often) aren't using some of the crazier class metaprogramming, but they are using Django's models, which use lots of it.
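A toy sketch of the declarative-class pattern Django's models rely on; the names here (ModelMeta, Field, _fields) are illustrative, not Django's actual internals:

```python
class Field:
    """Placeholder for a declarative field descriptor."""
    pass

class ModelMeta(type):
    def __new__(mcs, name, bases, ns):
        # Collect class attributes that look like fields into a registry,
        # the kind of metaprogramming libraries do on users' behalf.
        ns["_fields"] = {k: v for k, v in ns.items() if isinstance(v, Field)}
        return super().__new__(mcs, name, bases, ns)

class Model(metaclass=ModelMeta):
    pass

class User(Model):
    name = Field()
    email = Field()

print(sorted(User._fields))  # ['email', 'name']
```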

Attn: Kivy can really use this for mobile deployments, but we use Cython and almost everyone needs CPython C modules. We need to investigate making drop-in replacements for Python.h and other CPython headers to stub out reference counting etc., which micro-python doesn't use. The compiler will just skip the calls entirely in some cases.

If these drop-in replacements are technically feasible, not only does Cython magically work, but so does a lot of the Python ecosystem. There's probably more work to get linking and other aspects working, but this might also be a model for moving to alternative Python implementations in general. As long as straight Python "just works" and the headers are available for compiling C modules, we're very close to having a sensible alternative to cpython that can grow without being wedded to it.

Please comment on technical requirements. Issue opened here: https://github.com/micropython/micropython/issues/658

The project's website, from the linked post:


Apparently, there's a kickstarter for a dev board that runs this version of python. Looks interesting.

Got one!

Haven't fired it up yet.

There is a talk about it here from a few weeks ago https://www.youtube.com/watch?v=hmGISrwPtyA

This project provides a so-called pyboard. It is interesting! The board's design can be found here: https://github.com/micropython/pyboard

Does it strike anyone else as odd that garbage collection is used in such a resource constrained environment?

Also tangent question, what is it about languages like Python and Ruby that make it more amenable to reimplementation than Perl?

To take a stab at the second question, I think BNF that fits on one page https://docs.python.org/3.4/reference/grammar.html vs http://perldoc.perl.org/perlfaq7.html#Can-I-get-a-BNF%2fyacc...

This basically means Perl is very complex and its grammar can be self-contradicting, such that behavior is undefined. C++ has a similar problem to a lesser extent.

To expand on the non-syntax, perl has an incredible amount of language-level features, which may appear very weird to people who have only seen it from afar.

For example, perl formats[0] are language-level support for generating formatted text reports and charts, which is basically a whole sublanguage (much like perl regexen).

[0] http://perldoc.perl.org/perlform.html

That's pretty crazy. I used Perl a lot, but haven't seen that feature :)

Printing nicely formatted reports was one of Larry Wall's original motivations for Perl.

When I looked at such things last, Python had about 29 terminals & nonterminals in its grammar. Ruby had 110. (These are numbers I remember from playing with particular parser libraries, so YMMV.) By contrast, a commercial Smalltalk with some syntax extensions had 8. I have no idea about Perl, but I'd guess it's about the same as Ruby.

> To take a stab at the second question, I think BNF that fits on one page

Maybe for Python, but not for Ruby. Ruby is not particularly simple to parse (though it may be simpler to parse than Perl, and clearly seems to be simpler to implement -- or perhaps it's just that more motivation exists to implement it.)

First google result: http://www.cse.buffalo.edu/~regan/cse305/RubyBNF.pdf

I think 2 pages is not bad :) The point is, Perl is just impossible to formally define; it depends on the implementation to make arbitrary choices. This means multiple implementations are much harder, if possible at all.

> First google result: http://www.cse.buffalo.edu/~regan/cse305/RubyBNF.pdf

Yeah, but its not:

1) One page, or

2) Current (it claims to be for Ruby v1.4), or

3) (apparently, I can't verify this for the version of Ruby it claims to represent) Accurate [1]

[1] http://stackoverflow.com/questions/663027/ruby-grammar

But, yes, Ruby can be parsed independently of being executed, which means you can separate the work of a new implementation into (1) building (or reusing) a parser, and (2) building a system to execute the result of the parsing. Being able to divide the work (and, as a result, to share the first part between different implementations) makes it easier to implement.

I don't think it's as much about ease of implementation as it is about the finality of it and formal verification of completeness of any implementation. But of course, those are very much related :)

"Formal verification" of the completeness of any Ruby implementation has been a problem for some time (even when multiple implementations existed), as Ruby had neither a reasonably-complete specification nor reasonably-complete unit tests (for a while, the closest thing to a "standard" was "does it seem to be able to run Rails"). The RubySpec effort (which was, IIRC, led by the Rubinius team, though it's been accepted generally since) has improved that condition tremendously, though I still wouldn't say that Ruby is exactly in a great condition with regard to the ability to have "formal verification" of an alternative implementation.

Though it may be way better than Perl in that regard -- does Perl have anything like RubySpec?

I don't know either Ruby or Perl 5 well enough to be 100% sure of the following answers but my interest in P6 and its spec/test approach has led me to investigate the spec/test framework of several languages and their ecosystems.

> does Perl have anything like RubySpec?

TL;DR It depends on what you count as Perl and how one defines "anything like". I'd say the production Perl 5 (P5) approach is not like RubySpec and the Perl 6 (P6) approach may have been one of the inspirations for RubySpec.


The production P5 doesn't have a direct equivalent to RubySpec -- testing isn't tied to a specification document or part thereof.

There are around 50K unit tests for this year's production P5 interpreter, 5.20, and a half million for the core modules shipped with 5.20.

The bundled tests for each "distribution" uploaded to CPAN -- there are around 30K distributions, 100K+ modules, many with numerous versions -- are then automatically run against various Perls (from the 5.0 of 1994 through to the in-development 5.21) on the various platforms Perl runs on. There have been about 50 million reports, each of which reports on a run of the unit tests for a single distribution on one version of Perl on one platform.


P6 has a written "spec". The spec is not "formal" in the normal formal mathematical sense of the word formal. (The same appears to be true for RubySpec.) We're not talking IBM formally verifying Z here!

The P6 spec inlines the "spectests" associated with that section/paragraph/sentence of the spec. About 35K so far.

So, aiui, if one squints a little, the Perl 6 spec/test approach is, perhaps very roughly speaking, equivalent to RubySpec/RSpec.

Aiui there are specific versions of RubySpec corresponding to specific versions of Ruby. (It looked like the latest RubySpec is 2.1, ie not covering the latest Ruby version. But that doesn't sound right. Perhaps I misunderstood what I've seen/read.)

The P6 spec, and its tests, aren't yet structured to serve for multiple versions. (Talk of dealing with this surfaced recently. And of course it's all git backed so spec versions can be correlated with spectest versions with Rakudo versions.) The spec and tests are in some cases years ahead of the latest Rakudo, the main P6 implementation, and in some cases behind. (The word "spec", in regard to the P6 spec, seems to mean both specification and speculation!)


P6 ecosystem testing is currently very ad hoc. But P6ers are in the process of integrating P6 distribution (module) management with CPAN, which will mean that user contributed P6 distributions/modules will get the same automated testing that I described above for P5.


Finally, to confuse everyone, there are not only multiple implementations of P5 and P6, but also a P5 re-implementation written in... P6.

Tobias Leich is developing "v5", a P5 compiler written in P6. (Well, for now it's written in NQP, a small subset of P6, but he has said he plans to switch to full P6 this year.) The test suite for v5 is a fork of the production P5 test suite.

Hope someone found this info interesting. :)

Why yes, yes it does. But it's complex. The rule for embedded C is to never call malloc() or free(). You manage everything yourself, because heap fragmentation could cause malloc() to fail and then your system crashes. Given that, reference counting would seem to use more space for the same set of variables, which would be bad. OTOH, the pause that may occur while collection happens can be fatal to such a system. The trade here is memory use vs. non-deterministic run time. I'll take the memory use any day in such a system.

It depends on how resource constrained and real-time the system is. I imagine Micro Python will not be used in cases where you never use malloc().

Actually, reference counting needs some garbage collection in order to handle cyclic structures. I believe that basically refcounting is more space-efficient and less CPU-efficient. I'm not totally sure about this, and it might depend on the actual techniques used on both sides as well. If a justification is to be given for this switch, I would say that increasing RAM size is generally easier than increasing CPU speed.
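The classic case where reference counting alone falls short is a cycle; in CPython the separate cycle detector has to step in:

```python
import gc

class Node:
    pass

gc.collect()                 # clear any pre-existing garbage first

a, b = Node(), Node()
a.partner, b.partner = b, a  # reference cycle: each refcount stays >= 1
del a, b                     # both are now unreachable, but pure
                             # refcounting alone would never free them

collected = gc.collect()     # the cycle detector reclaims the pair
print(collected)             # >= 2 (the two Nodes, plus their __dict__s)
```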

Obviously, operating on a GC is never ideal on a resource-constrained system, but most modern scripting languages wouldn't be the same without it. The only example I know of, of a scripting language that has the feeling of the scripting language but without using a GC is Newlisp (http://www.newlisp.org/MemoryManagement.html).

I'm not sure about Ruby, but Python is a smaller language than Perl. Avoiding parsing complexity is also an explicit design goal in Python (where Perl probably somewhat prefers convenience).

The kickstarter updates for this project are a good resource for the thought process behind its design and how it was all implemented, very interesting stuff:


Is this a route to a good Python stack for mobile?

Any interest from the Kivy team or related projects?

Should be pretty easy; the interpreter core is damn clean and minimal. Look at the bare-arm port; it should be pretty easy to convert that to a static library port. It uses the 16-bit Thumb2 instruction set though, so that might involve additional hacking if you wanna target 64-bit ARM cores, which some of the latest phones have. Ping me at @errordeveloper if you wanna chat about that.

People are constantly running Java on mobile and that's very heavy. With phone storage into the gigs. There's no reason Python couldn't have been used before Micropython, except that vendors didn't want to allow it in.

It's about:

a) executable size (packaging a large runtime with every app)

b) RAM usage

Python suffers on both counts on mobile.

Yes, I'm trying to feel out the necessary work to get Cython and other CPython modules working by hacking the Python headers to make a plain CPython C module compile to a micro-python compatible module. It's not clear that there are C modules yet, so maybe there's a lot of work to do, but it just struck me how much both ecosystems can benefit if the headers can indeed be hacked so that C modules (and Cython modules) "just work."

My thoughts exactly. Python is a beast to embed on iOS.

This is pretty neat. I'm gaining an appreciation for microcontrollers, and using high-level languages is attractive.

That said, the alternative I'm exploring is to upload a standard Firmata firmware to the microcontroller, then drive it remotely, say from python on a full computer (like raspberry pi).

I think the interesting area comes when you can actually put a fairly "smart" microcontroller firmware on the device (GRBL) and then program it remotely, say with a scripting language. At that point, the boundary between a firmware that is a device controller and a firmware that is an open-ended remotely drivable VM starts to break down. Interesting area.

If someone is wondering what's going on with unicode support:

> No unicode support is actually implemented. Python3 calls for strict difference between str and bytes data types (unlike Python2, which has neutral unified data type for strings and binary data, and separates out unicode data type). MicroPython faithfully implements str/bytes separation, but currently, underlying str implementation is the same as bytes. This means strings in MicroPython are not unicode, but 8-bit characters (fully binary-clean).
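The str/bytes separation described in the quote is easy to see in standard Python 3, where str holds code points and bytes holds raw octets:

```python
# Python 3's strict str/bytes split, which MicroPython keeps at the type
# level even though its str is currently byte-oriented internally.
s = "héllo"
b = s.encode("utf-8")

print(type(s).__name__, type(b).__name__)  # str bytes
print(len(s), len(b))                      # 5 6 ('é' is two UTF-8 bytes)

try:
    s + b                                  # mixing the two types fails
except TypeError:
    print("str and bytes cannot be concatenated")
```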

This was linked in a later post to the mailing list thread:


They plan on improving the Unicode support.

Just got my board (kickstarter backer). Can't wait to take a few hours to play with it!

Could you share what projects you can imagine this being great for?

I'm excited by the idea of it all but I can't seem to come up with a project that I'd want to mess with enough to purchase one.

Maybe I just need more coffee....

This is pretty cool. Could it run on something like an Arduino?

If you're talking about the traditional 8-bit AVR Arduinos, the answer is almost certainly no. The AVR boards like the ATmega328 (on the Uno) only have a few kbyte of RAM available. But MicroPython apparently can run on some pretty tiny and inexpensive boards: namely the Arm Cortex microcontrollers.

They are a bit more expensive than the chip in the Arduino, but well under $20 per chip. You can buy very nice evaluation Cortex boards for ~$10 from ST and Freescale[1,2].

Incidentally, Arduino now makes a board that uses the Arm Cortex chip: the Arduino Due[3]. I believe it's around $30.

1. http://www.element14.com/community/docs/DOC-46626/l/element1...

2. http://www.st.com/web/catalog/tools/FM116/SC959/SS1532/PF252...

3. http://arduino.cc/en/Main/arduinoBoardDue

It appears from [0] that the chip of choice is the STM32F405RGT (datasheet [1]). This is from the Cortex M4F series, which includes such wonderful things as a hardware floating-point unit. That is wonderful news, although this board appears to have no external memory, so it would be limited to 128kB.

[0] https://raw.githubusercontent.com/micropython/pyboard/master... [1] http://www.alldatasheet.com/datasheet-pdf/pdf/510587/STMICRO...

You can get versions with external memory.

It runs on the micropython board: http://micropython.org

Besides the Arduino Due and Freescale dev boards mentioned, you could probably get the TI LaunchPad Tiva C Series to run Micro Python. TI sells the low cost model for 13 USD.


from the kickstarter page:

Pledge £1,000 or more

Micro Python will be ported to run on a microcontroller of your choice. The microcontroller must have the capabilities, and you are responsible for set-up costs for the development board and/or software.

I think the cheapest chips it runs on cost $3 in 1K volume.

Do you have a part # or reference?

Anyone know the "scale" of this re-implementation? Is it simply a fork of CPython with some performance tweaks, or is it a primarily new code-base? Regardless it's pretty cool.

It's definitely a new code-base. In terms of comparison, I'd guess this uses "cheaper" algorithms that use less memory and don't scale as well. There are many things that CPython does behind the scenes that most people don't fully appreciate.

Edit: Here is a good document on the differences https://github.com/micropython/micropython/wiki/Differences

I am truly very happy that he was able to fund this cool project. The funding went very well, it seems: more than 6x the initial goal!

I might just quit my job and do something similar!

Is there a way to try it on Windows?


> This is experimental, community-supported Windows port of MicroPython. It is based on Unix port, and expected to remain so. The port requires additional testing, debugging, and patches. Please consider to contribute.

It works ok though: all tests pass both for msvc and mingw

So upset that I missed this kickstarter. I've been obsessively refreshing the homepage to see if I can buy a micropython board yet.

Nice! How much of this optimization can be ported back to CPython, keeping compatibility but improving memory usage and speed?

A lot of the internals are significantly different, and would be difficult to back port. Eg, a Micro Python object is a machine word which might be a pointer to a real object, a small integer, or an interned string. By stuffing small integers (and strings) into the unused bit-space of a machine word, Micro Python can do a lot of integer operations without allocating heap RAM. This is in contrast to CPython where all integers are on the heap (although the ones below 257 are cached).
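The low-bit tagging trick can be simulated in plain Python for illustration; Micro Python does this on raw machine words in C, and its actual bit layout differs from this sketch:

```python
# Heap pointers are word-aligned, so their low bit is always 0.  Setting
# that bit marks the word as an inline small integer instead of a pointer,
# letting integer ops proceed with no heap allocation at all.
def make_small_int(n):
    return (n << 1) | 1          # shift the value up, tag the low bit

def is_small_int(word):
    return word & 1 == 1         # tagged => inline integer, not a pointer

def small_int_value(word):
    return word >> 1             # arithmetic shift recovers the value

w = make_small_int(42)
print(is_small_int(w), small_int_value(w))          # True 42
print(small_int_value(make_small_int(-7)))          # -7
```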

Another example: calling a function in CPython requires a heap allocation of the call frame. In Micro Python this is generally not needed (unless the function call is significantly complicated).

So never mind back porting. Can the implementation be extended to 100 percent compatibility without losing the benefits? In other words, could it replace cpython?

I would rather have pypy replace cpython than this. Seems like a better bet too due to the design goals.

Quite simply, this project cut most of the features that big programs depend on, in order to make a smaller, faster core. Things like most of the standard library, meta-programming, and general scalability are included in these cuts. Most people don't realise how much python does underneath for them, which is why cpython is so "bloated" in comparison to this.

The problem is that CPython core developers don't think performance is a priority. They think CPython is fast enough for everybody.

I got one of the dev boards like a month ago; I was part of the kickstarter.

This thing is amazing; it took me a week to stop using Arduino.

We use a lot of Python on the Windows platform -- especially the win32 and multiprocessing modules. Would love a variant of micropython on Windows that can support them.

There are a large number of embedded Windows CE devices running in retail.

Any benchmarks vs CPython available? (Not startup speed.) Just curious.

Anyone else read that as "Monty Python"?
