$ time ./micropython -c 'print(1)'
./micropython -c 'print(1)' 0.00s user 0.00s system 0% cpu 0.002 total
$ time ./python2 -c 'print(1)'
python2 -c 'print(1)' 0.01s user 0.00s system 52% cpu 0.019 total
$ time ./python3 -c 'print(1)'
python3 -c 'print(1)' 0.03s user 0.00s system 85% cpu 0.035 total
It's basically doing random access I/O (the slowest thing your computer can do) proportional to (large constant factor) * (num imports in program) * (length of PYTHONPATH).
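A rough sketch of why that fan-out is multiplicative: for every import, the interpreter probes several candidate filenames in every `sys.path` directory (the suffix list and paths below are illustrative, not the exact probe order):

```python
import os

def candidate_paths(module_name, search_path):
    """Enumerate files an import may stat, one set per path entry (illustrative)."""
    suffixes = (".py", ".pyc", ".so", "/__init__.py")
    return [os.path.join(d, module_name + s) for d in search_path for s in suffixes]

paths = candidate_paths("foo", ["/usr/lib/python3", "/usr/local/lib/python3", "."])
# 3 path entries * 4 suffixes = 12 stat calls for a single import
assert len(paths) == 12
```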
When you can use it, a shebang of
can speed things up substantially.
Perl starts up an order of magnitude faster too (more like 3ms than 30ms). Ruby seems to have the same problem as Python.
It's about 60ms on my other laptop (Linux), and that's just at the threshold of noticeable and annoying.
I wonder how hard it would be to get Kivy running on this.
What parts of Python 3 syntax are missing? Which parts of the library don't compile?
But I guess they are still in the process of writing that/filling it out.
The biggest difference listed is probably no Unicode. Another significant point is that lots of library functions are only partially implemented.
Funny, considering that was the core reason for Python3 in the first place (though I can't immediately come up with a reference).
Along with this one:
(Of course there was lots and lots of thought and discussion that is not reflected in those documents)
I would characterize the Unicode/bytes change as simply the most visible and disruptive change; it didn't really stand apart as a reason, it was one of many.
I may be mischaracterising it :)
That said, I suppose it's wise to base MicroPython on the "live" branch of Python (3.x), not the legacy one (2.x), even if Unicode support is too costly for them.
The only things that don't compile properly are certain uses of "super()". super() without arguments is a very strange beast that captures the first argument of the function (interpreting it as the self object), and needs to infer its class.
Other than that, all the Python scripts in the Python 3 standard library will compile.
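To illustrate the "strange beast" above, here's a minimal sketch of what zero-argument super() does implicitly (class names here are just for illustration):

```python
class Base:
    def greet(self):
        return "base"

class Child(Base):
    def greet(self):
        # Zero-argument super() implicitly captures the enclosing class
        # (via the hidden __class__ cell) and the function's first
        # positional argument, treating it as the self object:
        return super().greet() + "/child"  # same as super(Child, self).greet()

assert Child().greet() == "base/child"
```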
There are people reading those pages who are thinking of engaging the programming community and behavior like this really gives a bad impression.
Glad to see people getting on board now, it's pretty awesome. Fitting Python onto such a resource limited microcontroller is a great achievement.
I expect to see a lot of interesting devices and software embedding Micro Python.
The true count comes from the py/ directory, for which cloc gives only 25k. And there are a lot of extra bits there, eg the inline assembler, the Thumb and X64 assembler helpers, etc.
EDIT: without the additional assemblers, cloc gives 22k. Remember that Python has ints, floats, complex and bignum, and these are all included in Micro Python. So that 22k SLOC includes complex arithmetic and a 1k bignum implementation.
$ size /usr/bin/micropython
text data bss dec hex filename
272356 1336 1216 274908 431dc /usr/bin/micropython
$ size /usr/lib/liblua.so.5.2.3
text data bss dec hex filename
195101 6408 8 201517 3132d /usr/lib/liblua.so.5.2.3
If these drop-in replacements are technically feasible, not only does Cython magically work, but so does a lot of the Python ecosystem. There's probably more work to get linking and other aspects working, but this might also be a model for moving to alternative Python implementations in general. As long as straight Python "just works" and the headers are available for compiling C modules, we're very close to having a sensible alternative to cpython that can grow without being wedded to it.
Please comment on technical requirements. Issue opened here:
Apparently, there's a kickstarter for a dev board that runs this version of python. Looks interesting.
Haven't fired it up yet.
Also, a tangential question: what is it about languages like Python and Ruby that makes them more amenable to reimplementation than Perl?
This basically means Perl is very complex and its grammar can be self-contradictory, such that behavior is undefined. C++ has a similar problem to a lesser extent.
For example, perl formats are language-level support for generating formatted text reports and charts, which is basically a whole sublanguage (much like perl regexen).
Maybe for Python, but not for Ruby. Ruby is not particularly simple to parse (though it may be simpler to parse than Perl, and clearly seems to be simpler to implement -- or perhaps it's just that more motivation exists to implement it).
I think 2 pages is not bad :) The point is, Perl is just impossible to formally define; it depends on the implementation to make arbitrary choices. This means multiple implementations are much harder, if possible at all.
Yeah, but it's not:
1) One page, or
2) Current (it claims to be for Ruby v1.4), or
3) (apparently, I can't verify this for the version of Ruby it claims to represent) Accurate 
But, yes, Ruby can be parsed independently of execution, which means you can separate the work of a new implementation into (1) building (or reusing) a parser, and (2) building a system to execute the result of the parsing. Being able to divide the work (and, as a result, to share the first part between different implementations) makes it easier to implement.
Though it may be way better than Perl in that regard -- does Perl have anything like RubySpec?
> does Perl have anything like RubySpec?
TL;DR It depends on what you count as Perl and how one defines "anything like". I'd say the production Perl 5 (P5) approach is not like RubySpec and the Perl 6 (P6) approach may have been one of the inspirations for RubySpec.
The production P5 doesn't have a direct equivalent to RubySpec -- testing isn't tied to a specification document or part thereof.
There are around 50K unit tests for this year's production P5 interpreter, 5.20, and a half million for the core modules shipped with 5.20.
The bundled tests for each "distribution" uploaded to CPAN -- there are around 30K distributions, 100K+ modules, many with numerous versions -- is then automatically tested against various Perls (from the 5.0 of 1994 thru to the in development 5.21) on the various platforms Perl runs on. There have been about 50 million reports, each of which reports on a run of the unit tests for a single distribution on one version of Perl on one platform.
P6 has a written "spec". The spec is not "formal" in the normal formal mathematical sense of the word formal. (The same appears to be true for RubySpec.) We're not talking IBM formally verifying Z here!
The P6 spec inlines the "spectests" associated with that section/paragraph/sentence of the spec. About 35K so far.
So, aiui, if one squints a little, the Perl 6 spec/test approach is, perhaps very roughly speaking, equivalent to RubySpec/RSpec.
Aiui there are specific versions of RubySpec corresponding to specific versions of Ruby. (It looked like the latest RubySpec is 2.1, ie not covering the latest Ruby version. But that doesn't sound right. Perhaps I misunderstood what I've seen/read.)
The P6 spec, and its tests, aren't yet structured to serve for multiple versions. (Talk of dealing with this surfaced recently. And of course it's all git backed so spec versions can be correlated with spectest versions with Rakudo versions.) The spec and tests are in some cases years ahead of the latest Rakudo, the main P6 implementation, and in some cases behind. (The word "spec", in regard to the P6 spec, seems to mean both specification and speculation!)
P6 ecosystem testing is currently very ad hoc. But P6ers are in the process of integrating P6 distribution (module) management with CPAN, which will mean that user contributed P6 distributions/modules will get the same automated testing that I described above for P5.
Finally, to confuse everyone, there are not only multiple implementations of P5 and P6, but also a P5 re-implementation written in... P6.
Tobias Leich is developing "v5", a P5 compiler written in P6. (Well, for now it's written in NQP, a small subset of P6, but he has said he plans to switch to full P6 this year.) The test suite for v5 is a fork of the production P5 test suite.
Hope someone found this info interesting. :)
Obviously, relying on a GC is never ideal on a resource-constrained system, but most modern scripting languages wouldn't be the same without one. The only example I know of a scripting language that has the feel of a scripting language without using a GC is Newlisp (http://www.newlisp.org/MemoryManagement.html).
Any interest from the Kivy team or related projects?
a) executable size (packaging a large runtime with every app)
b) RAM usage
Python suffers on both counts on mobile.
That said, the alternative I'm exploring is to upload a standard Firmata firmware to the microcontroller, then drive it remotely, say from python on a full computer (like raspberry pi).
I think the interesting area comes when you can actually put a fairly "smart" microcontroller firmware on the device (GRBL) and then program it remotely, say with a scripting language. At that point, the boundaries between a firmware that is a device controller and a firmware that is an open-ended remotely drivable VM start to break down. Interesting area.
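As a sketch of the remote-driving approach: the Firmata wire protocol is simple enough to speak from a few lines of Python. The command byte below is from the Firmata protocol; the helper function and its name are illustrative, not part of any library:

```python
DIGITAL_MESSAGE = 0x90  # Firmata command: digital I/O message for one 8-pin port

def digital_write_packet(pin, value, port_state=0):
    """Build the 3-byte Firmata message that sets one digital pin.

    port_state is the current bitmask of the pin's 8-pin port, since
    Firmata writes a whole port at a time.
    """
    port = pin // 8
    mask = 1 << (pin % 8)
    state = (port_state | mask) if value else (port_state & ~mask)
    # Firmata payload bytes are 7-bit: low 7 bits first, then the top bits.
    return bytes([DIGITAL_MESSAGE | port, state & 0x7F, (state >> 7) & 0x7F])

# Setting pin 13 high: port 1, bit 5 of that port's mask.
assert digital_write_packet(13, 1) == bytes([0x91, 0x20, 0x00])
```

In practice you'd write these bytes to the board's serial port (or just use an existing client library), but the point is how thin the remote-control layer is.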
> No unicode support is actually implemented. Python3 calls for strict difference between str and bytes data types (unlike Python2, which has neutral unified data type for strings and binary data, and separates out unicode data type). MicroPython faithfully implements str/bytes separation, but currently, underlying str implementation is the same as bytes. This means strings in MicroPython are not unicode, but 8-bit characters (fully binary-clean).
They plan on improving the Unicode support.
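The str/bytes split described in the quoted wiki note is visible in standard CPython 3, where str is a sequence of code points rather than bytes:

```python
s = "héllo"            # a str: 5 unicode code points
b = s.encode("utf-8")  # bytes: the UTF-8 encoding; é takes 2 bytes
assert isinstance(s, str) and isinstance(b, bytes)
assert len(s) == 5 and len(b) == 6
# Per the wiki note, MicroPython's current 8-bit str would instead
# behave like the bytes object here, counting raw bytes.
```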
I'm excited by the idea of it all but I can't seem to come up with a project that I'd want to mess with enough to purchase one.
Maybe I just need more coffee....
They are a bit more expensive than the chip in the Arduino, but well under $20 per chip. You can buy very nice evaluation Cortex boards for ~$10 from ST and Freescale[1,2].
Incidentally, Arduino now makes a board that uses the Arm Cortex chip: the Arduino Due. I believe it's around $30.
Pledge £1,000 or more
Micro Python will be ported to run on a microcontroller of your choice. The microcontroller must have the capabilities, and you are responsible for set-up costs for the development board and/or software.
Edit: Here is a good document on the differences https://github.com/micropython/micropython/wiki/Differences
I might just quit my job and do something similar!
> This is experimental, community-supported Windows port of MicroPython. It is based on Unix port, and expected to remain so. The port requires additional testing, debugging, and patches. Please consider to contribute.
Another example: calling a function in CPython requires a heap allocation of the call frame. In Micro Python this is generally not needed (unless the function call is significantly complicated).
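That per-call overhead is easy to observe from Python itself. A rough micro-benchmark (absolute numbers will vary by machine; this only shows that a call costs measurably more than the work inside it):

```python
import timeit

def add_one(x):
    return x + 1

# In CPython each call sets up a heap-allocated frame object (modulo
# CPython's internal frame reuse); Micro Python generally keeps this
# state on the C stack for simple calls.
call_cost = timeit.timeit("add_one(1)", globals={"add_one": add_one}, number=100_000)
inline_cost = timeit.timeit("1 + 1", number=100_000)
assert call_cost > inline_cost  # the call overhead dominates the addition
```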
This thing is amazing; it took me a week to stop using Arduino.
There are a large number of embedded Windows CE devices running in retail.