Mono has traditionally had a reputation for being a lot slower than .NET - perhaps that isn't the case anymore either? It would be great to see some independent third-party figures on this.
It's funny that the Microsoft-created C# is one of the very few languages that lets you make native apps on iOS, Android, and WP.
If you can in _any way_ find a method of using the existing Android UI components while making them faster, that's a massive win.
Games (and often multimedia creation apps) have been rolling their own UI for a long time.
So games have always been the fastest, most optimized performers on any platform.
This isn't about games though - this is about regular ol' apps, many of which are incredibly slow and could really use the 8x performance improvement that these folks have been able to achieve.
As a mobile dev myself, I don't think mobile platforms are moving away from Obj-C or Java. Sure, we have a lot of custom-built widgets to overcome shortcomings in what Google/Apple provides, but ultimately they're subject to the same performance limitations that plague the stock widgets. Nobody out there is digging into low-level OpenGL optimizations to write a custom navigation bar, for example.
Most devs use some kind of wrapper library like Cocos2d to take the pain out of raw OpenGL, of course, and they can usually get by with a much simpler library of widgets than the OS itself has to provide.
IMHO, the biggest problem with Windows Phone 7 (one that wasn't emphasized much in all the comparisons to iOS and Android) is the lack of support for native code. Whether this was done for legitimate reasons (like portability between ARM and x86) is not really important; what's important is that C++ is the language most games are written in, and if you lack support for it, most games will only get ported if your platform is a clear winner - which is not yet the case for Windows Phone or Google TV. Therefore, from what I know, Windows Phone 8 will have support for native code, because it badly needs it.
So I wouldn't worry too much about it.
We need to consider memory consumption too. Dalvik obviously uses less memory than the regular JVM, and probably less than Mono if I were to guess.
In general, HotSpot is a more advanced VM than Mono is. They support dynamic recompilation at a higher optimization level, which we do not.
This is part of the debate in Google vs Oracle. Google could have used HotSpot had they figured out some agreement, but instead ended up with Dalvik which lacks many of the advanced optimizations of HotSpot.
That said, java -server compared against out-of-the-box Mono 2.10.8 is not an apples-to-apples comparison; it is by no means "regular Java". That is Java tuned with specific parameters, and it comes at the expense of interactive startup time. Mono's default is a fast JIT and fast startup.
Java -server uses a fixed heap; this means that you must preallocate how much memory the application will use during its entire lifetime. This is good for performance because the GC knows that it only has to scan, for example, memory between points A and B - two comparisons are all you need during a GC scan. Meanwhile, Mono uses a dynamic heap, which means that it can grow its memory usage based on demand (no need to fine-tune the maximum heap every time your app crashes due to a low setting) and can return released memory to the OS. This complicates the GC, because now we can't just compare against two values; we need to consider every object across a series of differently sized heaps.
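To illustrate the difference (a minimal sketch with hypothetical types, not Mono's actual collector code): with a fixed heap, the "is this pointer in our heap?" test is two comparisons; with a growable heap it becomes a search over segments.

    using System.Collections.Generic;

    // Hypothetical sketch of the pointer-range test described above.
    class FixedHeap
    {
        ulong start, end;                        // preallocated once, never moves
        public FixedHeap(ulong s, ulong size) { start = s; end = s + size; }

        public bool Contains(ulong ptr)          // two comparisons, done
        {
            return ptr >= start && ptr < end;
        }
    }

    class Segment { public ulong Start, End; }

    class GrowableHeap
    {
        List<Segment> segments = new List<Segment>();

        public void Grow(ulong s, ulong size)    // heap grows on demand
        {
            segments.Add(new Segment { Start = s, End = s + size });
        }

        public bool Contains(ulong ptr)          // now every segment must be considered
        {
            foreach (Segment seg in segments)
                if (ptr >= seg.Start && ptr < seg.End) return true;
            return false;
        }
    }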
With Mono 2.11, you can force your app to use a fixed heap. The only people that really have a use for this are HPC people and a handful of server people. To be honest, most people don't want to configure the max heap of their server apps by trial and error, crashing along the way.
The second component is that Mono's defaults are aimed at desktop/mobile configurations, not server loads. If you want to run server loads, run Mono with the --llvm flag (e.g. "mono --llvm app.exe"): you will have very slow startup times, but you will get code quality that is essentially what you get from an optimizing C compiler nowadays.
As for Mono vs Dalvik memory use, in practice people run Mono with Dalvik (Mono for Android), not in standalone mode, so we end up paying for Dalvik's memory footprint. But I agree, it would be nice to find out how much memory it actually uses.
Considering that we get the benefits of a more advanced GC, value types, and reified generics while Java does not, it just seems like we would use less memory for comparable loads.
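To make the value types/generics point concrete (a sketch, not a measurement; the sizes in the comments are ballpark figures):

    using System;
    using System.Collections.Generic;

    struct Point { public int X, Y; }   // value type: 8 bytes inline, no object header

    class Program
    {
        static void Main()
        {
            // List<int> is backed by an int[]: 4 bytes per element,
            // no boxing, one contiguous allocation.
            var ints = new List<int>();
            for (int i = 0; i < 1000000; i++)
                ints.Add(i);

            // Same for user-defined structs: a List<Point> is one array
            // of 8-byte values, not a million tiny heap objects.
            var points = new List<Point> { new Point { X = 1, Y = 2 } };

            // Java's ArrayList<Integer>, by contrast, stores a reference
            // per slot plus a boxed Integer object per element - roughly
            // several times the footprint for the same data.
            Console.WriteLine(ints.Count + " " + points.Count);
        }
    }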
Since at least Java 5 the default on machines with 2 or more processors and 2GB or more memory has been -server. That's what regular out-of-the-box Java is on those machines.
Do many new laptops not have 2 processors and have less than 2GB?
Here is the Java source for binary-trees, which is 6x faster than the C#.
Notice that the Java code is multi-threaded, while the C# version is single-threaded. And the Q6600 being used there has 4 cores.
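For what it's worth, the C# version could be spread across cores the same way with the TPL (a sketch of the idea, not the benchmark's actual code; the depth range is arbitrary):

    using System;
    using System.Threading.Tasks;

    class TreeNode
    {
        TreeNode left, right;

        static TreeNode Build(int depth)
        {
            if (depth == 0) return new TreeNode();
            return new TreeNode { left = Build(depth - 1), right = Build(depth - 1) };
        }

        int Check()
        {
            return left == null ? 1 : 1 + left.Check() + right.Check();
        }

        static void Main()
        {
            // Build and check the trees for each depth on all cores instead of one.
            Parallel.For(2, 11, depth =>
            {
                Console.WriteLine("depth " + depth + ": check " + Build(depth).Check());
            });
        }
    }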
Also, I'm not sure how optimized the C# sources are in relation to the Java sources.
For example, in the Mandelbrot benchmark there are several differences:
- Java uses more arrays
- C# and Java use different break conditions
- Calculations are done differently (with similar results)
Overall they're doing the same things; the C# is more like what I'd remember writing, so I'm guessing the Java code has some smart tricks up its sleeve.
What's more bizarre is that Windows Phone 7 does not have a development kit for C/C++ - only C#/XNA. I've heard on HN that this may change for WP8. I hope so.
I can develop ONE piece of core logic and have it run on iOS, Android, Windows, Mac, even the web (via NativeClient). I posted an Ask HN about this earlier today (though it got little interest). That sounds like a win.
I would expect the opposite - some runtimes do not allow exceptions, RTTI, or dynamic_cast, or simply break ABI interworking due to name mangling and calling conventions for methods. Also virtual inheritance and its workings (where the this pointer is stored, and how multiple virtual inheritance is done).
V8 is also C++, but lots of other successful VMs are written in "C".
I like "C" because it's more limiting, hence I won't see templates overused, or latest trick used (SFINAE). It's also easier for me to read.
Also, at the binary level things are simpler - here is your function and its name; it takes this and this as parameters, returns this and that, and follows this convention (stdcall, pascal, fortran, etc.).
With C++, my biggest pain has been name mangling. I'm not sure why this was never standardized. It's actually quite useful that the mangled name of a function encodes the types it takes - that would have made dynamic binding much easier - but GNU, MSVC, and others totally differ in how they mangle names (and it changes a lot from version to version). Exception handling is another mess (mingw/cygwin/msvc - there is big confusion there, and at the binary-compatibility level it's hard to combine one with another).
My last show-stopper for C++ was: on this platform you can't use this feature. For example, on certain game consoles exceptions are not allowed. So you go with setjmp/longjmp, but then that does not unwind the stack and call destructors.
Most of all, I got bitten by a heavily templated math library - with optimizations it was much faster than the one before it, but without them (debug builds) it was several times slower. Why? Because it relied on inlining everything, and back then gcc for the PlayStation 2 was not really inlining everything, even when forced.
So every overloaded operation (matrix by matrix, or matrix by vector) was actually a function call.
There is another gotcha - overloaded C++ operators sometimes lose abilities the built-in ones have. For example, overloaded && and || no longer short-circuit; and there are many other gotchas like that.
Most of all, I can't stand boost - it's just too huge.
But I'm totally fine with C++ used the simple way (whatever that is) - the best example for me is the ZeroMQ lib: it's C++ internally, they limit themselves by not using exceptions, and they provide a "C" interface by default. This makes it very easy to use from other languages - Python, Lua, Ruby, etc.
You're right, a standardized mangling scheme would have been nice... but there's always 'extern "C"' if you need a C++ function from asm or a linker script.
Compilers really should short-circuit &&/||... were you using some kind of experimental compiler?
My choice right now is Lua + C - for fun & experimentation. I also dabbled with Common Lisp, but haven't touched code in it in months, might get back at it later...
Then again, Linus said C++ developers were dumb and C was better.
Boost is almost a second standard library (it is one reference source used in the standards process), and it demonstrates a lot of what's both cool and awful about C++. Boost invented many of the smart-pointer classes that are now standard and that save a lot of the boilerplate that makes memory management in C annoying. Boost has things like Asio, which lets you write portable synchronous/asynchronous network code. On the other hand, it has Spirit, a massive abuse of operator overloading that is at once impossibly complex and nifty/convenient.
Linus's opinion is worth noting, but he's hardly the best source.
I like C#, although Mono always failed to be nearly as good as .NET, and that hindered C# a lot.
If Microsoft had realized this and made .NET multiplatform as well, and open source, we might live in a different world today. Heck, Microsoft could be leading.
But then again, rewriting the past is easier said than done.
And that's not a bad thing. I love C#/.NET (one of my projects, Tagxedo, was built with Silverlight), but given that Microsoft doesn't have even the slightest will to push it, we may as well move on, or find some way to ride Mono. C++ is currently on my mind since it covers all the ground (sans UI, which you still have to do per platform anyway), probably in a more straightforward way than Mono.
Derived from Win8 != WinRT - it could just be the kernel and not the app model.
I guess Armin Ronacher's blog post here makes a pretty good introduction:
That's one reason why the Mono implementation was always suboptimal: implementing that API on top of poll, epoll, kqueue, and the like is not really straightforward. It's also the reason why the attempts to build alternative web servers use bindings straight to libevent, bypassing the socket API.
The original plan was indeed for .NET to be portable to non-Windows platforms, but Windows-specific details have leaked in nonetheless.
- Microsoft has released a cross-platform CLR ("Rotor"). Win/Mac/BSD I believe.
- C# and the CLR are ECMA specifications.
- A lot of the class libs are definitely not Windows only.
- Microsoft also has Silverlight (CLR) running on Mac, if that counts.
The whole .NET stack, apart from some Windows-specific libraries (some COM+ management stuff, WinForms), was definitely made to be cross-platform.
IIRC Rotor was more of a proof of concept, only supporting .NET 1.x and, later, 2.0. It could not be built that easily either, and could not be bootstrapped.
Early C# (1.1 and 2.0) and the CLR specs were submitted to ECMA, then it stopped. I think they published some more of it to ECMA much later, and some of the class libraries too, but it's still too incomplete for .NET to be even remotely called an ECMA standard.
Silverlight does work on Mac, but on Linux you have to use Moonlight.
All in all, the Mono team went much, much farther than Microsoft, but it is still impressive that Microsoft showed so much effort and such clear intent toward cross-platform support.
It costs $400 for a license, though. But boy am I tempted... I know, I know - you need to go with the native languages for the best experience. But sharing a huge chunk of my codebase between platforms is a very, very interesting idea to me.
With MonoTouch, you still write against the native iOS UI. Therefore, from the user's perspective, your app is native.
Since the UI is the same, the only thing that could differ between an ObjC and a MonoTouch app is performance. In my experience there's no noticeable performance hit with MonoTouch except for app startup time (my MonoTouch app takes about 2 seconds to start up on a 3GS).
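For anyone who hasn't tried it, this is roughly what it looks like (a sketch; UIButton, UIAlertView, etc. are the actual MonoTouch bindings over UIKit, the controller itself is made up):

    using System.Drawing;
    using MonoTouch.UIKit;

    public class HelloController : UIViewController
    {
        public override void ViewDidLoad()
        {
            base.ViewDidLoad();

            // The same native UIButton an Obj-C app would create.
            var button = UIButton.FromType(UIButtonType.RoundedRect);
            button.Frame = new RectangleF(20, 100, 280, 44);
            button.SetTitle("Tap me", UIControlState.Normal);
            button.TouchUpInside += (sender, e) =>
                new UIAlertView("Hello", "A native alert, from C#", null, "OK", null).Show();
            View.AddSubview(button);
        }
    }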
iOS is more challenging, since F# depends on some dynamic-ish features of .NET that are not currently supported by MonoTouch on iOS.
But MonoDevelop could sure use a lot of love...
The good news is that the community edition of IJ is Open Source (Apache licensed, IIRC) and there are quite a few existing language editors built on top of it. It isn't a rich client platform à la Eclipse or NetBeans, but I doubt such a thing would be required to win the affections of those who are unhappy using MonoDevelop.
What about doing this improves the experience?
Never forget about the users.
I get that there is a tradeoff involved, but the Mono-x products are in an interesting space. They aren't webviews (like PhoneGap), and they aren't a weird JS hybrid (like Titanium) - they're full native experiences. You'll get some newly released features later (I imagine), but the user experience really shouldn't be affected that much.
That should be bolded.
OTOH, I've often wondered if those tradeoffs are actually worth it in the end. This may just be demonstrating how weak/slow the Dalvik VM actually is.
My first, the myTouch 3G, had about 100MB of RAM, with only about 25MB free from a cold boot, meaning the phone was really, really slow due to memory pressure.
My current phone, a G2, has about 350MB of RAM total and about 70MB free on a cold boot (ICS).
A full Ubuntu desktop install uses only about 250MB of RAM on a cold boot, and even Windows XP would run in 256MB of RAM.
So with so much emphasis on saving RAM, why is 350MB of RAM insufficient to run basically one app at a time on a phone (plus the various background processes the system is running)?
If you look at the specification of System.gc(), it can be implemented as a placebo, like the "push to walk" button at the street corner. So even explicitly GC'ing isn't guaranteed to do anything.
Android also has a strategy for "swapping" components within a process, and whole processes out entirely. This is, effectively, Android's GC strategy across processes. An Android system's memory, especially on small-memory devices, should always look full-ish, but you can almost always launch a new task, since GC plus component and process "swapping" (the "destroy" phase of the component lifecycle) can almost always make room for more.
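In Mono for Android terms, the "destroy phase" hooks look like this (a sketch; the overrides are the real lifecycle bindings, the activity itself is made up):

    using Android.App;
    using Android.OS;

    public class MyActivity : Activity
    {
        protected override void OnCreate(Bundle bundle)
        {
            base.OnCreate(bundle);
            // Restore whatever state was saved before the component was "swapped" out.
        }

        public override void OnLowMemory()
        {
            base.OnLowMemory();
            // System-wide memory pressure: drop caches now, before Android
            // starts destroying components and killing processes.
        }

        protected override void OnDestroy()
        {
            // Android calls this when it reclaims the component to make room;
            // release everything - the whole process may be killed next.
            base.OnDestroy();
        }
    }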
Thanks for using that comparison - oddly enough it provided some nice entertainment for me :D
Since I wasn't aware whether "push to walk" was a cultural reference I might not know, I started some quick research that led me to a couple of interesting articles.
I started off with a search that resulted in the Placebo Button (http://en.wikipedia.org/wiki/Placebo_button) article on wikipedia.
From the references section in that article I found a nice column from the NY Times about the street-crossing buttons in NY (http://www.nytimes.com/2004/02/27/nyregion/27BUTT.html).
From that column I picked up an interesting pattern in traffic control solutions, called the 'Barnes Dance'.
That led me to this video (http://www.streetfilms.org/barnes-dance/), from which I learned that it's also known as a pedestrian scramble (http://en.wikipedia.org/wiki/Pedestrian_scramble).
It's funny how a couple of well-picked words on your side led me to some entertaining research.
I just wanted to say thanks and share this experience :)
edit: relevant paper: https://agora.cs.illinois.edu/download/attachments/28320883/...
As a lark I just booted Ubuntu 12.04 with mem=256M. It got to the greeter reasonably quickly, but it's been 10 minutes since I hit enter and I still haven't seen the desktop. I planned to launch Chromium and Thunderbird and report the swap used, but you get the idea. It's true that a lighter DE would be more usable, but I think you're looking at desktop RAM usage with rose-colored memories.
Out of curiosity, would you be willing to write a little blurb (or post a link) about those tradeoffs?
Isn't this like huge, though? If you can actually make it run faster, I am guessing it would be possible to spin it off and get serious funding.
I merely watch from the sidelines, but what the Mono/Xamarin guys accomplish year after year could be a good lesson for any founder. Having spent enough time in the .Net world, I can tell that any attempt at creating a compatible .Net framework is a daunting, enormous undertaking. I guess the key thing is, FOCUS.
Why would using Mono result in less fragmentation? Most of the "fragmentation problem" comes from developers having to target multiple devices (screen size, performance, GPU, Android versions, etc.), and changing the language won't magically fix that.
SUN and Google have both expressed concerns over fragmenting the Java ecosystem. And with Dalvik, that has certainly happened.
That isn't to say it isn't happening; I just don't experience it, and would be interested in knowing where it does occur.
It has become abundantly clear that 9 out of 10 times the word "fragmentation" is uttered on HN or any comment board regarding Android, it is by people who have never written a line of Android code and whose knowledge of the platform is what they picked up on advocacy sites.
Dalvik isn't fragmented at all. It is, in fact, a bloody marvel. Implementations of hardware specific APIs of course differ, exactly as expected.
But really, it is astonishing seeing the claims that we see. Microsoft struggles to run WP on a single reference hardware platform with the most trivial of variances. Android runs on friggin' everything with an overwhelmingly high level of compatibility that is so refined that those few outliers become a really big deal.
As someone who admittedly isn't an Android developer, isn't the point that Dalvik's very existence has fragmented the Java ecosystem? That's how I read the GP comment.
> isn't the point that Dalvik's very existence has fragmented the Java ecosystem
Dalvik is not Java; it is a totally different VM, just as .NET's CLR is. And just as with .NET's CLR, you can transform Java bytecode into Dalvik bytecode by means of a compiler. The equivalent project in the .NET world would be IKVM, which lets you transform JARs into .NET DLLs.
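To make the IKVM example concrete (a sketch; ikvmc and its runtime assemblies are real, but treat the details as illustrative): once a jar is compiled, or IKVM's own class libraries are referenced, the Java classes are just .NET classes from C#'s point of view.

    // Compile your own jar once with: ikvmc somelib.jar -out:somelib.dll
    // (somelib.jar is a made-up name). Core classes like java.util.*
    // ship in IKVM's runtime assemblies, which you reference directly.

    using java.util;   // translated Java packages show up as .NET namespaces

    class Program
    {
        static void Main()
        {
            ArrayList list = new ArrayList();   // java.util.ArrayList on the CLR
            list.add("hello from C#");
            System.Console.WriteLine(list.size());
        }
    }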
It is true that Google relies on the Java ecosystem to fuel Android ... they rely on Eclipse for IDE support, they rely on Harmony for the base classes, and so on. But everybody working with Android knows that Dalvik is not Java, and that Java libraries that do bytecode manipulation do not work on Dalvik out of the box, because Dalvik is not a JVM.
So really, Dalvik is fragmenting Java in the same way .NET did.
This translates to "compatible, except in the case most people would not and should not care about" (running newer code on older VMs).
This is far better than Sun's idea of Java forever keeping compatibility with old VMs, which makes the language stagnate.
That's pretty much how Java works too. In general, you can't take code compiled with a newer version and run it on an older VM; you will get class version mismatch errors. They update the class version every time there's an incompatible change, such as when 'enum' became a keyword.
The compatibility that Sun, err.. Oracle, strives for is to be able to mix code compiled with old & new versions without issues.
However, I feel that Google did not make the "wrong choice" because they were likely trying to leverage two (interdependent) advantages: existing Java programmers and the overwhelming number of Java libraries.
I haven't made it through all the threads here, but I'm fully expecting to find a post containing "yeah, and we can just run the Java-to-C# translator on all those libraries", and then I'm hoping to find a reply that says that machine translation is not the same as porting a library.
For example, Sharpen was also used to translate JGit to C#: https://github.com/mono/ngit
Original Sharpen -> mono/ngit sharpen -> XobotOS sharpen
So the XobotOS one is the most complete, but also requires more setup than the NGit one.
1. It's not entirely clear how to use this. Is XobotOS a replacement for Android, or is it something that can be shipped as a standard application? In my limited time reading the documentation on the GitHub page, this was not clear to me.
2. It looks like it is a terminated research project; to quote the README:
This code is provided as-is, and we do not offer support for any bits of code here, nor does Xamarin plan on continuing evolving XobotOS at this point.
From the blog post, it sounds like they are integrating some of this technology in their products, but XobotOS is otherwise just a code dump. As such, unless someone is really interested in this, it doesn't seem like it's going to go anywhere.
The other problem is that Dalvik uses a model that limits the amount of memory your app can use, so most of the heavy-duty tests could not be run to compare the two, since Dalvik would crash with an out-of-memory condition while Mono does not impose this limit.
Dalvik also has other ugly limitations, like its GC being suspended whenever a JNI invocation is taking place.
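You can actually read that per-app cap from Mono for Android itself (a sketch; ActivityManager.MemoryClass is the real binding for Android's getMemoryClass(), the activity around it is illustrative):

    using Android.App;
    using Android.Content;
    using Android.OS;
    using Android.Util;

    [Activity(Label = "HeapInfo", MainLauncher = true)]
    public class HeapInfoActivity : Activity
    {
        protected override void OnCreate(Bundle bundle)
        {
            base.OnCreate(bundle);

            // Dalvik's per-application heap limit, in megabytes; allocate past
            // this from Dalvik code and you get an OutOfMemoryError.
            var am = (ActivityManager)GetSystemService(Context.ActivityService);
            Log.Info("HeapInfo", "Dalvik heap limit: " + am.MemoryClass + " MB");

            // Mono's own heap, by contrast, grows on demand with no fixed cap.
        }
    }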
Myriad had been trying to sell their own VM as a "turbo" Dalvik, but, so far, no takers, that I know of, among handset OEMs.
The thing is that Dalvik balances performance and battery life by aiming for fast interpretation plus minimal JIT compilation.
One could easily show that the Oracle VM would trounce Dalvik at computation benchmarks, since it is aiming for best possible performance using a very sophisticated JIT compiler. But I suspect getting the Oracle VM to not suck your battery dry would be a challenge.
Not in my experience. I write software for large enterprises, mostly .NET glue between systems like SAP, e-commerce engines, search engines, databases, etc. The UI is a thin little layer on top of complex systems integrations. I'd say the UI code is 10 to 20% in most systems I've worked on.
That said, indeed many apps are not like that. Going forward though, I would conjecture that there will be more and more sophisticated apps.
This poses an interesting question, though: if Google had to, or wanted to, switch platforms/languages, which might they choose?
Unlike Sun with Java, Microsoft submitted C# and the .NET VM for standardization to ECMA and saw those standards graduate all the way to ISO, with strong patent commitments. The .NET framework is also covered by Microsoft's legally binding community promise.
It's strange - with the work Xamarin has been doing, C# is one of the few language choices you have for going cross-platform and native - MonoTouch does iOS, MonoDroid does Android, and MS themselves do Windows Phone. I've not used any of these extensively so I can't vouch for them, but it's an interesting development.
"As of December 2010, no ECMA and ISO/IEC specifications exist for C# 3.0 4.0, and 5.0."
Does this mean newer syntax updates won't make it to Mono? Or that it's not as open as it used to be in terms of patents and copyright?
But cheer up, the ECMA team just updated the VM spec a couple of months ago and the committee is resuming work on new features.
We are all pretty psyched about the next steps for the ECMA standards.
As for Mono, we already have C# 5 and many of the new class library features. Having two independent implementations helps in coming up with a better standard.
C# 3.0 was released 4.5 years ago.
MonoTouch is currently based on Mono 2.10, while C# 5.0 is based on the new codebase on Mono 2.11.
We are waiting for Microsoft to officially bless C# 5.0 as complete before we can ship it, otherwise we will break people's code.
I don't know how current that wikipedia article is.
Heck, at this point Google probably can't even trust Python anymore, seeing how there are patent and copyright leeches everywhere. Their best bet is to use a language they own, be that Go or another one they made or bought.
Um, given that Guido works for Google now, I'm guessing they are pretty safe on that one. :-)
Having an open standard can be moot if effective implementations of it are patent-encumbered, forcing you to either use a licensed runtime, use a second-class one, or risk getting sued.
More on this:
"RMS: You shouldn't write software to use .NET. No exceptions.
The basic point is that Microsoft has patents over features in .NET, and its patent promise regarding free software implementations of those is inadequate. It may someday attack the free implementations of these features.
This is no reason not to write and distribute free implementations such as Mono and DotGNU. But we have to keep in mind that using and distributing these programs might become dangerous in certain countries. Therefore, we should minimize our dependence on them – we should not write programs that use those features.
Mono implements them, so if you develop software on Mono, you are liable to use those features without thinking about the issue. It is probably the same with DotGNU, except that I don't know whether DotGNU has these features yet.
The way to avoid this danger is not to write programs in C#. If you already have a program in C#, by all means use a free platform to run it. But don't increase your exposure to the danger – don't write additional code in C#, and don't encourage people to make more use of C# programs. We need to guide our community away from dependence on an interface we know Microsoft is in a position to attack.
It is like the situation with MP3 format, which is also patented. When people manage to release and distribute free players and free encoders for MP3, more power to them. But don't ever use MP3 format to encode audio!"
And the patent wars regarding Java just make me not want to support that development ecosystem anymore.