Micronaut 2.0: a full stack Java framework for modular, testable applications (micronaut.io)
92 points by el_duderino 17 days ago | 59 comments



I personally don't like the reactive model, especially with RxJava. You quickly end up in Observable hell.
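
For the record, a toy sketch of the kind of nesting I mean (RxJava 2, with made-up stubs standing in for async services):

    import io.reactivex.Single;
    import io.reactivex.schedulers.Schedulers;
    import java.util.Arrays;
    import java.util.List;

    public class ObservableHell {
        // Hypothetical async services; real ones would wrap HTTP or DB calls.
        static Single<String> findUser(int id) { return Single.just("user-" + id); }
        static Single<List<String>> findOrders(String user) { return Single.just(Arrays.asList("o1", "o2")); }
        static Single<Integer> total(List<String> orders) { return Single.just(orders.size() * 10); }

        public static void main(String[] args) {
            String summary = findUser(42)
                .flatMap(user -> findOrders(user)
                    .flatMap(orders -> total(orders)
                        .map(sum -> user + " owes " + sum)))  // nesting grows with every dependent call
                .subscribeOn(Schedulers.io())
                .blockingGet();                               // block here only so the demo can exit
            System.out.println(summary);
        }
    }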


I agree. Having used reactive Java and the Elixir/OTP model of message passing, I can safely say that I'd never go back to reactive Java unless I was forced to. It feels like you are fighting the language the whole time.


I haven't used reactive Java, but I have used the async/await model of JS as well as Erlang/OTP, and I definitely prefer message passing. The mental model is so much easier to reason about.


Hopefully Loom will be our savior.


If it ever sees the light of day. How many years has it been?


I have been testing the OpenJDK builds; afaik it's usable. It's still in heavy development, though.


It's a shame, because it seems to me like RxJava is eclipsing what was previously a pretty fertile ecosystem of actor-based approaches, yet RxJava itself is not very nice to use compared to those, which feel much more natural.


Loom should solve all that.


I think Loom is thrown around too haphazardly here.

Loom will present an ergonomic alternative for concurrency that is strictly superior to futures, but it will not replace or significantly simplify RxJava[1].

Project Loom is also orthogonal to actors - it really doesn't dictate any specific concurrency model; it just provides fiber support in the JVM. We will definitely see actor libraries once Project Loom is released, but these are already available now, and they are not widely used. The only thing Loom can help them with (I hope) is better performance, but I don't think that's the reason the actor model lost ground in the Java world.
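
To make "fiber support" concrete, a minimal sketch - hedged: this uses Thread.startVirtualThread as it appears in the more recent Loom builds, and the exact API has kept shifting between previews:

    import java.util.ArrayList;
    import java.util.List;

    public class LoomSketch {
        public static void main(String[] args) throws InterruptedException {
            List<Thread> threads = new ArrayList<>();
            for (int i = 0; i < 10_000; i++) {
                int id = i;
                // Plain blocking code; the virtual thread parks cheaply instead of
                // tying up one of the few OS carrier threads.
                threads.add(Thread.startVirtualThread(() -> {
                    try {
                        Thread.sleep(1_000);                  // stands in for blocking I/O
                        System.out.println("handled request " + id);
                    } catch (InterruptedException ignored) { }
                }));
            }
            for (Thread t : threads) t.join();
        }
    }

Note there is nothing here for composing streams, retries or backpressure - which is exactly why it doesn't replace Rx; it just makes blocking cheap.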

[1] Project Loom can be the basis for some minor simplifications, as JetBrains did in kotlinx.coroutines.flow.


I have never used Rx libraries. What are the cases where you have low-friction coroutines such as Loom (no "coloured" functions, afaik) and still want to use RxJava? Handling backpressure? Modelling complex flows? Honest question.



It's wrong to focus on the reactive part - full reactive support is a feature, but Micronaut is a generic framework with both synchronous and async support. Its real secret sauce is the ability to make API building extremely concise, well type-checked and performant (for JVM-based apps: startup times on the 10-100 ms scale, etc.).

For example, this is a Hello World API in Groovy (this is almost literally the entire Application):

    import io.micronaut.http.annotation.Controller
    import io.micronaut.http.annotation.Get

    @Controller('/hello')
    class HelloController {
        @Get
        String index() {
            'Hello World'
        }
    }


As someone who uses Java in my day job, I wish annotations had never been invented. Debugging problems with them is hell. Why can't we just use method calls?


Because method calls don't compose with compiler plugins or runtime reflection the way annotations do.

They were also not invented by Java; other languages like Dylan, Python and .NET actually got them first.

Actually, they are kind of like a poor man's Lisp macros.
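
To make that concrete, a toy sketch of the runtime-reflection flavour (the @Logged marker is made up; a framework does the same kind of scan, or an annotation processor acts on the metadata at compile time):

    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.reflect.Method;

    public class AnnotationSketch {
        // The annotation itself does nothing; it's pure metadata.
        @Retention(RetentionPolicy.RUNTIME)
        @interface Logged {}

        static class Service {
            @Logged
            void handle() { System.out.println("handling"); }
        }

        public static void main(String[] args) throws Exception {
            Service service = new Service();
            for (Method m : Service.class.getDeclaredMethods()) {
                if (m.isAnnotationPresent(Logged.class)) {    // the "framework" reacting to the metadata
                    System.out.println("about to call " + m.getName());
                    m.invoke(service);
                }
            }
        }
    }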


> Actually, they are kind of like a poor man's Lisp macros.

Best description of annotations I've read. Literally this; a lot of times I figure an annotation is way simpler than fighting the generics system and all its flaws.


In fact, I invented a new kind of Lisp macro, the parameter macro, which decorates functions, but at compile time rather than run-time, and with much more flexibility.

https://www.nongnu.org/txr/txr-manpage.html#N-00B4065C

An example is given of a memoization parameter macro that you invoke simply by adding :memo to the front of the parameter list, as in (lambda (:memo ...) ...). This inlines everything; the function itself is rewritten to do the caching with open code, rather than wrapped with a caching function.

TXR Lisp's support for keyword arguments is also implemented as a parameter macro, using the documented define-param-expander public interface.

http://www.kylheku.com/cgit/txr/tree/share/txr/stdlib/keypar...

This is possible because the parameter macro can introduce identifiers into the scope of the function, and rewrite the parameter list.


While that is concise, the sparkjava version (without boilerplate):

    get("/", (req, res) -> "hello world")

is just as clear to me.

Not saying micronaut doesn't have things to offer, but simplicity of setting up an http endpoint isn't unique.


Sure - I was mostly just responding to the commenter implying that Micronaut leads you to "observable hell". The point is, if you apply it in standard synchronous mode it's just like any other framework these days.


I have other issues with Micronaut as well. I much prefer the EE spec over it. My choice of framework for where Micronaut would be used is Quarkus or Helidon.


The reactive model is not here because people like it, but because it enables optimizations that are otherwise mostly impossible. The real breakthrough is using reactive programming + RSocket, which allows truly asynchronous real-time applications.

And rejoice: Kotlin Flow/coroutines have a clean API that feels synchronous and still manages to be Reactive Streams compliant! It's mostly perfect, and Project Loom will be the icing on the cake! Other platforms (non-JVM) really have a lot to learn in this regard.


Have you got any links, or can you explain the optimisations that are possible in RxJava?

I've done some Vert.x, and creating new Futures all the time is a bit cumbersome, but I understand it. With RxJava it's another layer on top, and you are even more removed.

The whole non-blocking experience in Java feels like there is a lot of plumbing. I know it's fast for IO reasons, but it really feels like the JVM/CPU has to jump through a lot of glue code to make it work.


RxJava or Reactive Streams won't solve the issue of ceremony needed for dealing with async I/O. You're correct that it only makes it worse.

The real advantage the reactive model offers is a more powerful abstraction for data flow than futures. I found it more useful than straight coroutines or async/await in cases where I had to process complex data flows or combine periodically polled data from multiple sources, but in well over 99% of the use cases for an async REST API it's overkill.
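
As a rough sketch of that "combine polled sources" case (RxJava 2; the fetch methods are made up):

    import io.reactivex.Flowable;
    import java.util.concurrent.TimeUnit;

    public class CombinePolledSources {
        // Hypothetical pollers; real ones would hit a cache, a DB, a metrics endpoint, ...
        static long fetchPrice()  { return System.nanoTime() % 100; }
        static long fetchVolume() { return System.nanoTime() % 1000; }

        public static void main(String[] args) throws InterruptedException {
            Flowable<Long> prices  = Flowable.interval(100, TimeUnit.MILLISECONDS).map(i -> fetchPrice());
            Flowable<Long> volumes = Flowable.interval(250, TimeUnit.MILLISECONDS).map(i -> fetchVolume());

            Flowable.combineLatest(prices, volumes, (p, v) -> "price=" + p + " volume=" + v)
                    .distinctUntilChanged()       // only react when the combined view actually changes
                    .subscribe(System.out::println);

            Thread.sleep(1000);                   // keep the demo alive long enough to see output
        }
    }

Doing the same with plain futures means hand-rolling the "latest value from each side" bookkeeping and the scheduling yourself.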

Removing the ceremony around asynchronous I/O, on the other hand, requires language support, so your best option right now is to just use Kotlin coroutines. Or wait for Project Loom, but I assume it's at least 2 years away from a stable release.


I'm neutral on RxJava, but honestly don't know how to handle backpressure without it. Would be glad to learn.


My naive response is that if the socket buffer you're trying to write to is full, there's backpressure. What am I missing here?


You assume that the thread pool which takes requests is the very same one that gets bogged down. It might very well be something down the line in the processing pipeline, and unless you poll these stages or they signal their capacity through the pipeline, you can't know that.

You're wasting resources when your application is already in a state where it knows it won't be able to handle the request. Eventually the memory taken by the partially processed requests is going to exceed what you can take in (unless you cap the number of concurrently processed requests, which is also an inelastic backpressure of sorts) and the service will crash.

What you mentioned is decent for inelastic blocking synchronous processing (you can have at most X concurrent requests, because that's how many threads for processing you've configured based on performance tests and production monitoring), but you can relatively easily fill up an internal queue somewhere if it's async.


It can sometimes be unacceptable, and other times just wasteful, to block a whole host OS thread just because a read or a write on one socket out of thousands is not ready.

We solve this problem with APIs for asynchronous or nonblocking IO.

But such APIs must be cleverly designed if they are to permit you to propagate backpressure from the downstream end of your program's dataflow to the upstream end. And handle errors in a sane way, etc.
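
In Reactive Streams terms that propagation is the subscriber's request(n) demand flowing upstream. Roughly, an RxJava 2 sketch:

    import io.reactivex.Flowable;
    import io.reactivex.subscribers.DisposableSubscriber;

    public class BackpressureSketch {
        public static void main(String[] args) {
            Flowable.range(1, 1_000_000)                      // a source that honours downstream demand
                .subscribe(new DisposableSubscriber<Integer>() {
                    @Override protected void onStart() { request(10); }  // initial demand
                    @Override public void onNext(Integer i) {
                        // ...slow processing would happen here...
                        request(1);                           // pull the next item only when ready
                    }
                    @Override public void onError(Throwable t) { t.printStackTrace(); }
                    @Override public void onComplete() { System.out.println("done"); }
                });
        }
    }

If an upstream stage can't honour the demand (a hot source, say), that's where strategies like onBackpressureBuffer/Drop/Latest come in.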


I've used a reactive model on the front-end for several years now (MobX at one job and then RxJS at another) and haven't encountered anything I would call "Observable Hell". I will admit that cascading side-effects can be hard to debug, but they're also a code smell. The whole point of the reactive pattern, IMO, is allowing you to have as few open-ended side-effects as possible. i.e., reactions that change state (instead of re-computing a pure function) should be scrutinized heavily and kept to a minimum. If they're kept to a minimum, then in my experience they remain manageable and fairly easy to reason about.


Oh, you didn't do it in Java. The sheer boilerplate and heaviness of the language make it Observable hell. Well, it had got better with lambdas by the last time I used it, but it's still a clusterfuck of Observables, Singles, wrappers and knee-deep stack traces. JS allows you a lot more flexibility as a language, and it is a far better experience using RxJS.


But I also hate to see my colleagues spawn threads and poll the whole time waiting for something to happen, all without a timeout.


Micronaut supports Kotlin coroutines since 1.3.

You're mostly stuck with Rx outside Kotlin, though.


It is terrible, and has hidden gotchas.

Use it only if you really, really have to; first check whether you can't do it in Kotlin instead with native async/await syntax, and only if the answer to both questions is "no", continue to shoot yourself in the foot.


I'd try the 2.0. The 1.x line is really pretty buggy. I spent more time on Gitter and trying to work around it than actually using it. I'm glad they're targeting performance, but the ergonomics of the API could use a lot of work. I was implementing an API that was designed without a particular technology in mind--and it was a good design--and Micronaut couldn't conform without significant hoop jumping. It's probably better suited to greenfield development.


This link seems to be working. You can at least read about the updates.

https://docs.micronaut.io/2.0.0/guide/index.html#whatsNew


We've changed the URL from https://objectcomputing.com/news/2020/06/26/announcing-micro... to that documentation page. When a project hasn't been discussed on HN before, it's usually best to have the submission be about the overall project rather than the latest release.

(This is the exact opposite of the usual pattern, by the way! if anyone's curious: https://news.ycombinator.com/item?id=23071428)


The point of Micronaut is Graal: you can compile it to a native binary, it consumes 10 times less memory, and it starts in a few millis.


I think it's a bit of misleading marketing. Sure, Micronaut (just like Quarkus and Helidon) is ready to be compiled with Graal out of the box. But it doesn't magically make any library you use work with Graal, and it also doesn't mean you cannot get code based on other frameworks like Vert.x or Spring Boot to work with Graal (with varying degrees of extra effort).

Also, from all the benchmarks I've seen, GraalVM is not a free lunch. Native images trade peak performance for faster startup times and lower memory usage. It's a worthwhile trade for some people, but not for everyone.

And yes, I know there are features like Profile-Guided Optimization that can theoretically bring Graal closer to JIT in terms of peak performance, but these require a very expensive Enterprise license ($18 per core/month).


I prefer Helidon, which also came out with a 2.0 release.


Can you comment on how to actually use this approach for something useful? I've gotten toy services working with Graal native-image, but every time I try moving a pre-existing project, I find that some library causes a problem.


Graal is a lot of work with some libraries. I had to use an XSD-to-POJO mapper with the Spark Java micro framework, and it took a lot of profiling with the tracing agent, which records the reflection and resources used. But Micronaut has a facility for automatically handling the mapping lib, so it is less work if it is supported. When porting, the tracing agent and a lot of integration tests that exercise all the code branches are key.


Anyone who cares about AOT in Java could have gotten one of the commercial JDKs that have offered it since around 2000. What GraalVM brings to the table is not having to pay for such tooling, although the best optimizations are still only available in the professional version.


You'd be surprised how many developers and companies would choose the worse alternative just because it is free.


No I wouldn't; that is how UNIX and C got widespread (those source code tapes on university campuses), it's why nice productive RAD tooling is only available alongside Fortune 500 paychecks, and there are plenty of other examples.


Big fan of this project. I try to avoid reflection-heavy frameworks.


Try Helidon then


How does it compare to Spring or Micronaut?


This looks good. Glad to see more microframeworks coming out of the Java world with an emphasis on performance, memory footprint, and development ergonomics. Will try it out.


But if the goal is performance, why then not move to C++ or Rust?


Why not move to assembly then?


Because, with proper attention, C++ compilers can generate code as optimized as hand-written assembly.


Micronaut with Kotlin is much more productive than with Java. A must-try.


What's Micronaut?

(Of course I've found more info by searching, but my real question is: why do teams proudly make major announcements without even including a link to their project, or at least an inline blurb about it for those first hearing about it via the announcement?)


Same old, same old - this drives me nuts on so many open source project websites. I wonder how many good libraries languish in obscurity because of the lack of something simple like an elevator pitch on their landing page, or worse, there is one but it's an overload of incomprehensible-to-outsiders jargon.


Not only open source projects. Even many SaaS and PaaS products have the same issue. They try so hard to distinguish themselves from others that they end up with a fancy landing page that doesn't say anything useful or straightforward about them, just some vague description. You really have to dig in to find out what it is they are doing.

I like to think that's just a bad marketing tactic because I can't believe that they can't explain themselves clearly.


There are actually three frameworks in Java that are competing head to head with Go: Micronaut, Quarkus and Helidon.

Helidon is spearheaded by Oracle and currently fails to get much traction. Micronaut is the baby of Graeme Rocher, one of the fathers of Grails (RoR in Groovy), and Quarkus is the baby of Emmanuel Bernard, one of the fathers of Hibernate, and is supported by Red Hat.

All of them run on a standard JVM or let you use Graal SVM, which compiles Java to native code with roughly the same trade-off as Go (latency is more important than throughput).

At my company, we have prototyped migrating a Go application that relies heavily on Kafka to Micronaut or Quarkus. We have found that Micronaut is less polished and more buggy than Quarkus, but take it with a pinch of salt because it's just 5k of code.


> because it's just 5k of code.

Are you talking about the app you're migrating or Micronaut?

I'm a Go developer on a Java team, trying to use a framework that offers an experience similar to building HTTP services in Go. Quarkus, Micronaut and Spring Boot all "look" similar to me; I went through the getting-started guide for Quarkus and it feels pretty fast. Your comment makes me want to stick with it for now.


A Java framework which tries to do at compile-time what other frameworks usually do at run-time (dependency injection, SQL query generation, etc.)
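
A minimal sketch of the DI part (Micronaut 2.x idiom with javax.inject; the bean names are made up, and the micronaut-inject annotation processor is assumed to be on the compile path) - the wiring below is generated at compile time rather than discovered via reflection at startup:

    import io.micronaut.context.ApplicationContext;
    import javax.inject.Singleton;

    @Singleton
    class GreetingService {
        String greet(String name) { return "Hello, " + name; }
    }

    @Singleton
    class Greeter {
        private final GreetingService service;
        Greeter(GreetingService service) { this.service = service; }  // constructor injection
        String run() { return service.greet("world"); }
    }

    public class Demo {
        public static void main(String[] args) {
            try (ApplicationContext ctx = ApplicationContext.run()) {
                System.out.println(ctx.getBean(Greeter.class).run());
            }
        }
    }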

(Yours is a lost cause.)


Since the project doesn't seem to have been discussed on HN before, we've changed the URL from https://objectcomputing.com/news/2020/06/26/announcing-micro... to one that answers this question more directly.


your link is broken :/


Yeah. It's very slow to load right now.



