Molten: A modern Python (3.6+) web framework (moltenframework.com)
218 points by dternyak 8 months ago | 79 comments

Bogdan has put together an awesome framework with molten that I think can use a lot more visibility. With his permission, I've posted here to get more eyes on it. I'm excited to see how much attention it's getting.

For those who want to get started with molten, I've put together a boilerplate for a more full-fledged production application here: https://github.com/dternyak/molten-boilerplate

It demonstrates:

- OpenAPI (swagger)

- SQLAlchemy

- alembic

- py.test

- invoke (task management / CLI)

Feedback is super welcome on the boilerplate, which is partially inspired by https://github.com/Bogdanp/molten_cookiecutter

Bogdan has been incredibly friendly and helpful while I've been learning the ins and outs (and pestering him over email). I'm sure he'll be happy to answer any questions here as they come up.

Happy hacking!

How does this compare with Connexion?

Connexion uses Swagger/OpenAPI specs that map (by convention, or fully explicitly) to packages and methods for REST. It runs on top of Flask and drives all of the validation from the yaml file. The plumbing is taken care of for you, so you can focus on business logic.

I'm not familiar with Connexion (this was my first time hearing about it), but it seems, from reading its README, that the two approaches are polar opposites: in Connexion you write the schema first and that is used to hook up your API to your business logic, whereas in molten you write your API using normal Python code and idioms and an OpenAPI schema is generated from that code. As a user of Molten, you don't need to know anything about OpenAPI to be able to use it.
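To make the code-first idea concrete, here's a toy sketch of the principle. This is not molten's actual generator; `parameter_schema` and `PYTHON_TO_OPENAPI` are made-up names for illustration, showing how type hints on a plain Python handler can be turned into an OpenAPI-style parameter list:

```python
import typing

# Made-up mapping from Python types to OpenAPI type names.
PYTHON_TO_OPENAPI = {int: "integer", str: "string", bool: "boolean", float: "number"}

def parameter_schema(handler):
    hints = typing.get_type_hints(handler)
    hints.pop("return", None)  # the return annotation is not a parameter
    return [{"name": name, "schema": {"type": PYTHON_TO_OPENAPI[hint]}}
            for name, hint in hints.items()]

# A handler written as ordinary annotated Python; the schema falls out
# of the type hints rather than being written by hand.
def list_todos(limit: int, done: bool) -> list:
    return []
```

Connexion works in the opposite direction: you'd write the schema document first and point it at a handler like `list_todos`.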

Molten does what I've seen done in RoR land. You decorate and document your schema; it can then spit out an OpenAPI spec, so you can use OpenAPI's schema and tooling.

So it becomes a choice: writing mostly OpenAPI, or writing Python via an API defined by someone (or a small set of people).

I'd personally rather learn OpenAPI specs, since you're going to live in that world anyway and it gives you so much tooling.

It's not hard, and you could take that yaml file and generate code for other languages in case Python's performance for that API (especially if it is a microservice) turns out to be too slow in the real world and you need a faster runtime.

So refreshing to see a new Python web framework that isn't trying to foist more event-based non-blocking IO on us.

For those who are looking for web frameworks that use async I/O:



I don't get the hate. I can reason about asyncio code without fuss and the throughput/latency impact is worth it. My only pain point is the dearth of libraries. There are few if any palatable libraries that provide the powerful abstractions I'm used to. Things like ORMs.

The grandparent's concern about async is documented here:


I get that the SA ORM is unfit to run on the event loop thread, but I'm increasingly unclear whether you hold your position that asyncio is bad solely because it makes the SA ORM less competitive - and less of an ecosystem standard.

The alternative being just using SA Core and something like aiopg, using asyncpg, or using SA in a threadpool (unfortunately, a technique that’s involved enough that the recipe for using SA isn’t a 5 min “gotcha”, re: flask).
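For readers curious what the threadpool recipe looks like, here's a minimal stdlib-only sketch; `fetch_user` is a stand-in for a blocking SQLAlchemy call (the real recipe would run `session.query(...)` on the worker thread):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a blocking SQLAlchemy ORM query.
def fetch_user(user_id):
    return {"id": user_id, "name": "alice"}

_pool = ThreadPoolExecutor(max_workers=4)

async def get_user(user_id):
    loop = asyncio.get_running_loop()
    # The blocking call runs on a worker thread; the event loop stays free.
    return await loop.run_in_executor(_pool, fetch_user, user_id)

user = asyncio.run(get_user(1))
```

The part that makes this more than a 5-minute recipe in real apps is session and transaction lifecycle: each unit of work has to stay on one thread, which is exactly the subtlety being alluded to above.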

I like SA. I’ve used it for years, and you’ve helped me on the forums, so I’m appreciative for the work you’ve done. I just don’t understand why you’ve written off an entire style of programming.

> I just don’t understand why you’ve written off an entire style of programming

Async(io) is not a "style of programming"; it is an approach to solving a certain class of problems which arise under certain conditions, and is only really useful if it fits your use case. I'm certainly happy that Python has introduced it on a low level, but I definitely won't use it in situations that don't warrant it -- the same holds true for many other Python idioms, like say ABCs or Ellipsis.

I’m not going to debate on the definition of the word “style”.

Yeah, I see it as an alternative concurrency model to threads. And I think each has its pros.

As for you not using it in situations that don’t warrant concurrency, that’s a good idea.

> but I'm increasingly unclear whether you hold your position that asyncio is bad solely because it makes the SA ORM less competitive - and less of an ecosystem standard.

It's not that the ORM is less "competitive" because I am certainly able to make an async version of SQLAlchemy; it's just that I don't have the time / interest / resources to do so, and, as I've gone through the effort to write about in detail, it is a bad idea. If you want to pay me a salary to make an async SQLAlchemy I can totally do that. It would be a worthwhile effort in that it would make people happy. But it would also be a wasteful effort in terms of advancing the field of Python database programming. That's the decision I'm facing, and it would be a lot easier if folks would just realize asyncio isn't getting them much overall.

It's like, suppose I make bikes. But everyone wants to get to work on pogo sticks instead. Everyone who uses pogo sticks is terminally late to work, they're getting injured all the time, they are miserable, but someone told them "hey pogo sticks are FASTER than bikes!", people were just bored with bikes so much that they didn't even bother testing this assertion, and now everyone just has discussions like "oh well I have this shiny new pogo stick, so I only broke three ribs this week! hooray!" and people have just forgotten that there is nothing wrong with bikes at all and they are in fact a lot better than pogo sticks for the task of getting to work every day (you might find pogo sticks to be more interesting and fun in the moment, but as a real solution to the problem, they are not any better than bikes and probably worse). Someone who makes bikes tries to point all of this out. That puts you in the position of saying, "all your arguments about bikes being better than pogo sticks for getting to work, who cares. you are just concerned pogo sticks are a threat to your bike sales". Well, yes, they are. But the bigger issue is that it is ridiculous everyone is killing themselves trying to get to work on pogo sticks. Somehow I feel that should not be lost.

> I just don’t understand why you’ve written off an entire style of programming.

I've written and tweeted about this a lot so feel free to point out the reasons that might not be clear. To recap http://techspot.zzzeek.org/2015/02/15/asynchronous-python-an...:

asyncio is essentially intended to provide an interface around the concept of non-blocking IO. The relational database use case, for the vast majority of uses (e.g. CRUD), gains nothing from using non-blocking IO (most databases don't even offer a non-blocking API anyway); it only complicates the application, introduces subtle bugs, and does not improve performance. The other argument made for asyncio is that replacing the OS's task scheduler with a cooperative one, defined by when IO happens to occur, is "safer", because context switches are no longer implicit. The second section of the post above makes many arguments for why this is not the case, especially for modern server-side application design, which is inherently multi-process.

I don’t know, an asyncio ORM version seems very challenging, but if anyone could do it, it’d be someone with deep domain knowledge (like you).

Relationships as properties for example, I just don’t know how you’d solve attribute access in an idiomatic way (sure it could return a future, so maybe the idea is that you’d get a future subclass that still supports operations)?

Anyway, the problem is that it's really difficult to mix sync/async code. And as soon as you've got a need for websockets, you're pretty much thrown into the world of asyncio. Using the ORM then becomes a threadpool issue.

Anyway, my approach has instead been to use SA Core (with aiopg). I've probably always gotten too involved with my Mappers anyway.

> Relationships as properties for example, I just don’t know how you’d solve attribute access in an idiomatic way (sure it could return a future, so maybe the idea is that you’d get a future subclass that still supports operations)?

I think the answer would be that you ditch "lazy loading" altogether: in the spirit of asyncio's theme that "implicit IO is bad", you'd have to have explicit conversations with the DB. Loading attributes from relationships is one thing, but there's also the notion that other kinds of attributes can refresh themselves if they happen to have been expired, which is something the ORM does when a server-side rule or a transaction boundary means the attribute may have changed on the database side. It's a very elegant and simple solution to the problem, but if you have decided that implicit IO is bad, it means you don't really want that anymore. You'd start treating an ORM-mapped object more like an active row: you'd have yields on most operations, and you'd also want coarser-grained boundaries on when state is expired and re-loaded (e.g., instead of expiring particular attributes to be reloaded when accessed, there would be a more formal "refresh the object" kind of step).
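A rough sketch of what such an explicit, "active row" style might look like. All names here are hypothetical, and `FakeDB` stands in for a real async driver:

```python
import asyncio

# Stand-in for a real async database driver.
class FakeDB:
    def __init__(self):
        self.rows = {1: {"name": "alice", "balance": 10}}

    async def fetch(self, pk):
        await asyncio.sleep(0)  # a real driver would do network IO here
        return dict(self.rows[pk])

# "Active row" style: no lazy loading, no per-attribute expiry; every
# round trip to the database is an explicit await.
class User:
    def __init__(self, db, pk):
        self.db, self.pk, self.state = db, pk, {}

    async def refresh(self):
        self.state = await self.db.fetch(self.pk)  # coarse-grained reload

async def main():
    db = FakeDB()
    user = User(db, 1)
    await user.refresh()        # explicit IO instead of an implicit lazy load
    db.rows[1]["balance"] = 20  # a server-side change happens
    await user.refresh()        # the object must be reloaded explicitly
    return user.state["balance"]

balance = asyncio.run(main())
```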

That's one way to do it. Another way would be if an implicit non-blocking approach such as gevent could be cleanly integrated with asyncio - then object heavy database interactions could simply be inside of a block written in "blocking IO" style, and that entire block is entered as a yield. There is no technical reason this can't be done. The reason is cultural; people who like asyncio really hate implicit IO. So trying to make that work seems like it wouldn't be very worth it either.

> Anyway, the problem is that it’s really difficult to mix sync/async code.

I think this is because people just aren't working on the problem. You read asyncio tutorials and they say things like, "if you really MUST use evil threads... here's this crappy executor thing we think you should never use". I'm exaggerating a bit, but that's kind of the vibe they send off. Making the asyncio/thread transition could be done very nicely, there can also be integrations between gevent style and asyncio style, and finally there are many other ways to use non-blocking IO without using a full event-style programming interface. The asyncio crowd isn't there yet. They are still having the endorphin high of asyncio "clicking" for them, and they are rigidly dogmatic and pretty short on real benchmarks that show actual gains. I have nothing against using non-blocking IO, but the next time I have to use it, I'm going to purposely build a non-blocking IO library that seamlessly makes use of threads and queues and shows how easy it is, if you really need to reach out to a dozen slow web services at once, to get the non-blocking IO advantages without turning your entire codebase and dependencies inside out.

> Anyway, my approach has instead been to use SA Core (with aiopg)

right, so another thing to keep in mind, Postgresql is the only database that has a native non-blocking protocol. So building an async ORM is immediately limited by that - for every other database there would still be a threadpool running the socket conversation, and having every database message be a separate threaded unit is less efficient than just having the entire transactional conversation in one threaded unit (hundreds or thousands of shifts in and out of the thread pool vs. just one). There's really many reasons to not write an asyncio ORM right now.

Coming from NHibernate and now working with SQLAlchemy, more so than async, the one feature I miss is futures with multiple result queries, like this: https://ayende.com/blog/3979/nhibernate-futures


Async programming becomes very appealing once you’ve done enough of it to really sink in.

At first though, learning async programming tastes like eating a garden snail.

Do you have any high level wisdom/takeaways you could share?

Just that my experience learning async programming really was a pain, because the concepts are so alien compared to synchronous programming.

Eventually however the bell in your head rings and it all becomes second nature.

At that point you have the ability to do a bunch of things in parallel in an intuitive and robust way.

Also, async is good for solving certain categories of problems that are harder in sync code, like starting a subprocess and reading and immediately responding to stderr, whilst also doing other things, without the issues that come with multithreading.
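That subprocess case can be sketched with asyncio's subprocess support: read the child's stderr line by line while the event loop stays free for other work. The child program here is a stand-in for any tool whose warnings you want to react to:

```python
import asyncio
import sys

# Child process that writes a line to stderr.
CHILD = "import sys; sys.stderr.write('warn: low disk\\n')"

async def watch_stderr(stream, seen):
    while True:
        line = await stream.readline()
        if not line:  # EOF: the child closed its stderr
            break
        seen.append(line.decode().strip())  # react to each line immediately

async def main():
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", CHILD, stderr=asyncio.subprocess.PIPE)
    seen = []
    # The stderr watcher runs concurrently with anything else on the loop.
    await asyncio.gather(watch_stderr(proc.stderr, seen), proc.wait())
    return seen

lines = asyncio.run(main())
```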

What async programming really gives you is event-driven programming: the ability to set up functions to run when certain things happen, as opposed to the traditional "do step one, then do step two, until the program finishes". Again, quite foreign if you haven't done much event-driven programming, but it's really powerful.

It is true that certain libraries need to be rewritten to live well in the async world, but that’s ok I think ... it’s not like the 2/3 issue.... you don’t have to go there if you don’t want to.

The other thing that is not immediately obvious when learning async is that once you know what you’re doing, in many cases sync and async style can be naturally interspersed, and while that might sound like a recipe for complexity and pain, in fact it isn’t, because as mentioned earlier, async actually isn’t that hard or complex once you grasp the mental model, and interspersing both just works pretty well. You won’t know how to do that in a simple manner though till you’ve been through the grind of making it hard for yourself because at first you’ll assume that async is much harder than it really is and you’ll be looking for complexity and indeed coding complexity.

I certainly understand the initial distaste that people might have for async because it's so foreign. It's made harder in Python because the approach has changed, so there are multiple ways to do the same thing, like "yield from" and "await", which are effectively the same thing. Different ways to do the same thing in a conceptually new and hard idea space makes the hard part of learning async programming even harder.

BUT, like most things with programming, once you’ve learned it, what at first looked really hard is in fact fairly straightforward, with just a few different rules to follow and that’s pretty much it.

My advice to any python programmer is to invest the time and practice a lot and work hard to master async. It’s not going away, and it will constantly annoy you that this async thing is everywhere but you don’t know it well so you’ll feel excluded and want to disparage it. Get past that and turn it into one of your strengths and a power tool.

It’ll just taste like garden snails while you’re still in that alien learning zone.

Remember that it takes an expert programmer with superior knowledge to know how to do things in the simplest way.

> What async programming really gives you is event-driven programming: the ability to set up functions to run when certain things happen, as opposed to the traditional "do step one, then do step two, until the program finishes". Again, quite foreign if you haven't done much event-driven programming, but it's really powerful.

You actually don't need to use non-blocking sockets in order to write callback-oriented code, nor do you need event-driven programming to use non-blocking sockets. This whole argument is about syntactic sugar, but below the surface it is also about confusion between these two distinct concepts.

I admire zzzeek as one of the greatest contributors to python and a programmer of the highest order.

I hope one day he’ll become a real fan of async as it would be good to have all that skill directed into the async space, which frankly needs him.

When you need to talk to 8000 database connections simultaneously, many of which are taking many seconds to respond to you, we'll talk :)


    app = App(
        Include("/todos", [
            Route("/", list_todos),
            Route("/", create_todo, method="POST"),
        ]),
    )

Wonder what this would look like for a real app and not a toy.

Had the same thought, so I built https://github.com/dternyak/molten-boilerplate

Gets you pretty close to what a production app looks like.

Feedback welcome!

If the author or someone else cares to share, I'm curious about the decision not to build this on asyncio.

Here are my reasons:

* I am not a fan of asyncio's API (`ensure_future`) or its docs (though I understand both are improving in 3.8), nor of the fact that it is incompatible[1] with much of the existing ecosystem of libraries: the SQLAlchemy ORM, or any existing library that talks to anything over the network (postgres, redis, memcached, etc.). Those libraries need to be written from the ground up in asyncio style, or rewritten so that their protocol-parsing logic is completely separate from the IO they do (not a bad thing, mind you, but taking battle-tested libraries and rewriting them from the ground up is not the greatest idea). The same split applies to tools and even builtins (queue.Queue vs asyncio.Queue).

* I don't buy into the claimed benefits of explicitly marking yield points: any sufficiently complex code will have enough yield points (`async with`, `async for`, `await`) that reasoning about concurrency in the program isn't going to be any easier than if it had implicit yield points.

* I think ergonomics matter for developer happiness and programming with asyncio is not ergonomic in the slightest.

* File IO is blocking.

Put all of those together and you end up with a system where you can't use any of the popular, battle-tested libraries out there, where your code is more verbose for minimal gain, and where seemingly benign things (reading a file, using a `for` instead of an `async for`) may greatly impact the performance of your program, all for a small increase in throughput (assuming it does increase!).

[1]: http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...

My only counterpoint to this is twofold.

I know writing asyncio code is newer and therefore less established, and that will certainly produce more bugs for less experienced programmers; I've witnessed this myself.

With that said my 2 points are as follows:

1. Asyncio has compatibility hooks for running thread-safe code (asyncio.Lock, loop.call_soon_threadsafe, asyncio.run_coroutine_threadsafe, etc.) which can take those blocking functions and run them without having to give up all the benefits of awaitable code. In my albeit somewhat brief testing, SQLAlchemy worked okay in this context (I know peewee does; it's my personal ORM of choice).

I happen to agree, however, that file IO should have been the first thing they implemented as async, as it's the biggest gain, instead of leaving it up to the community to re-implement.

In fact, thinking about it, it's the explicit use of Futures that allows me to often bridge sync code like an ORM into asyncio without too much trouble, IMO.
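For example, `concurrent.futures` futures can be wrapped into awaitables with `asyncio.wrap_future`; here `run_query` is a stand-in for a blocking ORM call:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=2)

# Stand-in for a blocking ORM query running on a worker thread.
def run_query():
    return [("todo", 1)]

async def main():
    # submit() returns a concurrent.futures.Future; wrap_future turns it
    # into an awaitable, explicitly bridging sync code into asyncio.
    return await asyncio.wrap_future(pool.submit(run_query))

rows = asyncio.run(main())
```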

2. Given that, did you test any of your IO-heavy code against an asyncio version? I'm just curious. I've personally been seeing huge bumps in performance by switching our production apps from Django to Tornado.

> I don't buy into the claimed benefits of explicitly marking yield points: any sufficiently complex code will have enough yield points (`async with`, `async for`, `await`) that reasoning about concurrency in the program isn't going to be any easier than if it had implicit yield points.

Agreed. This is another point programmers don't seem to realize yet; I can only assume they need to gain more experience to see it.

1) This is fair.

2) Explicit yield points certainly save you from most of the classic threading errors, and from having to use special maps and datastructures everywhere.

3) Other than tying into (1) I guess this is a matter of taste.

4) Presumably this refers to the `open` function? This can be solved using the `aiofiles` module, but again I suppose this ties into (1).

Yes, a lot of it boils down to the ecosystem split for me.

> 2) Explicit yield points certainly save you from most of the classic threading errors

Can you provide an example of such a threading error? At most I've heard people claim that asyncio removes the need for mutexes[1]. I think that may be true at the local level, but as soon as your function needs to synchronize with other async functions in the system that ceases to be the case. At any rate, that's not enough of a win to justify the split in my opinion.
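A concrete example of the kind of error that explicit yield points do not prevent: a check-then-act sequence with an `await` in the middle still races, just like with threads:

```python
import asyncio

balance = 100

async def withdraw(amount):
    global balance
    if balance >= amount:       # check
        await asyncio.sleep(0)  # explicit yield point: another task may run
        balance -= amount       # act: the invariant checked above may be stale
        return True
    return False

async def main():
    # Both withdrawals pass the check before either debits the account;
    # an asyncio.Lock around check+act would prevent this.
    return await asyncio.gather(withdraw(100), withdraw(100))

results = asyncio.run(main())
# both "succeed" and the balance goes negative
```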

> having to use special maps and datastructures everywhere.

The GIL already protects you from corrupting builtin datastructures. Neither the GIL nor asyncio protect you from corrupting your data (the need for some form of synchronization), however.

> 4) Presumably this refers to the `open` function? This can be solved using the `aiofiles` module, but again I suppose this ties into (1).

And most operations on opened files (like `read()`) as well as any OS bindings provided by the `os` module.

[1]: Clearly, asyncio itself still thinks they're useful: https://docs.python.org/3/library/asyncio-sync.html#asyncio....

Wrt (2): Personally I find coroutine programming to be unintuitive; it often leads to unreadable code where the control flow is hard to reason about. Additionally, it introduces the honestly idiotic programming task of profiling to find chunks of code that "don't yield enough" and then breaking them up to yield more. The possibility is always there that some engineer on your team will commit some nasty blocking API handler that stalls your endpoint.

Threads don't have any of these problems. Conceptually they are sequential code. You just have to clearly state what the communication model is between threads, and use a good one. The actor model works really well.

> Explicit yield points certainly save you from most of the classic threading errors

You know what also saves you from most classic threading errors? The GIL. And yet people writing multi-threaded code in Python still use locks and thread-safe datastructures, because avoiding most errors is not enough.

Explicit yield points, where the developer is forced to yield, do not bring much more safety than executing normal Python bytecode.
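The threaded analogue of that point: the GIL keeps each individual operation safe, but it does nothing for a check-then-act sequence spanning two operations. The `Barrier` here just forces the unlucky interleaving deterministically:

```python
import threading

winners = []
barrier = threading.Barrier(2)  # forces the racy interleaving deterministically

def claim(name):
    if not winners:           # check: both threads see the empty list...
        barrier.wait()        # ...before either one acts
        winners.append(name)  # the GIL keeps append() safe, not the logic

threads = [threading.Thread(target=claim, args=(n,)) for n in ("a", "b")]
for t in threads:
    t.start()
for t in threads:
    t.join()
# winners ends up with two entries; a lock around check+act would allow one
```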

> 2) Explicit yield points certainly save you from most of the classic threading errors, and having to use special maps and datastructures everywhere.

"classic threading errors" don't apply to the users of a web framework where you are programming logic that shares nothing with other threads. Additionally, the most free-of-race-conditions monolithic application loses all of that advantage the moment it is placed in a multi-process environment, where other copies of it on the same host or other hosts are competing for the same resources. There is no non-trivial web application today that is deployed to production in a single process on a single host only. Web applications are not scaled vertically.

> Web applications are not scaled vertically.

It should be rare, but I believe Stack Overflow was doing that for a while.

This looks very cool. Is there a reason the HTTP response codes and many other things are exposed as magic string constants?

How does this compare with Flask?

It seems to come with ORM and request validation, metrics, etc. built-in. Flask is fast for getting started but you end up rebuilding most of the things this framework does on your own as your application grows.

It seems like a good middle ground between Flask (minimal and no batteries included) and Django (bloated but very powerful).

I don't see the ORM or metrics in the docs.

After a cursory look at it, this seems like an interesting, modern replacement for Flask. Types-first frameworks are interesting, as they're still very rare right now.

"The molten.contrib package contains various functionality commonly required by APIs in the real world such as configuration files, prometheus metrics, request ids, sessions, SQLAlchemy, templating, websockets and more."

Ah! Missed that, thank you :)

It looks like there's no template rendering, so it would seem that Molten:Flask :: Restify:Express, in that it is mostly for building an API.

Author here - while molten is API-first, there is built in support for things like sessions[1] and templates[2] for when you need them. The docs could definitely do a better job of highlighting them, though.

[1]: https://moltenframework.com/reference.html#sessions

[2]: https://moltenframework.com/reference.html#templates

Kind of reminds me of https://github.com/timothycrosley/hug , especially using type hints for validation of request parameters.

Here's a comparison between Molten and Hug by the author of the former: https://old.reddit.com/r/Python/comments/8ta3ve/molten_a_mod...

Also similar to Tom Christie's APIStar https://github.com/encode/apistar

APIStar was a major inspiration[1]! One of the reasons I built Molten was because Tom Christie took APIStar in a different[2] direction.

[1]: https://moltenframework.com/motivation.html

[2]: https://docs.apistar.com/#where-did-the-server-go

I recently started my first Falcon project, but this looks closer to what I was actually hoping Falcon would be. Definitely going to give this a spin, thanks.

OK, I've been playing with this for an hour or so now and it's magnificent. Exactly what I needed, thank you.

Bitdefender sees this site as dangerous? Admittedly it is quite overzealous but something you should look into!

In Python 2, omitting (object) inheritance in a class definition had an actual effect. Is something different in Python 3 that makes this okay, or are we doing the less okay thing because it's cleaner?

The short answer was already given, but for anyone who wants the full one:

Many many years ago, Python had two separate type hierarchies, similar to what you see in Java. There was one hierarchy for Python's built-in types, and another hierarchy for classes. Among other things, this meant you couldn't subclass the built-in types (this is why, for example, there used to be a 'UserDict' class you could use to write things that mostly acted like subclasses of dict).

Python 2.2 began the process of unifying the two hierarchies. This was accomplished, ironically, by creating a split in Python's classes.

To opt in to the new approach, you wrote a class which had 'object' among its ancestors, either as its direct parent or as a grandparent, great-grandparent, etc. These were called "new-style classes", to distinguish from "old-style" (pre-2.2 behavior) classes.

    class Foo:
        pass  # This is an old-style class

    class Bar(object):
        pass  # This is a new-style class
"New-style classes" provided the rich data model, including special protocol methods, Python programmers are now used to, along with the ability to subclass built-in types.

In Python 3, the distinction was finally removed (because it was backwards-incompatible, it had to wait for Python 3), and now all classes are "new-style", regardless of whether they explicitly have 'object' somewhere in their ancestors.

Any codebase that's already Python 3+ only does not need to subclass 'object' to get the expected behavior. Any codebase that still supports Python 2 should continue to subclass 'object' to get new-style classes, until such time as it can migrate to Python 3+ only.

In Python 3, all classes are "new style", so omitting the parent is the same as inheriting from `object`.

Yes, it has no effect in python3. There are no old style classes.

Oh that's just wonderful.

Thank you.

Python 3 cleaned up a lot of other language warts like that one (though some may disagree as to whether or not they were warts). No more `super(SuperLongClassNameHere, self).method()`, no more having to teach beginners to always use raw_input() instead of input(), `0 > None` now raises an exception rather than evaluating to `True`.
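The argumentless `super()` mentioned above in action:

```python
class Base:
    def greet(self):
        return "hello"

class SuperLongClassNameHere(Base):
    def greet(self):
        # Python 2 required super(SuperLongClassNameHere, self).greet()
        return super().greet() + ", world"

msg = SuperLongClassNameHere().greet()
```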

I personally wish they would just make `self` implicit, but I understand that that adds a lot of magic in a language that currently doesn't have a lot.

> I personally wish they would just make `self` implicit

Hell no. Go use Java if you want that. If you find typing self is making you more work then you're not using classes correctly. Write functions and use classes if you need them.

edit: To be clear, I mean making `self` implicit in the method signature - not implicit in the method body like the way Java does it.

It's a very minor and aesthetic/cosmetic thing. No, I don't write classes for everything. I write functions for things that should be functions and classes for things that should be classes. But when I do write classes, it's annoying to remember to write `self` in the method signature, annoying to get exceptions because I forgot to add it, annoying to see it since it's just that tiny bit of additional redundant line noise when I just want to quickly glance at a method signature. And literally no other OO language does this that I'm aware of; even ones that are generally way less dynamic than Python.

But as I said, I do understand how it came about and why it is the way it is. But on the other hand, it was the exact same story for `super()`, and they did resolve that with "magic". The "magic" floodgates have also opened up a bit more with string interpolation added [1]. So now I think a more reasonable case can be made for making `self` optional. If they could do it for `super()`, they can do it for `self`. It doesn't annoy me that much, though, and I still plan to use Python for a long time even if it's never changed.

[1] https://www.python.org/dev/peps/pep-0498/

This has a lot of edge cases.

Currently, `self` is a strong convention, but not more than that. You can write code like this:

  class Foo:
      def __init__(this, x, y=3):
          this.x = x
          this.y = y

Would a self-optional version of Python be able to recognize that it shouldn't make that method's first parameter implicit? Breaking that code may or may not be acceptable; I don't know how common it is.

Would it still be possible to access methods on classes as if they're ordinary functions? e.g.

  >>> str.casefold('Foo')

It's occasionally useful for higher-order functions like map, or to call methods from different classes.

Would accessing methods like that return a special unbound method with an extra argument?

What's the behavior of functions that were defined elsewhere and attached to the class later?

  def double(n):
      return 2*n

  class MySpecialInt(int):
      double = double

  >>> MySpecialInt(10).double()

Do they behave the old way? Do they behave the new way? Does it depend on their own parameter list?

How are decorators handled?

  def printme(func):
      def inner(*args, **kwargs):
          print(args, kwargs)
          return func(*args, **kwargs)
      return inner

  class Foo:
      @printme
      def bar(x, y):
          return x * y

Can `inner` access self? Can `bar`/`func` access self? `printme` certainly can't access self, because it doesn't exist yet when it runs.

It might be possible to make this change, but not without breaking compatibility. Argumentless super() adds behavior to something that used to throw a TypeError, which is much easier.

As a side note, I think super() is much more magical than string interpolation: f"foo: {foo!r}" can be rewritten as "foo: {!r}".format(foo), but rewriting super() requires looking at the surrounding code.

Do you mean remove self from the method signature or altogether?

I think the latter is fundamentally impossible. There would be name collisions and it would be horribly ambiguous. I assume you're suggesting the former.

I can see it being plausible. I'm not sure I have a really clear case against it. I do really like that things are explicit. There's no magical place "self" emerges from. I hate this about some other languages, like JavaScript's 'arguments' object that magically exists in a function context.

Being able to juggle bound and unbound functions is nice. The explicit self or cls kind of forces you to declare what the intent of your function is.

Not super clear. But the ship has sailed on that issue. Not sure it could be changed in a non breaking way.

Ah sorry, I meant from the method signature. I wouldn't like the totally implicit `self` / `this` found in Java.

>Being able to juggle binded and unbinded functions is nice. The explicit self or cls kind of forces you to declare what the intent of your function is.

Sure, but we can already do that with the @classmethod decorator. They could just make it so `cls` is a variable that can be accessed from any method. If it's a class method, `cls = self`, otherwise, `cls = self.__class__`.
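Something close to this is already expressible today: inside an ordinary method the class is reachable as `type(self)`, and `@classmethod` hands it to you directly. A sketch:

```python
class Counter:
    total = 0

    def bump(self):
        type(self).total += 1  # the class, reached via the instance

    @classmethod
    def reset(cls):
        cls.total = 0  # cls is passed implicitly, no instance needed

c = Counter()
c.bump()
c.bump()
print(Counter.total)  # 2
Counter.reset()
print(Counter.total)  # 0
```

The proposal would just make `cls` ambient rather than explicit in the signature.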

OK, implicit self in the signatures is probably OK. I don't find it a problem personally because I use snippets to generate functions and methods etc. while I write them.

Despite programming for many years and using VS Code for about 6 months, I've still never used snippets even once. I should give them a shot.

This looks pretty clean. I would like to see benchmarks that compare between Flask and especially Falcon, which IMO is the gold standard for fast, minimal API frameworks.

There are some benchmarks[1] in the molten repo as well as instructions for running them yourself. On my machine it ends up being faster than Flask and nearly as fast as Falcon.

[1]: https://github.com/Bogdanp/molten/tree/master/benchmarks

What’s the benefit / key differences compared to falcon?


Well, the front page illustrates them quite well, I think.

Errr, I’m looking for a few words that answer the question, not to infer the comparison at an expert level from a large amount of detailed technical information. I didn’t see a comparison with Falcon on the front page, so no, it does not.

I've used Falcon extensively for about 3 years while at LeadPages.

I would say the biggest difference between the two is Molten gives you more out of the box: input validation, doc/schema generation, ORM support, support for multipart/form-data requests (Falcon doesn't actually handle these out of the box!), etc.

So molten is batteries included essentially, whereas falcon is not.

Like flask versus Django.

Straight question, wouldn’t it have been easier to fork falcon and add the batteries?

Probably not, because many of the differences are fundamental:

* Falcon revolves around the concept of Resources, i.e. classes as the primary handlers of requests; molten revolves around functions,

* Falcon supports Python 2.x,

* automatic dependency injection via type annotations is core to molten.
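For illustration only (this is a generic sketch of annotation-driven injection, not molten's actual implementation), the idea is to inspect a handler's signature and fill in any parameter whose annotated type is known to a registry:

```python
import inspect

class DB:
    """Stand-in for an injectable component."""

registry = {DB: DB()}  # type -> singleton instance

def inject(func):
    sig = inspect.signature(func)
    def wrapper(*args, **kwargs):
        # Fill parameters whose annotation matches a registered type.
        for name, param in sig.parameters.items():
            if name not in kwargs and param.annotation in registry:
                kwargs[name] = registry[param.annotation]
        return func(*args, **kwargs)
    return wrapper

@inject
def get_user(user_id: int, db: DB):
    return f"user {user_id} via {type(db).__name__}"

print(get_user(42))  # user 42 via DB
```

The caller never passes `db`; the framework resolves it from the annotation, which is the style of API molten builds on.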

The Molten front page can be skimmed in 5 minutes or less; it has a lot of empty space...


Molten supports Python 3.6+. Falcon supports Python 2.6+.

Molten uses MyPy for validation. As far as I can see, Falcon does not.

Does Python 3.6 support type hinting? This reminds me of PHP 7+ with scalar types and return types now. Didn't think Python supported this.

The feature itself is not type hinting but simply a way to associate an arbitrary Python object with essentially any slot (variable, argument, return value, whatever...) that can hold a reference to a Python object. The interpreter is supposed not to care about the values of such annotations, which makes it a powerful syntactic construct that can be used and abused for various purposes. My two-evening project was PyCLOS, which (ab)uses these annotations to provide CLOS-style multimethods in Python. (It is sadly CPython-dependent, because the Python side relies on CPython implementation details, and while the C extension is not mandatory, it is essentially required to get reasonable performance, with the somewhat surprising observation that PyCLOS dispatch is measurably faster than normal attribute access in most cases.)
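The "arbitrary objects" point can be seen directly: the interpreter evaluates whatever expression sits in the annotation slot and stores the result in `__annotations__` without interpreting it (these annotation values are made up for illustration):

```python
def route(path: "/users/{id}", methods: {"GET", "POST"}) -> "returns JSON":
    ...

# The annotations are ordinary objects, untouched by the interpreter:
print(route.__annotations__)
```

Nothing here is a type; a framework (or a multimethod library like PyCLOS) is free to assign any meaning it wants to those values.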

Yes, Python introduced static type hinting in version 3.5


Python introduced the 'typing' module in the standard library in 3.5, and declared that henceforth the annotation syntax was for type hints.

Python introduced the annotation syntax (for everything except variables/attributes) in 3.0. The annotation syntax for variables/attributes was introduced in 3.6.
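The two stages look like this: function annotations have been legal since 3.0, while the variable/attribute form (PEP 526) arrived in 3.6, and both simply record objects in `__annotations__`:

```python
# Function annotations: allowed since Python 3.0, any object is legal.
def scale(x: "any object here") -> float:
    return x * 2.0

# Variable/attribute annotation syntax: added in Python 3.6 (PEP 526).
ratio: float = 0.5

class Config:
    debug: bool = False

print(Config.__annotations__)       # {'debug': <class 'bool'>}
print(scale.__annotations__["x"])   # any object here
```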

Is this competing with flask or with Django?

I would say Flask.

Post title needs to have "API" in it.
