Hacker News

Hot reloading is next to impossible to do safely in the general case in Python, and this library is quite basic and doesn’t do anything special.

The only case where it does work is if you maintain a single reference to the module you wish to reload, _and_ the target module doesn’t load any C extensions, _and_ it doesn’t create state outside its module (very easy to do accidentally).
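Those constraints are easy to see with a throwaway module and `importlib.reload` (a minimal sketch; the `demo_mod` name and temp-file setup are just for illustration):

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # avoid stale .pyc caching in this demo

tmp = tempfile.mkdtemp()
pathlib.Path(tmp, "demo_mod.py").write_text("VALUE = 1\n")
sys.path.insert(0, tmp)

import demo_mod  # a single reference to the module itself

# Holding only the module reference: reload picks up the change.
pathlib.Path(tmp, "demo_mod.py").write_text("VALUE = 2\n")
importlib.reload(demo_mod)
assert demo_mod.VALUE == 2

# Holding a direct reference to an attribute: reload leaves it stale.
stale = demo_mod.VALUE
pathlib.Path(tmp, "demo_mod.py").write_text("VALUE = 3\n")
importlib.reload(demo_mod)
assert demo_mod.VALUE == 3
assert stale == 2  # the old binding never sees the update
```

The `stale` variable is exactly the "more than one reference" failure mode: anything that did `from demo_mod import VALUE` keeps the old object forever.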

That’s why every autoreloader implementation restarts the whole process when any change is made: how would you even detect that it was safe to hot reload something in the general case, let alone actually hot reload it safely?

If you need to add an autoreloader to your project use Hupper[1]. If you’re weird like me and you’re interested in autoreloaders, check out the Django[2] and Flask[3] implementations or my talk[4] for more details.

1. https://pypi.org/project/hupper/

2. https://github.com/django/django/blob/master/django/utils/au...

3. https://github.com/pallets/werkzeug/blob/master/src/werkzeug...

4. https://youtu.be/IghyoR6ld60




I got it working well in a large project but I basically designed auto reloading into deep parts of the framework. It's not something you can safely bolt on, and I'm honestly not sure I recommend the sanity impact of trying to build it properly (e.g. I ran into a bunch of memory leaks even once I had it working safely). But there are some kinds of programs you can't really restart on every change.

The biggest caveat of these kinds of single file reloaders is they completely fall apart when you import between your modules. For example, if `a` does `from b import obj` and `b` does `from c import obj`, and you modify `c.py`, `a` and `b` will both have a stale reference to obj (which might be a primitive or even interned type like a string or an int, so it's not like you can mutate it as a solution).
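A concrete reproduction of that chain (minimal sketch; the file names mirror the example, and only dicts of module globals are involved):

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # avoid stale .pyc caching in this demo

tmp = tempfile.mkdtemp()
root = pathlib.Path(tmp)
(root / "c.py").write_text("obj = 'old'\n")
(root / "b.py").write_text("from c import obj\n")
(root / "a.py").write_text("from b import obj\n")
sys.path.insert(0, tmp)

import a, b, c
assert (a.obj, b.obj, c.obj) == ("old", "old", "old")

# Edit c.py and reload only c: a and b keep the stale string.
(root / "c.py").write_text("obj = 'new'\n")
importlib.reload(c)
assert c.obj == "new"
assert b.obj == "old"  # stale
assert a.obj == "old"  # stale

# Fixing it requires reloading the importers in dependency order.
importlib.reload(b)
importlib.reload(a)
assert (a.obj, b.obj) == ("new", "new")
```

A single-file reloader that only reloads `c` leaves `a.obj` and `b.obj` pointing at the old string indefinitely.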


I updated my comment to say “impossible in the general case” which is more accurate, but I’d be interested in knowing more about that project - how big was it, and how did it handle some of the tricky bits?

Like, module A creates a class/function that is consumed by module B and C. If you change module A, how did you know to also reload B and C (recursively)?


This is for a scriptable desktop accessibility program. It is fairly big, is very actively used by end users, and many users have >80 modules and edit them _constantly_ during daily use. So the reloading system isn't just commonly used, it's a critical path of using the app. (I also have not gotten any complaints related to the module reloading in ~months, when I last fixed a bug in the long dependency chain tracking, and reloading hasn't caused any major problems in general).

I slightly break the mold of conventional Python imports - script folders don't have a single entry point, the entire user script directory tree is recursively imported (in alphabetical order) and autoreloaded on change.

Anytime I call into a module it is marked as the active module in its thread (a sort of call stack but at the module level and only for certain kinds of events).

"Call into" generally means the module is being imported, or I'm calling a callback that was registered by the module.

Anytime code creates a global object (such as opening a window, registering new voice commands, or registering a callback for some kind of event), cleanup for the object is registered on the active module. Additionally, all imports are tracked, so I have a graph of which modules import which other modules.

The import tracking allows me to reload dependent module chains (even long chains, trees, or import loops) at the correct time. The object tracking allows me to gracefully replace a module without it requiring any explicit cleanup.
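A toy version of that bookkeeping might look like this (hypothetical names and structure; the real framework described above is far more involved):

```python
from collections import defaultdict

class ReloadRegistry:
    """Sketch of per-module cleanup registration plus import-graph
    tracking, so dependents reload in the right order."""

    def __init__(self):
        self.cleanups = defaultdict(list)   # module name -> cleanup callbacks
        self.importers = defaultdict(set)   # module name -> modules importing it

    def register_cleanup(self, module, fn):
        self.cleanups[module].append(fn)

    def record_import(self, importer, imported):
        self.importers[imported].add(importer)

    def dependents(self, module):
        """All modules that (transitively) import `module`, importees first.
        The `seen` set makes import loops terminate."""
        seen, order = set(), []
        def walk(m):
            if m in seen:
                return
            seen.add(m)
            order.append(m)
            for imp in self.importers[m]:
                walk(imp)
        walk(module)
        return order

    def reload(self, module, do_reload):
        for m in self.dependents(module):
            for fn in self.cleanups.pop(m, []):
                fn()          # tear down windows, callbacks, etc.
            do_reload(m)      # re-exec the module, re-registering its state

# Usage: a imports b, b imports c; changing c reloads c, then b, then a.
reg = ReloadRegistry()
reg.record_import("b", "c")
reg.record_import("a", "b")
log = []
reg.register_cleanup("b", lambda: log.append("cleanup b"))
reg.reload("c", lambda m: log.append(f"reload {m}"))
assert log == ["reload c", "cleanup b", "reload b", "reload a"]
```

The key design point is that cleanup is registered by the framework on the active module as side effects happen, so a script never has to write teardown code to be reloadable.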

This graph extends to more than just Python files. There are special functions for opening or reading files that mark those files as a module's dependencies as well, so you can modify something like a CSV and the script that uses it will get reloaded (and any module chains that depend on that script).

All of this working well is tightly coupled to how the framework behaves. Scripts won't generally spawn threads or spin in long loops, they'll mostly register for event callbacks. For example, instead of a script spawning a thread or sleeping if it wants to run code later, there's a cron system that works something like javascript setTimeout/setInterval. Instead of storing something in a global, you can put it in the persistent storage layer if you want it to survive reloads (and restarts).
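The cron idea can be sketched as a thin wrapper over `threading.Timer` (a hypothetical API, not the actual framework's): because the handles live in a central registry, the reloader can cancel a module's pending timers on its behalf.

```python
import threading
import time

class Cron:
    """setTimeout-style scheduler sketch (hypothetical API). Handles are
    tracked centrally so a reloader can cancel a module's timers for it."""

    def __init__(self):
        self._timers = []

    def after(self, seconds, fn):
        t = threading.Timer(seconds, fn)
        t.daemon = True
        t.start()
        self._timers.append(t)
        return t

    def cancel_all(self):
        # Called by the reloader on module teardown, instead of the
        # script having to clean up after itself.
        for t in self._timers:
            t.cancel()
        self._timers.clear()

cron = Cron()
fired = []
cron.after(5, lambda: fired.append("tick"))
cron.cancel_all()   # simulate a reload before the timer fires
time.sleep(0.1)
assert fired == []  # the cancelled callback never runs
```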

You can definitely do a few things that aren't reloadable, like spawning your own thread, but most things you want to do end up being reloadable, or have an equivalent or alternative API that is reloadable.

There are a few other nice touches and APIs that help glue all of this together. Generally it means you can write your scripts as short, flat, mostly declarative Python (with perhaps a few event handlers and setup code that don't need to go in a __main__ guard) and not worry about common cases for reloading. (There are also subsystems specifically watching out for edge cases, such as a user script running an infinite loop)

Honestly the most annoying part of all of this (that I do generally handle now) was reliably monitoring file changes when users put file or directory symlinks at arbitrary places in the tree (which it turns out a lot of users like to do).


In python 2, you could use the gc module to get a list of every reference to a certain object. Using this, you could replace all references to that object with references to some other object. For references occurring in immutable things like tuples, you would need to replace the immutable thing recursively.

I assume this still works in python 3, but I have not tried it. https://docs.python.org/3/library/gc.html#gc.get_referrers
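In current CPython the primitive does still exist; a minimal (and deliberately incomplete) sketch of the replacement idea, handling only dict and list referrers:

```python
import gc

def replace_references(old, new):
    """Swap `old` for `new` in every dict and list that refers to it.
    Deliberately incomplete: tuples, sets, closures, __slots__, and
    C-level references are not handled."""
    for referrer in gc.get_referrers(old):
        if isinstance(referrer, dict):
            for key, value in list(referrer.items()):
                if value is old:
                    referrer[key] = new
        elif isinstance(referrer, list):
            for i, value in enumerate(referrer):
                if value is old:
                    referrer[i] = new

class OldThing: pass
class NewThing: pass

holder = {"target": OldThing}
targets = [OldThing, 1, OldThing]
replace_references(OldThing, NewThing)
assert holder["target"] is NewThing
assert targets == [NewThing, 1, NewThing]
```

Handling immutable containers would mean rebuilding each tuple and then recursively replacing references to the tuple itself, which is where this approach starts to get hairy.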

While the docs say that this shouldn't be used for any purpose other than debugging, it never caused any trouble IME. Far more trouble was caused by classes from python's stdlib just blatantly not following the pickle protocol, not calling __init__, calling __init__ more than once, etc. I assume this is why people advised me to use composition rather than inheritance when making classes that were a list with extra behavior, a dict with extra behavior, and so on.

Edit: Well, the above primitive is not enough on its own, but if one gets the old module and the new module and explores outwardly from those to find the subset of the object graph that differs, then one can replace all references to the old things with references to the new things. This is still not enough in general, because you'll have e.g. a bunch of objects that never had a certain member set during __init__ because __init__ was different when they were created. You'll probably open an additional can of worms if you want to use __slots__ and add or remove class members.


This is neat, but doing this for a whole module sounds really slow and prone to races if there’s threading



