The only case where it does work is if you maintain a single reference to the module you wish to reload, _and_ the target module doesn’t load any C extensions, _and_ it doesn’t create state outside its module (very easy to do accidentally).
That’s why every autoreloader implementation restarts the whole process when any change is made: in the general case, how would you even detect that something was safe to hot reload, let alone actually hot reload it safely?
If you need to add an autoreloader to your project, use Hupper. If you’re weird like me and you’re interested in autoreloaders, check out the Django and Flask implementations or my talk for more details.
The biggest caveat of these kinds of single file reloaders is that they completely fall apart when you import between your modules. For example, if `a` does `from b import obj` and `b` does `from c import obj`, and you modify `c.py`, then `a` and `b` will both have a stale reference to `obj` (which might be an immutable or even interned type like a string or an int, so mutating it in place isn't a workaround).
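To see the staleness concretely, here's a small runnable demonstration (the module names `a`/`b`/`c` match the example above; the temp-dir setup is just scaffolding):

```python
# Runnable demonstration of the stale-reference problem described above.
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True  # avoid stale .pyc caching between rewrites
tmp = tempfile.mkdtemp()
sys.path.insert(0, tmp)

def write(name, source):
    with open(os.path.join(tmp, name + ".py"), "w") as f:
        f.write(source)

write("c", "obj = 'old'\n")
write("b", "from c import obj\n")
write("a", "from b import obj\n")

import a, b, c  # a -> b -> c chain, all bound at import time

write("c", "obj = 'new'\n")  # simulate the user editing c.py
importlib.reload(c)          # reload only the changed module

print(c.obj)  # 'new' -- the reloaded module sees the edit
print(b.obj)  # 'old' -- b still holds the object it bound at import time
print(a.obj)  # 'old' -- and so does a, transitively
```

The only fix is to reload `b` and `a` as well, in dependency order, which is exactly what single file reloaders don't do.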
Say module A creates a class/function that is consumed by modules B and C. If you change module A, how do you know to also reload B and C (recursively)?
I slightly break the mold of conventional Python imports: script folders don't have a single entry point; instead, the entire user script directory tree is recursively imported (in alphabetical order) and autoreloaded on change.
Anytime I call into a module it is marked as the active module in its thread (a sort of call stack but at the module level and only for certain kinds of events).
"Call into" generally means the module is being imported, or I'm calling a callback that was registered by the module.
Anytime code creates a global object (such as opening a window, registering new voice commands, or registering a callback for some kind of event), cleanup for the object is registered on the active module. Additionally, all imports are tracked, so I have a graph of which modules import which other modules.
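A minimal sketch of what the per-thread active-module tracking with cleanup registration could look like (all names here are illustrative, not the actual API):

```python
# Sketch: a per-thread stack of "active modules"; any global resource
# created while a module is active registers its cleanup on that module,
# so the module can be torn down and replaced without explicit cleanup code.
import threading

_active = threading.local()

class ModuleContext:
    def __init__(self, name):
        self.name = name
        self.cleanups = []

    def __enter__(self):
        stack = getattr(_active, "stack", None)
        if stack is None:
            stack = _active.stack = []
        stack.append(self)  # this module is now "active" on this thread
        return self

    def __exit__(self, *exc):
        _active.stack.pop()

def on_cleanup(fn):
    """Register cleanup on whichever module is currently active."""
    _active.stack[-1].cleanups.append(fn)

def unload(ctx):
    """Run the module's cleanups in reverse registration order."""
    for fn in reversed(ctx.cleanups):
        fn()

# Usage: while a user script runs (import or callback), it is active;
# anything it registers is torn down automatically when it's replaced.
log = []
ctx = ModuleContext("user_script")
with ctx:
    on_cleanup(lambda: log.append("closed window"))
unload(ctx)
print(log)  # ['closed window']
```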
The import tracking allows me to reload dependent module chains (even long chains, trees, or import loops) at the correct time. The object tracking allows me to gracefully replace a module without it requiring any explicit cleanup.
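The dependent-chain computation can be sketched with a simple reverse-import mapping (illustrative names; note this sketch doesn't handle import loops, which the real system has to break somehow):

```python
# Sketch: given a map of module -> set of modules that import it,
# compute every module transitively affected by a change, then order
# the reloads so dependencies come before their importers.
from graphlib import TopologicalSorter

importers = {
    "c": {"b"},   # b does `from c import obj`
    "b": {"a"},   # a does `from b import obj`
    "a": set(),
}

def dependents_of(changed):
    """All modules that (transitively) import `changed`, including itself."""
    seen, stack = set(), [changed]
    while stack:
        mod = stack.pop()
        if mod not in seen:
            seen.add(mod)
            stack.extend(importers.get(mod, ()))
    return seen

def reload_order(changed):
    affected = dependents_of(changed)
    # TopologicalSorter wants {node: predecessors}; a module's
    # predecessors are the affected modules it imports.
    graph = {m: {d for d, imps in importers.items() if m in imps and d in affected}
             for m in affected}
    return list(TopologicalSorter(graph).static_order())

print(reload_order("c"))  # ['c', 'b', 'a']
```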
This graph extends to more than just Python files. There are special functions for opening or reading files that mark those files as a module's dependencies as well, so you can modify something like a CSV and the script that uses it will get reloaded (and any module chains that depend on that script).
You can definitely do a few things that aren't reloadable, like spawning your own thread, but most things you want to do end up being reloadable, or have an equivalent or alternative API that is reloadable.
There are a few other nice touches and APIs that help glue all of this together. Generally it means you can write your scripts as short, flat, mostly declarative Python (with perhaps a few event handlers and some setup code that don't need to go in a `__main__` guard) and not worry about the common reloading cases. (There are also subsystems specifically watching out for edge cases, such as a user script running an infinite loop.)
Honestly the most annoying part of all of this (that I do generally handle now) was reliably monitoring file changes when users put file or directory symlinks at arbitrary places in the tree (which it turns out a lot of users like to do).
I assume this still works in Python 3, but I have not tried it. https://docs.python.org/3/library/gc.html#gc.get_referrers
While the docs say this shouldn't be used for any purpose other than debugging, it never caused any trouble IME. Far more trouble was caused by classes in Python's stdlib just blatantly not following the pickle protocol, not calling __init__, calling __init__ more than once, etc. I assume this is why people advised me to use composition rather than inheritance when making classes that were a list with extra behavior, a dict with extra behavior, and so on.
Edit: Well, the above primitive is not enough on its own, but if you take the old module and the new module and explore outward from both to find the subset of the object graph that differs, you can replace all references to the old things with references to the new things. This is still not enough in general, because you'll have e.g. a bunch of objects that never had a certain member set during __init__, because __init__ was different when they were created. You'll probably meet an additional can of worms if you want to use __slots__ and add or remove class members.
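As a rough illustration of the reference-replacement idea, here's a sketch built on gc.get_referrers that only handles dict and list referrers (real code would also need tuples, instance attributes, closures, frames, and more):

```python
# Sketch: swap every dict/list reference to `old` over to `new`.
# This deliberately covers only the two easiest container types.
import gc

def patch_references(old, new):
    for referrer in gc.get_referrers(old):
        if isinstance(referrer, dict):
            for k, v in list(referrer.items()):
                if v is old:
                    referrer[k] = new
        elif isinstance(referrer, list):
            for i, v in enumerate(referrer):
                if v is old:
                    referrer[i] = new

class Old:
    pass

class New:
    pass

holder = {"cls": Old}
others = [Old, "unrelated"]
patch_references(Old, New)
print(holder["cls"] is New, others[0] is New)  # True True
```

Note that module globals are themselves dicts, so this also rebinds any global names pointing at the old object, which is part of why the approach is so invasive.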