Hacker News

If this gets wide enough use, they could add an optimization for code like this:

    n = 1000
    a = {}
    for i in range(n):
        a[i] = str(i)
    a = frozendict(a)  # O(n) operation can be turned to O(1)
It would be relatively easy for a JIT to detect the `frozendict` constructor call, the `dict` argument, and the fact that the dict's single reference is immediately overwritten, and reuse its storage instead of copying. Not sure if this would ever be worth it.




Wouldn't reference-counting CPython already know that `a` has a single reference, allowing this optimization without needing any particularly smart JIT?

I think GP was talking about optimizing away the O(n) copy on the last line. The GC will take care of collecting the old (mutable) dict, but constructing a frozendict from a mutable dict would, under the current proposal, be an O(n) shallow copy.
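For contrast, the stdlib already has an O(1) read-only wrapper, `types.MappingProxyType`, but it is a view rather than a snapshot, which is why a true frozendict constructor needs the O(n) copy. A minimal sketch of the difference, using `dict(a)` as a stand-in for the copying constructor (since `frozendict` is not in the stdlib):

```python
import types

# Build a plain dict, then "freeze" it two different ways.
a = {i: str(i) for i in range(1000)}

frozen = dict(a)                    # O(n) shallow copy: a real snapshot
view = types.MappingProxyType(a)    # O(1) read-only *view*, not a copy

a[1000] = "1000"                    # mutate the original afterwards

# The snapshot is unaffected; the view tracks the original.
assert 1000 not in frozen
assert 1000 in view
```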

There are also other optimizations (not specific to dict/frozendict) that could be applied to reduce the memory overhead of patterns like `a = f(a)` for selected values of `f`.
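Pure Python can at least observe the reference counts such an optimization would have to check. A rough sketch using `sys.getrefcount` (CPython-specific; the exact count is an implementation detail):

```python
import sys

def f(d):
    # At this point d is referenced by at least: the caller's binding `a`,
    # this parameter, and getrefcount's own argument.  An interpreter
    # optimizing `a = f(a)` would have to prove the caller's binding is
    # the only outside reference before reusing d's storage.
    return sys.getrefcount(d)

a = {}
n = f(a)
assert n >= 2  # exact value is a CPython implementation detail
```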


Indeed, the Tcl implementation does this, so e.g. `set d [dict create]; dict set d key value` can modify `d` in place instead of creating a copy when it holds the only reference (since every Tcl value is semantically immutable).

First thought: I would very much expect CPython to be able to do this optimization, given the similar in-place trick it already does for string concatenation.
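The existing trick being referred to: CPython specializes `s += t` when `s` has no other references, resizing the string in place instead of copying, which makes repeated appends amortized O(1) rather than O(n) per step. Whether the object is actually reused is an implementation detail, so only the semantics are checked here:

```python
s = "a" * 100
for _ in range(1000):
    s += "b"   # with a single reference, CPython may resize in place

# Only the result is guaranteed; in-place reuse is an optimization.
assert s == "a" * 100 + "b" * 1000
```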

But actually, I suspect it can't do this optimization, simply because the name `frozendict` could be shadowed at runtime.
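A toy illustration of the shadowing problem, assuming a hypothetical builtin named `frozendict`: once the name is rebound, the call site must dispatch to whatever it is currently bound to, so the compiler cannot bake in the builtin's semantics at the call site:

```python
calls = []

def frozendict(d):       # shadows the hypothetical builtin of that name
    calls.append(len(d))
    return dict(d)

a = {1: "1", 2: "2"}
a = frozendict(a)        # must call the shadowing function, not a builtin
assert calls == [2]
```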



