I'd love to have the time to examine codebases and get real data, but my strong hunch is that in Python and Ruby, most of the time every instance of a class has the exact same set of fields and methods. These languages pay a large performance penalty for all field accesses, to enable a rare use case.
I'm working on a little dynamically-typed scripting language and one design decision I made was to make the set of fields in an object statically determinable. Like Ruby, it uses a distinct syntax for fields, so the full set a class uses can be determined just by parsing.
My implementation is less than 5k lines of not-very-advanced C code, and it runs the DeltaBlue benchmark about three times faster than CPython.
People think you need static types for efficiency, but I've found you can get quite far with just static shape. You can still be fully dynamically dispatched and dynamically type your variables, but locking down the fields and methods of an object simplifies a lot of things. You lose some metaprogramming flexibility, but I'm interested in seeing how much of a trade-off that really is in practice.
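To make the payoff concrete, here's a minimal sketch (in plain Python, with made-up helper names) of what static shape buys an implementation: if the full field set is known at parse time, each field name can be assigned a fixed slot index once, and every access compiles to a plain array index instead of a per-access hash lookup.

```python
# Hypothetical sketch: FIELDS is what a compiler would build once,
# at parse time, from the statically known field set of a class.
FIELDS = {"x": 0, "y": 1, "z": 2}

def new_instance():
    # An instance is just a fixed-size array; no per-object dict needed.
    return [None] * len(FIELDS)

def load_field(obj, slot):
    # Field access is a constant-offset index, with no hashing at runtime.
    return obj[slot]

def store_field(obj, slot, value):
    obj[slot] = value

p = new_instance()
# The FIELDS lookup stands in for work the compiler does ahead of time;
# at runtime only the integer index survives.
store_field(p, FIELDS["x"], 3)
assert load_field(p, FIELDS["x"]) == 3
```

A dict-of-fields representation (what CPython does by default) has to hash the attribute name on every single access, which is where much of the penalty mentioned above comes from.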
__slots__ = ["x", "y", "z"]
I wasted an entire day because I set self.peices in one place and self.pieces in another, and couldn't figure out the bug until I used the "grep method".
Most classes have static shapes, so for the few types that are more dynamic, I think it makes sense for users to have to opt out.
This could trivially be implemented in a point release as an opt-in feature. The scientific community has paved the way with Cython, and Python 3's function annotations would be an excellent fit: a function could, e.g., promise not to change things dynamically, or guarantee that it only accepts certain data types, so that a particular list operation could run without checking for dynamic dispatch / duck typing, etc.
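Annotations by themselves enforce nothing, but they are already inspectable at runtime, so the "promise" could be checked (or, in a smarter runtime, used to skip checks). A toy sketch, with a hypothetical `enforce` decorator standing in for whatever mechanism the runtime would actually use:

```python
import functools

def enforce(func):
    """Toy decorator: read the function's annotations and check positional
    argument types at call time. A real runtime could instead treat the
    annotation as a guarantee and elide dynamic-dispatch checks entirely."""
    hints = func.__annotations__
    @functools.wraps(func)
    def wrapper(*args):
        for value, (name, expected) in zip(args, hints.items()):
            if name != "return" and not isinstance(value, expected):
                raise TypeError(f"{name} must be {expected.__name__}")
        return func(*args)
    return wrapper

@enforce
def total(xs: list) -> int:
    return sum(xs)

assert total([1, 2, 3]) == 6
```

This is the opt-in shape of the idea: unannotated code keeps full dynamism, and annotated functions trade a little flexibility for checkable (and in principle optimizable) guarantees.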