Hacker News
Using attrs for everything in Python (twistedmatrix.com)
246 points by StavrosK 474 days ago | 101 comments

While the conciseness of attrs is nice, it's not very readable.


    class Point3D(object):
        def __init__(self, x, y, z):
            self.x = x
            self.y = y
            self.z = z
...is much easier to read than this:

    import attr

    @attr.s
    class Point3D(object):
        x = attr.ib()
        y = attr.ib()
        z = attr.ib()
Readability is what I love about Python. I don't think the conciseness of attrs is worth the loss in readability.

According to the article, that's not really a fair comparison. The first example would need functools and at least 3 more methods to get to the same functionality as the latter example, at which point I think the former is less readable.

That said, I'm not a big fan of wrapping classes, or this library's weirdly-but-concisely-named functions. I guess I'll have to give this a try the next time I'm writing Python to see if it's that much of a boost.
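To make that comparison concrete, here is a rough sketch of the "full" manual version being alluded to: __init__, a repr, equality, and ordering via functools.total_ordering. attrs generates even more than this (e.g. __hash__), so this still understates the boilerplate.

```python
import functools

@functools.total_ordering
class Point3D(object):
    def __init__(self, x, y, z):
        self.x = x
        self.y = y
        self.z = z

    def _key(self):
        # Single source of truth for comparisons.
        return (self.x, self.y, self.z)

    def __repr__(self):
        return "Point3D(x={}, y={}, z={})".format(self.x, self.y, self.z)

    def __eq__(self, other):
        if not isinstance(other, Point3D):
            return NotImplemented
        return self._key() == other._key()

    def __lt__(self, other):
        if not isinstance(other, Point3D):
            return NotImplemented
        return self._key() < other._key()
```

With total_ordering filling in __le__, __gt__, and __ge__ from __eq__ and __lt__, this is roughly what the three-line attrs version buys you.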

Those 3 methods aren't at all necessary for most objects you'll create and use.

Also, I read the post and looked over the docs, and don't see any explanation of what `.s` and `.ib` are supposed to mean (though I can figure it out from context). Does anyone know what the letters stand for? Struct and instance b…???

> They are a concise and highly readable way to write attrs and attrib with an explicit namespace.

> For those who can’t swallow that API at all, attrs comes with serious business aliases: attr.attrs and attr.attrib.

I'm definitely in the can't swallow camp. There's no precedent for using attribute namespaces to name things with half the name on one side and half the name on the other side, so who the hell does this project think it is to start doing it? :) Seriously though I'm pretty sure I hate it.

And I'm having trouble liking the "proper" names. So we use a class decorator "@attrs" to declare "this class is going to use class attributes to declare the available instance attribute slots, a bit like django or schematics fields". Maybe it would have been better to call it "@attrs.model", since it is essentially a model definition in the terminology of other data modeling projects?

If you use `import ... as ...`, you can rename them to nicer names (full gist at [0]):

    from attr import (
        s as AttributedModel,
        ib as attribute,
    )

    @AttributedModel
    class Bar(object):
        x = attribute()
        y = attribute()

0: https://gist.github.com/gknoy/6e884fce3edda08a1566823fec37ba...

attr.s = "attrs" = plural of "attr"

attr.ib = "attrib" = alternate spelling of "attr"

It's too cute by half.

It's awesome that Glyph wrote this article, but so many people are getting stuck on the default names here (me too, I don't like them very much either) that it's really distracting the commentariat.

From the Attrs documentation:

(If you don’t like the playful attr.s and attr.ib, you can also use their no-nonsense aliases attr.attributes and attr.attr).

As I think others noted, the attr version does a lot more: it adds a repr and easily adds comparison methods, as shown in OP.

Aside from that, this is not a fair comparison because attr names can, and mostly should be, longer, and there can be more of them, e.g.:

    class SomeClass(object):
        def __init__(self, myattr1, some_val, a_bool, my_other_attr):
            self.myattr1 = myattr1
            self.some_val = some_val
            self.a_bool = a_bool
            self.my_other_attr = my_other_attr
And of course they'll need to go into the __str__ method as well.

It adds magic.

"Explicit is better than implicit"


I feel like a person should be able to back their opinion up (for a specific case, not in general), instead of just mindlessly quoting a line from a bible.

The magic-in-programming war is eternal, and which side you're on ought to just be added to the Myers-Briggs test.

See comment in this post in reply to Glyph. (And don't be a jerk.)

Then let's get rid of the with semantics too, because they add magic.


    with open("file.txt") as somefile:
        for line in somefile:
            print line
To the more explicit and clear:

    somefile = open("file.txt")
    line = somefile.readline()
    while line:
        print line
        line = somefile.readline()
But I'm still using some magic, I should be more explicit:

    def explicit_readline(fd):
        buff = []
        char = fd.read(1)
        while char != "\n" and char != "":
            buff.append(char)
            char = fd.read(1)
        return "".join(buff)

    somefile = open("file.txt", "rb")  # just to be sure now
    line = explicit_readline(somefile)
    while line != "":  # to be more explicit, of course
        print line
        line = explicit_readline(somefile)
And we could go on like this to replace open with the os module file-descriptor functions, and print with sys.stdout/stderr (because more explicit, right?). But even without getting there, it should be obvious that:

* The "explicit" version no longer behaves like the original one because, for example, explicit_readline only handles the *NIX newline character well. If I wanted to provide the same functionality as file.readline(), I'd have to add a lot more code.

I've reinvented the wheel for no good reason, and readability has suffered. It may also leave a lingering question in the mind of a reader: "why did he do that? Is there some edge case that wasn't well documented anywhere that he bumped into?"

* I've seen more contrived code do a version of this "more explicit" programming style, to the point of having statements like:

    def f1(number):
        number = int(number)
        return number & 0xff00  # or plug the number parameter in an equation, etc
(Not a verbatim example, but along those lines.) That code is redundant, and it will fail anyway if something other than an int/long/float is passed. It would certainly be saner to have an assert, or to simply specify in the docs that the function expects a number (which is implied in the args too).

My point was that "explicit is better than implicit" means "be clear about your intent". And if I see someone using the attr module (which comes with the standard library), and I'm not familiar with it, I'll read into it. And for its use case, I think it's clear enough in what it does, and how it should be used.

Oh c'mon Zoidberg, that's a strawman argument. "with" is a Python keyword and the statement syntax is in the grammar. I would expect any working Python developer to know what it is, and even understand how to write a context manager.

...wait wat? It's in the Standard Library? No, I just checked and it's not. Too bad, if it was "blessed" by inclusion in the library I would totally withdraw my objection on that grounds.

You had me going for a second there. ;-)

You make my point for me though when you say that you'd have to read up on it when you first encountered it. That's exactly my point:

The trade off is between one application of attr to the code base to save N time/effort for the current developer one time, versus every bit of lost time/effort of future devs taken to understand the attr package.

It's cute, but it's an in-joke. Don't put in-jokes in code you want other people to use, otherwise they have to "get the joke" before they can work with it. That's what I'm on about.

Weird, in Windows (Python 2.7.10) I have it without having ever installed it. Even pip uninstall didn't get it (just to make sure it wasn't installed from a dependency) but I can still import it.

On my Linux machine (Python 2.7.6) I can't import it without a previous install from pip install (that's reasonable).

Can't track down much about why this happens because googling "python attr" brings results about hasattr, getattr, etc, but few about the attr module.

It's not an "in-joke". It follows conventions set by the Python language:

- "def" instead of "define"

- "cls" instead of "Class"

- "sys" instead of "system"

- "os" instead of "operating_system_facilities"

- "ntpath" instead of "windows_path_names"

- "abc" instead of "abstract_base_classes"

Python aggressively abbreviates almost everything in common use. If it were in the stdlib, perhaps it would be the built-in names "attrs" and "attrib" rather than having dots in them, but the names would indubitably still be very short.

It's an "in-joke" in the sense that it doesn't make sense until you get the joke. There are people in this post discussion asking "What means .s and .ib?" It's not obvious to everyone.

>Then let's get rid of the with semantics too, because they add magic.

`with` just runs the __enter__ and __exit__ methods of a context manager. Nothing magical about it.
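For the record, the desugaring is easy to see with a minimal context manager (illustrative class name):

```python
class Logged(object):
    """A minimal context manager: `with` calls __enter__ going in
    and __exit__ coming out, nothing more."""
    def __init__(self):
        self.events = []

    def __enter__(self):
        self.events.append("enter")
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        self.events.append("exit")
        return False  # don't suppress exceptions

cm = Logged()
with cm as handle:
    handle.events.append("body")
print(cm.events)  # ['enter', 'body', 'exit']
```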

>My point was the "Explicit is better than implicit" means "be clear of your intent". And if I see someone using the attr module (which comes with the standard library), and I'm not familiar with it, I'll read into it.

You don't have to read into 'with' to make an intelligent guess at what the code does. It's meaningful even if you've never heard of context managers.

Unlike this.

I find the @attr example more readable.

I've been using Python for a while, but it's not my main development language. I mainly use it for scripting experiments, processing data and generating graphs. I generally avoid defining classes as much as possible, and use namedtuple whenever appropriate.

It feels a bit hampered by "bad" design decisions to me

I think it's safe to say that the current view of what decorators do is: a) filter things going to functions (so they error, or hit a cache), or b) register a function with another library (e.g. the Flask path decorator).

This library does a lot more than that, so it seems that maybe a metaclass (which exists to mutate the class's creation) would be more appropriate?

This seems to act as an __init__ replacement, but it would be cool for it to act as an __init__ assistant (in other words, run the attr.s __init__, then run the user-defined __init__).

The ordering of attrs matters a bit more than one would expect: not only does it define the argument order for creating a new instance, it also defines the order in which attributes are compared when sorting against other instances. You would not want an id=attr.ib() at the top of the class definition, where it makes the most sense!

The ordering of attrs is done through the .ib() call itself (each call takes a number from a global counter and increments it). This is one way of doing it, and practically it might be the best way (it won't lead to any inconsistencies within a class), but it still feels weird.
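The counter mechanism described above can be sketched generically (hypothetical names; this is not attrs' actual implementation):

```python
import itertools

# Global counter shared by all Field instances.
_counter = itertools.count()

class Field(object):
    """Each Field records the counter's value at creation time."""
    def __init__(self):
        self.order = next(_counter)

def ordered_fields(cls):
    """Recover declaration order by sorting Fields on their counter value."""
    fields = [(name, f) for name, f in vars(cls).items()
              if isinstance(f, Field)]
    return [name for name, _ in sorted(fields, key=lambda nf: nf[1].order)]

class Demo(object):
    b = Field()  # created first, so it sorts first
    a = Field()

print(ordered_fields(Demo))  # ['b', 'a'] -- declaration order, not alphabetical
```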

I personally won't use it, but I might make a competitor to it for personal use

Your view of what decorators do doesn't seem to reflect the usage I've experienced writing Python professionally over the last 10 years.

Decorators take an object and return an object (by convention a callable, although this isn't enforced in the slightest). Maybe they modify a function or method signature, maybe they produce an unrelated object, or maybe they augment the object by adding side effects like caching or logging.

There is a special syntax (@) legal for classes, functions, and methods that provides a shortcut for the common pattern of:

    def decorator(f): ...

    def foo(): ...
    foo = decorator(foo)
such that we can say:

    @decorator
    def foo(): ...
Things I've seen decorators do:

    * add logging
    * add tracing
    * enable type validation
    * register functions/objects for various reasons
    * produce full objects out of functions
    * swap out implementations for global functions/vars
    * change output formats to json/xml/yaml
    * take a class, inspect a database schema and bind various internal attributes to update items in a row for that schema (somewhat common in ORMs)
I don't see this attrs library as any more egregious than other uses. In particular, I don't see why mutating a type via a metaclass is preferable to using a decorator. I can offer that decorators generating objects compose much better than metaclasses do (at least they have in python 2.x, I haven't had a chance to use 3 professionally).

I know how decorators work and you might be missing the point I was making by a fair mile

The following don't mutate the mental map of how the function works; it's "this function is still what I typed": "add logging", "add tracing", "register functions/objects for various reasons".

The following act in a filtering role; the inside isn't "changed", and the mental map becomes "this function will operate within this feature in some way, but otherwise is still what I typed": "enable type validation", "change output formats to json/xml/yaml", "swap out implementations for global functions/vars" (assuming the implementations are comparable).

The following is "what I've typed is not a proper representation, and some things may act differently than usual" -- e.g. things that could take lists no longer can, due to needing to be serializable (and this is the least common decorator case):

"take a class, inspect a database schema and bind various internal attributes to update items in a row for that schema (somewhat common in ORMs)"

(and that is usually implemented via a subclassing of a Model class, which then in turn can implement a metaclass, to get the full mutate-class-on-creation funs, iirc)

- - -

> In particular, I don't see why mutating a type via a metaclass is preferable to using a decorator.

It's not a technical topic, it's a semantic one. While both can do whatever silliness they want, decorators are usually reserved for incidental tasks on the side and for wrapping the original function in some light filtering. Metaclasses are expected to mutate the class (hence requiring a metaclass), so semantically a subclass or metaclass would be more appropriate for this task than a decorator (from the standard library, Enums come to mind as an example of this).

Just a data point: MacroPy uses the decorator syntax for "case" classes that produce objects with a similar behavior to "attr.s" generated objects https://github.com/lihaoyi/macropy

A few weeks ago, someone posted an article where the author showed how they could "inline" functions by playing with the AST. That was done to increase performance for the case of very simple functions along the lines of:

    def mul_by_pi(x):
        return x * 3.14
(I'm sure there are better examples, but that was the idea). I've seen production code where methods and functions are used to provide bitmasking with a class-dependent mask, for example (where the mask is a const, not an attribute), which would benefit from inlining.

From what I've seen, it seems MacroPy could do that, but it's not a direct example anywhere, and it seems they haven't tested it. Has it been done before?

My other problem is that the client is extremely averse to adding any new dependency (which is understandable), but I might be able to convince them if I can show a significant speedup on inlined code. The previous post I mentioned isn't a viable alternative because only "toy code" (as the author said) was provided, and in my team we aren't knowledgeable enough in AST handling to feel comfortable hacking it together, but a more mature project like MacroPy definitely looks viable.


From the post, it seems to me like another (IMO better) alternative to this awkward attr-specific convention and syntax etc would be to create a factory function for dynamic class definition, in a similar vein to namedtuples.

This is functionally similar to the metaclass route, and to be honest I'm not sure what the tradeoffs would be. Either way, I think metaprogramming is a much better approach than attrs if the problem you're trying to solve is "generic container classes take too long to get set up".

I think, were I to write something like this, I would probably do it using a metaclass, and then inject an __init__ on the MRO between the programmer-defined __init__ and the actual super().__init__. That way, you could call super().__init__(arg1, arg2, arg3) for the automatic attribute setting. You could also hook this in to some metaclass-defined storage containers to automate the __repr__ implementation, etc.
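That metaclass idea can be sketched roughly like this (Python 3 syntax; all names here are made up for illustration, not from attrs):

```python
class AutoInitMeta(type):
    """Inserts a base class into the MRO whose __init__ assigns
    positional args to declared field names, so the programmer-defined
    __init__ can call super().__init__(...) for automatic attribute
    setting."""
    def __new__(mcls, name, bases, namespace):
        fields = namespace.get("fields", ())

        class _AutoInit(object):
            def __init__(self, *args):
                # The injected step: bind positional args to field names.
                for fname, value in zip(fields, args):
                    setattr(self, fname, value)

        # Slot _AutoInit between this class and object in the MRO.
        return super().__new__(mcls, name, bases + (_AutoInit,), namespace)

class Point3D(metaclass=AutoInitMeta):
    fields = ("x", "y", "z")

    def __init__(self, x, y, z):
        super().__init__(x, y, z)  # automatic attribute setting
        # ...any further initialization work goes here...

p = Point3D(1, 2, 3)
print(p.x, p.y, p.z)  # 1 2 3
```

A __repr__ could be hooked in the same way, driven by the `fields` tuple.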

There is one:

    >>> C2 = attr.make_class("C2", ["a", "b"])
    >>> C2("foo", "bar")
    C2(a='foo', b='bar')

While I view python metaclasses (metaprogramming in general) as a tool of last resort, I have to agree with you. I applaud the attempt - I think it fills an important gap between lists/dicts, namedtuples, and classes. Implemented as a metaclass, I think would be much simpler to get it accepted into core python with fewer backward compatibility issues.

>I think its safe to say that the current view of what decorators do is, a) filter things going to functions (so they error, or hit a cache), or b) register a function with another library (e.g. flask path decorator)

Why is it safe to say that? Aren't decorators just syntax for higher-order functions?

it's "safe" to say that, like it's also "safe" to say that <article> will be used to denote articles in HTML.

Yes, decorators can do whatever they want, but in practical usage I'd say the majority of decorator usages don't mutate the class/function significantly from its original typed-out intent (they don't often add 8+ methods to a class, for example, though of course they can).

not literally "safe" to say.

If you just want an object that stores a few attributes and you don't want to define a namedtuple, you can use types.SimpleNamespace in Python >= 3.3: https://docs.python.org/3.5/library/types.html#types.SimpleN...

It's a pretty unknown feature from the stdlib, but sometimes very useful.
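A quick illustration:

```python
from types import SimpleNamespace  # available since Python 3.3

point = SimpleNamespace(x=1, y=2, z=3)
point.x += 10            # attributes are plain and mutable
point.label = "origin"   # new attributes can be added freely
print(point.x, point.label)
```

It also gets structural equality for free: SimpleNamespace(a=1) == SimpleNamespace(a=1) is True.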

Would there be any value to allowing a convention for the __init__() method that allows direct assignment of arguments to attributes? For example, replacing the following:

    class Point3D(object):
        def __init__(self, x, y, z):
            self.x = x
            self.y = y
            self.z = z
with something like this:

    class Point3D(object):
        def __init__(self, self.x, self.y, self.z):
            # Other initialization work
It seems __init__ could be made to parse the list of parameters, and for any self.foo that's not defined, assign the corresponding argument to that attribute. But I have no experience in language design. Is there a reason this wouldn't work?
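As a rough approximation of that idea in today's Python, a decorator can inspect the parameter list and do the assignment automatically (a Python 3 sketch; the decorator name is made up):

```python
import functools
import inspect

def auto_init(init):
    """Hypothetical sketch: assign every __init__ argument onto self,
    then run the original __init__ body."""
    sig = inspect.signature(init)

    @functools.wraps(init)
    def wrapper(self, *args, **kwargs):
        bound = sig.bind(self, *args, **kwargs)
        bound.apply_defaults()
        # Skip the first bound argument ("self") and set the rest.
        for name, value in list(bound.arguments.items())[1:]:
            setattr(self, name, value)
        init(self, *args, **kwargs)
    return wrapper

class Point3D(object):
    @auto_init
    def __init__(self, x, y, z):
        pass  # other initialization work

p = Point3D(1, 2, 3)
print(p.x, p.y, p.z)  # 1 2 3
```

Baking this into the language's own parameter syntax, as proposed above, would need grammar changes; a decorator only gets you the runtime half.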

CoffeeScript has a feature like that, where a constructor can call parameters `@whatever` and they will be assigned to the object. For all the ways CoffeeScript borrows from Python, but somehow makes it worse, that feature is actually very nice/useful (and not super confusing, either).

you can just do

  class B:
      def __init__(self, a, b, c):
          self.__dict__.update(locals())

You really, really, really shouldn't do this under any circumstances, though. If you find yourself messing with __dict__ you're down a hole, if you find yourself mutating it you're down an even deeper hole, if you find yourself calling locals() you probably forgot your shovel, and I'd hope that for all of that you'd have a better reason than minimizing code (that's way less readable, for example). I immediately get suspicious when I read code that interacts with __dict__ and I'm only sympathetic if you're cooking a metaclass or something equally arcane.

__slots__ also makes __dict__ nonexistent.

Well, yeah, but you could implement it without __dict__ just by doing something like:

  for attr, val in locals().items():
      setattr(self, attr, val)
It still feels pretty un-Pythonic either way.

Because it is. There is zero reason to be clever here. Yeah, you have to be explicit in __init__. We are all used to it. The "a," "b," "c" examples are disingenuous though because you're often validating, enforcing types, converting, deriving other attributes...

__init__ is documentation. I read it to understand what's going on.

That makes B.self a field too, which you probably don't want.

Especially because that creates a circular reference and annoys the garbage collector.

yeah I know, I was wondering if anyone would notice

"Another place you probably should be defining an object is when you have a bag of related data that needs its relationships, invariants, and behavior explained. Python makes it soooo easy to just define a tuple or a list."

Yes, and defining a tuple or a list is often better than a small object because of Python's problems with serializing (pickling) objects. If you're doing anything with data, anything functional, anything with multiprocessing, anything with distributed programming -- you want to avoid using objects as small bags of data. (Namedtuples should have been a way to improve this, but they are implemented as a class, so they have serialization issues too.)

Lists of data can be serialized easily. Lists of objects cannot. dill tried really really hard to fix this, but the problems that remain are problems created by the Python language spec.

None of this is a problem with attrs. Serialize however you like.

    >>> import attr
    >>> @attr.s
    ... class Thing(object):
    ...     a = attr.ib()
    ...     b = attr.ib()
    >>> @attr.s
    ... class Many(object):
    ...     things = attr.ib()
    >>> many = Many([Thing(1, 2), Thing(3, 4)])
    >>> many
    Many(things=[Thing(a=1, b=2), Thing(a=3, b=4)])
    >>> attr.asdict(many)
    {'things': [{'a': 1, 'b': 2}, {'a': 3, 'b': 4}]}
    >>> import pickle
    >>> pickle.dumps(many)
    >>> import json
    >>> json.dumps(attr.asdict(many))
    '{"things": [{"a": 1, "b": 2}, {"a": 3, "b": 4}]}'

...how does it do that?

You call attr.asdict() and it searches the attribute values, including inside lists, for more attr objects to convert into dict values?

What kinds of values does it search through? The documentation doesn't say, it just gives an example where it works inside a list for some reason.

Why would it need to recurse? The method probably just returns a copy of the instance's __dict__. Maybe updating it with the __slots__ and their values.

It would need to recurse because you want something you can encode in JSON, not a dictionary containing a list containing miscellaneous instances.

We have an example there of an 'attrs' instance, containing a list containing 'attrs' instances, and all of the instances turn into dictionaries.

That's a problem with pickle, not namedtuple. I assume you're complaining about not being able to pickle an instance of a locally defined class?

This is how I develop in Python. I do not use this helper library (and will certainly look into it), but definitely wish that more Python hackers built software in this way. Passing plain-old classes around that are very small and close to their domain is hugely valuable. Adding convenience methods or synthetic properties with a @property decorator is also hugely important. It lends itself to a very nice DSL where, for example, you can have image.png where the png property is a literal PNG representation of the image.

Haven't finished reading yet but I just wanted to say that this _really_ resonates with me...

> You know what? I’m done. 20 lines of code so far and we don’t even have a class that does anything; the hard part of this problem was supposed to be the quaternion solver, not “make a data structure which can be printed and compared”.

YES. I don't know much about attrs, but if it helps solve this, I'm in.

The problem with constructs like this one is that it looks nice at first glance, but adds a cognitive load to an ecosystem that already has about "20 ways to do it" for every single problem.

Python is beginning to rival C++ in that respect.

Please don't tell the C++ community that! We really don't want to encourage them.

To someone who uses Python every day, the extra cognitive load of `attr` is more than made up for by all the boilerplate it eliminates. But for beginners, behind-the-scenes magic is a real turn off.

I think that making `attr` an optional library is a fair compromise, but perhaps it would be a good idea to add a note to the library reminding the programmer not to use it if the code is likely to end up in the hands of novices.

>To someone who uses Python every day, the extra cognitive load of `attr` is more than made up for by all the boilerplate it eliminates.

That's funny because the first thought I had when reading this was "this does not eliminate nearly enough boilerplate to justify using that much magic".

>But for beginners, behind-the-scenes magic is a real turn off.

It's a turn off for beginners and experts but a turn on for intermediates.

While I do see the usefulness for the specific example, and while it does seem to reduce boilerplate, which is really nice, how useful is it outside of the Point3D example? For example, how often do you actually do gt/lt comparisons between instances, or test them for equality, when they're not mathematical constructs like this? And if you add an attribute that shouldn't be used in the repr or in comparisons, you have to explicitly say that in the attribute declaration, meaning it would be much less clear what's going on down the road -- as in, it won't be apparent how the comparison works or why an instance is being printed the way it is -- which I don't like in a language like Python. It almost seems like you're trading language-level boilerplate for this library's boilerplate, and while it may be less with this library, it obfuscates what's going on at the expense of more explicit code.

I can kind of understand something like this in Ruby, as it kind of encourages this (and it also has nicer class string-ification by default), but in Python I'd rather have all of my logic explicitly stated, even at the cost of more boilerplate. Also, how often do you really need to override those gt/lt methods in production code? If I saw that, rather than an explicit e.g. `.gt` method, I'd consider it a code smell.

In the end, this is about defining a typed structure. "@attr" plus validator constraints is effectively a typed structure.

Python got further with minimal declarations than any other mainstream language. Yet there's a demand for typed variables. This is going into future Python, although in a strange and clunky way, because types don't fit the syntax.

We seem to be converging on a consensus on when to declare types. The breakthrough was "auto", in C/C++, which declares a variable as the type of the value to which it is initialized. This is the default in Go and Rust. Function formal parameters and structure slots have explicit type declarations, while local variables are usually implicitly typed.

Python seems to be moving in this direction, but from the no-type-declarations direction. Unfortunately, because the language itself doesn't do this well, it's being kludged through decorator macros.

The latest version does allow you to annotate variables with types... Not sure it's doing anything with them yet.

It's not. See [1]. The retrofitting of type declarations to Python is taking a very strange path.

(Evil interpretation: this is an effort by Python's little tin god to sabotage PyPy. Type information is a huge win for a compiler; it can now generate hard code for a specific type. It doesn't help CPython much, since inside CPython everything is a CObject. With type information, one could write things like NumPy, with its arrays of uniform type and functions which operate on them, in Python. The PyPy compiler could then generate efficient code for them. This would make PyPy the primary Python compiler. But if type annotation isn't enforced in CPython, code won't port reliably to a compiler which enforces it.)

[1] https://www.python.org/dev/peps/pep-0484/

How is this "sabotaging" PyPy? Also, even with type declarations, I'm not sure you could get Numpy-esque performance, given that Numpy doesn't have to dereference every element in the array.

If the compiler knows a Python array is uniformly of type float, it can, and should, use a dense array of floats to represent it.

Is this safe in all cases? What about complex types? Specifically if you assign a variable to an element in the array, and then replace the element in the array? You'd have to take care not to update the value of the variable.

Assigning a variable to a destination declared to be an incompatible type should produce an error at compile time if possible, and at at run time otherwise. That's what type declarations are all about.

I think you misunderstand. Here's an example:

    foos = [Foo(a=0), Foo(a=1)]
    f0 = foos[0]
    foos[0] = Foo(a=2)

    print(f0.a)       # 0
    print(foos[0].a)  # 2

`foos` is a list of type `Foo`, but it still can't be safely made into a dense list (at least not naively).

You are right. Array elements need to have value semantics to do complete unboxing. But for NumPy-like use cases, they do.

It should be noted that PyPy already does the unboxing Animats suggested, without any type annotation, and has for at least 4 years. It is not theoretical. https://morepypy.blogspot.com/2011/10/more-compact-lists-wit...

Python's arrays are already typed, and have been for many years. Python's lists may hold arbitrary objects and would be much harder to reason about in this context.
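For reference, the stdlib array module enforces its element type at the boundary:

```python
from array import array

floats = array("d", [1.0, 2.0, 3.0])  # densely packed C doubles
floats.append(4.0)
print(floats.typecode, list(floats))

try:
    floats.append("not a float")      # wrong type is rejected up front
except TypeError:
    print("rejected")
```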

This seems nice, but other than easy destructuring of functions that return long tuples (which really should return dicts), I cannot figure out a good use case for it over using a dictionary.

I am keen to hear others' opinions.

You'd have to expand more on your proposal for using a dictionary for people to be able to reply. The article was full of use cases, so why not address those rather than simply state that you can't think of any?

When you say use a dictionary, do you mean subclass `dict`? One of the aims of the project is to constrain the available instance attributes, as in a normal class. How would your dict proposal do that?

In that case, I'd just deal with the overhead of a normal class. More boilerplate, of course, but I'd have more control over the class.

As to the use cases, I was not really buying into them, other than the destructuring example, which is actually a very nice one.

I think the point is that this is basically a normal class, but one with the boilerplate already written for you. If you don't mind boilerplate, this library does nothing for you. If you don't want to write classes and think dicts are good enough, this library does nothing for you.

If you do want to have classes but don't want the boilerplate that comes with making a basic-but-full-featured class, then that's where this library comes in.

Dictionaries are not for fixed fields.

If you have a dict, it maps something to something else. You should be able to add and remove values.

Objects, on the other hand, are supposed to have specific fields of specific types, because their methods have strong expectations of what those fields and types are.

attrs lets you be specific about those expectations; a dictionary does not. It gives you a named entity (the class) in your code, which lets you explain in other places whether you take a parameter of that class or return a value of that class.

A class with more methods than the Point3D example.

Hmm, IDK.

Stuff like @attr.ib is kind of cute-looking. (This is not a good thing.)

Though in reality, I'd probably want more explicit methods. If you are using something like Django REST framework, you can validate with the serializer. And sometimes (often) you want to validate the combination of variables and how they interact.

Kind of feels a little non-pythonic to me. Clever, but the ways it is changing things are stuff you have to unroll if the behavior grows, versus stuff you just add to.

From the docs:

(If you don’t like the playful attr.s and attr.ib, you can also use their no-nonsense aliases attr.attributes and attr.attr).

I would consider that a negative.

> There should be one-- and preferably only one --obvious way to do it.

What's going to happen when all professional code is using `attr.attributes` but all the documentation uses `attr.s`?

Anyway, it's a minor point. I personally think it is a poor decision when a programmer decides to be cute rather than clear.

https://xkcd.com/927/ comes to mind when I read this.

  I love Python; it’s been my primary programming language for
  10+ years and despite a number of interesting developments in
  the interim I have no plans to switch to anything else.
In the original post, "interesting" and "developments" link to Haskell and Rust respectively.

"A History of Haskell: Being Lazy with Class" [0] by Paul Hudak et al. suggests that it emerged before 1990. See http://haskell.cs.yale.edu/wp-content/uploads/2011/02/histor... for the page in the report showing the Haskell timeline.

[0] http://haskell.cs.yale.edu/wp-content/uploads/2011/02/histor...

For lightweight containers that improve on the built-in namedtuples consider using the excellent boltons library's namedutils: http://boltons.readthedocs.io/en/latest/namedutils.html

This feels a little bit like Lombok in the Java world. Can anybody with experience with both comment on that?

We use Lombok a fair amount within our code to abstract away a lot of the class setup, which has been nice. But as I move more towards the Python world, I'm interested to see how attr fits in.

Lombok is kind of a bigger commitment than attrs, because it interfaces with Java compiler internals (last time I used it, it was only possible to compile with the Sun/Oracle JDK).

attrs seems like a lesser risk IMO; it's just a plain Python library, no other deps.

If you like Lombok, I'd expect you'll love attrs. :)

How does this compare to schematics? It avoids inheritance, whereas in schematics and similar projects you'd have to inherit from their `Model` class (and thus acquire methods you might not want).

Until your comment, I had never seen schematics. I poked around a bit and my initial impression is that it seems like marshmallow: it's heavily focused on validation rather than reducing boilerplate.

Schematics looks a bit like a less elegant version of schema. It does seem more focused on validation than attrs, I agree.

Schema [1] does not appear to have any support for object initialization.

[1] https://pypi.python.org/pypi/schema/0.6.2

The short names confuse me a bit on this.

Hello, Hynek! Thanks for attrs, wonderful library!

I'd suggest to market the "serious business aliases" more prominently. It's not about aesthetics, but expectations: people like me are immediately puzzled by "attr.ib()", thinking "why ib?".

The reason is that after reading a lot of Python code, our brain is already trained to recognize attribute and method names after the dot.

Also, it's a reasonable expectation to be able to import a function or submodule and have the code still make sense, but this won't:

  from attr import s, ib

  @s
  class Thing(object):
      x = ib()

I understand it looks such a small thing for you and others already used to this DSL, and also that it's not the most important technical aspect of the library. However, I do think it's an important human aspect of it.

I have the feeling that by making the "no non-sense" alternative more prominent in the examples and documentation, it would reduce cognition steps for newbie users and maybe make your own life easier by not having to explain and paste this link every time someone finds it odd (which will probably continue to happen).


Hello, thank you for the rare nice words in this thread!

I got hit a bit by surprise by the submission of this article to HN (it’s like two weeks old now), just before a new release, so there’s some inconsistency between the GitHub README and the docs on RTD.

The version that’s on PyPI now <https://pypi.org/project/attrs/> has it as the first sentence after the first code sample anyone ever sees (and people still complained because they just don’t read :() and the new version <https://attrs.readthedocs.io/en/latest/> has a whole section around the example explaining attrs’ scope and ib/s: <https://attrs.readthedocs.io/en/latest/overview.html> (the new version also adds and promotes aliases that make more sense, but that’s beside the point).

I feel like I’ve really done my due here and I’ll have to accept that I can’t make everyone happy. :|

I'm glad they recognize it, but I honestly don't even like the expansions.

I can't find it at the moment but wasn't there another solution to this problem where you could just populate an object at will (like matlab's struct for instance)? IIRC it was dict-based and allowed something like

    foo = SomePythonWizardry()
    foo.x = 1
    foo.a.b = 2
    print( foo ) #yields e.g. foo.x = 1, foo.a.b = 2
Or am I just dreaming? It would be all the goodness and none of the weird syntax.

You're probably thinking of jsobject¹:

    from jsobject import Object
    foo = Object()
    foo.x = 1
    foo.a = {}  # foo.a.b will fail without this line
    foo.a.b = 2
    print(foo)  # prints: {'a': {'b': 2}, 'x': 1}
¹ https://pypi.python.org/pypi/jsobject/

I think if you import mock you can create mock.Mock() instances that allow you to assign fields that way.
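
Something along these lines, for instance (Mock auto-creates intermediate attributes on access, which gives roughly the free-form assignment the parent comment is after, though it's really meant for testing):

```python
from unittest.mock import Mock

foo = Mock()
foo.x = 1
foo.a.b = 2   # foo.a springs into existence as a child Mock
print(foo.x, foo.a.b)  # 1 2
```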

Added to the Python OOP catalog: https://github.com/metaperl/python-oop

attrs seems to be all over the place, but I find Schematics (http://schematics.readthedocs.io/en/latest/) much better designed and more powerful.

Yes, Schematics is impressive. Thank you, I added it to my list of Python OOP extensions [1]. I wonder why you need to supply a dictionary to the object instead of keyword pairs.

[1] https://github.com/metaperl/python-oop

> I wonder why you need to supply a dictionary to the object instead of keyword pairs.

My understanding is that the main reason is to make constructors more flexible, with arguments like _raw_data_, _trusted_data_ etc. But it is trivial to write a simple helper function that would accept keyword pairs and build a model instance.

This is wonderful and cute as hell and please PLEASE don't ever use it in production code. ;-)

Why not? It's aggressively tested and benchmarked to ensure it's correct and its performance impact is as close to zero as possible. (It's pretty close.)

tl;dr: I don't like metaprogramming.


Both of those things are very good, even great, but they don't speak to the issue: It's [bad] magic.

In this case, it does at module-load-time what should have been done previously at build-time. (And yes, I know Python normally doesn't have build-time the way e.g. C does.)

I'm working on a large production codebase right now where several of the cowboy-coders here have added all sorts of crazy metaprogramming and it's a PITA.

Let me tell you a story. We had a class decorator that introspected the attributes of the class and added "constants" to the enclosing module providing named string values for each of the class's attributes. It lets us access dict versions of these objects (they're data models) like foo_as_dict[foo_module.SOME_ATTR_NAME_BUT_IN_CAPS] ...

Good luck finding foo_module.SOME_ATTR_NAME_BUT_IN_CAPS in foo_module.py, though, because it's not there. (And notice that if the model field name changes, ALL of your uses of the "constant" have to be refactored to the new name too, so it's not really as helpful as it might seem in the first place.)

And now every new developer has to ask, "Where are these coming from?" and we get to point out the magic extra-clever decorator. (Try to imagine the horrible wonderful "voodoo" that decorator encompasses... Imagine the innocent newbie encountering it.)
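
To make the anti-pattern concrete, here is a hypothetical reconstruction of such a decorator (all names invented; this is not the actual code):

```python
import sys

def export_field_names(cls):
    # Introspect the model class and inject ALL-CAPS string "constants"
    # into the class's module -- where no reader will ever find them.
    module = sys.modules[cls.__module__]
    for name in vars(cls):
        if not name.startswith("_"):
            setattr(module, name.upper(), name)
    return cls

@export_field_names
class FooModel(object):
    some_attr = None

# SOME_ATTR now exists at module level despite appearing nowhere in the file.
print(SOME_ATTR)  # "some_attr"
```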

We have a "service builder" thing that takes a bunch of arguments and builds a web service object in memory at module-load time. No source for any of that machinery. A bug in the service gives you a traceback running through a bunch of weird generic meta-code.

Did I mention the guy who originally wrote it bailed to a new job a month after he finished it? No one else knows how it works. We can read the code and reverse-engineer it, of course, but that's kind of B.S., no? A junior dev could debug the real service code if it existed, but the meta-code that generates the services is another challenge (one they shouldn't have to face, and the company shouldn't have to pay them to overcome).

We had a unittest that read a JSON file and built a test suite in memory to run a bunch of other tests. They finally let me rewrite that as a build-time script that scans the exact same JSON files and emits plain-vanilla unittest modules, which can then be run as normal, so the dev/user can look at the actual code of the test rather than the meta-code of the test-builder.

The same problem applies to this attr package.

If you're trying to trace into or debug code that uses attr you've got to know attr at least enough to be able to follow what it's doing. It's an in-joke. ;-)

(P.S. I'm a big fan dude. Nice to interact with you. All respect.)

The same problem does not apply.

I understand that spooky action-at-a-distance magic can really ruin a codebase's maintainability. But what you've developed here is not a judicious appreciation of its danger and its power, but a blanket aversion to both its risks and its benefits.

An apt metaphor would be, let's say: toast. If you try to make some toast, but accidentally burn your house down, it's understandable that you might want to have your bread un-toasted for a while. But it doesn't make sense to switch your diet to be entirely raw as a result, especially if you eat a lot of chicken.

In Python, metaprogramming is like fire. People who haven't seen it before are fascinated by it, and try to touch it. This always goes badly. But that doesn't mean it's useless or that we shouldn't have it. There are many things it's good for, if it's carefully and thoughtfully applied.

attrs goes out of its way to avoid any behind-the-scenes effects. It even generates Python code, rather than using setattr(), so that if any error happens in the constructor, you can step through it normally in the debugger, and see it in the traceback. Its semantics are clear and direct. It doesn't have these problems.
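
A toy sketch of that code-generation technique (not attrs' actual implementation): build the `__init__` source from a field list and exec it, so the constructor is real Python code rather than a loop of setattr() calls.

```python
def make_init(fields):
    # Generate real Python source for __init__ from a list of field names.
    args = ", ".join(fields)
    body = "\n".join("    self.{0} = {0}".format(f) for f in fields)
    src = "def __init__(self, {0}):\n{1}\n".format(args, body)
    namespace = {}
    exec(src, namespace)
    return namespace["__init__"]

class Point3D(object):
    pass

Point3D.__init__ = make_init(["x", "y", "z"])

p = Point3D(1, 2, 3)
print(p.x, p.y, p.z)  # 1 2 3
```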

I like toast, and a toaster is better than a folded wire clothes hanger perched over the stove burner, but this robot toast-o-matic is too fancy for my kitchen at work. We gotta toast that toast and ain't nobody got time to debug some fancy toaster.

:-) Metaphors. Heh.

Let me reiterate the first part of my statement above, to wit: "this is wonderful and cute as hell".

I like it.

I could use it if it generated complete code for the classes you define. That would actually put to rest my "metaprogramming BAD" problem in this case.

Tucking the __init__ source into linecache for the debugger is neat, but it's also more magic. You might wind up stepping through code that doesn't exist in any file. Is this wonderful hacky weirdness mentioned in the docs? I didn't see it.

And what about __repr__? No precomputed string template?

Don't imagine that I'm crouching in the dirt tentatively reaching out to the monolith of metaprogramming. I'm into it. I read GvR's "The Killing Joke" for fun. But he called it that for a reason. :-)

I still recall a junior dev slamming into a wall trying to extend a Django HTML form widget. The reason? The cowboys over at Django made it with a metaclass.

We should look at a problem and think, "What's the simplest thing that will work?" Or, more to the point, "Does this problem require metaprogramming to solve?" ("or am I just showing off?" could be the rest of that question...)

HTML form widgets don't require metaclasses. The problem attrs solves doesn't require metaprogramming. It's a simple DSL for data models. Right? Riiiight?

If attrs took in a spec (as Python code or whatever) and emitted all your boilerplate, ready to go, I would love it so much. You would get all of the benefits without any of the downside. I know that's boring and not sexy or fun, but I just want to get this bug fixed and ship it and get on with my life. I'll play with attrs at home, on my own time, but I'm sick of magic at work.

(Damn, am I really that old? heh)

And, thanks for the P.S. Always nice to see people enjoy my work :)

Cheers! :)
