Why Pylint Is Both Useful and Unusable, and How You Can Actually Use It (codewithoutrules.com)
107 points by itamarst on Oct 20, 2016 | 90 comments



FWIW, I like pyflakes because of its design philosophy:

""" Pyflakes makes a simple promise: it will never complain about style, and it will try very, very hard to never emit false positives.

Pyflakes is also faster than Pylint or Pychecker. This is largely because Pyflakes only examines the syntax tree of each file individually. As a consequence, Pyflakes is more limited in the types of things it can check. """


I love Pyflakes. And when I do care about style, I like Flake8 – it has sane defaults and lets me override them on a flexible basis.


I have pyflakes as a vim plugin, but it is not enough on big projects, as it will not catch import errors (which are the most common code-breaking errors).


Yep pyflakes is fast, but the tradeoff is that it does relatively little and can't do cross-file inspections or anything somewhat complex.

It's also even less well documented; I have no idea if you can even create a custom flake. Pylint's got the advantage that, as badly documented as it is, creating custom lints is supported and normal.


Agreed. I've found pyflakes easier to work with than pylint in terms of setting a minimum standard to enforce (i.e. running it in CI and using it to mark builds as unstable if there are any warnings/errors). Pylint warns about various things that arguably aren't actual problems but might be good to clean up in some cases. Yes, most of these can be disabled by configuration. On the other hand, if you see pyflakes output, it tends to indicate something is unambiguously wrong.


> The real problem is the amount of output.

That's a nonsense argument. The output is proportional to the number of violations. If you keep your project clean, then you'll have a report of about 100 lines showing the statistics, with a one-line message, "your code has been rated at 10/10", at the end.

Of course, if you run it against a project that is full of violations, you'll get lots of violation messages. If you use a linter or style checker or whatever on a project that is full of violations, and it doesn't produce lots of output, then the tool is rubbish.

The right thing to do is to use it from the start and either fix or explicitly ignore any violations as they come up.

The approach described in this blog post is good for existing projects. However, the linked examples disable all warnings and explicitly enable those that the author has deemed important. It would be better to explicitly disable unimportant ones.


But having classes without an __init__ method is not a violation. (Amongst several other warnings, like number of methods, size of methods, etc.)

Don't get me wrong, pylint is great, but at some point it seems those tools begin to get opinionated instead of helping


It's a violation of someone's coding standard; if it's not a violation of yours, then add it to your exclusion list. The point of the __init__ warning is that a class without a constructor might not really be a class at all.

Number and size of methods are both code smells that indicate you might be creating God Objects or God functions. If there's a good reason for an enormous function, then violations can be ignored on a case, project, or user basis. I'd prefer a tool that can warn about that sort of thing to one that can't.

Most of the limits are configurable, and set to reasonably generous defaults. Methods per class is 20 and statements per method is 50.
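
For concreteness, here is a minimal sketch of how to silence those design checks case by case rather than globally, using pylint's standard inline pragmas (the class and function names below are made up, and the check ids are from pylint of this era, so verify them against your version):

    # Per-definition pragmas keep the checks enabled everywhere else.
    class ReadOnlySettings(object):  # pylint: disable=no-init,too-few-public-methods
        """Plain attribute bag filled in by a config loader; no __init__ needed."""
        DEBUG = False
        TIMEOUT = 30

    def migrate_legacy_records(rows):  # pylint: disable=too-many-statements
        """Deliberately long one-off migration, reviewed and accepted as-is."""
        for row in rows:
            pass  # imagine fifty-odd statements of straight-line conversion here

The project-wide knobs (max-public-methods and max-statements in the pylintrc) are what set those defaults of 20 and 50.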


This seems like one of those articles where nobody seems to have read past the first couple of paragraphs. It doesn't matter how wonderful the warning about things missing __init__ is if it's buried in a pile of other noise. The noise is the problem, not the specific warnings.

And the author has already provided a constructive solution to the problem other than "not linting", which is to start with a whitelist and slowly crank the linter up, providing instant value and a path to move forward to obtain incrementally more value over time, instead of trying to crank the linter down, which means a lot of work before you're provided with any value, followed even then by, in the real world, probably staring at the list of issues, getting overwhelmed, and deciding the code seems to be working fine the way it is so why bother?

(Remember, we are not superhumans. None of us respond to a list of 20,000 style issues by dropping everything and fixing them all. This is not even wrong; we can not operate like that, so there's no point in pretending that we either do or should.)


First thing I do with a legacy code base is to commit it to git, then pass it through pep8 and pylint to get a sense of its level of dirtiness, then run autopep8 to fix all that can be autofixed. Then commit the autofix. Then manually fix the remaining pep8 warnings after muting the ones that are too numerous. Then run pylint on it. I'll mute some of the most frequent violations and fix the others, so as to have pyflakes and pylint run with zero warnings. Then hook the linters into CI, so the dirtiness does not come back in through the window.

In this process, I may exclude a folder or two if they are non-vital and have a lot of dirty files. I think I have fixed code with nearly 10,000 violations this way; it takes 3 hours max if you use an efficient editor. Then the code is in some shape that you can start working on.


If you have 20,000 style issues, spending an hour to make a .pylintrc that matches your style spec will take care of all of those. And once done, it can be shared with all of your coders. It's not really that onerous.

Pylint is also best used in tandem with an editor or IDE, so you can get live feedback.


I hope you're not suggesting I didn't read past the first couple of paragraphs. The last line in my original post explicitly refers to the antepenultimate paragraph in the article.

The author's problem comes directly from not using pylint, then introducing it to a large, established project.

You simply can't get a list of 20,000 style issues when you first run this kind of checker, if you use it from the start of your project.

If you introduce lint later on, the proposed approach of whitelisting and ramping up is OK, and probably a good idea to start with if you are really swamped, but once you have done a bit of ramping up, you should switch to blacklisting and ramping down. IMO, unless you have tonnes of violations evenly distributed across all the different types of violations, then blacklisting and ramping down is a better approach.

Blacklisting is better than whitelisting in this case for a few reasons:

* By disabling all, you will miss out on any new rules you didn't know about (e.g. from plugins or new versions of pylint)

* Whitelisting requires you to think about all the possible violations, and work out if they matter to you; Blacklisting allows you to just look at the individual high-frequency violations in your codebase, and work out if those particular ones matter to you.

* Blacklisting allows you to knock off the low-hanging-fruit where there are just a few instances of a particular violation, and keep that aspect clean as you go. Whitelisting ignores anything you have personally deemed unimportant, possibly saving up a big cleanup for later.

Looking at one of the linked examples - there are disabled warnings that can introduce quite subtle bugs and maintainability problems (e.g. dangerous-default-value), some that are rarely seen and simple to fix (e.g. unnecessary-semicolon), some that are syntax errors in more recent Pythons (e.g. duplicate-argument-name). Even in an older project, nothing is gained by disabling these warnings and you just end up with noise in your pylintrc file.

Pylint produces a report that shows the counts of the different violations, in frequency order. Start at the top and work your way down to either produce a blacklist or fix the problems.
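
To make the two approaches concrete, this is roughly what each looks like in a pylintrc MESSAGES CONTROL section (the particular message names are only examples, not a recommendation):

    [MESSAGES CONTROL]
    # Whitelist style (the article's approach): turn everything off, opt checks back in.
    disable=all
    enable=undefined-variable,unused-variable,dangerous-default-value

    # Blacklist style (use instead of the above): leave everything on, opt checks out.
    # disable=missing-docstring,too-few-public-methods,invalid-name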


> is that a class without a constructor might not really be a class at all.

Yes, but there are several cases where this doesn't matter. Exceptions for example.

> Number and size of methods are both code smells

Yes, but Pylint's default values come from where? It's pretty easy to trigger them without being a God object.

20 methods is fine; 50 lines is what fits in my terminal, and a lot of functions don't fit that (especially with blank lines, splitting lines bigger than 80 chars, etc.).


> Exceptions for example.

Exceptions are not an example. They should inherit from BaseException (PEP-352). BaseException has __init__, therefore any Exception has __init__ by inheritance.

> 50 lines is what fits in my terminal,

50 statements, not 50 lines. Blank lines aren't statements, splitting a long line doesn't turn it into two statements. It's pretty rare to have a 50 statement method that doesn't violate Curly's Law. Personally, I'd make it much smaller. Depending on how much commentary and whitespace you put in, you could have 150 or more lines in a 50 statement function.

> Yes, but Pylint's default values comes from where?

It doesn't matter, pylint isn't an academic paper that requires every assertion to be backed up by experimental data and citations from peer-reviewed journals. It's a handy and easily configurable tool. I would guess that the numbers came from a couple of pylint developers having a conversation like "50 statements is pretty big for a function isn't it? Yes, let's make that the default limit." Were it hard-coded as a magic number deep in the pylint code, then perhaps they might need a bit more experimental validation behind those numbers.


I'm also not a fan of pylint - and I just ran it against a small project, 812 LOC of Python.

The code was developed with the help of the 'pep8' tool, which only finds 12 offending lines we declared OK (all too long, but it's SQL where formatting for readability is more important than 80 chars...), and although pylint gives it a 7.26/10 there's a TON of errors.

So yes, I could now spend a while blacklisting some rules, fixing some stuff, but the defaults are just so bad I won't bother, as usual. Because I think the shape of this code is pretty good.


I beg to differ; to me Python (and JS) are tolerable languages only when they are controlled with linters and automated tests. And pylint is still the best linter for big Python projects. It is fully configurable and each and every default rule is interesting per se. For example, a class with no `__init__`, no or too many attributes or methods, is a potential code smell and candidate for refactoring.


Pylint complains about:

    foo = functools.partial(bar, 'baz')
because it thinks foo is a global constant and thus should be upcased. It can DIAF.


Are you not in the module's global scope? If you are, pylint is right and you are wrong: you need to move this line inside a function or a class. There should be no code running at module level. Here you might argue that this is akin to a decorator, and that bar will not be called, so yes it is a corner case, but the main rule (only allow consts at module level) is still useful, and I'd say vital even.


This is not 'akin to a decorator', it's a partial function! Do you name your functions:

    def THIS_IS_IN_GLOBAL_SCOPE(): ...

Of course you don't! Then why would you consider forcing someone to write this:

    def bar(spam, eggs):
         print spam, eggs
    
    def foo(eggs):
        return bar('baz', eggs)
    
Which even manages to lose information, hence why functools.partial exists in the first place.

This is bad because readers now see FOO and think it's a constant:

    FOO = functools.partial(bar, 'baz')


This is exactly why it has ignore flags. Remember Python's attitude is "explicit is better than implicit": if you know what you are doing, you just put a flag over it. pylint is trying to catch the cases where a person didn't know what they're doing. I'd rather have to add a flag than have it miss those.
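
In this particular case the flag is a one-liner; something like the following, assuming the naming check is the one firing (check name from memory, so verify against your pylint):

    import functools

    def bar(prefix, value):
        return prefix + str(value)

    # pylint treats module-level names as constants; tell it this one is deliberate.
    foo = functools.partial(bar, 'baz')  # pylint: disable=invalid-name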


I don't know any other linter for any other language that complains about too many methods (or too many lines in a method, or too-long variable names, etc.) in a class. I should know my domain better than pylint.



It's not specific to a language. Those are all code complexity measures. When you see those flags you should at least consider refactoring (and if things are best the way they are you can put a pylint-ignore comment with an explanation right above it).


Perl generates McCabe scores, and can arbitrarily limit on complexity. That's a better proxy than "number of lines", but tends to be equivalent.


> Because I think the shape of this code is pretty good.

That's the wrong attitude. Nobody commits code they don't think is good, but unless you measure "good" objectively and thoroughly there can always be problems you don't notice.

The entire point of the article is that there ARE bugs buried in all that noise, and that you can surface and prevent them from being Future Wink's problem by spending an hour today to fix the tooling. For 800 lines it should take you far less than an hour, and if there is a "TON" of complaints there are probably one or two real bugs. Multiply that by 10k LOC and you start to realize the scope of the problem.

Take the pain, fix it today while you are just at 800 lines, so when you're at 1,000 or 5,000 it's 5,000 lines of better code.


This is why I suggested whitelisting, because it's much faster. So you get some checks going immediately.


The problem with whitelisting is that you never un-whitelist some things, depriving yourself of potentially useful feedback.

Better to run with -E first, address those, then add in warnings, and blacklist the points which are actually not useful.


>> configure it with a whitelist of lint checks

I have found Pylint to be very helpful, and always use it on Python projects past the size of a short script.

If you start using Pylint at the beginning of a project, you can blacklist warnings that you want to ignore fairly easily.


Agreed. I mean you don't need to follow it religiously, but it's pretty workable with a small blacklist (docstrings, locally disabled, and a few more I don't instantly recall). When the hacks get bad, maybe the odd `#pylint: disable=<blah>`, but at least it shows you've thought about the pylint warning.

>> If the errors are utter garbage, delete those checks from the configuration.

I've never had "utter garbage" errors. I've had errors I didn't agree with/made the conscious choice to ignore. And I've had errors I thought were garbage, but really I just didn't fully understand them.

(Although I've experienced how applying it to an existing, unlinted project is horrible.)


>> I've never had "utter garbage" errors. I've had errors I didn't agree with/made the conscious choice to ignore. And I've had errors I thought were garbage, but really I just didn't fully understand them.

I've had lots of spurious errors with numpy code, where pylint seems to think everything is a list.


I've experienced something similar with SQLAlchemy, because of the amount of auto-magical stuff it does. I solved it with `[TYPECHECK] generated-members=query`, although this can be slow so blacklisting SQLAlchemy classes is still an option.

NumPy's `__init__.py` code is a bit messy and manipulates `__all__` a lot [0], so I can understand why pylint (and other static code analysis tools) struggle with it.

Yeah, it's frustrating hitting errors in third party modules, and it's tedious for one person initially to set up pylint correctly, but so worth it for a team.

[0] https://github.com/numpy/numpy/blob/master/numpy/__init__.py...
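
For anyone hitting the same thing, the relevant pylintrc knobs look roughly like this (option names as of the pylint of that era; ignored-modules is the usual workaround for the numpy "everything is a list" false positives, and the class names are just examples):

    [TYPECHECK]
    # Don't emit no-member errors for attributes pylint can't statically see.
    generated-members=query
    ignored-modules=numpy
    ignored-classes=SQLObject,scoped_session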


Why wasn't the functionality of those for-loops tested? It seems like a strange thing to burden PyLint with catching.


Well the blog is called Code Without Rules so...


The thought was that in the end you need principles, not rules. "Always write unit tests" is a good rule, and this code actually had unit tests, but the truth is some code doesn't need unit tests.

That being said, since you seem like a suitably snarky person, you may enjoy the email newsletter I write which is dedicated to the ways I've screwed up: https://softwareclown.com/


I try to keep the snark at bay but it probably still wins more than it should.

I signed up. Better to learn from your mistakes than my own. The testimonial was awesome by the way.


They were tested, but it didn't catch the bug unfortunately. I got bit by the fact that I was testing with synchronous callbacks, but real world use was with asynchronous callbacks.


It's an extremely common bug in basically all languages that allow closures to capture "live" variables. Almost every developer that uses any kind of async programming gets bit by it eventually. For-loops and closures do not mix well.


The author did not create any closures in the example code. Python only creates a closure for a def in a def.

Regardless, using functools.partial solves that problem: ``callback = partial(print, i)``.
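
For anyone who hasn't been bitten by this yet, here's a minimal, self-contained illustration of the pitfall and the two usual fixes (made-up names, nothing from the article):

    from functools import partial

    callbacks = []
    for i in range(3):
        callbacks.append(lambda: i)        # late binding: every lambda sees the final i
    print([cb() for cb in callbacks])      # [2, 2, 2]

    # Fix 1: bind the current value with functools.partial.
    fixed_a = [partial(lambda x: x, i) for i in range(3)]
    # Fix 2: bind it via a default argument.
    fixed_b = [lambda i=i: i for i in range(3)]
    print([cb() for cb in fixed_a])        # [0, 1, 2]
    print([cb() for cb in fixed_b])        # [0, 1, 2]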


Sounds like the issue is that you tested parallelized code as if it were serial. That'll be a problem regardless of how many issues PyLint spots. Whenever I have async/parallel code I am paranoid about races and deadlocks. I always test with a bit of fuzzy sleeps to tease out any issues.

Also, you described two separate bugs. The first one doesn't sound like an asynchronous callback issue. It sounds like a text editor with autocomplete would have avoided the problem. Or coding in an environment that encourages frequently running the code to look at its results.


Pylint is usable in PTVS projects:

https://github.com/Microsoft/PTVS/wiki/Pylint

Also Sublime Text has a nice plugin called Anaconda, where PyLint can be enabled:

http://damnwidget.github.io/anaconda/IDE/


Like an article that is both for and against its subject, I agree and disagree. Pylint is painful for the author this time, but it should get easier and easier with each project. If you use pylint from the start you won't get bitten by that "mountain of errors" problem, and you develop a default .pylintrc file that you can bring from project to project, tweaking as needed.

A (hopefully) short story: on my last contract job I was brought in as a senior developer for a growing Django project that had 0 coding standards. I imposed a small set of coding rules (what I actually did was say we would follow the Django coding style guide, as I've learned over time that pointing at an existing, 3rd-party guide means when people get butthurt about having to change a habit they get mad at the guide and not at a coworker) and then added pep8 and pylint to the git commit hook. That lasted for about a day because of the massive amount of errors from pylint (the situation the author describes), so I modified the hook to block on pep8 violations and just give a guilt trip on pylint errors for the time being. Then myself and a couple of team members who drew short straws sucked it up and actually fixed pylint errors as we went until we could turn pylint back on. We fixed a number of bugs in that process, and then going forward pylint started to show its strength by catching the sorts of bugs the article mentions (and still enforcing a standard coding style, so you could no longer look at a file and say "That's tclancy's code", but rather "That's our-project's code").

Having pylint in the hook process is better than running it as part of CI or ad hoc because it's run more often, meaning you get hit with a smaller number of errors at once. Having pylint or pep8 as part of your IDE is even better because you fix problems as you go. One of my current side projects is a Django project written by a Rails dev (no idea why) that I do maintenance on. The first thing I do when I open a file I've never seen is get rid of all the red and green squiggly lines (or at least look at why they are there). Most of the time it's just a formatting issue but there are plenty of actual bugs it turns up. Just by opening the files to make an update or fix a bug I often wind up improving things people didn't even know were broken as a side effect.

Linters are a good thing. They can always be better and they will always annoy for some idiosyncrasy they have (or you do), but it beats ignorance.


I think a formatter is really best. Linters know what's wrong, the obvious next step is to just fix it. I'm not interested in thinking about this stuff anymore, just fix it so I can move on.


I use pylama, it wraps pylint and other code checking tools. It also has a cool logo.

https://github.com/klen/pylama


I've got my pylint set up to work right from the very beginning of the project, with most rules enabled, so I can keep a fairly clean linted state and never have to play "catch up" to any great extent. (I also use pep8, pep257, and radon to control formatting, documentation, and complexity, respectively.)

I wrote a wrapper around all of these linting tools so I can call them programmatically from the same Python process, and each time a file changes I run my linting tools on that file (not the whole codebase). I also intercept the output and re-format it to give a (brief) report in a common format. I can also control the strictness of linting (i.e. which config file to use) with a YAML-format comment in the file header, so we can write quick prototypes without worrying too much about style, then bring them into the quality system on a file-by-file basis.

In this way, linting runs fast enough to be useful in the edit-lint/test loop, and the error messages are both telegraphic and immediately actionable.
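
I haven't seen that wrapper, but the per-file idea is easy to approximate; here's a rough sketch using a plain subprocess call rather than an in-process wrapper (paths and the rcfile choice are illustrative):

    import subprocess

    def lint_changed_file(path, rcfile='.pylintrc'):
        """Run pylint on a single changed file and return its report text."""
        # Swapping rcfile per file is one way to vary strictness, as described above.
        proc = subprocess.Popen(['pylint', '--rcfile=%s' % rcfile, path],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, _ = proc.communicate()
        return out.decode('utf-8', 'replace')

    print(lint_changed_file('myproject/models.py'))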


Using pylint in emacs with flycheck has saved me more hours than I can count debugging simple syntax and other Python errors. As the author mentions, whitelisting is the only way to go.

http://www.flycheck.org/en/latest/


I have pylint integrated with Sublime and configured to run checks on every save. It's really a life saver!

Plus, if you want to disable some features you can even do it at the local level rather than having a single central rule, e.g. using #pylint: disable=I0011,E0401 in your .py files.


Also, now you can use mypy for additional bug catching. Personally I have flake8 + mypy + py.test + coverall + mccabe, running on all my new projects.
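
A tiny example of what mypy adds on top of the linters, for anyone who hasn't tried it (hypothetical function, Python 3 annotations):

    def attach_volume(name: str, size_gb: int) -> str:
        return name + '-' + str(size_gb)

    # mypy flags this call before it ever runs, with something like:
    #   error: Argument 2 to "attach_volume" has incompatible type "str"; expected "int"
    attach_volume('data', '10')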


I've really got to start using mypy. Looks awesome.


The concept is awesome, the implementation is less so. There is still a lot of ironing out to do. But it does catch many bugs, even when you don't annotate anything. Anyway, it's good to use it early to help Guido fix the last warts.


big +1 for pylint - saved my butt more times than I can count.

Subtly, pylint makes it easy to migrate teams to new APIs, e.g. via disallow rules.

-1 to whitelisting - I just devote ~1 day to going through the 1000s of warnings, not a big deal.


How does Pylint compare with PyCharm's inspections? Is there a reason to use both?


I think PyCharm is closer to flake8. It's good but more surface level.


> I think PyCharm is closer to flake8.

PyCharm does type inference and cross-file name/object checking; these are not things pyflakes can do.


Ah that is true, it does a little bit of MyPy stuff too.


Pretty sure PyCharm uses pep8 under the hood. It's more permissive than pylint.


I have pylint integrated with vim through syntastic. The amount of output is thus limited to one module at a time and not intrusive nor disruptive. Also, like suggested, I have blacklisted some errors I don't care about.


I'm using pylint daily, usually running it as "pylint -E" to discard warnings and display errors only. I find it very useful, and have never seen a case of an error that shouldn't be fixed. I'm probably missing some useful warnings, but it's always better than not using it at all, or getting lost in thousands of them!


The problem is the bug that pylint caught that would've gone to production was caught by a warning, not an error. So we could've just used -E but then we'd miss out on some very useful stuff.


That's a fair point, I agree. The pylint config files in the article are great, and I will definitely look into them. Pylint -E is more like a quick, lazy option that does 80% of the work :)


Your unittests should have caught both of those errors. Also Django does use linting: https://github.com/django/django/blob/master/tox.ini#L36-L41


I did have unit tests, they didn't catch the bug though because the test used a synchronous callback (much easier to test) and real world code ran with async callback.


Re django using linting, it uses flake8, which is a nice tool... but misses lots of stuff pylint catches.


What would be awesome is an interactive pylint onboarding tool. Start with no checks, then add them iteratively in order of severity, showing the new errors at each step and asking whether the rule should be adopted or not.


I find frosted (a reimplementation of PyFlakes) very useful.


I actually think that unused initialized variables might be the only pylint rule which has found genuine production bugs for me.


The fact that your unit tests did not catch that bug tells me you should spend less time linting and more time covering.


Linting is cheaper than unit testing.


They serve two completely different purposes. Testing ensures the logic is correct, linting is checking the code for syntax errors and bad practice.


They don't actually serve completely different purposes. Linting is supposed to catch bugs and it does, very cheaply. It just doesn't catch very many.

In terms of coding style I've found it fairly ineffective at picking up on most bad practices.


Sure, "bug" as a term is as elastic as you want it to be. I was pointing out the false equivalence you implied by writing "cheaper". Testing and linting does not provide the same value, so calling it cheaper is a kind of fallacy.


That's like saying that a sandwich and a condo do not provide the same value so calling a sandwich cheaper is a kind of fallacy.

There's no elasticity over what the OP reported. That was a bug. Clear as they come. He may have unit tests but they didn't catch it and since you can't have unit tests for everything, why not spend 500ms running a linter?


You are making my point. It makes no sense to call a sandwich cheaper than a condo when you are evaluating what to buy when looking for a place to live. That is what I was pointing out.

Testing and linting do different things. Of course a sandwich is cheaper than a condo, but what does that have to do with what we are talking about?

As for the claim that you can't have unit tests for everything: what do you mean? No, you can't test all branches or combinations in a system, but you can have 100% test coverage (i.e. hitting each line at least once).


>You are making my point. It makes no sense to call a sandwich cheaper than a condo when you are evaluating what to buy when looking for a place to live. That is what I was pointing out.

Yeah not really. Both linting and unit tests catch bugs. This is a fact that you refuse to actually face.

>you can have a 100% test coverage

You can but it's A) a bad idea and B) it won't even necessarily catch the bug that the OP brought up if you do.


I grant you that linting catches bugs, but what I've argued for (perhaps badly) is that the bugs it catches are very shallow, but more importantly they make up a relatively small subset of the bugs proper unit tests catch. That's what makes the comparison you made deceptive.

I use both linting (pylint actually) and unit tests, so I am not advocating that you should scrap your linting tools. I am just trying to point out the difference.

Your B) argument is yet another indication that the tests are not good enough. The example bug OP posted is what made me comment in the first place. If your tests don't catch that, your tests are useless.


In fact there were unit tests. They didn't catch it though, see comments elsewhere.


I know they were there, but they are obviously useless. Hence my comment: Fix your tests.


Vim + Syntastic + pylint + mypy works pretty well for me


I've been quite happy with pycodestyle (formerly pep8), using autopep8 to automatically apply fixes when deemed safe.


Oh Pylint, used by my previous employer as though it were a type checker. Now I just use a language with a type checker.


Does python not have anything like `use strict;`?

Edit: What the heck, this is an honest question. :(


No, but even a `use strict`-style declaration wouldn't have helped in this case, since the entire syntax was valid (volme was being properly initialized as part of the loop definition). The error occurred because 'volume' was considered to be part of the function scope, despite being defined as part of the for loop scope.

To me, it's a good reason to use different loop variable names for successive loops. A good practice to use in any language, since it helps keep loops distinct to your mind as well.

    for volume in get_volumes():
        print volume
    for other_volme in get_other_volumes():
        print other_volume
The second loop throws a runtime error and an error level linting alert.
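
For clarity, here is a minimal illustration of the scoping behaviour that makes the bug possible (made-up values):

    for volume in ['vol-a', 'vol-b']:
        pass
    # The loop variable survives the loop: this prints 'vol-b'. Reusing the same
    # name in a later loop is what lets a typo inside it silently pick up the
    # stale value instead of raising a NameError.
    print volume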


It would help if it restricted variables auto-vivified in a loop declaration to the body of the loop itself, instead of making them available for the entire parent scope. I'm used to working with languages that behave like that.

I'm actually a little surprised something like that hasn't been attempted yet for Python.

Thanks for the answer!



IMO the first example is a good use of a linter; the second should be caught by coding style. I keep the following rules for closures:

1.) Never go more than two layers deep, so you have one "def" inside another "def" and that's it.

2.) Always keep the closures at the top of the containing function - no further nesting.

Kids today coming from Javascript seem to be way too much in love with their closures.

Example:

    def my_routine(flag, foo):
        def my_little_helper(thingy):
            if flag:
                return thingy
            return thingy + foo

        return my_little_helper(42)


That's a good idea, but you shouldn't assume too much about why people are making mistakes. I've been working with and on Twisted since 2001, so not a JS kiddy :) I made this mistake fair and square even though I knew better.

Which is why I like having a tool that'll catch it for me.

My typical approach is `lambda x, y=y: f(x, y)`. Inline functions are much more readable, is the thing. But I can see the argument for this coding style, yes.


Sorry, didn't want to attack you personally with the 'kiddy' comment. I think my proposed style meshes well with yours though.


What do you think is the problem with deeper nested closures? Do you really think that a little extra lambda or something inside your helper function would suddenly make it unreadable?


I just think multiple levels of capturing scope into closures is a bad idea - if something with that state goes wrong it will be hard to debug. I also hardly see any benefit in doing it.


No need to pick on 'kids' from JS thanks, and actually in my interviewing experience a lot of JS devs don't actually know what a closure is. Worrying, but true.


Yeah, sorry for the sarcastic tone. It's just that JS/Ajax comes to mind when I think about deeply nested closure pasta. Here's hoping that ES201x will teach async/await to a wide audience soon.





