I believe the above scenario is common. But now I have to fear being reviewed on something that wasn't meant to be a complete work anyway? You don't like it...branch it and make it better! The community will win! People will end up using your version instead of mine if I don't pull the changes back in. Isn't that the way this is supposed to work?
Maybe I'm misunderstanding this post entirely. It's hard enough for a lot of us to post our work publicly - then to have it scrutinized and attached to my github profile seems unfavorable to me. You don't like my stuff - don't use it! Have I done something incorrectly? Email me and let me know! Github seems to be doing just fine w/o this feature and the good projects somehow rise to the top.
BTW - I'm not opposed to being critiqued on my code. I just don't know if a public rating system is the way to do it! And I do believe I would be more reluctant to release anything if I felt it could be rightly criticized for things I know it lacked. Make sense?
- tests [5 stars]
- documentation [5 stars]
- code quality [5 stars]
- usefulness [5 stars]
But I get your point. I've got the bad habit of not wanting to upload unfinished stuff, and since code is rarely finished, I tend to upload far fewer projects than I should. A rating system might only make the matter worse.
As somebody who loves to reuse code I do like the basic idea of having a way to find out what others think of it. The simplicity of a 1-5 scale has a lot of appeal, but my gut instinct is the downsides may outweigh the advantages.
Uh, right. Ruby folks have no idea how little of a test culture they have. Doesn't stop them from complimenting themselves, though. Why do I say this about Ruby, you ask? Consider the following facts.
- Ruby itself has fairly bad unit tests.
- By default you don't run tests on gems when you install them.
- Many gems that have tests, have tests that only make sense on the developer's machine and can't easily be run elsewhere.
- There is no automated process for identifying which gems are doing a standard list of good things (have documentation, tests, clear license terms, properly identified dependencies...) and which are not.
- There is no easy way for someone with a spare box and an interesting environment to contribute their machine to tracking down which gems fail their tests in your environment.
If you're a Ruby developer, you may be saying, "This sounds like a nice wish list, but look at everything we do..." Stop right there. Everything on this list, the Perl community has been doing for years. And Perl folks are doing all of the other good things that you were going to talk about.
In fact the state of the art of Ruby's "test culture" does not even match the test culture among good Perl developers during the last millennium. You don't believe me? The standard CPAN client has defaulted to actually running unit tests since Perl 5 was released in the mid-90s. And the core Perl test suite back then was much more comprehensive than Ruby's is now.
But there is good news. It is easier to copy someone else's working thing than it is to write your own in the first place. Perl got this working several years ago, and has had time to iron out the kinks. If you are interested, feel free to copy.
A pearl from Perl, of course.
It is too bad that it is opt-in. And it doesn't immediately fix the problem that some authors write tests for themselves only. But it is definitely a good step in the right direction.
It seems to me that this blog post could be replaced by a single line - and would actually be better - if that line were 'GitHub, please do that review stuff the way CPAN has been successfully doing it for years'.
Bonus points if he knew enough to explain how CPAN avoids all the obvious failure modes raised by commenters here.
Make it known who rated which project and by how much (and only once per project, obviously). Then the project owner can message those people to change their rating once they fix the documentation or whatever else they got the bad rating for.
There are two main benefits to reviews/ratings:
1. The project owner would get an assessment of his project
2. Potential users/committers would know the "quality" of the project
Making the reviews/ratings private would eliminate #2, but it would also solve most potential spamming, gaming, and stale issues caused by making them public.
I get into this argument with my coworkers from time to time, and I have to say I disagree a million percent. Tests are the only way you can see if the software is going to work.
After you make a change, how do you make sure the software works? By trying it out? Congratulations! That's testing! Now you just need to invest some mental effort in teaching the computer to do that test for you, and you get the same level of safety automatically from then on.
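To make that concrete, here's a rough sketch in Ruby, since that's the language most of this thread is about: a check you might otherwise do by hand after every change, rewritten as a Minitest test the machine can rerun. The parse_config function and the config format are made up for illustration, not taken from any real project.

    require "minitest/autorun"

    # Hypothetical function standing in for whatever you would poke at by hand.
    def parse_config(text)
      text.lines.map { |line| line.split("=", 2).map(&:strip) }.to_h
    end

    class ParseConfigTest < Minitest::Test
      # The same "did I break it?" check you'd do manually, now repeatable.
      def test_reads_key_value_pairs
        config = parse_config("host = example.com\nport = 8080")
        assert_equal "example.com", config["host"]
        assert_equal "8080", config["port"]
      end
    end

Run the file with ruby and every future change gets the same check for free.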
I'm all for code reviews too, but code reviews do not scale the way that tests do.
I think projects get off course... teams can develop a culture of 'having unit tests' instead of a culture of 'writing high-quality software'. The way the TDD/XP camp talks about software bums me out. As if the tests they decided (key word) to write set the software on some Manifest Destiny vector toward perfection.
Again, I agree with you. Unit tests are awesome, and the benefit at scale is huge, but I took antirez's comments as a shot across a different bow.
The tricky part is that anything can become a cargo cult rallying point. Mediocre programmers can drive anything into the ground. Even a seeming no-brainer like the Law of Demeter can be misapplied to create byzantine systems if someone knows how to write code but isn't very good at it. The situation is ten times worse with test code because it's meta: there's no objective measure of quality for test code. At least with application code it's obvious to any user whether it works or not, but with test code it's all about how much manual testing time was saved, or how much smoother refactorings went.
I've long been skeptical of the claim that TDD or whatever makes you write better code. The way I see it, TDD forces you to put in the time to learn how to write automated tests. Once you are good at it, tests help you move faster by covering regressions and documenting your intentions. In Ruby, tests can give you the kind of sanity check that comes built into strongly-typed compiled languages. I truly believe in TATFT, but at the end of the day tests are still meta, and you can't polish a turd.
>Mediocre programmers can drive anything into the ground
Sure, totally. But the XP/TDD cult gives them easy-to-follow instructions for doing so. There's almost a lowest-common-denominator, talent-smoothing mantra behind some of these cultures. I think plenty of projects, teams, and people don't end up finding their potential as developers because they're obsessed with a religion that favors obedience to a silly process over intelligence, cleverness, and elegance.
Okay. At this point we're probably drifting steadily off topic :)
That's the kind of thing he's talking about, not just the code itself behaving according to strictly defined unit tests, but the project as a whole, and its health.
1) Does it work?
2) Is it being updated?
3) Are the docs good enough to get my job done as a user?
4) Can I tweak it to my specific use case? Options? Source code patch?
I think it's a great idea to show users what parts of a project are weak (it'd help them understand how they can help), but I have the same problem with reviews on Apple's app stores, Android Market, Amazon, etc...
Some of those are handled a bit by the complaints being about previous versions, but stuff on github isn't necessarily versioned.
However, for the majority of long-tail projects that might have a couple pages' worth of reviews, it would be useful feedback, and the project owner could address each review individually with ease.
Another option would be to provide a history of average star ratings, similar to the 52-week activity graph. Then people would be able to see, for example, that most reviews have turned positive since the v2.5 release, etc. Tie reviews to versions and/or commits, and allow people to post another review after X weeks, so that the same person can post different reviews of different versions of the same program.
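A rough sketch of what "one review per user per version, with a re-review allowed after X weeks" could look like, again in Ruby; the Review struct and the six-week window are my own assumptions, not anything GitHub actually offers.

    require "date"

    Review = Struct.new(:user, :version, :stars, :posted_on)

    REREVIEW_WINDOW_DAYS = 6 * 7  # the assumed "X weeks"

    def may_post_review?(existing_reviews, user, version, today = Date.today)
      last = existing_reviews.select { |r| r.user == user }.max_by(&:posted_on)
      return true if last.nil?                   # first review ever
      return false if last.version == version    # only one review per version
      (today - last.posted_on) >= REREVIEW_WINDOW_DAYS
    end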
To give more "weight" to the system, GitHub users could have their average project ratings displayed on their profiles, and rankings / leaderboards could be generated. Users could receive badges, maybe: "Documenter of the Month — March 2011", etc.
To address what antirez needs--an at-a-glance, summary assessment of the quality of the project--github could provide a standard rating of a number of dimensions. Project followers, recentness of commits, forks, merge history, code analytics, presence of tests, volume and activity on documentation, etc. This is still subject to gaming, but here's how you fix that: allow each github user to express their own values/constants for the various measures, and allow you to assess any project based on what that user values: "What would antirez say about a project with no documentation, lots of downstream forks, great testing coverage?" Now you've introduced a trust metric, correlating values of people you trust with projects you might be interested in.
Think OKCupid meets GitHub, with bulk rating of projects rather than potential mates.
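At its simplest, that per-user assessment could just be a weighted sum of a user's own weights against a project's normalized metrics; the dimension names and numbers below are invented for illustration, not real GitHub data.

    # Each user supplies their own weights; the same project metrics then
    # produce a different score for each viewer.
    def weighted_score(metrics, weights)
      weights.sum { |dimension, weight| weight * metrics.fetch(dimension, 0.0) }
    end

    antirez_style_weights = { documentation: 0.1, downstream_forks: 0.3, test_coverage: 0.6 }
    project_metrics       = { documentation: 0.0, downstream_forks: 0.9, test_coverage: 0.8 }

    puts weighted_score(project_metrics, antirez_style_weights)  # => 0.75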
One thing I'd want is for the older votes to carry less weight. For example, if one of my projects gets voted way down due to lack of documentation, I'd want a documentation effort to be able to counter those votes that may come from people who don't come back to adjust them.
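One way to get that effect: decay each vote's weight exponentially with age, so a year-old "no docs" one-star fades once newer votes come in. A small Ruby sketch; the 90-day half-life is an arbitrary assumption.

    require "date"

    HALF_LIFE_DAYS = 90.0  # assumed; tune to taste

    def decayed_average(votes, today = Date.today)
      pairs = votes.map do |stars, posted_on|
        weight = 0.5**((today - posted_on).to_f / HALF_LIFE_DAYS)
        [stars * weight, weight]
      end
      total = pairs.sum { |_, w| w }
      total.zero? ? 0.0 : pairs.sum { |s, _| s } / total
    end

    # An old 1-star vote counts far less than a recent 5-star one.
    votes = [[1, Date.today - 365], [5, Date.today - 7]]
    puts decayed_average(votes).round(2)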
It's also interesting that people are talking about testing in this thread. There might be some good points made, but you might miss the point/concept that was posed by the OP.