I open-sourced a tool a few days ago that lets you do mass modification of git repos, including (but not limited to) adding license files: https://github.com/clever/gitbot. I've used it to add licenses to close to a hundred repositories at Clever. Would be great to see if others find it useful.
As few tools as possible (no task runners, etc.)
:thumbsup: to this sentiment. Whenever I see a "skeleton" starter project that includes tooling that could easily be added at a later date, I usually close the tab.
I think people forget that skeleton projects need to serve an educational purpose to newcomers. Each additional framework/tool that you throw into the mix compromises the skeleton's ability to do that.
I had to learn how to use gulp and npm and browserify and and and ohmygodthelistgoeson, and I tell you I feel exactly the opposite.
Trying to bolt tooling onto an existing project is daunting, especially when you're not sure how it works yet. When I started with a fresh skeleton project, two source files, the whole shebang set up, it was super easy. I just add my extra source files in the dir, and they're picked up automatically.
I love it when a skeleton project includes the entire toolset. I can easily remove stuff I don't need, even when I don't know how it works. But adding JS tooling myself? What a disaster.
Now, I look at my first project, the one I had to add gulp and everything to myself, and it's a complete mess. A lot of time and effort would have been saved if I had just started with a good directory structure right away.
It's my understanding that microservices shouldn't have many dependencies on one another. The link to your blog post that explains your rationale doesn't appear to work... do you mind explaining what need this fills?
> What do you consider good feedback? How can you promote understanding and positive approaches in your criticism of the code? How can you help the submitter learn and grow from this scenario? Unfortunately these questions don't get asked enough, which creates a self-perpetuating cycle of cynics and aggressive discussion.
I'd be really interested to hear how teams codify this. To the extent that it's possible, I think feedback should be rooted in objective measures, e.g. styleguide/lint violations, test failures, etc. All too often, though, there are subjective things that come up: organization of code, interface design, using framework X instead of framework Y. These are the criticisms that most often lead to aggressive back-and-forths.
But those are the most important and interesting discussions to have, so I’m not convinced we should shy away from that sort of feedback.
That said, they’re also the sort of discussions which should often happen before code is written. Decisions can be changed much more quickly at that point, and people take it much less personally when you talk about how the design of something can be improved when they haven’t spent the time to implement it. :-)
To some extent we knew that in-memory joins would eventually cause problems, but we were certainly surprised at how quickly Node memory usage became the bottleneck. Here's a little gist I used to test it a while ago: https://gist.github.com/rgarcia/6170213
As for your point about premature optimization, in my opinion a startup's first priority is getting something in front of users in order to start improving and iterating. The first version of the data pipeline discussed in the blog post was built when Clever was in 0 schools, so designing it to scale to some of the largest school districts in the country would have been fairly presumptuous.
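To make the failure mode concrete, here's a hedged sketch of the kind of in-memory join involved — the collection names and record shapes are made up for illustration, not Clever's actual schema or pipeline code:

```javascript
// Build a Map index over one collection, then merge the other into it.
// Both inputs, the index, and the joined output all live in the Node
// heap at once, so memory scales with total data size -- fine for a
// handful of schools, a bottleneck for a large district.
function joinByKey(left, right, key) {
  const index = new Map();
  for (const row of right) {
    index.set(row[key], row);
  }
  return left
    .filter((row) => index.has(row[key]))
    .map((row) => Object.assign({}, index.get(row[key]), row));
}

const students = [{ id: 1, name: "Ada" }, { id: 2, name: "Grace" }];
const grades = [{ id: 1, grade: "A" }, { id: 3, grade: "B" }];
console.log(joinByKey(students, grades, "id"));
```

The fix is to avoid holding both sides in the process at once, e.g. by pushing the join down into a database or streaming it in chunks.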
Something to help teams rotate passwords at regular intervals. E.g. I want everyone to rotate their Heroku/Google Apps/Github/etc. passwords every 6 months, or maybe after a major security event (like Heartbleed). Right now it means sending out a Google Doc for everyone to check off.
Bonus points if it has some way of verifying that passwords were actually changed and the new password is strong.