The problem, however, is that no matter how detailed or well-written your spec is, it's very likely to change as a result of shifting business or product needs. This is especially common with startups. The best compromise I've seen is to charge hourly, but make a solid estimate that you can stick to. If the specs change, you don't need to go back to the client or ask for permission to proceed. You just do the work.
I can think of one reason: `<` and `>` are comparison operators in Python, and they are easily overloaded with the `__gt__` and `__lt__` magic methods. So, if `List` were an instance of an object with those overridden magic methods, it's unclear what you actually want to do (i.e., `List<int` might evaluate to True, and then `True>` would lead to a syntax error). `[` and `]` have no such limitations.
"`List<int` might evaluate to True, and then `True>` would lead to a syntax error"
You are confusing parsing and reduction. In most languages - and almost certainly in Python - these are separate steps. "Parsing it wrong leads to a syntax error" is a good thing - it means you're forced to parse it right. It would be worse if there were multiple syntactically-valid interpretations (which there may well be).
Here's the problem:
`a<b>c` is already valid Python. `3<5>2` evaluates to `True`, because Python chains comparisons: `a<b>c` means `(a<b) and (b>c)`. And since the `<` and `>` operators can be overloaded, there is no guarantee that a class object won't define them (this would, I believe, require some metaclass hackery, but still).
So, given that `object<int>()` throws a TypeError in Python rather than a SyntaxError, you can't unambiguously parse that.
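To make this concrete, here's a small sketch (the `Interval` class is a made-up example) showing that angle brackets already have a meaning in Python, so `List<int>` can't be given a second one:

```python
# Chained comparisons: a < b > c is evaluated as (a < b) and (b > c).
assert (3 < 5 > 2) is True
assert (3 < 5 > 7) is False

# Any class can overload < and > (hypothetical example):
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __gt__(self, other):
        # "greater than" here means: the whole interval lies above `other`
        return self.lo > other

assert (Interval(0, 10) > -5) is True  # calls Interval.__gt__

# In Python 3, comparing two classes raises TypeError at runtime,
# not SyntaxError at parse time -- the parse itself is unambiguous:
try:
    list < int
    raised = False
except TypeError:
    raised = True
assert raised
```

So the grammar is perfectly happy with `list < int > ()`; it's the runtime that objects, which is exactly why the angle-bracket syntax is off the table.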
I maintain a similar tool to manage large files in Git, but went the clean/smudge filter route. I think git-media gets into states where it can process files twice even though it shouldn't. From the Git docs:
> For best results, clean should not alter its output further if it is run twice ("clean→clean" should be equivalent to "clean"), and multiple smudge commands should not alter clean's output ("smudge→smudge→clean" should be equivalent to "clean").
So this isn't the fault of clean/smudge filters, just the way they were used with git-media.
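The idempotency contract from the docs can be sketched in Python (the `pointer:` format here is made up, purely to illustrate the property):

```python
import hashlib

PREFIX = b"pointer:"  # hypothetical pointer format

def clean(data: bytes) -> bytes:
    # Pass already-cleaned content through untouched, so that
    # clean(clean(x)) == clean(x).
    if data.startswith(PREFIX):
        return data
    return PREFIX + hashlib.sha1(data).hexdigest().encode()

def smudge(data: bytes) -> bytes:
    # A real filter would fetch the original blob by its hash from
    # remote storage; here we only model the pass-through contract.
    return data

blob = b"big binary asset"
assert clean(clean(blob)) == clean(blob)                   # clean→clean == clean
assert clean(smudge(smudge(clean(blob)))) == clean(blob)   # smudge→smudge→clean == clean
```

The key design choice is that `clean` recognizes its own output and leaves it alone; a filter that blindly re-transforms everything is what gets a tool into the double-processing states described above.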
We experimented with smudge/clean filters with our own implementation and they just didn't seem like the right solution for fat asset management.
The most frustrating problem was that filters are executed pretty frequently throughout git workflows, e.g. on `git diff`, even though assets rarely ever change. The added time (though individually small) created a jarring experience.
I'd also be curious how git-bigstore addresses conflicts. It seems like a lot of the filter-based tools out there don't handle them well for some reason.
We've built a Git extension (git-bigstore) that helps manage large files in Git. It uses a combination of smudge/clean filters and git-notes to store upload/download history, and integrates nicely with S3, Google Cloud Storage, and Rackspace Cloud. Just add the file types you'd like to keep out of your repo to .gitattributes and you're good to go. Our team has been using it for a while now to keep track of large image assets for our web development projects. Cheers!
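For a rough idea of the wiring, setup looks something like this (the filter name and the clean/smudge command names below are placeholders; check the git-bigstore README for the real ones — the `.gitattributes` and `git config filter.*` mechanics themselves are standard Git):

```shell
# Route the file types you want out of the repo through a filter
# (filter name "bigstore" is assumed here):
echo '*.psd filter=bigstore' >> .gitattributes
echo '*.mov filter=bigstore' >> .gitattributes

# Register the filter's clean/smudge commands (command names hypothetical;
# %f is Git's placeholder for the path being filtered):
git config filter.bigstore.clean  "git-bigstore clean %f"
git config filter.bigstore.smudge "git-bigstore smudge %f"
```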
Lionheart Software works with startups to build out their iOS and Python/Django web applications. Our bread and butter is turning ideas into full-fledged, beautiful, functional products. We're three strong right now and are looking for some more developers to increase our capacity (at the moment we're getting way more work than we can handle).
I wrote a Git extension about a year ago that transparently stores data in S3 / Cloudfiles / etc. and doesn't store any of the actual data in your Git repo. I've used it with a few projects but I think it could be battle tested a bit more. It integrates perfectly with GitHub / Bitbucket. Pull requests welcome!
It's not just paid likes that people are worrying about--it's organic likes too...which brings me to my question. What is your response to the other problem the OP references: spammers liking thousands of unrelated pages to confuse your algorithms, which in turn diminishes audience reach across the entire network?
I guess I just don't understand why the responsibility is on the advertiser to "target the right people." And really, that's not the problem here. If I had a Page that I wanted to target in Bangladesh, how exactly could I go about doing that without having the majority of my likes be fake?