I never said anything about why. Even if you are just instructing, do you care to see a wordless video or read an instruction manual? Actually it's not even that...which one is easier to write? The descriptive power of language is great when issuing imperatives; it must have been very hard to organize before language was developed 50-100 kya.
You can still communicate spatial concepts better with pictures, though, and maybe a hybrid approach could work: structured, edited languages are very popular as VPLs (visual programming languages), but think about code with interspersed diagrams.
oh, and what are we supposed to do, not listen to the good music people put out already, and not buy their CDs on Amazon, and not go to their local concerts, and not buy their swag, because there are other, better artists who are not producing work because they haven't received a $250,000 advance on their contract?
No, those are all the right things to do. The problem is more about the indie musician who spends a few thousand dollars producing a collection of songs, only to be told not to even try to sell the CD, to release the songs through a streaming service for hundredths of a penny per listen, and then to expect to make their recording costs back through "touring and merchandise" while giving their songs away basically for free as a "marketing expense", even while the market is saturated by other musicians who have bought into the same advice. The fan who actually makes a point of listening to the music, buying the CD, and attending the local concert is a good fan, but the market forces are actually against that activity.
This really misses something... it's written as if the consumers are somehow entitled to the music from the songwriters. Right now, musicians think streaming rates are too low, and the streaming services are operating at a loss. That basically means they are unsustainable, and that musicians are not willing to release their music at a rate the consumers want to pay. Because of debt financing, consumers are currently under the impression that they're entitled to this music anyway, but the gravy train might very well end, and then easy music enjoyment might become more scarce. For a lot of musicians, this would be a welcome development, as it would give the songwriters more options in how to control their revenue streams.
I have seen a few examples of this sort of thing in the past. The term they tend to center around is "argument mapping", a technique with a loosely defined set of best practices: judgments or lemmas, counterpoints, and rebuttals.
I also think they tend to be limited because, while they display the structure of counterpoints, there isn't any attempt to apply actual boolean or causal logic - necessary-and-sufficient thinking.
Here is a YouTube video showing an example of justifying a conclusion with actual causal logic:
The conclusion is trivial (whether I should buy a MBP 15") but the structure is similar to what is discussed downthread here - logical atomism, conclusions being premises for further conclusions, counterpoints, etc.
I've been working on my own time on making a web-based tool that has somewhat related features, but it takes time since it requires tracking actual truth states of contentions.
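Tracking truth states of contentions, as described above, could be sketched roughly like this. All names here (`Contention`, `assumed`, `necessary`) are hypothetical, invented for illustration, and not taken from any real argument-mapping tool:

```python
# Hypothetical sketch: an argument map where each contention tracks a
# truth state, and conclusions hold only if their premises all hold.

class Contention:
    def __init__(self, text, assumed=True, necessary=None):
        self.text = text
        self.assumed = assumed            # truth state of a leaf premise
        self.necessary = necessary or []  # premises that must ALL hold

    def holds(self):
        # a derived contention holds only if every necessary premise holds;
        # a leaf contention reports its assumed truth state
        if self.necessary:
            return all(p.holds() for p in self.necessary)
        return self.assumed

# premises as leaves, conclusion derived from them
fast = Contention("The laptop is fast enough")
affordable = Contention("The laptop is within budget")
buy = Contention("I should buy the laptop", necessary=[fast, affordable])

print(buy.holds())        # True while both premises stand
affordable.assumed = False  # a successful rebuttal lands on a premise
print(buy.holds())        # False: the conclusion falls with its premise
```

The point of propagating truth states this way is exactly the necessary-and-sufficient thinking mentioned upthread: a rebuttal of one premise automatically invalidates every conclusion downstream of it.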
If any off-HN discussion has started up from this thread, I'd like to participate.
He doesn't know if there's merit to the claim that the offending comment was libelous. The offending comment said kisstrust was a "scam" when they apparently meant that the kisstrust offer simply added no value to a tool that is already available to consumers for less cost. It appears he does know/believe, however, that there is no merit to the claim that MMM is responsible for the content of a comment in his forums.
The way I see formal correctness proofs is that they'll be useful, but not a panacea.
If I have a method that takes a List as a parameter, and returns an Integer, then congratulations - I have just written a formal proof that it is possible to derive an Integer from a List.
However, the business need wasn't to derive an Integer from a List - it was something far more specific.
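In Python-flavored terms (the function name and behavior here are made up purely to illustrate the point), a signature-level "proof" can be satisfied by an implementation that completely ignores the business need:

```python
# The signature "proves" only that an int is derivable from a list.
# This type-checks, yet does nothing the business actually wanted.
def summarize(xs: list) -> int:
    return 0  # type-correct, business-wrong

print(summarize([3, 1, 4]))  # 0, regardless of input
```

The type checker is perfectly happy; the gap between "an Integer from a List" and the real specification is invisible at this level.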
So that's why people like the idea of dependent types, in that you can make the type system (mostly) Turing complete, and actually make the types reflect the specification of the problem.
And then if you write an implementation of that specification, congratulations, you have proved that that implementation is correct.
Which is pretty cool, except that it overlooks the fact that a very large percentage of our bugs are not in our implementation, but in our agreed-upon specification. Almost every bug I encounter (one that sneaks its way to QA or production and is reported as a bug) comes down to my having validly implemented the specification one way when the business actually wanted a slightly different interpretation, but just wasn't specific enough in describing it.
Anyway, formal correctness proofs won't handle that. Really all it means is that the hard part of programming - making sure we're actually developing the correct thing - will be offloaded from implementation to specification. That means that implementation will be more of a code monkey role, while the real art of programming - interpreting and anticipating needs, a role that programmers are vastly underappreciated for - will need to be accomplished by someone who is sort of like a product owner, but far more technical and exacting than is the norm now. I'd imagine many senior-level programmers would move into that role - translating business needs into formal specification, and then leaving junior programmers to monkey with the proof itself.
I don't see formal proofs replacing acceptance tests. I see them helping you build leak-proof abstractions upon which you assemble the domain where your functionality is implemented. The line between those two positions is blurry and, despite the sound of it, your need for a formal proof to be comprehensive also varies with what is feasible and what it is worth.
All of these things are just tools, after all: you use them to produce value. I think you've described one potential future for this technology, but I feel it's a very narrow one.
One thing that does seem to be playing out is that for many interesting programs the establishment of the theorem is sufficiently challenging that the proof need not be relegated to "junior programmers" but instead automated entirely. Of course, that said, typically a primary tool in the search for a well-specified theorem is a long series of failed attempts to prove its poorly specified predecessors.
Well I know it's a contrived example, but I don't understand the motivation behind mocking an external library's code. That library should have its own tests.
Say I have three layers of custom code: A calls B, which calls C.
If I want to test B, then I want to mock C, and have my test call B similar to how A does. I want to mock C because C is also custom, and if my test fails, I want to know if it's because of bad B implementation, and not because a buggy C might be confusing matters.
But if B also calls D from an external open source or vendor-supplied library, I don't usually want to mock D. That just adds needless complexity to the test, and reduces my focus on my own custom code.
An exception would be if this library code makes its own network call or something - then you might want to mock it to save time.
Anyway, mocked unit tests become far simpler if you maintain the right focus. Use test A to call B (passing in canned fixtures if necessary), while B mocks C only to maintain focus on B's implementation. If you start getting involved in trying to mock external library code, or even internal private methods that B calls, you'll have a bad time.
The advantage to maintaining that kind of focus is that refactoring becomes easier. Want to change the name of C? Your IDE should handle refactoring your test, too. Want to change the implementation of B? You don't even need to change your test at all, just make sure the right values are still there in the return value. Maybe you'd need to add a couple of assertions, but that's it. If you're looking at having to do a serious refactoring of your unit test in those cases, then it probably just means you're still designing your code architecture and things are still really fluid. And in that case, it would make sense that you might have to throw away your test, because by definition it means you are still deciding on what your specifications are.
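The B-mocks-C setup above can be sketched with `unittest.mock`. The `Service` class and its methods are hypothetical stand-ins for the B and C layers, not real code from any project:

```python
# Sketch of the A -> B -> C layering: test B while mocking out C,
# so a buggy C can't confuse a failure in B's own logic.
from unittest import mock

class Service:
    def c(self, n):
        # custom lower layer (would get its own tests elsewhere)
        return n * 10

    def b(self, n):
        # the unit under test; depends on C
        return self.c(n) + 1

svc = Service()

# mock C with a canned return value, then exercise B as A would
with mock.patch.object(svc, "c", return_value=99) as fake_c:
    assert svc.b(4) == 100          # B's logic, isolated from real C
    fake_c.assert_called_once_with(4)  # B passed the right input to C

print("B tested in isolation from C")
```

Note that the external library D never appears here: B can keep calling it for real, and the test stays focused on B's implementation.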
The guys who came up with the mock-object approach to TDD would say that you shouldn't be mocking external libraries directly. You want your own time abstraction, which is probably far simpler than what you get from a library that has to satisfy everyone's needs.
I think that building that level of isolation between you and your framework or library is just basic good practice. The fact that you need time doesn't change, but the way that you get it might.
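A time abstraction of the kind described above might look something like this. The names (`Clock`, `FixedClock`, `is_weekend`) are illustrative, not from any particular library:

```python
# A tiny time abstraction: application code depends on Clock,
# not on the datetime library directly.
import datetime

class Clock:
    def now(self):
        return datetime.datetime.now(datetime.timezone.utc)

class FixedClock(Clock):
    """Test double: always reports the same instant."""
    def __init__(self, instant):
        self.instant = instant
    def now(self):
        return self.instant

def is_weekend(clock):
    # application logic sees only the Clock interface
    return clock.now().weekday() >= 5  # Monday=0 ... Sunday=6

# in tests, pin time instead of mocking the datetime library itself
saturday = datetime.datetime(2024, 1, 6, tzinfo=datetime.timezone.utc)
print(is_weekend(FixedClock(saturday)))  # True: 2024-01-06 was a Saturday
```

The fact that you need time doesn't change; swapping `FixedClock` for the real `Clock` is how the way you get it changes without touching the application logic.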
I'd rather have a program that does what it does in X lines of code than a unit-tested, mocked codebase in 5X lines of code. Sure, you have tests, for whatever they're worth (I'm somewhat skeptical of TDD in the first place), but you have so much MORE code.
It's just basic separation of concerns. I've seen too many development organizations brought to their knees by the fact that they don't have any layering between their logic and the libraries/frameworks they use. It's a very real problem.
That's interesting. I don't have much experience with it, but when I've seen similar stuff it looked like an anti-pattern to me. Why should developers need to learn your specific wrapper on top of a popular 3rd party component? The internal thing is most likely not documented as well, and common problems don't have answers on Stack Overflow. It requires extra work to use additional features of the library.
I'm similarly wary of convenience libraries that provide marginally simpler APIs on top of standard libraries.
I'm not convinced it's a good idea. I wish I had your experience. Any good reading material?
Wrapping a library in your own concept allows you to define what's right for your application. It has the effect of pushing the third party library out to the edges of your system, replaced with whatever you wrapped it with. This makes replacing it, for testing or any other reason, much easier than if it's proliferated throughout your code unwrapped.
Wrappers should be simple, so creating and documenting them (beyond a few integration tests to understand how the library works) shouldn't be a huge concern.
But where design comes into play is in determining where and when you need to separate these concerns. There are times to do this, and there are times not to.
For example (Perl example here), in LedgerSMB we layer some things. We layer the templating engine. We layer (now, for 1.4) arbitrary precision floats. We layer datetime objects. Many of these are layered in such a way that they are transparent to most of the code.
But there are a lot of things that aren't layered because there isn't a clear case for doing so right now.
(As a footnote, PGObject requires that applications layer the underlying framework because there are certain decisions we don't feel comfortable making for the developer, such as database connection management.)
I agree. Wrapping your library for testing is really pushing the boundaries of sensibility.
Low density code that doesn't provide application logic is one of my pet peeves.
There is a philosophy that started in the '90s (and Microsoft was a proponent of it) that adding more layers to an application would make it more malleable, but in a typical CRUD web app, layers only bloat the code and make it slower.
I'd suspect adding a wrapper to a date class just for the sake of testing is more likely to add bugs than remove them.
That sounds more like an argument for putting an interface in front of an implementation on the app side, and then mocking that interface in the test. Which is totally fine, because then you are isolating the custom implementation (a light shell to the external library). As opposed to mocking the external implementation in the test.
That's more along the lines of what a lot of TDD literature says.
Write adapters or facades that wrap external libraries, and use those in your own code's unit tests. This makes you less bound to a specific library as well. Don't mock the outside world when testing your adapters/facades/whatever; instead, write integration tests that cover your adapters using the real outside world.
You'd also do complete end to end tests where the entire system is used as if it were in production (acceptance testing). TDD makes a lot more sense if you think of it in those three layers: acceptance, integration, unit.
"Outside world" means anything that's not your code - networking, filesystem, external libraries, etc.
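The adapter-plus-integration-test idea above can be sketched as follows. `FileStore` and its interface are hypothetical names, chosen only to show the shape of the pattern:

```python
# Sketch: application code talks to a small storage interface; only the
# adapter touches the filesystem. Unit tests would mock the adapter;
# this integration test exercises the real outside world.
import json
import os
import tempfile

class FileStore:
    """Adapter: wraps the filesystem behind a tiny interface."""
    def __init__(self, path):
        self.path = path

    def save(self, data):
        with open(self.path, "w") as f:
            json.dump(data, f)

    def load(self):
        with open(self.path) as f:
            return json.load(f)

# integration test for the adapter: no mocks, real filesystem
path = os.path.join(tempfile.mkdtemp(), "state.json")
store = FileStore(path)
store.save({"count": 3})
assert store.load() == {"count": 3}
print("adapter round-trips through the real filesystem")
```

Everything above the adapter gets fast, mocked unit tests; the adapter itself earns its trust from a handful of integration tests like this one; and acceptance tests cover the assembled system end to end.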
I've felt the same, but the syllogism path is hard to explain. Basically, many of these services are founded the same way Amazon was founded - completely unsustainable, hoping something magic happens to make them sustainable later. Amazon wouldn't have survived without the dotcom scene going crazy, and it barely got through to the other side. The music services exist by seeking to convince end users that they are entitled to "mostly free" music, and the only reason they can supply that service is debt funding. Their hope is that the appetite they instill in the end user will give the business enough support to argue favorable licensing terms from labels and the government.
So when that happens, the party that hurts is the songwriter. You already have way too many people believing the canard that an indie songwriter should only expect to make money through touring and merchandise, and never from their recording efforts. That opinion is self-serving, pushed by those very services ("use your songs as a marketing expense!"), when recording is actually supposed to be part of the musician's livelihood.
So what happens when a musician has less and less chance of any kind of financial remuneration from their actual songwriting? It's a disincentive to focus on original craftsmanship, and an incentive to churn out greater volumes of homogeneous crap in an effort to capture a very tiny piece of a very large homogeneous pie.
The end result from the perspective of the listener is that they never experience the counterfactual; they never know what they're missing. Sure, you might be enjoying that latest "indie" piece that is really just a Max Martin form with a Decemberists diphthong singing tone combined with a couple of dubstep beats, but is that really original?
You've got some good points, but I actually believe it's a bit bleaker :)
I believe the role of the musician is too important to be simply cast as a typical capitalist job - e.g. do it 40 hours a week and get a salary. Although I'll accept this until the rest of sedentary culture reforms (not likely in this life).
The problem is our celebrity culture. We turn our efforts away from the 1000s of less famous musicians around us to the handful of celebrity musicians.
Classical music culture, with its reverence and deification of a handful of sponsored artists, provides the template for rock music. Hip hop culture successfully followed the template. Jazz mainly dodged it. Recorded media (sheet music and the record) catalyzed it.
Any technological aid to music culture can be judged by whether it leads to the (imo benevolent) fractioning of music - the redirecting of attention toward our immediate musicians - or whether it leverages and bolsters the celebrity culture. Any capitalistic enterprise is blind - or farcical - to these ends and will randomly support whatever outcomes lead to its own survival. However, this randomness must be biased toward existing resources, e.g. if not doing well, support the thread of celebrity culture to survive. This pattern continues for the individual musician, too, as you point out.
So, from this perspective, I appreciate services which connect individuals with musicians of their choosing, but it's not like connecting people to Primary Care Physicians, a geographically-constrained problem. Instead, it facilitates people following their own flawed decision-making processes, following the crowds. So it becomes an issue of individual freedom, which I adore, even though so many people will not use it optimally for themselves.
A functional democracy is related to the respect for reason - the ability to reason why things should be a certain way, and have one's arguments be respected even if they are counterintuitive. If the media markets are advanced enough to trump reason, or if the culture is such that they are still driven too much by superstitions and religion, then functional democracy will have a really tough time taking hold.