It occurred to me that putting the month before the day is the most practical arrangement, because the month is the most important piece of a date: years change only every 365 days or so, and a day gains meaning only when associated with a month. In other words, the month, being the most salient piece, comes first.
I disagree that the most important information is the month — it really depends on context.
Most dates I work with are near the present day. Scheduling something next week, referring to last week. Very rarely do I exceed the current month unless I am near a month boundary. The current month is nearly always known information.
In this case, the day is the first piece of information I look for (and almost always when I want to know today's date, all I am really asking for is "what day of the month are we on today?").
That seems a bit contrived. Whether the month is most salient depends on the context.
And (as a European) I'm constantly amazed by how weird and confusing it is to put the middle-level information up front. If there were different separators (e.g. if today were 12,16/2013) I think I'd struggle less.
What about timestamps, by that logic? I think there the hour is the most important datum, then the month, then the day, as you rank them. Minutes come after that. The year is not that relevant, and seconds are very secondary. So, my comment is written at 11-12-16:59-2013:10 GMT+1. Not very readable for sure, but much more practical ;)
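For fun, that salience-ordered timestamp can be produced with an ordinary strftime pattern (a toy illustration of the format above, not something anyone should actually use):

```python
from datetime import datetime

# Hour, month, day, minute, year, second -- the "salience" order
# proposed above, rendered with a single strftime pattern.
t = datetime(2013, 12, 16, 11, 59, 10)
print(t.strftime("%H-%m-%d:%M-%Y:%S"))  # -> 11-12-16:59-2013:10
```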
I still use Photoshop 6.0 circa 1999. It loads in a few seconds max, and it has all the core functionality necessary for web graphics work. In fact, even PS 5.5 would do fine because it was the first version that supported Save for Web.
If you are using the latest version of Photoshop for web work only, you are wasting time and money for tons of features you don't really need.
Agreed. I don't think Adobe has ever really positioned PS as a web tool; I always thought of it more as something for people to process photos with, including huge photo files intended for billboards etc.
It's inspired by Blockout. There is a lot to improve, but there is some potential in such a game. I was actually considering turning it into a tutorial: it's around 500 lines of code and might make a good, simple introduction to WebGL games. Would you like to read such a thing?
I strongly believe that the following excerpt from the article directly applies to any line of work including software development:
"I don’t know if an artist can last by meeting the current public taste, the taste from the last quarterly report. I don’t think you can last by following demographics and carefully meeting expectations. I don’t know many lasting works of art that are condescending or deliberately stupid or were created as content.
Don’t tell me I’m a brand. I’m famous and people recognize me, but I can’t look in the mirror and see my brand identity.
Keep talking about brands and you know what you’ll get? Bad clothes. Bad hair. Bad books. Bad movies. And bad records. And bankrupt businesses. Rides that were fun for a year with no employee loyalty but everyone got rich fucking you. Who wants that? The answer is purity. We can afford it. Let’s go find it again while we can."
I have been following Fingleton, the author of this article, for quite a while. His main thesis, that "Japan's lost decade is an illusion," has not changed much over the years. In fact, he was among the group of "Japan bashers" in the 90s, whose main argument was that Japan's Ministry of Finance (MOF) was intentionally painting a grim picture of Japan to gain advantage in trade negotiations. This argument holds little water these days: it is next to impossible to hide the true state of a country when access to information is so easy thanks to the Internet.
Fingleton also claims that the Japanese are early adopters of technology, which is only partly true. The Japanese, in general, prefer to excel at something through manual skill, and they take great pride in that. They resort to technology only when manual skill is not enough. Making good cars requires industrial robots, so they have plenty of them, but in other parts of industry, adoption of advanced technology can be quite limited. In other words, Japanese companies often find themselves at local maxima compared to their American counterparts. Just look at the current state of Japan's once-great electronics companies: Sony, Panasonic, Sanyo, NEC, Hitachi, etc. Only Canon seems to be doing fine these days.
> This argument holds little water these days: it is next to impossible to hide information about the true state of a country when access to information is so easy thanks to the Internet.
It's actually quite easy:
Step 1: recognize that gathering macroeconomic statistics is really hard. One example: if people previously spent $3 on salsa, but now they spend $5 on guacamole, what is the inflation rate for Mexican dips?
Step 2: come up with a catchy title for a statistic, and make sure reporters know all about it. E.g., "the burrito index, which measures how much good Mexican food costs."
Step 3: tweak the definition of the statistic until you get the result you want.
Want more inflation? All Mexican dips are equal, so inflation is up 66%. Want less inflation? Guacamole is 2x better than salsa, hence dips have actually dropped in price 17%.
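Both answers come straight out of where you put the quality adjustment; a quick sketch of the arithmetic, using the numbers from the example (the 2x quality factor is the tunable knob):

```python
# Old basket: $3 salsa. New basket: $5 guacamole.
old_price, new_price = 3.0, 5.0

# Knob setting 1: all Mexican dips are equal (quality factor 1).
equal = new_price / old_price - 1            # +66.7% inflation

# Knob setting 2: guacamole is 2x better, so its
# quality-adjusted price is $2.50 -- cheaper than the old salsa.
adjusted = (new_price / 2) / old_price - 1   # -16.7%: prices "dropped" 17%

print(f"{equal:+.1%}, {adjusted:+.1%}")      # -> +66.7%, -16.7%
```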
Actually, substitution is used in the CPI basket of goods. Fresh fish can be replaced by canned tuna, for instance.
More amusing is the application of "hedonics." This is perhaps best explained by an example. Consider that the cost of gasoline went up when certain additives were made required by the government. This cost is not fully reflected in the CPI, because there is a "hedonic thrill" associated with the perceived improved environmental friendliness (or similar nonsense) of the new required formulation of gasoline. Basically, the new gas should make you "feel better" about using it, so there is an improvement of quality that offsets the full increase in cost. Does this make sense given that fresh fish can be replaced by canned tuna? No...
Substitution and hedonics are very useful for keeping the CPI from rising, which has a number of benefits for the government, as various welfare payments and wage structures are based on it.
I understand what you are saying, but you've just shown how versatile substitution is. If wheat spikes they can substitute oats for the time being. If chocolate spikes they can substitute sugar candy. However, I think that when a non-economist thinks of the CPI, what they are considering is if they can maintain the same quality of life over a period of time for the same amount of money.
As you say, this does keep the CPI stable, but I think that the way an economist experiences the CPI is different from the way the average consumer does. For the consumer who has a relatively predictable pattern of food purchases, the response is "Wow, why does it cost so much more for me to buy the same stuff this year than last year?" They experience price inflation and their economic decisions are affected by it, but the CPI remains the same.
If we want to talk about how consumers weather pricing changes, then substitution doesn't really convey their experience. Yes, the average person will say, "let me get something else instead of peanut butter," but the CPI doesn't account for the "anhedonic pain" of substitution, to poke fun at their own terminology.
> ...the CPI doesn't account for the "anhedonic pain" of substitution, to poke fun at their own terminology.
Yes it does, or at least it attempts to. The hedonic penalties are exactly the hedonic benefits with a minus sign.
If guacamole is 2x as good as salsa, then if the price of guacamole doubles from $5 to $10 and consumers switch to $3 salsa (quality-adjusted: $6 in guacamole terms), Mexican dips have increased in price 20%.
Look, I'm not saying a good job is done. In fact, I've argued many times that inflation is wildly overstated. I'm just saying that the hedonic adjustments and substitution adjustments are necessary if you want to have any sort of CPI-like measurement.
Of course, it's also the case that CPI doesn't actually measure inflation. To get a real inflation measure, you'd need to measure the price of a fixed basket of goods. But then no statistic like CPI would even be possible, since inflation would no longer be a rate: Inflation(1970, 2012) would not be equal to Inflation(1970, 1990) x Inflation(1990, 2012).
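A toy illustration of that last point (invented prices, and a basket that shifts from salsa to guacamole in 1990): each fixed-basket measurement prices the basket of its own starting year, so the chained sub-periods don't multiply out to the full-period figure.

```python
# Invented prices for two goods in 1970, 1990, 2012.
salsa = {1970: 3.0, 1990: 4.0, 2012: 8.0}
guac = {1970: 5.0, 1990: 5.0, 2012: 6.0}

# The basket consumers actually bought at each starting year:
# salsa in 1970, guacamole from 1990 on.
basket = {1970: salsa, 1990: guac}

def inflation(start, end):
    """Fixed-basket inflation: price the starting year's basket at both dates."""
    good = basket[start]
    return good[end] / good[start]

full = inflation(1970, 2012)                         # 8/3 ~ 2.67
chained = inflation(1970, 1990) * inflation(1990, 2012)  # (4/3)*(6/5) = 1.6
print(full, chained)  # not equal: fixed-basket inflation doesn't chain
```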
Joel claims that Trello can be used for kanban. This is not true, because Trello doesn't support WIP (Work-In-Progress) limits, without which you don't have kanban.
If you want to know your team's capacity, you have to limit the number of tasks they work on at the same time. Once you limit WIP, several interesting things will happen:
* A backlog of tasks will emerge.
* You will be able to measure how much time is spent on each task.
* Tasks will get finished faster.
The first two results are not very surprising, because by introducing WIP limits you have effectively eliminated multitasking. But how on earth do tasks get finished faster?
Unlike computers with multiple processor cores, our brains have one or at best two cores. Without WIP limits, when there are too many tasks to work on, we spend more time switching between tasks than on the tasks themselves.
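A toy model of that claim (my own numbers: one person, five equal tasks, and even a switch cost of zero): round-robin multitasking finishes everything at the very end, while a WIP limit of one delivers finished tasks along the way.

```python
TASKS, HOURS = 5, 4  # five tasks of four hours each, one person

# Round-robin "multitasking": all tasks progress in parallel slices,
# so every task finishes only when the total 20 hours of work is done.
round_robin = [TASKS * HOURS] * TASKS              # [20, 20, 20, 20, 20]

# WIP limit of 1: one task at a time, finishing at hours 4, 8, 12, 16, 20.
wip_limited = [(i + 1) * HOURS for i in range(TASKS)]

avg = lambda xs: sum(xs) / len(xs)
print(avg(round_robin), avg(wip_limited))          # -> 20.0 12.0
```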
Bottlenecks become visible. Since everyone is working on a limited number of tasks, some finish theirs on time, some get overloaded, and some cannot finish their work because they need input from those who are overloaded. Team members with free capacity can help those who are overloaded. Better yet, they can even come up with ideas on how to fix the newly discovered bottlenecks.
Disclaimer: I am the author of http://flow.io, a lean project management application based on kanban.
> Trello doesn't support WIP (Work-In-Progress) limits
To be fair, bulletin boards with index cards don't have WIP limits either.
It is unlikely a system light enough for frictionless kanban will have an accurate understanding of the work-width (units of work / units of time) of a given task. For example, in your tour, "58% complete" of "Custom CSS for homepage" is suspiciously precise.
Saying kanban is a task board with WIP limits is a bit like saying Lean Startup is about releasing buggy first version MVPs. It's cargo culting at best. Kanban is about keeping focus, creating quality gates, pull vs. push and so much more than just WIP limits. You can always limit WIP manually. How hard is it to simply choose to limit the WIP on a board column?
Your comment doesn't respond to what he said. He said that without WIP limits, you don't have kanban, while you responded to a statement that kanban is only task boards plus WIP.
The two are not equivalent. It is as if I said that without polite discourse you don't have a productive debate, and someone else called me an idiot for claiming that good manners and threaded comments are all HN needs.
Automagic WIP limits may not be necessary for implementing kanban, but that's a design debate, not a what-is-kanban debate. You may have a good point about whether WIP limits are necessary for a good "implementing kanban" application.
I disagree. I think WIP really needs to be a built-in concept in a kanban app. The thing about WIP is that it points out bottlenecks and problems in your process quickly and concisely. It's pretty common for teams to go over WIP, and to do it willingly, to let the red flag the system produces point out the problem to the rest of the team.
If you have to mentally keep track of WIP, inside each team member's head, you lose pretty much all of that.
Rigid rules mean you can't adapt to reality. Adapting to reality is necessary for good software development.
Sometimes you just can't let something else wait until you are done with what you were doing. Sometimes you were doing something that can't meaningfully progress until something else has been fixed. Rigid computer enforced rules like that just mean that people will have to work around them, wasting energy and producing the wrong things.
WIP limits shouldn't be rigidly enforced. Instead a warning should show when you are over WIP, and some stats gathered in the background. If your team goes over WIP occasionally, it's usually fine. If you go over WIP a lot, your process is broken somehow. How, when, who and where you went over WIP can give you a lot of info on where your bottlenecks are and how to improve your process.
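A minimal sketch of that "warn, don't block" behavior (hypothetical column names, limits, and data structures, not any real tool's API):

```python
from datetime import datetime

WIP_LIMITS = {"In Progress": 5, "Review": 3}  # hypothetical column limits
violations = []  # background stats: when and where limits were exceeded

def check_wip(board):
    """Return warnings for over-limit columns instead of blocking moves."""
    warnings = []
    for column, cards in board.items():
        limit = WIP_LIMITS.get(column)
        if limit is not None and len(cards) > limit:
            violations.append((datetime.now(), column, len(cards)))
            warnings.append(f"{column} over WIP: {len(cards)}/{limit}")
    return warnings

board = {"In Progress": list("abcdef"), "Review": ["g"]}
print(check_wip(board))  # -> ['In Progress over WIP: 6/5']
```

Mining the `violations` log afterwards is what gives you the "how, when, who and where" picture of your bottlenecks.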
I've been on a Kanban team for the past year now and I can say with confidence that if our tool didn't track WIP, we would be less effective as a team.
I like Trello specifically because I can create a list called 'Work in Progress (MAX 5 ITEMS)' and it will more or less do what you've described, without forcing process on you. Not sure how well this scales to big teams, I'll admit.
I have downvoted you, on principle, because you advocated a religious method of software development and preached some psychobabble to make it seem right.
Software development needs to be flexible because each problem is unique and different from all the others in important aspects.
Kanban is a tool on a long, long, long list of tools which software developers should learn to use, modify, adapt, and improve in order to become better programmers. Trello is one item on that list. And yes, it can be used for kanban if the team wants to -- just like you could use an actual whiteboard.
My favorite quote from the article: "Shrink your important code."
and he explains why:
"There was a paper recently that noted that all of the various code quality metrics correlated at least as strongly with code size as error rate, making code size alone give essentially the same error predicting ability. Shrink your important code."
Fair observation. But I'd like to know whether the paper's research covered only static languages like C/C++/Java/C#, or dynamic languages as well...
Because I have a few people on my back who keep screaming that dynamic languages produce fewer lines of code, and who jump to the conclusion that they are "therefore better in terms of quality," while the code this group of people produces seems similar to Perl: less code, but unreadable (requiring intensive re-reading) if you go away for a few days and come back to work on it. Lots of meta-programming, and shortcuts preferred over readability. Fewer bugs? Hell no...
It's a fairly well-publicised result that the rate of errors introduced is proportional to lines of code, independent of language. Having said that, my googlefu is failing me and I can't find a cite for it. I'm pretty sure it's mentioned in Code Complete - anyone with a copy handy to help me out?
But do they compare the exact same system built using two programming languages from different paradigms?
i.e.: Java vs Ruby or Java vs LISP
At some point, the complexity of the system and the available tools/libraries add more parameters to the bug-rate formula, which may throw off the paper's result.
Consider this: a fellow worker had to write something that uses eBay's API. There is an existing eBay gem available, and he tried that first. He gave up after a few hours due to bugs and undocumented behavior. His other option? SOAP/WSDL. Now, based on what we know, Java has better SOAP support than Ruby. We're not saying Ruby can't do it, but we questioned the usability of SOAP from Ruby. Essentially, one must read the WSDL (treating it as the documentation) to figure out the data types in Ruby. And what happens if eBay updates the WSDL at some point in the future? More WSDL proofreading. Not so in Java: with the help of the IDE and compiler, you can easily navigate WSDL objects and detect breakage if the WSDL has changed (via WSDL code-gen).
At this point, it seems that using Java is a better option as opposed to Ruby.
This is where such research tends to be questionable: "when all things stay the same..."
You are absolutely right that manually updating hours is a huge chore, and not surprisingly it is universally despised. Making estimates beforehand in scrum is also problematic because, as you pointed out, it doesn't account for the time spent in previous phases. Kanban, however, solves the problems inherent in scrum, and virtually all digital kanban solutions automatically track the time spent in each phase of your workflow.
So it is possible to measure productivity relatively painlessly in kanban, as it limits work in progress (discouraging multitasking and revealing bottlenecks in your workflow). Kanban also takes idle time into account (e.g. time spent waiting for approval from management or another dept). See http://flow.io/how-kanban-can-help-you-measure-productivity.... for a brief overview of kanban's benefits.
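The sort of automatic per-phase timing described above can be sketched like this (illustrative data and phase names, not flow.io's actual model): each card records when it entered each column, and per-phase time falls out of the differences.

```python
from datetime import datetime

# When one card entered each column of the board (made-up history).
history = [
    ("Backlog", datetime(2013, 1, 1)),
    ("In Progress", datetime(2013, 1, 3)),
    ("Waiting on approval", datetime(2013, 1, 6)),  # idle time is tracked too
    ("Done", datetime(2013, 1, 7)),
]

def days_per_phase(history):
    """Time spent in each phase = gap between consecutive column entries."""
    return {phase: (end - start).days
            for (phase, start), (_, end) in zip(history, history[1:])}

print(days_per_phase(history))
# -> {'Backlog': 2, 'In Progress': 3, 'Waiting on approval': 1}
```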
Disclaimer: I am the author of flow.io, a lean project management app based on kanban.
I have always had an inkling that video games should be considered applied math. According to Tarn, the author of Dwarf Fortress, who has a Ph.D. in math, making games "scratches all the same itches" as math. That sounds just right to me.
These results were expected by neuroscientists: different senses are encoded differently in the brain, so the subjects were unable to link them at first. The surprising thing here is the speed at which the association between the tactile and visual encodings formed, and the article makes that clear.