I dislike this line of reasoning. If you add up the effects over many people then obviously you get a large number of minutes that were wasted, but those people collectively had many minutes to waste. Those minutes cannot simply be added together, so you shouldn't invite comparisons to what you could do with 3 extra years of contiguous time. Brooks argued the same way about the mythical man-month.
It's an annoying, emotional statistic, used in the same way that my cereal box tells me they've saved 30 lorry-loads of cardboard from being produced. In the context of the millions of lorry-loads of rubbish generated, it's not important.
I also dislike the 0.05% statistic (which is essentially saying Americans spend 0.05% of their time queueing at the DMV) as it's uselessly emphasized with "a significant amount considering the size of the GDP".
(I'm the author.) Thanks for the feedback. I agree that the statistics are emotional and not completely translatable to productive time, though I do think the numbers are still significant. Specifically, I think these types of savings compound, so that if you can save one minute of time on each of 20 daily tasks, then at the end of the day you could have an extra 20-minute continuous block, which is much more valuable than 20 1-minute blocks.
Re: .05% -- I personally think mentioning GDP is useful because .05% sounds fairly trivial, but .05% of the GDP is still something like eight billion dollars, which is a very non-trivial amount.
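For what it's worth, the arithmetic checks out under the assumption of a roughly $16 trillion US GDP (my number, not one from the article):

```python
# Back-of-the-envelope check of the 0.05%-of-GDP figure.
gdp = 16e12        # assumed ~2013 US GDP, in dollars
fraction = 0.0005  # 0.05%

value = gdp * fraction
print(f"${value / 1e9:.0f} billion")  # -> $8 billion
```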
Finally, I really like the philosophy of the blog post you linked to, but I think it's talking about a slightly different point. The examples I gave are about a single organization doing something that saves an aggregate of millions of hours or millions of dollars; they are not about trying to convince millions of individuals make a conscious choice to each save one dollar or a few minutes. In the language of the linked post, I would consider the latter to be a million minnows while the former is a single shark.
I think the author's purpose is to explore where value can be added to various processes. The first part is to identify areas where value could potentially be added. In some of the examples, the author doesn't seem to address the concept of concentrated benefits outweighing dispersed costs. Another issue is that one party's muda is often another's profit.
This is why I think the smartphone took off as rapidly as it did. If you look at a modern waiting room, 80% of people are staring at their phones, playing games, surfing the web or checking their email.
Previously, people might have a book or a discman on them but many people had nothing and were simply sitting bored. Now, they're at least at 50% productivity if not more.
The smartphone alone eliminates half of the losses that the DMV has caused.
Yes, it's like the '70s, when the UK had massive power cuts and the entire country worked a 3-day week - output did not decline by 2/5.
And say you take a half day off to visit the DMV: in 95% of cases you're not going to get more or less done that week - unless you're a piece worker on a production line, which for HN readers is probably zero.
But, in seriousness, with those buttons you can see the reach of that post on each social network. It would be fairly trivial for you to verify those numbers are correct and to come to your own conclusions about whether my blog has "influence".
I'm not against badges per se - but I don't like an obfuscated way of calculating something.
Given that I know very little about Klout, perhaps you could explain why you participated in the first place. I don't wish to sound rude; I'm genuinely interested in what exactly you did, and why you thought it was worth doing.
Basically, can you expand on:
"A vaguely plausible "score" that you can use to justify your "investment" in tweeting all day long."
How come?! This is blatantly illegal in Europe!
I genuinely don't see how they could get passive consent unless FB and Twitter themselves give them access. Or do they just scrape public profiles?
In Europe it's opt-in by default. You have to explicitly consent to the creation of an account, as far as I understand it. So, to me, they could scrape content but not go as far as automatically creating an account without your consent. Or I've misunderstood something, and I would be glad if someone explained it to me.
I don't think they create an account for you, they just compute a score based on your activity on social media accounts you already have.
So if you have a Twitter account, they can generate a score for the Twitter user using publicly available information, and then, if you sign up to Klout, link those existing calculations to the newly created account.
If you ask me to solve a problem, I can intelligently talk to you about what a program that solves that problem is expected to do, and happily whiteboard a big picture idea.
If I don't know the language required for implementation, that's ok because I'm pretty confident I can work out the basics and what I need to code.
To meet your points:
A) I haven't really put myself out there in a while (over a year); I've been 'prepping'
B) My algorithms are poor. While I understand the concept of Big O notation, I can't readily look at something and say "oh, that's a _______ sort"
For what it's worth, I didn't have to whiteboard once during my several interviews. If you're interviewing with new SF or NYC startups, you'll find many of them care far more about practical* skills than pop quizzes. This certainly isn't universal, but it shouldn't frighten you away from trying interviews.
* (not that algorithm knowledge isn't practical, but for a lot of smaller, "web-interface-around-a-database" startups, they're not nearly as important as being comfortable around a full web stack)
Algorithms aren't really about recognising standard implementations - although in preparation you'll certainly learn that. It's about having a deep understanding of programs' flow, and being able to recreate and combine patterns when needed to solve a problem.
You should "prep" your algorithms knowledge - it plays a big role in a lot of interviews.
I wish I weren't so gullible! When this thread came around the first time, it seemed quite interesting, but the comments totally convinced me it was supposed to be a joke. Now I'll have to wait until 2014!
To me the "jump right in to a huge project" approach seems like a great way to learn a language, even if it doesn't necessarily produce high quality results (or even functioning results). Nevertheless OP is entitled to their opinion.
It's a great approach to learn a language, but you have to be careful what kind of "huge" you're dealing with. There's "huge" in the sense of technical ambition, and there's "huge" in the sense of "large number of implicated functional areas" and there's "huge" in the sense of "sprawling requirements that capture many different people's needs".
A C++ compiler is all three; it's challenging, like any compiler for a real language is; it's functionally dense (preprocessing, parsing, evaluating, code generation), and it's one of the most sprawling language specifications there is.
Building a native-code-generating Lisp would not be an unreasonable learn-a-new-language project.
"Building a native-code-generating Lisp would not be an unreasonable learn-a-new-language project."
Eh, I do not think it is terribly hard if optimizations are not a concern. Yeah, CLtL2 is rather lengthy, but the bulk of it can be implemented with macros and functions if you just want something that will work. I suppose implementing those macros/functions might teach you the language, but I think you would learn a whole lot more implementing something else (say, an email client).
The way I see it, Google set up a system that is easy to conduct surveillance on. Gigabytes of storage, no way to actually delete messages over IMAP or POP3, and in various subtle ways GMail discourages the use of encryption.
This is all probably inadvertent, but it indicates that protecting users from this sort of surveillance is not a priority.
If you don't believe that they delete it when you tell them to delete it, then using Gmail is already a non-starter. Also, yes, defaults do matter: the outcry from users from not being able to undelete an email would be much louder than those wanting instant deletion.
"the outcry from users from not being able to undelete an email would be much louder than those wanting instant deletion."
Seriously, I have trouble believing that anyone would complain about "delete" carrying any meaning other than "delete." I also find it hard to take such people seriously, given the existence of a Trash folder as a first stop for deleted messages, and All Mail as a second stop (and what is the default for deleting from All Mail? Having the message come right back to All Mail! Brilliant...).
Thanks for the tip on how to fix this behavior. It is an easy option to miss...
Using the default Gmail settings of (a) Enable IMAP and (b) Auto Expunge on, I just deleted an email via Sparrow (IMAP). The email instantly went straight into the Trash folder and is not visible in All Mail. I expect the email to be auto-deleted in about 30 days. I agree that the behavior you're describing would be weird, but I'm not seeing it. Can you please double check?
No, I don't have a citation, but is it so hard to believe that lots of people want an undelete? I accidentally delete emails all the time and I go into the trash and fish them out. Not giving users an undo seems ... unfriendly.
EDIT: I see, it applies to Custom Folders and delete only removes the label; doesn't move it to the trash. Here's what Google has to say about it. Is this a default IMAP behavior or Gmail-IMAP specific? https://support.google.com/mail/answer/78755?hl=en
I just confirmed the behavior: I deleted a message from my Inbox, and my mail client put it in my Trash folder. I deleted the message from Trash, and it was still in All Mail. I deleted from All Mail, and when I refresh the folder it is still there. I checked the network log, and the correct EXPUNGE commands are being sent.
As for "undelete," I believe the purpose of the Trash folder is to support that. I have yet to find the email client that does not, as a default, store deleted messages in the Trash folder. I am not disputing that people want that functionality, what I am saying is that I do not think people want the behavior that I am seeing.
The fact that GMail treats "delete" as "remove labels" is very problematic. IMAP supports labels, including client-defined labels. Treating folders as "labels" only breaks the abstraction IMAP presents. I suppose this was part of Mark Crispin's gripes with GMail.
The problem is that the default behavior for people who use an IMAP client is for "delete" to actually mean "keep a copy in All Mail." The server will also give a false OK status to the client following the EXPUNGE command (see the IMAP4 RFC for details), so the user is not alerted to the fact that messages are not being deleted.
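To make the client side concrete, here is a minimal sketch (Python's imaplib; the connection, mailbox, and UID are placeholders) of the standard RFC 3501 delete sequence a desktop client sends; under default Gmail settings the server acknowledges it, yet the message can remain in All Mail:

```python
import imaplib

def delete_message(conn: imaplib.IMAP4, mailbox: str, uid: bytes) -> None:
    """Standard IMAP delete: flag the message, then expunge flagged messages."""
    conn.select(mailbox)
    # Mark the target message as deleted...
    conn.uid("STORE", uid, "+FLAGS", r"(\Deleted)")
    # ...then ask the server to permanently remove all flagged messages.
    conn.expunge()
```

Against a real server you would first create the connection with something like `imaplib.IMAP4_SSL("imap.gmail.com")`; whether the expunged message actually disappears server-side is exactly the Auto-Expunge behavior being discussed here.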
I don't follow this line of logic. If you have to build technical systems whose sole purpose is to sidestep laws in order to be good, then aren't the laws at fault here? I find that accepting that governments will do whatever they want, and that we will just use end-to-end encryption, Tor, and Bitcoin, defeats the purpose of having regulated currencies, governments, and democracies. If there is an action item on the list, it should be to change unjust laws, not to circumvent them. The latter is both defeatist and useless, because if you are not keeping a check on the government, they will keep bringing in more laws that hinder your sense of justice, and your technical system will keep moving its architecture around like a fugitive.
"If you have to build technical systems, the sole purpose of which is to sidestep laws"
Nobody is talking about sidestepping laws. You are not breaking laws, nor breaking the spirit of the law, if your users cannot store many years of their personal communications on your server. Encouraging people to delete mail would have the effect of limiting the government's surveillance power, in an entirely legal fashion.
"aren't the laws at fault here?"
It is not that simple. The relevant laws were written at a time when mail quotas were commonly measured in megabytes, when people had to delete their mail in order to stay under the limit. Back then, if a few personal messages happened to be on the server when a court order was received, it was not such a big deal; those messages probably pertained to very recent things anyway. Now, when a court order for "all of Joe's email" is received, that will very likely include messages dating back years, long before whatever crime Joe is suspected of was even committed. The laws have not been updated in light of these new privacy implications.
"If there is an action item on the list, it should be to change unjust laws not to circumvent them. The latter is both defeatist and useless."
Why not do both? Why should we suffer while we wait for the deliberately slow wheels of government to turn? Neither action excludes the other, you can both circumvent unjust laws and work to take those laws off the books.
> You are not breaking laws, nor breaking the spirit of the law
I wasn't suggesting that either. Sidestepping is avoidance not evasion.
> The laws have not been updated in light of these new privacy implications. ... the deliberately slow wheels of government to turn?
So change the law. And speed up your justice system. Don't elect individuals who slow down progress deliberately.
> you can both circumvent unjust laws and work to take those laws off the books.
And fight the laws and their usage, which is what Google does. It was also the entity that disclosed the original order in this particular case, so that the person in question could take action if possible. It also publishes transparency reports, and it was the first to do so, which gives you an idea of how frequent the usage of these unjust laws is, so that you, as a community, have more information about what your government is doing. If I am not wrong, there have been 2 elections since FISA was passed, and the voting population of the US was aware of the broad surveillance since the first one (Obama based some of his campaign on it). If he didn't act on it, it should have become a point of debate in the re-election. But his re-election was celebrated on Hacker News as much as anywhere else, without a single mention of his failure to reduce the scope of such laws. So guess what? Overbroad surveillance didn't seem to be a priority for most of the people here, and Google has been more transparent and more vigilant about it for longer than the PRISM episode. The gist is that it is easy to transfer blame in this case, but the root cause is solely the astoundingly broad laws and the unquestioning trust that voters put in the current government, which used those laws instead of removing them as it had promised.
It is not contrarian; I was replying to two separate points. The first was the claim that I was saying Google should have set up its system to circumvent the law; that is not what I was saying. The second was the claim that we must either circumvent laws or change them; my point was that the choice is not exclusive.
As for the point about the laws being faulty: what I was saying was that the laws were designed with a particular communications model in mind. Google's system is designed very differently from that model, but despite Google's popularity, and despite other services adopting a similar model, the law has not changed. Just saying that the law is at fault is too simplistic; the laws may have good reasons behind them and may have made sense at the time they were passed (and had technology not changed, they might still make sense now).
Google has a legal team whose full-time job is pushing back on these requests. At some point, they have to either obey the law, or go to prison themselves. Get off your high horse.
It feels like some people won't be satisfied unless Google says "well, our existence has many benefits, but it also makes it slightly easier for the government to spy on people, so we're closing up shop".
The warrant should detail the nature of the crime being investigated, the dates under which they believe a crime was committed and the range of dates that are reasonable to investigate based on the aforementioned facts.
That's reasonable. A blanket warrant impacting all emails since the beginning of time is not reasonable.
If they can't name a crime and give a date or dates, then they shouldn't be asking for a warrant.
"Google has a legal team whose full-time job is pushing back on these requests. At some point, they have to either obey the law, or go to prison themselves. Get off your high horse."
Google has a business model whose entire premise is to push the envelope on privacy and to collect as much as possible and hold it forever--to make a few extra pennies per user. Get off your high horse. Google (and FB) are a menace to us all, even if only because they gather so much private data about us.
"betterunix's g-mail account" is plenty specific as a "place to be searched" and "e-mails" is plenty specific as "things to be seized." If you printed out every letter you've ever sent to someone and put it in a filing cabinet, it would be totally fair game for a warrant to get the contents of the filing cabinet. Or if you kept them on a hard drive, it would be totally fair game to get a warrant to get the hard drive.
In this case, the problem is not investigators overreaching the law. The problem is the amount of information consolidation modern technology enables and encourages.
"If you printed out every letter you've ever sent to someone and put it in a filing cabinet, it would be totally fair game for a warrant to get the contents of the filing cabinet"
Except that people do not typically do that, and it is reasonable to expect that a filing cabinet will contain only important or current documents. With GMail, you typically see personal messages that date back years, probably long before whatever crime the person is suspected of.
"The problem is the amount of information consolidation modern technology enables and encourages."
I think for once, we might agree. The relevant laws were written with a very different model of communication in mind.
Because a computer is a small self-contained thing that you can often fit in a bag? That's my point about information consolidation: the law is rooted in the physical (and necessarily so). You can wax philosophical about "virtual spaces" but at the end of the day a hard drive is a small physical object that is treated the same as any other small physical object.
You are evading the question: "Everything in the house" is obviously not specific enough. And here is where the evasion comes in: The problem is not the size of the container. Does a search warrant even ask you to describe the container? No. It asks you to describe the thing to be seized.
If H&R Block had Whitey Bulger's tax returns, would the warrant be required to describe those documents, or, would all of the small, baggable disk drives at their data center be subject to seizure?
No way would that be considered an over-broad warrant. If, e.g., you suspected someone of financial fraud, it would be totally okay to get a warrant for the contents of all the filing cabinets in his office.
They could have done what they thought was right in the interests of the Internet as a free, unregulated medium and by extension their business which depends on the Internet, even if that course of action (or inaction) was not clearly legal. Then they could have used some of their billions in cash on hand and their team of lawyers to defend themselves against prosecution. They, instead of some random sysadmin, could be the ones bringing this issue of Internet spying to the fore.
Now, before you say "What, do you expect them to break the law?", consider that they routinely push the boundaries of US law (securities law, copyright law, tax law, etc.) and lo and behold they are usually successful.
Now, I expect you might say "What you're suggesting is still nonsensical because resisting orders related to national security is, among other things, far too controversial and not in the interests of shareholders." And I would not disagree with you.
My point is simply that Google, with its vast resources, is in a much better position than Ed Snowden to take on this fight for the future of the Internet and privacy of communications.
You could even argue Google has more skin in the game than Mr. Snowden, or any of us individually (aside from those who exploit others' private information for profit, of course)... because if this controversy brings about knee-jerk changes in privacy law, it could materially affect their bottom line.
I don't understand people's problem with estimating. It's a useful skill. Perhaps it would be better if the questions actually related to technology, rather than golf balls - but the principle is the same.
For instance - "how many hard drives does Gmail need?" requires a rough guess of how many users Gmail has (if you're interviewing at Google, you should know it's 1e8-1e9), how much space each one takes (probably nowhere near a gigabyte on average - let's say 1e8 bytes), and the current capacity of hard drives (1e12 bytes).
Then you can say that they probably need 1e5 hard drives, link it to redundancy, availability, deduplication, backups etc. You can comment that it's feasible to build a datacenter with that many hard drives.
No one cares that the actual number is 12,722 - but you've demonstrated a broad set of knowledge about the current state of technology. Saying "dunno - a billion?" is not going to get you anywhere, and with good reason.
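Spelled out as code, with the same round numbers assumed above (none of these are real Gmail figures):

```python
users = 1e9            # upper end of the assumed 1e8-1e9 user range
bytes_per_user = 1e8   # ~100 MB average mailbox, as assumed above
drive_capacity = 1e12  # ~1 TB drives

raw_drives = users * bytes_per_user / drive_capacity
print(f"{raw_drives:.0e} raw drives")  # -> 1e+05 raw drives
```

Redundancy, backups, and headroom multiply this by a small constant, but the order of magnitude - the thing the interviewer actually cares about - survives.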
I used to think estimation questions were useful. I still think that estimation ability is something a programmer needs for exactly the reasons you state.
However, I used to ask one estimation question (How many hours have you spent coding over the course of your life) on all my interviews. Over time, I lost interest in it because almost everyone got it "right" (took an acceptable route and arrived at a reasonable estimate). The only people that got it wrong were ones I decided to reject for other reasons (this being a semi-technical but mostly ask-about-experience phone interview).
So, although I agree it's a useful skill, based on my personal experience I don't think it's worth asking estimation questions.
In which case, this is lifted in part out of the McKinsey interview playbook: a behavioural interview and a case study, with a resume review first up. Keeping the case studies consistent across a set of interviewees gives the best calibration. Making them realistic problems rather than academic exercises or quizzes means multiple paths can be taken to a variety of right answers.
Laszlo and many others inside his area are ex McKinsey. (As am I)
McKinsey don't operate above the radar, but are responsible for helping with an incredible number of major corporate decisions across the world.
Recruitment and development is McKinsey's core advantage. They attract and retain incredible talent from all sets of places.
Teams work with the top clients - the CEO and team as well as high flyers at more junior levels. Things get done extraordinarily quickly. And well. But make sure that the internal team is up to scratch, and that there is that mandate from the top.
There is no prescription for how to deliver a project, beyond the hypotheses approach.
There is an obligation to dissent, and a client-first mandate. When enforced well, the client gets what they need to hear, not what they want to hear. High-quality consultants don't want to work on projects justifying dumb decisions - and can choose not to.
Use McKinsey and other consultants to help with new issues, not business as usual. They are fantastic for quickly understanding and assessing important questions of strategy, for mergers and acquisitions, organisational design and so on. They help understand the context and set the new agenda, or validate the old one. Generally clients simply don't have enough internal capacity to perform this work alone.
Use them, decide what to do and start doing it - and then get rid of them. Consultants that are camping at a client are in effect wildly expensive employees.
Even if this idea had some merit, I think once candidates start preparing for interviews by reading some idiot's guide to brain teasers, it loses its purpose and further hurts the candidate who is actually skilled, quick on his feet, and has good instincts.
I had a professor who used to say: a great engineer should be able to answer any question in 30 minutes, at some level of precision. Her example was deriving the equations for the dynamics of the space shuttle. You should be able to do it in 30 minutes, even if it's an extreme approximation.
Just yesterday someone asked me whether a particular disk array would be the right size or not. I didn't have any particular number in mind beforehand, yet I was able to say that the proposal was oversized by a factor of five. If someone told me they had a sweet new application for gas stations in California, I would be ready to figure out how much money each installation would need to make to cover salaries for a programmer and a DBA, even if it's just a hallway conversation. It's OK to be a bit off; it's not good to have no idea.
Your point that interviewers read the wrong signals from candidates' responses is a very good one, but it's not specific to estimation questions. It applies as well to straight-up programming questions and probably a lot more.
"These estimates are completely useless in real life"
Oh yes they are, for getting a grasp on what real life entails and what's possible.
With all the NSA scandal/hysteria going around, lots of people are approaching the issue with the presumption "gee, they can't possibly record everyone's phone calls". With a quick estimate I figured recording everyone, all the time, in CD quality, would take just 5% of the federal budget - making it doable instead of improbable or impossible, and making subsets of the scenario (i.e.: just recording phone calls) likely. For those of us who remember 10MB hard drives and 5.25" floppies, such a data scale is staggering - but it's a current reality, and a little estimating provides a reality check.
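Here's roughly what that estimate looks like; every input below is my own assumption (2013-era numbers), and the result should only be trusted to an order of magnitude:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

bitrate = 44_100 * 2 * 2  # CD quality: 44.1 kHz, 16-bit samples, stereo -> bytes/s
population = 3.1e8        # assumed US population
cost_per_gb = 0.05        # assumed ~2013 hard-drive cost, dollars/GB
federal_budget = 3.5e12   # assumed ~2013 US federal budget, dollars

bytes_per_year = bitrate * SECONDS_PER_YEAR * population
storage_cost = bytes_per_year / 1e9 * cost_per_gb
print(f"raw storage: ${storage_cost / 1e9:.0f}B/yr, "
      f"{storage_cost / federal_budget:.1%} of the federal budget")
```

Raw disk alone lands in the low single-digit percent range; a small multiplier for redundancy and operations gets you to the ~5% figure, and either way the conclusion - feasible - holds.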
Likewise grasping the concept of, or even implementing, high-res "eye in the sky" drones. Gigapixel cameras seem like a novel, futuristic, impractical concept ... but after a little estimating involving HD-quality cell-phone cameras, you realize that a 24/7, 30fps, gigapixel camera drone is in fact quite possible for a relatively modest sum (speaking in jurisdictional law-enforcement budget terms). (Takes less than 200 cell-phone cameras and a suitable multiplexer & high-bandwidth downlink, BTW.)
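The parenthetical camera count is easy to sanity-check (the 8 MP sensor size is my assumption for a ~2013 handset):

```python
target_pixels = 1e9      # one gigapixel
pixels_per_sensor = 8e6  # assumed ~8 MP cell-phone sensor

sensors = target_pixels / pixels_per_sensor
print(f"{sensors:.0f} sensors")  # -> 125 sensors
```

125 is comfortably under the "less than 200" figure, leaving margin for overlapping fields of view and dead sensors.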
I had an epiphany about accounting (!) when touring a billion-dollar timeshare (hotel/condo) project. Wanna make a billion dollars? Pick some large expensive project, then estimate your way down to a plan for pulling a few dollars out of a LOT of wallets by dividing, dividing, dividing away into manageable chunks people are willing to shell out a few bucks for.
Such estimates are exercises in how to mentally manage very large scale money, personnel, opportunities, and processes. Wanna make a billion dollars? Charge a buck profit per window to wash a thousand windows for each of a thousand businesses every week for 20 years. Don't laugh, there's some really rich people who made a lot of money charging a buck at a time - because they estimated their way into a profitable vision.
In my experience, the rare quality is not the ability to do these estimates, but the ability to recognize situations where such estimates might be valuable. Most educated people can come up with reasonable estimates if posed the question, but far fewer realize when the question is worth posing.
Imagine a world where car2go does not exist yet. I would expect most of my friends to be able to roughly answer the question: "how many cars do you need to start car2go in City x?", but only a handful, if any, would have the imagination/insight to realize the potential and ask that question in the first place.
Perhaps these kinds of questions would be met with less "wtf" looks if they were asked in reverse, as in: "My manager ordered 10 000 000 hard drives for GMail. Do you think we'll need them?" It's much easier to judge an estimate when you see it than to come up with it (especially in an interview where the emphasis is usually on whether you're right or wrong), at least for me.
The underlying data might be different but the process is the same, you need to figure out what are the contributing factors, how they relate and establish an upper and lower bounds for the values you're assuming.
Once you have data you can make corrections to those bounds, but other than that the process is the same.
It's a skill that a lot of first-time startup founders lack. They have no idea how to estimate the market size for their startup; you need to understand the process of how to build an estimation model.
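A minimal sketch of such an estimation model, where every contributing factor carries a low and a high bound (all the numbers below are made-up placeholders, not data about any real market):

```python
# Each contributing factor is an assumed (low, high) interval.
factors = {
    "metro_population":      (2e6, 4e6),
    "share_who_drive":       (0.5, 0.8),
    "share_you_can_reach":   (0.05, 0.2),
    "revenue_per_user_year": (5.0, 20.0),  # dollars
}

# Multiply the bounds through to get a market-size interval.
low = high = 1.0
for lo, hi in factors.values():
    low *= lo
    high *= hi

print(f"annual market size: ${low:,.0f} to ${high:,.0f}")
```

As real data comes in, you tighten the individual intervals; the multiplicative structure of the model stays the same.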
In every situation I know of, it's not the accuracy of the answer that is being judged but rather the thought process and basic math and assumptions chosen to get there.
In any case, it's one of those things where you tend to do well if you've prepared for it and pretty poorly if you face it for the first time. Probably a better fit for interviewing management consultants than programmers, though.
You're a startup in the cheaper-fuel business. You work on the assumption that most of your users will be mostly going around a single city's metro area, which heavily impacts your UX design.
You now need to make decisions about UI (map or list? what's a good default map zoom level?) and infrastructure (how much gas station data will a single user need in a single request? how does that impact my storage?) and a whole lot of other places.
You need a nice representative metro area that fits a realistic worst-case scenario, say LA.
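For instance, a quick payload estimate for the "how much data per request" question, under assumed numbers (both the station count and the record size are guesses for illustration):

```python
stations_in_metro = 2_000  # assumed worst-case (LA-sized) metro station count
bytes_per_station = 200    # assumed record: id, coordinates, name, current price

payload = stations_in_metro * bytes_per_station
print(f"~{payload // 1024} KiB per full-metro response")  # -> ~390 KiB ...
```

That's small enough to ship the whole metro in one response, which in turn argues for a map view with a metro-level default zoom rather than server-side paging.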