Keep in mind that a traditional degree program does have a huge advantage over a strict MOOC: accountability. It sounds good to say that anybody can go push themselves through one of these courses. Try pushing yourself through ten, and actually writing all the papers and implementing all the code, while working full time and having a family. That grade looming at the end of the semester really does wonders for your motivation. Plus you can get help from live professors and TAs, and the Piazza forums for OMSCS are full of smart, curious students who love talking about the subject at hand. There's a richness to the degree experience that I don't think you get with scattered classes.
(Obvious disclaimer: I'm a current OMSCS student)
For example, in Berlin you pay less than 300€ per semester, and a CS master's takes 10 semesters of regular time. Add two semesters to make the timeline more realistic, and you end up with 3,600€, which is roughly $3,800.
Oh, and it includes a full semester ticket for public transport (which would otherwise cost 970€/year, i.e. 485€/semester). In other words: university education is cheaper than regular public transport, even though it includes that ticket.
Oh, and if your parents don't have that money, you can get half of the university costs + half of the living costs + half of the rental costs from the state. 
And note that Germany is far from the best in Europe regarding education, at universities as well as all other types of schools. It is regularly and heavily criticized for cutting educational expenses more than is good for the country. However, after reading statements like the parent comment, I suspect it is still pretty good.
 More precisely, you get a loan called "BAföG", of which you have to pay back only ~50% after finishing - either in installments or all at once.
 For example, this forces universities into projects financed by third parties (i.e. companies), which adds a strong bias to the research direction and even more so to the results. Even worse, if this research touches business internals (which is easy for any company to claim), the results end up only partly published, or not published at all. To be fair, the latter is more a problem of the law than of third-party projects. There should be a law demanding that everything fully or partly paid for with public money be Open Access as well as Open Source.
Yes there's some education conferred along with the schooling, but credentialism is a huge part of it and even more so in Germany than most countries.
I agree that there are not-yet-mainstream concepts for schools/universities/etc. that should be covered by public money as well, at least partly. Currently, this exploration happens entirely in the private sector, which is simply inadequate (read: too small and too slow) for the society to move forwards with its educational system. A society should actively invest into improving their education the same way they improve on science, and that investment is clearly lacking in Germany and many other countries. We advance the educational topics, but not the educational system itself.
But: Investing into classic schools and universities is still better than not heavily investing into education at all, which is the only real-world alternative I've seen so far. (And I would be glad to be introduced into a real-world third alternative.)
> Yes there's some education conferred along with the schooling
Some? Maybe I was just lucky, but at school and especially at university I got a very good foundation and always felt well prepared to educate myself later on (through books, websites, technical manuals, and so on). Contrary to conventional wisdom, I learned more about critical thinking and judging sources in school/university than anywhere else. Not in all lectures, but in enough lectures that I would otherwise have missed. Without that initial foundation, educating myself later on would have been much harder.
To be clear, I don't think there's a single one-size-fits-all solution for education. One of the core problems I see in formalized schooling is that by its nature it pushes large numbers of people through the same curricula. This may have been good in the early industrial era, but in today's world most job-related skills that can be commoditized are either outsourced or automated. Non-job-related skills are also of great value, though it's not clear that it's best for people to build them in a factory-line style either.
Self-study: I put over a thousand hours into foreign language classes while growing up and got pretty bad results. It's an ancient discipline whose curricula have had centuries to adapt, but it's just not well-suited to formalized schooling. I've met literally thousands of people with advanced degrees in English language study who don't speak that well. I've also met a lot of foreigners who graduated with degrees in Chinese who don't really speak or read comfortably. Though I've hired people for positions in which English language skills were important, I've never even considered looking at their related credentials rather than evaluating their results. Foreign languages are very learnable through self-directed study. This is even true for one's native tongue: most really good writers have gotten there through voracious reading and practicing their craft, not generally through advanced degrees.
Work experience: Another discipline I've seen schooling fall down is in sales. It's a core business skill, but those I've met who have excelled in it have come from a variety of backgrounds, not necessarily business schools. Almost invariably, the people who really know how to sell have gotten that way through work experience, either for themselves or on commission for someone else.
The third alternative, that of direct mentorship, is probably the most powerful I've encountered. Especially in music, athletics or other extremely competitive fields, there's nearly always a mentor behind the top performer, and often there is a series of several mentors over different stages of the learning process.
Now at this point, I suspect you're thinking about the fact that there are two types of educational goals—getting really good at something and getting to minimum level in all the core skills. Though my three examples were related to the first goal, schooling often fails in the second goal as well. It can succeed, but there are still a lot of people who do what must be done to get the credential they want and little else. On the other hand, it's exceedingly rare to meet someone who reads broadly and doesn't end up with at least a decent education.
Maybe you meant 'government' instead?
This is just another way of saying that the society cares about this issue!
To use an analogy: when a society cares about parenthood and children, non-parents pay some share to parents. This is how financial solidarity works. How else should it work? By telling parents that they do a great job, that you appreciate what they do for society, but not giving a single cent to them? That would be hypocritical, not solidary.
> By telling parents that they do a great job, that you appreciate what they do for society, but not giving a single cent to them? That would be hypocritical, not solidary.
It's not hypocritical if you compliment someone without paying them money.
You could say that about any tax-funded expense?
But, yes, I am in favor of a small, limited government.
The reality is lots of people study subjects that don't result in higher salaries but political correctness insists that people be able to study the arts easily, so governments "have" to subsidize education.
I know someone in particular who went on to get a law degree (and was steadily employed in her field from her time of graduation) but was only managing to pay in the double digits toward her loan's principal on a monthly basis -- the rest went toward interest.
My first post-grad school salary was $112,000.
My education was an investment. It paid off.
But definitely not for education.
I hate that sort of terminology, money only exists for you to trade because you are a member of a society that collectively decided it was a good idea.
I completely agree, the motivation and environment that it gives you, the reinforcement that you're actually doing something official and serious as opposed to doing something that very few people outside of the software industry take seriously, the structure it gives you, the fellow students you meet and befriend, it's all huge. Additionally, looking at the Udacity/Coursera material, a lot of it is shallow, poorly taught, and would not have taught me nearly as well as my master's program has taught me.
What courses are you taking?
I imagine there are multiple universities around the globe offering the same sort of educational opportunity.
While the article showed it doesn't matter at interviews, we (at my company) do require technical master's degrees, though they don't need to be in CS. We found the kind of independent thinking, combined with analytics and methodology, to be far better in those who completed a master's degree than in those without. Especially self-reliance, which is important if you do not have groups of 10+ devs working together.
I wonder if this is a contributing factor to the success of those who completed MOOCs. There was zero accountability, but they STILL managed to complete the coursework and assignments.
Accountability is overrated.
I need to be able to self-study at my own pace, according to what my free time allows. I'm also not that interested in formal tests and exams, or even course projects.
I want to learn what I need for whatever project I'm currently working on or going to work on next.
I found out about the OMSCS after I started my work on Udacity's "Self-Driving Car Engineer" nanodegree course, which I am currently taking. My plan is that after I complete the nanodegree, I'll take some kind of online BS program (maybe accelerated if I can find one), then hop over to the OMSCS (hopefully it, or something similar, is still running). I mainly want to do it to prove to myself I can do it - I believe that I can.
I made some early life choices that have led me to where I am, and while they haven't hindered my job or career prospects, I have always wanted to do what I should have done in the first place.
This sentence, plus the inverse correlation between experience and "interview performance" shown there, strongly suggests those interviews are biased toward the platform itself rather than resembling real technical interviews.
From the data, it looks like the questions asked through that service are the kind you learn in university, and after many years of not using that knowledge, it fades away.
This is reinforced by MOOCs typically being the 101 course of whatever subject they cover. It would be interesting to see whether the interview questions are trivia from 101 courses.
The most obvious bias is in the clickbait title. Those 3K interviews are on a specific platform, meaning they're done in a specific way.
So after checking their results, it seems that interviews done using that service benefit people with fresh university or 101-level knowledge.
What worries me more is the lack of improvement and perhaps the moral superiority of ending the article with a "these findings have done nothing to change interviewing.io’s core mission". It feels like the entire statistics game shown there was to feed back what they already knew.
I ask "algorithmic" questions, normally expressed as a legitimate business case (invent a real world problem, solution is implement some algorithm or use specific data structure). My warm up question typically is a simplistic "find the subset in a given collection that matches this specific criteria" (with a subtle implication of "do it efficiently"). The average coder should be able to solve this type of thing, on their own, in about 10 minutes max, 15 with some feedback on improvements.
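To make the shape of that warm-up concrete (the records and the criterion here are invented for illustration, not the interviewer's actual question), a single linear filter pass is all that's expected:

```python
# Hypothetical warm-up: from a collection of user records, find the
# subset matching a criterion -- here, "active users over 30".
# A single linear pass is the efficient answer; no sorting required.

def find_matching(users, min_age=30):
    """Return the subset of users that are active and older than min_age."""
    return [u for u in users if u["active"] and u["age"] > min_age]

users = [
    {"name": "alice", "age": 34, "active": True},
    {"name": "bob",   "age": 28, "active": True},
    {"name": "carol", "age": 41, "active": False},
]
print(find_matching(users))  # only alice matches
```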
Yet, 80% of my candidates take nearly 45 minutes and cannot deliver a workable solution without massive handholding, and I don't even get to my higher order, "real questions". The scenario of a coder who can't solve my warm-up being let loose on code I actively maintain makes my stomach churn.
Until I see the average bar for problem solving go up, I'm going to keep asking basic CS questions in my coding interviews. The job is to solve complex, typically ambiguous problems. Coding is one of the tools - and I want peers who understand the theory behind using those tools.
(I should note, I tend not to pay attention to credentials on a resume. I care more about ability to do the job then past history - though if a candidate has a masters in some field of CS, I might delve into it a bit out of curiosity... they are an expert afterall)
I'll take this guy's word over yours, I'm sorry. Your supposedly simple use case is given in a different environment than a typical day on the job.
My opinion, of course, but a "good programmer" can think through the solution entirely in their head without hands touching keyboard. Code is the byproduct of the person's thoughts. I'm looking for people who think like programmers. If that bar is too high, I'm rather disappointed with the state of the programming community.
I'm not sure I agree – in my experience there is at least a direct correlation between 'knows about scalability, performance and maintainability' and 'knows about data structures and algorithms'. Maybe they aren't going to be able to tell you about the performance edge-cases of different sorts or whatever, but I'd probably still expect to see at least awareness of a difference existing.
>Yet, 80% of my candidates take nearly 45 minutes and cannot deliver a workable solution without massive handholding, and I don't even get to my higher order, "real questions".
You need to ask yourself why you believe the "average coder" should be able to solve that because clearly your beliefs are not founded in reality.
This is what I cannot understand about interviewers who are constantly frustrated with the population's skillset: you obviously have higher skill standards than the average. That is fine. Just accept that you will make fewer hires, because none of you can fix the entire population's skill level.
The problem he describes is trivial, and something you'll encounter as an entry-level web developer on a regular basis. If you can't solve it, you're absolutely not up to the job. In fact, I'll go further: if you literally cannot find an efficient way to filter a list of stuff based on a criterion, you're not even a programmer yet. It doesn't matter if you've "written" a dozen toy webapps by stringing together NPM modules -- not knowing these basic things makes you a danger to any team that hires you.
You can't judge the quality of a test exclusively by the number of people who fail it. If you resume screen for "has written code before" and 80% of your applicants fail that test, is your standard set too high?
(In case you're wondering, that's not a hypothetical example.)
I wish that was the resume screen criteria at all the places that ignored my applications.
The more I read these threads, the more I think resume filtering is part of the problem. It makes some sense; if the bad applicants have to apply to hundreds of jobs to get hired, they have likely learned how to game the resume screen.
A huge problem is that people list technologies as keywords on their resumes and rarely indicate they know what those words even mean. This makes screening next to impossible at scale (think thousands of resumes coming in for a single position).
There's a lot that's broken with the hiring process. One of the toughest thing as a potential candidate is how to properly tailor a resume to fit the bill without "gaming" the hiring process. There is no standard - only methods that work better than others in most situations.
I describe the projects I've worked on, what the problems and challenges were, and how my work helped solve them. The format I use tends to work itself out as a simple narrative outline, and I tailor every resume to be the most relevant to the position I'm applying to, and I submit a cover letter specific to the job. This isn't "gaming" the process, this is doing your homework on the company/position and selling yourself as a viable candidate. If I'm weak in a desired skill, I call it out in the letter and demonstrate my past experience as an example of how I can learn new technologies quickly (something vital for any programmer, IMHO).
That said, after a certain level of experience, relying on your resume to get you in the door isn't going to get you the job you want in most cases. You want to meet people directly and apply via recommendations or requests. Direct contact with peers working at the company you want to apply to matters a lot. Even communication with a recruiter is better than blind-submitting to a job post.
Because I have never done that. That's different than just running through a list and picking out items that meet a criteria.
"find the subset in a given collection that matches this specific criteria"
So basically, a loop through a single table. That's as simple as it gets. You can make the problem more complicated, of course (e.g. "write a method to find the minimum and maximum ages of the male users"), but it's still pretty simple stuff.
A slightly less trivial "algorithm" question that should be equally easy for any decent programmer: I give you a document of english words. Write a function that counts the words, and returns the top ten words seen, by frequency. Now...solve the same problem when it's not a single document, but a stream of words of unspecified length. Don't run out of memory.
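For the single-document case, the standard-library answer is a hash map of counts. A minimal Python sketch (tokenization is deliberately naive, just lowercasing and splitting on whitespace):

```python
from collections import Counter

def top_ten(text):
    """Count words and return the ten most frequent as (word, count) pairs."""
    words = text.lower().split()           # naive tokenization
    return Counter(words).most_common(10)  # O(n) count, heap-based top-k

doc = "the cat sat on the mat and the dog sat on the log"
print(top_ten(doc))
```

The streaming variant is where it gets interesting, as the replies below this comment explore.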
The only thing I can think of is something like the output rewriting an array on update over and over.
var AGESORT = rows.bubblesort("AGE ASCENDING")
var AGESORT1 = rows.bubblesort("AGE DESCENDING")
var MALE = "None"
var MALE1 = "None"
for row in AGESORT
    MALE = row
for row in AGESORT1
    MALE1 = row
print "Min age: " + MALE
print "Max age: " + MALE1
For the "find the min and max of a set" in particular, a lot of folks start out with terrible solutions.
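For contrast with the terrible solutions, the min/max question only needs a single pass with two running values, no sorting at all. A sketch (the record layout is invented for illustration):

```python
def min_max_male_ages(rows):
    """Single pass: track running min and max ages among male users."""
    lo, hi = None, None
    for row in rows:
        if row["sex"] != "male":
            continue
        age = row["age"]
        if lo is None or age < lo:
            lo = age
        if hi is None or age > hi:
            hi = age
    return lo, hi

rows = [{"sex": "male", "age": 40}, {"sex": "female", "age": 22},
        {"sex": "male", "age": 19}]
print(min_max_male_ages(rows))  # (19, 40)
```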
In particular, the case where I want the top 3 words, you don't know the length of the stream, and you get random permutations of the same 4 words until I stop emitting them (where I will end by emitting 3 to break the tie).
That's not to say your problem isn't interesting -- just that while specifically constructing a problem as an example, you created one that's unbounded in required memory (in both storage per value and potentially number of values to store) and then demanded a solution that doesn't run out of memory.
I think most interview questions are similar nonsense.
It's probably a good idea to be careful with your words when you admit that you don't know the answer to a question.
First off: your example (random permutations of the same four words) doesn't require much memory at all. So if you think it does, you're wrong. You might overflow your counters, but that's a different problem.
A stream of random gibberish is certainly more challenging. But the cardinality of the English language isn't infinite (the OED has about 230k words, and that's with a lot of words that nobody ever uses), so even a naive solution doesn't require "unbounded memory", as long as you take the problem statement seriously and don't do something ridiculous. That would be good enough to pass an interview.
But OK, let's say you do have a stream of random latin-encoded gibberish. What then? The problem statement is that you have to determine the top-10 words (or in this case "tokens") by frequency. The cardinality of the set is infinite, but the probability of duplication per token is small, and the output set is tiny. Do you really think you need unbounded storage?
In any case, even if you think a problem is "nonsense", it's probably true that the interviewer has thought about it more than you have. The part that frustrates you is highly likely to be the bit worth probing. A bad candidate will bomb out immediately; a decent candidate will provide a solid, if not perfect solution; a great candidate will solve the problem, see the broader theoretical aspects, and investigate those as well.
This is an example of a trap that interviewers run into when they try to arbitrarily reword questions.
While I could be mistaken, an exact solution to the top-k problem requires O(N) space where N is the number of distinct items. I can trivially think of a stream of tokens that would defeat any reasonable computer in both available memory and general "storage" (ex: 1 quintillion distinct words, then the next 10 words are duplicates of existing words, then end of stream). Since you asked for an algorithm and not a heuristic, it's clear you are looking for an exact answer. So the answer is "Yes, I would need unbounded storage for the new question as asked". Since this is an interview, I'd expect that you'd want me to give you the technically correct and accurate answer.
I've tripped up enough interviewers who have tried to slightly reword questions, but ended up changing their meaning.
An exact solution to an unbounded sequence of purely random text probably does require unbounded memory (I say 'probably' only to hedge my bets here). But depending on the definition of "random", you can put pretty tight bounds on it and only be off by a little. I haven't done the math, but my gut says that a bloom filter, followed by incrementing counts only for positive hits from the filter, would scale well. There may be simpler approaches that make use of word length and the size of the latin alphabet (e.g. "all tokens of size N have 1/C(26,N) probability of colliding if letters are chosen from a uniform distribution, therefore...")
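As an aside, there is a well-known way to get guaranteed-bounded memory for streaming top-k if you give up exactness: the Misra-Gries "frequent items" summary. This is my own substitution, not the bloom-filter idea above or necessarily what the interviewer intended, but it keeps at most k-1 counters and is guaranteed to retain every item occurring more than n/k times in a stream of length n (the retained counts are underestimates):

```python
def misra_gries(stream, k):
    """Misra-Gries frequent-items summary using at most k-1 counters.
    Any item whose true frequency exceeds n/k is guaranteed to survive;
    retained counts may undercount the true frequencies."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement every counter; drop counters that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = ["a", "b", "a", "c", "a", "b", "a", "d", "a"]
print(misra_gries(stream, k=3))  # "a" (5 of 9 items) is guaranteed present
```

A second pass over the stream (if one is possible) turns the surviving candidates into exact counts.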
But again, if we actually had this conversation in an interview, there'd be no danger of not passing. Unless you were a jerk or something.
There's no way to solve the unbounded stream case correctly with finite counters as you admit.
So you didn't actually present a bounded memory solution to the problem.
Being pedantic about this point doesn't get you closer to a correct answer. Read dsp1234's responses. They are correct.
Again, I'm aware of how to solve it in practice and that the problem as stated is not what you meant, I was pointing out that as stated, it didn't have a memory bounded solution.
And it's common for people to self-assuredly misdefine interview problems, then not understand when the interviewee points out their implicit assumptions.
Though it could be still be saved by flipping the question back onto the interviewer's phrasing. They said "don't run out of memory", not "don't run out of disk space". So you could also solve this problem by, for example, writing the data into a database, then using SQL group by and top X functions to solve it once the stream has ended. But as an interviewer, I probably wouldn't be amused.
The only thing I'd add is that you can probably come up with a pretty simple heuristic or three to solve the problem for the case of truly random gibberish, without the need for elaborate data structures. But yes, this would be the answer of a great candidate.
Your solution is incorrect because it fails to handle arbitrary length streams as specified -- as do the other glib answers.
My point was the specification didn't seem to align with the intended problem, and that interviewers tend to make such mistakes on the description, and then get mad when you ask them to clarify if we're talking practice or theory.
Similarly, you don't account for arbitrary proper nouns, which could theoretically be unboundedly introduced to an arbitrary sized English corpus. I guess you can debate which of these are "English" (itself a proper noun!), but interviewers often treat you as dumb if you don't divine their intent in such underspecified situations.
You only need one counter per word.
A quick sketch of the basic algorithm:
1.) Create a new list of type <string, number> into COUNTS
2.) Read a word from the stream into WORD
3.) if WORD exists in COUNTS, then increment the number for that entry
4.) if WORD does not exist in COUNTS, then add WORD to COUNTS with a number of 1
5.) if not end of stream, goto 2
6.) traverse through COUNTS keeping a running list of the top 10 highest entries (Note that this and step 4 can often be combined to keep a running total of the top 10) into TOPCOUNTS
7.) print TOPCOUNTS
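The numbered sketch above translates almost line-for-line into Python, using a dict for COUNTS and a heap for the final top-10 selection (step 6):

```python
import heapq
from collections import defaultdict

def top_ten_from_stream(stream):
    """One counter per distinct word (steps 1-5), then a single
    traversal picking the ten largest entries (steps 6-7)."""
    counts = defaultdict(int)  # COUNTS: word -> running total
    for word in stream:        # steps 2-5: consume until end of stream
        counts[word] += 1
    # step 6: top 10 by count; heapq.nlargest avoids a full sort
    return heapq.nlargest(10, counts.items(), key=lambda kv: kv[1])

stream = iter(["to", "be", "or", "not", "to", "be"])
print(top_ten_from_stream(stream))
```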
Step 3 of your algorithm either requires unboundedness of count (ie, count can use arbitrary amounts of memory) or can overflow on arbitrary length streams of words (and hence, has cases where it produces the wrong output).
As I explained above, my question is a warm-up - meant to break the ice and calm nerves. But it's surprising to me how effective even a simple list traversal is at identifying weaknesses in a candidate's programming ability.
E.g. - "functional" is simply something like:
subset = filter( predicate, superset )
But, yeah, I can see how it would get people to start up talking somewhere, or else go into "deer in the headlights" mode.
But then I take away the library/method and re-state the problem. Because the point is to see if they can understand the problem, and devise a solution.
Now, I'm generally against asking hard-core CS questions in a live interview... but a simple bozo filter is probably a good idea.
It is a low bar skills wise, and it tells a lot within a very short amount of time.
The people that weren't successful during the interview today? They are going to have more interviews tomorrow and next week too. Doesn't mean they are representative of the average coder (most of whom are not being interviewed, because they have jobs already).
The population is skilled enough, it's just that of those who are left, most are not (so it takes more time to fill positions).
The thing is, I have an arsenal of useful information and skills learnt over the years that will not be asked about in my next interview. I have been developing a list of interesting issues that I face on a day-to-day basis. I share it with as many people as I can, some of whom participate actively in the interviewing process. Many do not agree with me, and I do not hold it against them. I have started small and hope to make a change.
Can we quit using this ridiculous example? I'd be willing to bet that fewer than 1 in 100,000 technical interviews have asked someone to implement a red-black tree, considering a full implementation is hundreds of lines of code.
The most selective of my hiring clients over the years tended to stress intellectual curiosity as a leading criterion and factor in their hiring decisions, as they felt that trait had led to better outcomes (good hires) over the years. MOOCs are still a relatively recent development and new option for the intellectually curious, but it's not much different than asking someone about the books on their reading list.
Unfortunately, demonstrating intellectual curiosity often takes up personal time, so someone with heavy personal time obligations and a non-challenging day job is at a significant disadvantage. One could assume that those who have the time to take MOOCs also have time to study the types of interview questions likely favored by the types of companies represented in this study.
Thanks for continuing to share your data for the benefit of others.
Except I'm convinced it's not really true. It's something that is horribly subjective and really self-selective. It's funny, intellectually curious people often have exactly the same interests as whomever is doing the interview. I find that nearly everyone loves to learn, you just have to find the thing they're interested in learning about.
The signal that I have found to be a great indicator of success on my teams isn't about curiosity at all. It's about attention to detail. In the world of scatter brained developers who never seem to really follow through on anything, it's those guys that are the real unicorns.
Our interview process is now designed to bubble that to the top. Vague programming problems with poorly defined requirements provide a platform by which we can see how someone digs into problems. I'll ask for them to send me a couple of things after the interview, it's a really good signal when they pull out their phone and add it to a to-do list.
Those guys may not always be the "smartest" or the most interesting, but man when you're going to spend months working down a really large project they get stuff done.
Because it's just one of many possible ways of staying organized. For all you know, the guy added it to his to-do list on the way to his car, or maybe he employs some other mechanism. While you could argue that an interview is a showboating environment where one is expected to signal certain desirable traits, I personally resent that these attributes seem to manifest, for recruiters, as specific actions that reinforce their perception of a good candidate. If someone is unconventional, how is looking for specific signals going to tell you anything useful about them?
The key thing to remember here is that we're asking this very much in the middle of the interview, and I just want to see that they have some sort of system in place for remembering these things.
In my experience, most of the time, simply going "sure, I'll do that" and depending on memory to actually get it done is a negative signal. Maybe they are particularly gifted in terms of memory, but in my experience that is rarely the case.
In hiring we're willing to accept false-negatives, but never false-positives. You are correct, I may let a perfectly good candidate walk, but I'm willing to accept that as the cost of hiring a bad one is just too high.
IMHO the real problem is the strict barrier between people who are part of the company and those who are not. Imagine if we had a more permeable membrane, with different departments hiring and firing as they see fit (a faster flow than this structured hiring). This would test candidates on working in the real environment rather than on the magic and mysticism of made-up work scenarios in interviews.
In fact, I wish we had a more libertarian system inside companies, with more free markets rather than all the central planning that happens now.
Unless your request was very specific and strange ("send me the completed k3-45 and t9807 forms"), in which case I'd have to jot /that/ down.
Should you not rather evaluate whether your request was fulfilled?
Again I do want to stress: This is a small detail as part of a larger interview. Screwing this one thing up isn't going to sink you, but we combine it with a bunch of other signals and observations to make decisions. I've been doing this for several years, and I'm very happy with the results.
I've done a few hundred technical interviews over the years and this is absolutely true. It's these guys who make production ready systems possible more than anybody else.
This is true, and one of the things that tends to stack disadvantages against people who are older in the industry. There's a good correlation between free time and age in some ways, as people get to mid-career they tend to marry, have kids, and so on. Maximum free time comes during college and immediately after, for many people.
Valuing intellectual curiosity also tends to mean that this curiosity is valued over direct experience, which is what the older more time-strapped people have to offer relative to younger people who are doing the MOOCs.
Of course, nobody has ever done any experimentation to prove such a hypothesis so that it could be used as a lemma.
I often wish there was some sort of prediction market of I/O-psychology hypotheses that you could just throw a random prediction onto. The stakes on the most contentious results would then prompt the investors to leverage their bets with research organizations into grant-funding to study those questions.
As developers get older, they need to be conscious of the relevance of their work in regards to current trends, and if the day job is largely irrelevant they may need to find access to projects (often unpaid) that will serve to impress new employers.
I don't think that the value of experience has truly diminished, but I think what is now classified as 'experience' may have shifted a bit.
Asking a person to traverse and modify a linked list (a very common problem) or to implement a B-tree is not "tech outside their job".
Most of these interviews are more like math problems than anything a practical developer would encounter.
Shit, if these interviews were "What do you think of node?" or "What is Go's concurrency model?" I think experienced people would do fine.
No. No one in their daily job implements a linked list or B-tree. There are predefined libraries in the languages, or a user-created library that people use. Do you really think every person in the company has his/her own version of a linked list in the same code base? This is a fallacy perpetuated time and again. (I'm not against these types of questions, but they don't represent someone's true ability.)
Building a product, app, or project requires application of this knowledge. E.g., I need to design a distributed data store: should I use a hash map? What about collisions? Should the collisions be resolved by chaining (linked list) or another map? What kind of data am I storing? Can that be exploited to make this more efficient? How should the data be synchronized across nodes? Etc.
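As a concrete reference point for the chaining question above, here is a minimal sketch of collision resolution by chaining (the class name, bucket count, and API are made up for illustration, not taken from any particular library):

```python
class ChainedHashMap:
    """Toy hash map that resolves collisions by chaining:
    each bucket holds a list of (key, value) pairs."""

    def __init__(self, n_buckets=16):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        # Hash the key down to a bucket index; colliding keys share a bucket.
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                  # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))       # new key (or collision): chain it on

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        raise KeyError(key)
```

With `n_buckets=2`, distinct keys are almost guaranteed to collide, which is exactly the case the chaining discussion is about.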
Many of these interviews completely overlook someone's design ability and harp on some straightforward (and some obscure) topics, which IMO have little to no correlation with someone's ability as a software engineer.
Why is everyone stuck on getting someone to reinvent the wheel, as opposed to getting them to use said wheel and get things done?
I find the processes in a large company rather robotic (note: we're a startup, so my views might be biased).
A person who can only use a pre-made wheel is a technician (regardless of how ostentatious their job title may be).
A person who can invent the wheel on demand can also usually invent any other type of rolling mechanism one might encounter a need for and that makes that person a full-fledged engineer.
No value judgement of either person is implied; it's simply a statement of capability. Which type of person is needed depends on the company and the position but it is becoming increasingly clear that the two roles in software are separate, just as they are in other engineering disciplines.
Counter case: HFT. Those guys really do implement this stuff. Again and again.
Yes, it is.
More to the point, it's built-in bias toward recent college graduates. Once someone's actually been out in the real world programming for several years, any space in their brain's metaphorical working set that was ever dedicated to remembering how to do this stuff has long since been paged out in favor of the actual day-to-day knowledge they need to do their job. Implementing basic data structures and algorithms is not part of what they need to do their job; that stuff's already provided by the platforms they use.
So when you get to the interview, if you just put someone on the spot and ask them to do one of these things, you're inherently biasing toward an inexperienced recent college graduate who will have this stuff close to the top of their head (on account of having just done it in school and not yet swapped it out of working memory through time on the job).
Which is in turn why there are now books you study -- as an already-qualified programmer -- in order to freshen your memory and ability to quickly regurgitate problems that have nothing whatsoever to do with your day-to-day work, in order to be able to perform them on command in interviews.
Also these days the trendy interview problem is longest common subsequence, which ends up testing "how recently did you review dynamic-programming techniques", not "can you code".
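For reference, the standard dynamic-programming solution is short once you remember the recurrence; the interview mostly tests whether you can recall it on the spot:

```python
def longest_common_subsequence(a, b):
    """Classic O(len(a) * len(b)) DP.
    dp[i][j] = length of the LCS of a[:i] and b[:j]."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            if ca == cb:
                # Matching characters extend the diagonal subproblem.
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                # Otherwise drop a character from one side or the other.
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

print(longest_common_subsequence("ABCBDAB", "BDCABA"))  # 4 (e.g. "BCBA")
```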
I'll be honest, this is indistinguishable from "I am good at X and I'm great therefore X is a good measure of ability".
Maybe worth rephrasing
If the point of the exercise is a TRIVIA question, leave the interview. On the other hand, if the point of the exercise is not to get the right solution but see how you think and how you use basic programming blocks, then you're in the right place.
I've used the linked list question. The goal is not for me to check whether it is "not difficult" for you (I might not be able to teach you what a linked list is in 5 minutes while you're nervous in a technical interview), but to work on a technical problem together and see the following:
- Can you speak your mind so your peers know what you are doing?
- Will you ask for help?
- Will you accept help and feedback?
- Can we have a discussion about something for which there are many "right answers"?
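For concreteness, a common version of that linked-list exercise is reversing a singly linked list; the node layout here is illustrative, but the pointer-juggling is exactly the kind of thing one can talk through out loud:

```python
class Node:
    """Minimal singly linked list node."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    """Reverse the list in place; returns the new head."""
    prev = None
    while head is not None:
        nxt = head.next    # remember the rest of the list
        head.next = prev   # point this node backwards
        prev = head        # advance prev
        head = nxt         # advance head
    return prev
```

Building `1 -> 2 -> 3` and reversing it yields `3 -> 2 -> 1`; there are several equally valid variants (recursive, building a new list), which is what makes it usable for a "many right answers" discussion.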
What you're describing is a confidence bias filter, dressed as an objective test.
But in practice it's merely one of hundreds of things that might get instantly sprung on you as you stand before a whiteboard. In addition, developers with actual jobs can't stay in a continuous cycle of "exam cram" to the same extent that unemployed recent graduates can.
I've never used Go and don't really plan to unless I have to. Nothing wrong with it, it's just not my area of expertise and I'm busy with other toys, although I'm sure it's a very nice language. Still, if I went up against a non-technical or technically weak interviewer (aka almost all of them) I could probably BS my way thru based on what I do know about concurrency in general. Then again, after decades of experience, the reason I'm not still programming in 6502/Z80 assembler is that I'm good at that whole "BS my way thru till I'm an actual expert" thing, so maybe I'm talking myself into agreeing for non-obvious reasons!
BSing skill is under-rated. If you can't BS your way thru something that you don't entirely understand (because you're lazy), then when you're at the cutting edge developing or debugging you'll have trouble BSing about stuff that no one on the planet understands. Not all the world's problems have a Stack Overflow question or a script reader at a support hotline, and how you deal with that gut check says a lot.
(I listened to a training video given by a Marketing Cloud consultancy at my work. Someone asked "what is FTP" after seeing that as an option to upload files. Lots of waffle, including "the cloud", and no actual answer. The person asking the question was no wiser, and my opinion of the consultant went way down.)
It is true that many managers, recruiting agencies (almost 100%), and HR, will look for the keywords of the technologies the company is using. So if they use the new Angular 4, they will look for that.
Because today's current trends will be outdated tomorrow. Companies looking for people who can adapt say a lot about what kind of workplace you will find there.
I know enough about relational databases and NoSQL to know that unless you have a specific use case, relational is likely a better choice. If you are not scaling beyond one server and want to view your data from different angles, it's almost certainly a better choice. As such, I haven't really got a great deal of NoSQL experience. I will build a better application than someone who does resume-driven development, but won't get past many HR drones.
If you don't have a relational query in mind a relational DB is a bad fit, or at least premature optimization.
Maybe "to do the thinking for them" would be more accurate.
> sound like a good thing to me.
You did what they do: you assumed, based on your usual workloads, that a DB (with the overhead of using another language, one that isn't compile-time checked and integrates poorly, with type mismatches) is the solution for the problem I'm working on.
What I expect them to do is ask questions. How many values are there? Are they complex types or something that would map to tables well? What are the querying requirements? Are there any relational elements to the data?
> Its usually way faster than doing stuff in the application code
If shoving things across two sets of pipes, to a general-purpose app not tuned to your workload, is faster - you're writing your main app incorrectly.
> the declarative nature of SQL usually means less bugs.
A declarative language usually means fewer bugs. But adding a second, mismatched language usually means more. Especially when it comes with a whole new system that takes experts to properly tweak.
Since for programmers at least, such information is not hard-separated from the course of their normal work, they will usually have the opportunity, if interested, to be at least partially informed about such things and have some sense of the overarching zeitgeist. For instance, a C# developer would be fully justified in reading about the changes slated for the next iteration of C# during his/her day job, especially as that iteration neared and achieved release.
On top of this, professional engagement and awareness IS something that deserves out-of-work nurture time, even when you have a family or other demanding non-work obligations. You don't have to have an impressive free time project like a hand-built CPU, but expecting someone to put in a few hours a month to tinker, learn, or explore new things related to their field is not unreasonably demanding IMO, even for experienced/busy professionals.
Obviously "participated in a MOOC [Massive Open Online Course, btw, for those curious]" may indicate curiosity (though it may also indicate someone trying to short-circuit their lack of skill by gaining Yet Another Credential to ride off), but it's far from the only signal.
The best employee is an experienced person who has retained their intellectual curiosity, even if they don't have the time to fully exploit it.
Anecdotally I'd assume that intellectual curiosity often increases with age, although that might be attributed to the availability of learning materials now compared to years ago. Some people here on HN would likely be shocked as to how many people in the industry consume almost zero content outside of what is necessary for their jobs.
Seriously. People significantly underestimate how much better the average HN user is than the average developer.
The median software developer sees programming as just a job and spends absolutely no time on developing their experience outside of work.
I think it definitely does, depending on the person. Many, many attributes of the person change over time. That's what growth and evolution is about!
Some people take a career path where they become expert at a narrow topic. As they master their field, I think they lose a lot of curiosity because there's simply less to be curious about unless they're on the forefront of research and not just a practitioner.
Other people simply develop other life priorities.
It's 100% an individual thing.
Thank you, and thanks Aline!
Often the research may be tangentially related to their work, but it doesn't have to be in order to be 'credited' as intellectual curiosity.
Outside of CS, demonstrating curiosity can be almost anything. A non-CS person learning a technical topic is actually a decent example. Non-programmers that buy Arduino and do some home automation project.
It can be almost anything I'd think. Many people don't dedicate much personal time to learning. Demonstrating that learning can be challenging.
I have clients that require interviewees to do a presentation on anything at all during the hiring process (both non-tech and tech employees). Some have demonstrated game strategies they've studied.
Your reading list on the other hand is entirely up to you. It's likely an indicator of something to the person asking the question. People who don't like math (or aren't skilled in math) aren't reading advanced math textbooks for pleasure.
1. From the perspective of a job seeker, the interview is what gets you the job, so it's in your interests as a job seeker to learn how to look better as a candidate based on data about how interviewers judge candidates.
2. The purpose of an interview is to be a predictor for job performance, so while this article doesn't address this part of the question, you want interviews to predict job performance, and how to make interviews do that is interesting.
I'd be much more interested to see performance on other questions, e.g. Google's typical curveball for new grads is an architecture question like "How would you implement YouTube?".
It is weird that so much effort is spent on interviewing and so much effort is spent on performance reviews. But AFAIK, hardly any organization tries to use performance evaluation of current employees to inform their hiring decisions.
If I am looking for a job, I care a whole lot about what would make me better at getting jobs.
In my last change of positions, I decided to try interviewing for jobs like most people do, without referrals, since I had several referral positions open anyway.
And I was horrible. I was ridiculously bad, and couldn't even get jobs that paid less and required less skill. It was quite hilarious.
Thankfully, all of the referral jobs were begging for me, and that was fine. There was no real loss, but it was a bit concerning that if I didn't have an extensive connection network that I would be in a significantly worse role.
I've been interviewing a lot of developers lately, and while I'm fairly happy with my process (I've managed to avoid bad hires so far), I always have to wonder if I'm turning down people I shouldn't.
When I interview for jobs, I'm open to questions about anything in my background, any of which would take me perhaps 2 weeks to get back up to speed on. But I am not going to take those 2 weeks for each job I interview for.
I have trained my brain to learn things and then discard them, keeping only the current tasks in memory. The rest I know how to obtain again.
This means that I get asked questions I know that I've done repeatedly in the past, but can only vaguely explain without a refresher.
To the interviewer, this means I am underqualified at best, and at worst, lying about my past skills.
A lovely example of this during the process: I was asked a fairly basic SQL question that I failed to explain when I had just landed from my second flight of the day. I didn't get the presales role working with the database. (Non-referral job.)
However, I did get offered a job working on development for that database, as well as product management for the database.
So I couldn't get a presales role, which I've done in my past, because I couldn't answer a fairly basic SQL question. I knew exactly how to answer it (the Venn diagram of SQL joins, a half-second Google away), but in an interview, I blanked on the answer.
That is indicative of my experience with non-referral positions. I've given training on SQL joins a good 30 times in my life, and couldn't explain a join in an interview.
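For what it's worth, the join semantics behind that Venn diagram fit in a few lines; the table and column names below are made up purely for illustration:

```python
import sqlite3

# Toy schema: two customers, only one of whom has an order.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Brin');
    INSERT INTO orders VALUES (10, 1, 9.99);
""")

# INNER JOIN: only the overlap of the Venn diagram (customers with orders).
inner = con.execute("""
    SELECT c.name, o.total FROM customers c
    JOIN orders o ON o.customer_id = c.id
    ORDER BY c.id
""").fetchall()

# LEFT JOIN: the whole left circle; NULL where no matching order exists.
left = con.execute("""
    SELECT c.name, o.total FROM customers c
    LEFT JOIN orders o ON o.customer_id = c.id
    ORDER BY c.id
""").fetchall()

print(inner)  # [('Ada', 9.99)]
print(left)   # [('Ada', 9.99), ('Brin', None)]
```

Explaining exactly this on a whiteboard is, of course, a different skill from having taught it thirty times.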
Just the way it goes for me, thankfully it's not actually impactful in my life.
I'm perplexed as to how people don't think it has any relevance at all.
The entirety of every industry is based on 'interview performance' as being one fairly important factor of future success.
Interviews are obviously not perfect predictors, but they are pretty strongly correlated with outcomes.
Someone who does very well in a technical interview is more likely to be a 'good engineer' than someone who fails miserably - though I totally accept that is not always the case.
There's no doubt we could use a little more science to our approach, however.
1. Setting non-significant bars to 0 seems fishy. Leaving them and putting confidence intervals on everything would let them speak for themselves.
2. Calling something effect size is ambiguous. That's like saying you measured distance in units (and the wiki article on effect size linked makes clear there are a billion measures of effect size).
I'm guessing their measure of effect size was the beta coefficients in a multiple regression?
It's just amazing to see how many positive comments a post like this gets without even the hint of methodology.
Absolutely agree that some MS degrees are by now little more than less-rigorous cash cows that allow students to skip fundamentals such as data structures, operating systems, and compilers.
However, many CS MS degrees actually do require this as a background, to the point where some programs have emerged to prepare non-CS majors for MS degrees, kind of like those post-bac premed programs. It's hard to believe that those MS degrees, which require a decent GPA in those core courses, along with high GRE scores (sorry, but we are talking about interviewing skill, which may be more related to exam taking ability than job performance), wouldn't result in a similar profile to people with CS degrees from top schools.
This is fully acknowledged in the text of the article referenced in a link, but unless people follow it, I do think the message may be a bit misleading.
That's an aside, though. The value may very well be in the prep for these degrees (i.e., the post-bac CS coursework required for admission to a reputable MS program). If you can get that through online courses (Udacity or Coursera) via genuinely rigorous self-study? Yeah, that might do it, for far less money. I've audited a few of them, and they're the real deal; that's the real coursework there.
I felt like my knowledge definitely cured during the process, with graduate-level data structures, architecture, operating systems, and networking. The second pass through some of these areas was valuable, despite my getting mostly A's from a program generally considered challenging and now well known. I took the opportunity to craft my program of study more than I would have been able to as an undergrad (both due to lack of knowledge and because undergraduates are given a lot less leeway), including electronics and automation courses. It also gave me the opportunity to work with and mentor other students, which was rewarding and I think has benefited me in and out of the workplace.
I wouldn't argue for doing it purely for career reasons, and it's not something to do just because you've been in school so long you don't know how to do anything else, but I found it worthwhile; you just need to consider what you want out of it.
At what point do we not consider operating systems and compilers "fundamental"? What percentage of CS/programming jobs require deep knowledge in these arenas?
By definition, these are the subjects that allow you to keep going when your abstractions leak.
To say a job "requires" them is pedantic, but what's the point of calling something a CS degree if it is not at least signaling an understanding of these fundamentals.
On one extreme, people see CS as a kind of trade school housed in a research university, teaching only what is useful on the job. Others see CS as a professional degree, kind of like law: you must teach theory for students to understand the field, but in the end, you're producing practicing lawyers, not abstract thinkers. And then, at the other end of the distribution, you find people who see CS as an academic field, a branch of scholarship, where the question "what percentage of programmers need to understand compilers" would be akin to asking "what percentage of math majors who become actuaries use real analysis?".
My freshman year physics class had us building computer models of the solar system from Kepler's laws, constructing vocoders by manipulating raw sine waves, and discovering the equations that governed op-amp circuits through experimentation and a little deep thinking.
At my last job, one of my first tasks was to take raw data from truck engine computers (a data point every 300ms) and turn point measurements like "instantaneous velocity" into data like "how far has the truck traveled over the last week?" Trivial. By combining that with engine speed and fuel usage, you could also answer things like "What gear is the truck in?" "Has the engine performance declined recently?" "What percentage of the time has the driver idled in the last month?" And roll all that data up into an efficient database, build a super-flexible API around it, and create a series of web apps to display the useful stuff that managers wanted.
Having a deep understanding of the physical relationships between measurements, understanding what the data can and can't tell you, being able to spot potential problems in a proposed project (for instance, something that relies on an impossibly accurate GPS in a subtle way) -- these are all things that computer scientists absolutely should be able to do.
Yes, they should understand their computers from the electrons on up -- operating system included -- but they should understand the world that the computers interact with, as well.
I agree that knowing fundamentals is not that essential for a large number of programming jobs. They're still fundamentals though.
Now you'll say that I'll make a mess if I don't know what the compiler does. That's what best practices are for if, say, I pick TypeScript. Also, optimization is the compiler's responsibility.
This leaves open the possibility that some categories of MS degrees might correlate positively, even though MS degrees overall correlate negatively.
>>>No surprises here. I’ve ranted quite a bit about the disutility of master’s degrees, so I won’t belabor the point.
is literally all she says about MS degrees. Wow the bias of this site is incredible..
Is what she says about MS degrees in another post. No data or analysis is shown, just opinion.
My guess is that pure MS would correlate positively from CS programs that require a background in more advanced math and algorithms. Info sys degrees typically don't.
I'm not knocking those degrees, keep in mind we're measuring performance on a technical interview, which often leans very heavily toward algorithms, trees, sometimes graphs, and occasionally numerical analysis or computing.
If the MS degree data includes a lot of programs that don't include this background, then this would not be a surprising result.
"No surprises here. I’ve ranted quite a bit about the disutility of master’s degrees, so I won’t belabor the point."
And provides a link to another post the author has written. Look for "disutility of master's degrees"; that page contains more of the content we are discussing.
This is what I was referring to when I wrote:
"This is fully acknowledged in the text of the article referenced in a link, but unless people follow it, I do think the message may be a bit misleading."
I would agree that no data is provided here, the author simply states MS degrees are an indicator of poor technical performance "in my experience" and goes on to list reasons why this might be the case.
I'm sure there's a reasonable case to be made that an MS in CS isn't worth the time, effort, and money, even from a top school. But I don't think that grouping these degrees with MS degrees in Info Sys (among others) and then reporting experiences without that distinction provides much useful insight into the topic.
I remember when I graduated from a "Top School" and interviewed at "hot startups" from the valley. I aced a lot of the interviews - why? Because I had just taken classes on LinkedLists, Binary Trees, HashMaps, etc... So when they asked me to whiteboard a "shortest path algorithm" it was just rehashing what I did in school.
Years later, looking back, I fail to see the relevance in most of the technical questions. In fact, if I had to do those questions over again today I would probably fail miserably. Yet, I have been in the industry for a while now and have worked with countless more technologies and have accomplished far more than my younger self.
Just because someone performs well in a technical interview doesn't mean they will do a good job. That is the data that really matters. I've interviewed hundreds of candidates as a hiring manager for some big startups, and from my experience technical interviews are not a great indicator of success.
I'm saying this coming from someone who has gone to a "Top School" and done multiple Coursera/Udacity/etc classes.
Yes, someone might be able to whiteboard a random forest or write a merge sort, but do they know how to engineer a system? Can the candidate:
> Communicate well with others in a group?
> Solve unique technical problems?
> Research and learn new technologies effectively?
> Understand how to push back to product owners if there's scope creep?
These are all things that are not really analyzed in many technical interviews.
As I'm reading this analysis all I can think of is that it is pretty useless - if not dangerous for the industry.
What I've found is that it is critically important that someone knows how to code at some basic level. But their ability to code and explain algorithms on the fly, while probably relevant in academia/research, is such a minor part of the day-to-day of a programmer, at least from my experience.
- the wealthiest/most financially supportive parents/relatives
- upbringings that are conducive to academic success
- the most free time
as those are the ones who, by a large margin, attend top schools, work at top companies, and have time to spend on self-learning. Another data point of confirmation of a well-studied idea.
Assortative mating: http://www.economist.com/news/briefing/21640316-children-ric...
Few poor at rich schools even all these years later: https://www.nytimes.com/2014/08/26/education/despite-promise...
Why people care about elite schools: https://medium.com/@spencer_th0mas/why-america-cares-about-e...
The data here is perfectly compatible with a contradictory hypothesis: upbringing does not matter and top schools/employers are good at selecting future high performers.
I'm not endorsing either hypothesis, but the data here supports the above just as much as your hypothesis (which is to say, not much).
That doesn't sound like 'tons'.
I worked really hard to be able to get into an elite university and to get the scholarships to pay for it. When people automatically assume that my education means my family is wealthy, it discounts the enormous amount of work which I (and other low-income students) did to get there.
You should state ".. and I had to work my way in there - my folks aren't wealthy!" as an aside every time you mention your education. I bet most people would immediately understand what it implied: this is an achiever!
Even if fewer poor students get in, that definitely does not mean you can make the reverse assumption: that everyone who attended an elite university is wealthy. For all we know, these results are dominated by poor students who worked hard enough to get into an elite university (which, by the way, wouldn't be a crazy assumption: my CS classes had a lot more income diversity than, for example, political science ones).
"The MOOC Phenomenon: Who Takes Massive Open Online Courses and Why?"
For many reasons.
I find some creators put a lot of ego into their products. Not that they are egoists so much, but doing something on your own can be emotionally challenging. I don't mean 'ego' in a negative way; it's just that when you are on your own you have to make so, so many decisions (APIs, compatibility, etc.) that you're often a little overwhelmed, and you often have to 'just do it' and 'go with your gut' in many areas. Which may or may not be a good practice/attitude for a Google employee.
They might have a stronger sense of the 'outcome' (i.e. what it does) than of the quality of the underlying code. Maybe it 'does the job' but is poorly written? Which, again, might not work in the Google environment.
... and working on a very large team is night-and-day from working on your own.
I think we all want to 'feel it is unfair' that such an important contributor somehow 'can't get a job at Google'. Maybe he's worthy, maybe not, but I can totally grasp that someone mightn't be a good contributor at Google.
Maybe it's easier to see it from a non-technical standpoint: it's often very difficult for founders of their own companies to get along in a place like Google. I think that's easier for us to grasp. The same thing can apply in the technical domain.
But when it comes to an interview, I cannot get a job several levels below my own (without referrals).
With referrals, I am offered the moon and the stars, complete pick out of a tremendous number of amazing jobs.
But any random interview, and I totally bomb it. I wonder how many out there are like that as well. It feels rather strange from my perspective, but maybe it's fairly normal?
Look at how many failed or poorly designed products and libs Google has released.
Maybe this is what happens when an industry is built around worshipping whiz kids. If CS in SV was about venerating NASA/aerospace code, less flashy but very very stable, well-engineered stuff that lives are dependent upon, it'd look a lot more like EE and other more traditional engineering fields.
There is nothing wrong with MOOCs, but they are almost always beginner-level. If you put them on your resume it kind of implies you don't have a lot of experience beyond that. Putting the Coursera Machine Learning course on your resume would be the equivalent of putting Java 101 on your resume as a Software Engineer.
I would recommend that anyone put projects on their CV instead. Even if you don't have a lot of work experience, just put side projects and school projects on there.
But then this article seems to be measuring interview performance, not actual ability on the job. So is any of it actually relevant at all?
For instance, an interview I took last year came with a premise that I found downright bonkers. It was based on code the company had in production, but it stopped being a good problem to look at years before. I was having trouble coming up with the right tradeoffs for the implementation because all my experience was telling me that the entire approach was misguided in the first place, so the problem should not be solved. I passed, but it was a far rougher performance than I would have liked.
There's also how being far from college makes the least important knowledge gained fade away, and the least important thing I studied was memorizing algorithms. I write new algorithms at work sometimes, and I implement off the shelf ones too, but I don't have to recall them off the top of my head. Nobody has to implement distributed consensus algorithms under time pressure, or write HyperLogLog from memory.
So ultimately, there's what's easy to measure, and then there's what is valuable and important. We go with what is easy, and those are things that are taught in college. Understanding the right level of testing or designing a system for observability are far more valuable in the long run: it's crazy how much downtime in well-known companies comes from people not learning those things in college. But since we are bad at measuring those things, and kids right out of school don't know them, we don't interview for that.
And sadly this is why we all end up hiring by network so much: We can't tell if someone is good in a day, and we can't really ask people to dedicate two weeks to work with us in a probationary period if they have real jobs, but we sure can recall quality former coworkers and ask them to join in.
It seems reasonable that a person who took a MOOC might have prepared in other ways as well while people who didn't probably didn't prepare much at all (since watching a few Algo lectures seems the most accessible refresher.)
A top school is a good signal for how much time someone spent studying in high school, except for affirmative action students who get into top schools with much worse scores and GPA.
While this is true, it highlights a major blind spot in attitudes toward the reality of socioeconomic status. If you're poor, yes, you theoretically have the ability to utilize those resources, but what good are they if you don't have an environment conducive to study?
Lower socioeconomic status is correlated with being stuck at lower stages of Maslow's hierarchy of needs. Self-actualization, i.e. reaching one's potential, depends on the satisfaction of basic and psychological needs, which are generally more accessible to the privileged.
It's not just about resources being available; it's about making sure people also have the opportunity to engage with those resources.
This dismissive "quota tokens" attitude really irks me. It's one thing to get in; it's a totally different ballgame to survive and come out the other end with a decent GPA. I've seen people of all socioeconomic backgrounds fail, and some from remarkably poor backgrounds do exceptionally well.
For the truly competitive schools, the affirmative action students still need very high scores and GPA, they just tend to be given a little more of a pass on the extracurricular activities and essays in the application. I'd argue that for the most part, students of any demographic have to do some pretty crazy things to get the attention of top schools. The probabilities of an affirmative action student being granted admittance are just much higher than for an equivalent non-affirmative action student, but that doesn't mean that affirmative action students can get away with having poorer grades and scores. I'd argue they just have to be less well-rounded.
Assuming all high schools are the same.
There is a virtually infinite number of variables that contribute to university admission, and it's extremely reductive to suggest that it's purely a function of how much time a student spent studying in high school.
Many environmental conditions that demonstrably impair intelligence (in many cases, particularly through early-childhood exposure) are more likely to be avoided with wealth: the correlation is inverse and has a clear causative mechanism. Given that, this isn't at all surprising.
Can you cite a source for this?
I can see how it might follow from the assumption that intelligent people are more likely to succeed in their careers, but keeping the GP in context, it should also be noted that not all intelligent people succeed, for various reasons.
But there does seem to be a correlation between IQ and income. https://thesocietypages.org/socimages/2008/02/06/correlation...
This latter correlation could be explained by the wealth of the parents, since IQ may be partly a cultural artifact and income is correlated with education.
With this data you're just biasing toward people who interview well, which I don't think you actually care about.
Well, I guess you do if you're a recruiter (if you're a moral recruiter you care about both), but not really if you're an employer.
My experience comes from several decades developing software and from time to time, hiring people. The people that worked out best, either as colleagues or hires, always seemed to be learning new things and were ahead of the curve trying out new techniques or tools before they became popular.
If you understand how a tool or technique becomes popular (the mass of software developers wrestles with new problems and finally finds a way to master them), it makes sense that constant learning makes some people stand out from the crowd. They happen to be the first to learn the new tool or technique, and if they don't introduce it to their development team themselves, then when management does decide to adopt it, the folks who already know how to drive it have a chance to excel and look like rocket scientists.
The long version: I recently landed a role after some time off, having switched from mainly back-end PHP/ColdFusion to C# in the last year. I was able to make the switch in my last role. For me, moving to C# was a big transition; alongside guidance from a (fantastic) mentor, I used Pluralsight to learn C#, ASP.NET and DDD, e.g. from Jon Skeet, Scott Allen and Julie Lerman, to mention but a few.
Being completely burnt out on the old stacks, I was set on making my next role a C# one. I've come to love what Microsoft are doing with Core, open sourcing etc., as well as the statically typed C# language and the ability to use NCrunch for live unit testing. So I signed up for a year after relinquishing my corporate subscription, kept doing their courses, and found the training material highly accessible, with great-quality content. Each interview was a learning process: when I didn't know something from a test, I'd go and study it so that I'd be better prepared for the next role. One of those topics was data structures and basic computer algorithms, where I was lacking. I might not have had years of experience, but the experience I had was mostly best practice.
During my search, I typically got great feedback on the fact that I was doing Pluralsight courses, and it was a significant factor in being hired for the new role - it showed cultural fit, in addition to passing their tech tests (which happened to involve structures). My company had interviewed a lot of candidates, struggling to find the right talent. Just possessing technical skills is one thing, having the right attitude towards learning is another.
At any rate, I'll keep using Pluralsight to raise my proficiency in my new stack - even as an old timer, I am having a newfound level of enthusiasm towards my whole profession which I haven't felt since I coded in assembly on the good old Amigas. I would be interested in knowing why Coursera / Udacity might be better or more accepted in the marketplace though.
"...only 3 attributes emerged as statistically significant: top school, top company, and classes on Udacity/Coursera."
Having done Udacity and also taking a couple CS classes at Cornell, this doesn't surprise me at all. People who take Udacity classes are doing so voluntarily on their own time, so they are going to generally be smarter and of higher socioeconomic status.
If you look at people who take CS classes over the summer at college when they don't have to be and when there are no student loans, you're going to get a similar population.
If anything the opposite is true. MOOCs have enabled vast swathes of economically challenged people to learn from high quality video material, whereas before they were limited to textbooks.
The key is that these kids are absolutely determined. And similarly if you really want that job at one of the Big Four, you might also consider soaking up as much prep material as you could.
So this is unsurprising. The thing people should really be thinking about is why "how to pass a tech interview" is considered a separate and unrelated skill from "how to code", given how many people claim their interview processes are supposed to separate people who can code from people who can't.
(spoiler: it's because those interview processes don't accomplish their stated goal)
"I need to pass an interview where I'll be asked about specific technologies/algorithms/etc. I will take a free online course to memorize/rehearse that information before my interviews."
Universities don't teach programming, so it's not surprising you don't get applied skills out of the box with a degree.
'Good courses' are focused directly on teaching specific programming skills.
I think they are a good idea because learning programming by 'osmosis' (as you go along) can result in 'not knowing' a lot of key things, even when they are right in front of your face the whole time.
Most importantly - people who are going to take the time to specifically learn new skills, are displaying the kind of conscientiousness that you want in your talent - learning the actual skills is another benefit of that.
I disagree with this statement. Some universities teach programming, some not so much, and some require only a Java 101 course to graduate.
There is a huge variance. We'd need a statistical analysis accounting for the quality of education given.
1. You have an undergrad degree in liberal arts
2. You pay as little tuition as possible
3. You take no time off and continue to work FT
These apply to me -- my undergrad was in English, I paid 6k total (roughly 29% of the 21k total cost) and went to school at night over 4 years while my career continued to progress.
Most of the people in my program couldn't write a FOR loop if their life depended on it, they viewed it (incorrectly) as a jobs program while the school needed the $$ to keep the dept afloat, so I'm not surprised they fared poorly in technical interviews.
But that doesn't mean the degree isn't useful. If you're already a programmer, it helps get your foot in the door at many places. HR managers/recruiters feel more confident forwarding on your résumé, they can't parse your GitHub repos.
The degree is icing on the cake, it's not going to magically turn you into the Cinderella of Programming if you have no real-world experience. I got my master's with a QA and a paralegal and today? They're still a QA and a paralegal.
That being said, timed technical interviews are almost universally asinine, IMHO. When in real life do you get only 10 minutes to figure out a problem? Or are you prevented from Googling the answer? The measure of a successful programmer is how efficiently and professionally they solve problems, not how much useless information they can keep in their head.
Things I've never had to do in 'real' life:
- Never had to split a linked list given a pivot value
- Never had to reverse a string or a red-black tree
- Never written my own implementation of breadth-first search
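For concreteness, here's what the first of those exercises typically looks like: a minimal Python sketch of partitioning a linked list around a pivot value (the names `Node` and `split_by_pivot` are mine, for illustration, not from any particular interview).

```python
class Node:
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

def split_by_pivot(head, pivot):
    """Rearrange the list so nodes < pivot come before nodes >= pivot,
    preserving relative order within each half."""
    less = less_tail = Node(0)   # dummy head for the "< pivot" half
    rest = rest_tail = Node(0)   # dummy head for the ">= pivot" half
    while head:
        if head.val < pivot:
            less_tail.next = head
            less_tail = head
        else:
            rest_tail.next = head
            rest_tail = head
        head = head.next
    rest_tail.next = None        # terminate the second half
    less_tail.next = rest.next   # stitch the two halves together
    return less.next

# build 3 -> 9 -> 1 -> 7 and split around 5
out = split_by_pivot(Node(3, Node(9, Node(1, Node(7)))), 5)
vals = []
while out:
    vals.append(out.val)
    out = out.next
print(vals)  # [3, 1, 9, 7]
```

Perfectly reasonable CS homework, and exactly the kind of thing I'd look up rather than recall under a stopwatch.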
Personally I'd rather see take-home assignments that roughly approximate the type of work you'd do, which in my career has been churning out new features or applications. Does knowing the time-complexity of radix sort vs heap sort really have a material impact on your effectiveness as a programmer? No.
They are not asking you to split a linked list given a pivot value or reverse a string without a builtin function because that's what you're going to be doing on the job. They're doing it to see how efficiently you handle a new problem. That's also why companies have interviewers sign NDAs: so their interview questions escape as little as possible and it doesn't become a memorization exercise.
How is it not?
Grasping the fundamentals of algorithms and data structures: important.
Memorizing the best greedy algorithm for traversing a linked list: not important.
Any super specific answer to a problem is likely not worth committing to memory because 1) it'll probably change as platforms evolve 2) Googling specifics lets you save space in your brain for things that actually matter.
It turned out to be a very expensive, but very fulfilling decision, and it paved a route for a very successful past four years.
Compared to my first master's, it was less theoretical and much more project-based. In that sense, it was fantastic preparation for career work, because every semester, I had to conceptualize and ship 4-5 different projects in all sorts of subject areas. The value of that shouldn't be underestimated. It also directly led me to cofounding a startup that had a brief lifetime, but effectively converted me to a full-stack engineer.
Today, I don't use much of the subject matter I learned in my day-to-day, but I draw on the creativity, problem-solving skills, and work patterns every day.
My Princeton program was great too, but I thought I'd share about the NYU program, as that was the more outside-the-box choice. There's something special to be said for a master's degree when it's interdisciplinary and lets you focus on the intersection of engineering skills and subject-matter expertise.
For people who attended top schools, completing Udacity or Coursera courses didn’t appear to matter. (...) Moreover, interviewees who attended top schools performed significantly worse than interviewees who had not attended top schools but HAD taken a Udacity or Coursera course.
One possible explanation is that people going through a regular degree typically spread themselves thin over many subjects (digital electronics, compiler design, OS theory, networking, etc.), while MOOC folks focus sharply on exactly what interviews test (i.e. popular algorithms). It's like interval training for one specific purpose versus a long regime for well-rounded fitness. The problem here is not the academic system but how we measure performance in interviews. I highly doubt the results would be the same if interviewers started asking questions from all these different subjects instead of just cute algorithm puzzles.
We toned down the CS type questions since they tend to take too long. We still ask a few basic tree and string manipulation questions to weed out the people who have no idea how to program and get insight into how the person thinks.
I still feel at the end of the day we could flip a coin on accepting an interview candidate once they have shown basic competency and have the same results.
I have been telling candidates that a public GitHub repo with a nice commit history carries much more weight with me than a CS degree, since we have been burned so many times before.