"Given a string S and a set of words D, find the longest word in D that is a subsequence of S."
Found under "Foundations of programming" -- this is exactly the type of problem I'd expect as question one under this section. When it's made by Googlers, that is.
I make a lot of cool stuff day to day, and that usually requires a lot of code and knowledge of fairly advanced programming topics (currently I'm building a distributed collaboration system with event-sourced data in Microsoft Orleans) -- but questions like this are just about the furthest thing from what I want from a developer. We're gonna be in a lot of trouble in the future if every developer is learning Google-sanctioned code problems at Google University on their Google Chromebook in Google Chrome on Google.com. And I'm not even speaking about how disgusting that sounds -- just tell me how that's going to breed diversity of thought?
If I came across this problem in my "Foundations of Programming" course equivalent when I first started to learn how to code, well I'd probably be enjoying my life as a woodworker right about now.
(Disclaimer: I work at Google.) This is going to sound like a humble-brag, but it isn't; I'm trying to give some life advice based on my experience. I have used "CS" algorithmic thinking on multiple projects, and I've even used facets of abstract algebra and number theory from my math degree to further my career -- it lifted me out of poverty.
Here's an example. A few years ago, I was researching ways to crunch down the size of GWT (Java-to-JS compiler) output even more. Besides obvious compiler-theory work on optimizations, I noticed that the code had a lot of common chunks that could possibly be gzipped better. Knowing how bzip2 worked with the BWT transform, I wondered if sorting JS code could improve gzip output. The result was this (http://timepedia.blogspot.com/2009/08/on-reducing-size-of-co...), and writeups like this were one of the ways I got noticed by Google and landed an interview. In other words, solving "useless puzzles" landed me a dream job.
The gist of that article is that by using edit distance (Levenshtein distance) to 'cluster' JS code fragments, you can improve the efficiency of gzip, either by bringing more similar code into its 32K window or by optimizing the Huffman codes it uses. However, edit distance doesn't know anything about the LZ algorithm's mechanism.
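The reordering idea, roughly, as a toy Python sketch (the sample fragments, the greedy nearest-neighbor ordering, and difflib as a cheap stand-in for true Levenshtein distance are my own illustrative assumptions, not the article's actual code):

import gzip
from difflib import SequenceMatcher

def dist(a: str, b: str) -> float:
    # Cheap stand-in for Levenshtein edit distance, normalized to [0, 1].
    return 1.0 - SequenceMatcher(None, a, b).ratio()

def greedy_order(fragments):
    # Chain each fragment to its nearest unplaced neighbor, so similar
    # code lands close together inside gzip's 32K LZ window.
    remaining = list(fragments)
    out = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda f: dist(out[-1], f))
        remaining.remove(nxt)
        out.append(nxt)
    return out

frags = ["function a(x){return x+1}",
         "var q=document.getElementById('q');",
         "function b(x){return x+2}",
         "function c(y){return y+3}"]
baseline = len(gzip.compress("".join(frags).encode()))
clustered = len(gzip.compress("".join(greedy_order(frags)).encode()))
print(baseline, clustered)  # clustering similar fragments tends to help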
It turns out optimally computing the LZ Distance Metric requires efficiently solving the Longest Common Substring problem, which can be solved with a generalized suffix tree in O(N + M) time.
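A generalized suffix tree is fiddly to write out; a suffix automaton gives the same longest-common-substring answer in linear time and is shorter, so here's a sketch of that instead (my illustration, not the actual GWT code):

def longest_common_substring(s: str, t: str) -> str:
    # Build a suffix automaton of s (per state: length, suffix link, transitions).
    link, length, trans = [-1], [0], [{}]
    last = 0
    for ch in s:
        cur = len(length)
        length.append(length[last] + 1)
        link.append(-1)
        trans.append({})
        p = last
        while p != -1 and ch not in trans[p]:
            trans[p][ch] = cur
            p = link[p]
        if p == -1:
            link[cur] = 0
        else:
            q = trans[p][ch]
            if length[p] + 1 == length[q]:
                link[cur] = q
            else:
                # Clone q so state lengths stay consistent.
                clone = len(length)
                length.append(length[p] + 1)
                link.append(link[q])
                trans.append(dict(trans[q]))
                while p != -1 and trans[p].get(ch) == q:
                    trans[p][ch] = clone
                    p = link[p]
                link[q] = link[cur] = clone
        last = cur
    # Walk t through the automaton, tracking the longest match seen.
    v = best = cur_len = end = 0
    for i, ch in enumerate(t):
        while v and ch not in trans[v]:
            v = link[v]
            cur_len = length[v]
        if ch in trans[v]:
            v = trans[v][ch]
            cur_len += 1
            if cur_len > best:
                best, end = cur_len, i + 1
    return t[end - best:end]

print(longest_common_substring("xabcdy", "zabcdw"))  # abcd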
In other words, if my only experience was doing web programming or standard forms-plus-database enterprise apps, I probably would never have been able to even imagine this, because I wouldn't know what I don't know, whereas spending time actually thinking about data structures and algorithms in my past prepared me for that moment, exposing me to stuff I would recall later when I needed it. Moreover, trying to solve these problems exercises what you learned in college CS so you don't forget it, especially the ability to read papers and consume mathematical notation.
Another example: I have long had a love affair with number theory. It's why I love cryptography, error-correcting codes, and block designs. (See if you can solve one of my own puzzles: http://cromwellian.blogspot.com/2006/09/puzzles-you-cant-sol...)
This led me to get into crypto. Early work on crypto during the cypherpunks mailing list days in the 90s -- designing anonymous remailers, anonymous internet proxies, and digital cash algorithms -- got me noticed, and led to a job at IBM TJ Watson Research, my first real, non-consulting job (before that I mostly wrote 6502/68000 assembly for games, and FastCGI+Perl). From there, I applied HashCash-style proof-of-work in the mid-90s to anti-fraud in hacked clients in 2-tier systems, patented an early re-CAPTCHA-like algorithm, and ended up getting acqui-hired by a California company. Now, I was born and raised in Baltimore City, grew up pretty poor, and had to spend my first two college years in a community college. This was a fantastic dream come true for me, and I mostly credit it to serendipity made possible by being exposed to a lot of stuff.
Later, internal to Google, I came up with the idea of taking the broadcast encryption used in copyright-protection systems (e.g. Blu-ray AACS) and turning it around for a good purpose: creating a serverless, peer-to-peer group-messaging social network. That design was heavily based off of number theoretical knowledge I had to dust off, and helped facilitate an internal transfer to the team I'm on today.
There's nothing wrong with straightforwardly engineered apps, screens, forms, databases. Most of the world subsists on this. But sometimes, in fields like graphics algorithms, AI, mapping, bioinformatics, AR, VR, you encounter situations that aren't straightforward, and then it becomes valuable to be able to be confronted with a super-frustrating, mind-bending puzzle and bring all the weapons to bear on it. Can't you just Google-search it? Sometimes. I'd argue more often than not, you'll "flesh-based search engine it", by tracking down your local Jeff Dean clone and asking them for hints.
But IMHO there's no harm in being challenged like this, any more than in being asked to complete basic training to be in the military, even though you might just be a tech-support soldier. You never know when you'll need those brain muscles, and if you're lucky, like I was, it could be a life-altering opportunity. I don't think Google interviewers are interested in you solving the problem and getting the correct answer; they are interested in seeing how you work: how you think, how you communicate and ask questions of the interviewer -- in other words, how you'd work as a member of the team. There's no shame in asking the interviewer questions; it's what you'd ask a coworker if you were trying to solve the problem.
>There's nothing wrong with straightforwardly engineered apps, screens, forms, databases. Most of the world subsists on this.
Most of Google subsists on this. Interviewing everyone for skills that only come up when you are pushing the boundaries of CS is pointless. 99% of engineers at Google fling protos and do menial translations on them.
Have a separate "CS" group if you want and apply the CS questions to them. For the rest, interview instead on their ability to write maintainable, testable, scalable software. Google would be worlds better off for it. It's shocking how incompetent the org is as a whole at delivering and maintaining quality software.
I don’t think product excellence is a function of the interview process, it’s mainly a function of what you are promoted for, and rightly or wrongly, launching new things tends to give you a better career trajectory than fixing or maintaining old stuff.
That's an unfortunate consequence of the OKR process. If the company has to be chasing near-moonshot projects all the time, then it also aligns resources toward them. That means money is chasing those projects.
There is a huge incentive to keep launching new things, to keep doing something big. It's the classic throw-pizza-at-the-wall-and-see-what-sticks approach, except the experiment is run across a huge group of people. So eventually some pizza sticks.
But even in that case, I hear the '20% time projects' culture has long been dead by now. So that means the projects have to come from someone at the top to get any resources.
This is a "Google Tech Dev Guide: Grow Your Technical Skills…", not a "how to interview" guide. Particularly a "foundations of programming" section, in this case.
I don't know if it's a good interview question or not, but that doesn't mean it's not part of learning programming.
There's more to learning to program than learning to do well in interviews, just like there's more to learning in general than doing well on standardized tests.
I think it's odd that so many immediately jumped to discussing the _interview process_. Google's, or anyone's, interview process may or may not be effective at finding talent or being fair. (I have no particular opinion on Google's, but there's LOTS written on the difficulty of creating a successful interview process for programmers, and various different opinions and approaches).
But why do we in this discussion seem to assume the only point of this Google site is helping people _pass interviews_? It doesn't say that's what it's for at all.
I think it's actually a pretty marvelous free site for learning CS and programming of various sorts, for people of various levels and domains.
> That design was heavily based off of number theoretical knowledge I had to dust off
The fact you had to "dust it off" implies that it was not top of mind, which is precisely the argument people have been making: that it's not top of mind, and need not be because it can be "dust[ed] off", and that investing months in prep just to pass an arbitrary hoop is a waste of time.
I'd argue that if you don't regularly dust it off, you'll forget it. The only reason I remember most of that stuff is because from time to time, I go back and read crypto papers, and then need to refresh my memory.
Think about any foreign languages you learned in high school. If you haven't used them in 10+ years, it is very unlikely you'll be able to 'dust off' your second language, as languages acquired late tend to require regular use to keep amnesia from setting in.
But he's not that unique. One of my 20% projects recently required reading up on specialized parallel parsing trees/regular-expression generalizations. I was able to sketch an algorithm structure that I thought should work, and some quick searching brought me to a paper that formalized it (I wanted a one-pass way to match a set of subtrees on a large tree, where each subtree could be vaguely regular-expression-y, such that I could apply the `*` or `?` operators to a node in the needle-tree instead of to a substring of the needle-string). A generalization of Aho-Corasick seems to work here, though I haven't had time to implement it yet.
My day-to-day work involves lots of statistics[1], despite being an unsexy role (SETI), and I'm of the opinion that most people at Google should have a better understanding of statistics, even though by all accounts the average statistical know-how of my coworkers is well above average.
A coworker is working on code generation and parsing, so his automata know-how is coming in handy too.
And that doesn't even touch on the need to have some of this foundational knowledge when dealing with tooling like spanner or many of our other tools for dealing with data at scale.
I'd generally agree that one person need not know all of it, but a familiarity with the basics in all of these areas is necessary, especially if, as another user mentioned, you want any level of internal mobility.
Right, I'm pretty mundane actually. I don't think I'm all that special, and there are lots of people at Google I'm envious of, able to solve problems I can't begin to crack. Some of the people working on build systems at Google are using some dizzying graph algorithms, for example.
Consider how Bazel must build and test the world’s largest(?) monorepo at scale incrementally on every commit.
Perhaps this is an artifact of Google's size, but a lot of problems turn out to be "well, this would be simple if our input was a million items; unfortunately it's a trillion items", and now all of a sudden asymptotic complexity matters.
> It turns out optimally computing the LZ Distance Metric requires efficiently solving the Longest Common Substring problem, which can be solved with a generalized suffix tree in O(N + M) time.
Imagine you walked into your Google interview with all of the experience from improving the size of GWT output fresh in your mind, but before you "ran into the 'LZ Distance Metric'". They sit you down and say: "The LZ Distance Metric is related to your past projects; we want you to solve the LCS problem. Use this whiteboard, markers are to your left, you have 45 minutes."
You don't have time to read some papers, research, prototype things or sleep on it.
No doubt it takes experience, curiosity, and exposure to algorithms to solve, but solving it under Google-interview-like conditions mostly requires that you've done it before and memorized it.
I agree with most of your post, except for:
>But IMHO there's no harm in being challenged like this, any more than in being asked to complete basic training to be in the military, even though you might just be a tech-support soldier.
I would agree if the big companies making the demands on these candidates assumed the burden of training. You don't have to spend $60k for a degree, spend hundreds of hours grinding leetcode, and have a portfolio of side projects to join the military.
> I don't think Google interviewers are interested in you solving the problem and getting the correct answer; they are interested in seeing how you work
I think Google (and other companies) believe this is what they are doing, but unfortunately there's no way to ensure that this is actually what hiring managers/interviewers are screening for. For every one good interviewer who cares about thought process, there are three interviewers who want to see the candidate fail, to prove they are superior and justified in having the job they do.
I enjoyed your story, and am glad things worked out for you - but as others have pointed out, you are most certainly the exception to the rule. Why are we using exceptions as the baseline for 'standard'?
It's just another example of how CS (read: Silicon Valley) culture is geared entirely towards the 20-30 year old male with no family or time commitments, willing and able to spend hundreds of hours grinding leetcode and building a portfolio of side projects.
Their culture problems that have become publicized of late are a knock-on effect of this. It definitely establishes the fundamentals and makes you "better" than the rest (to a point), but at a terrible cost.
I have worked on similar kinds of problems, where obscure details about sparse matrix algorithms were required. My team ended up writing a large in-house search engine and ranking framework that combined image and text search, all in Cython. We used various types of bit-packed indices for fast filtering, and our own implementation of a series of sparse matrix algorithms, sort of ripping out CSC and CSR internals for a handful of operations related to applying latent semantic indexing and some in-house vision model neural networks to the sparse data in a way that was both fast and let us store hundreds of millions of documents in memory on fairly normal hardware.
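For readers who haven't poked at those internals: CSR boils down to three flat arrays, and indexing them directly is what makes this kind of thing fast. A minimal illustration with scipy (the system described above was hand-rolled Cython, not scipy):

import numpy as np
from scipy.sparse import csr_matrix

m = csr_matrix(np.array([[0, 2, 0],
                         [1, 0, 3]]))
# CSR is just three flat arrays: nonzero values, their column indices,
# and per-row offsets into the other two.
print(m.data, m.indices, m.indptr)  # [2 1 3] [1 0 2] [0 1 3]
row = 1
cols = m.indices[m.indptr[row]:m.indptr[row + 1]]  # nonzero columns of one row
print(cols)  # [0 2]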
I would say that tech interviews which assess this type of skill are deeply irrelevant and fail even at the very task they attempt to solve. Not to mention how arrogant it is to reduce emphasis that should be placed on straightforward communication about real work experiences as a more important measure than performance on any type of interview coding tasks.
What you say just falls flat. You don’t give any reason to think that this stuff needs to be an interview-ready skill to be tested in a time-sensitive way. It could just be picked up later when you have a project where you need it. Most engineers can quickly grok this stuff from a book or online tutorials or talking to someone else or even just writing the problem down and locking themselves in a room for a few hours to think.
Any assumption that it needs to be already in your brain cache all the time prior to an interview or every day at a job is just harmful.
At worst it's a thin excuse for promoting a hazing style of one-upmanship as a barrier to jobs, killing diversity of thought; at best it's a misguided reward for a certain type of extreme premature-optimization thinking (one that carries a lot of antagonistic culture with it, too).
I don't think it's hazing; Google internal culture frowns on that kind of alpha-male coder one-upmanship.
It's most likely a residual of the fact that people fresh out of school don't actually have a lot of projects or experience to talk about.
If you ask me what the perfect interview is? I'd say: send me a bunch of GitHub repos you participate in and let me review your code, bugs, and discussions with collaborators.
I'll learn a lot more than from any 1-hour face-to-face interview.
My overall point is that there's this kind of attitude -- "calculus? I never need it." -- prevalent in society, where people rail against subjects they were forced to learn and then claim their day job never needs them.
I think whether or not you actually use anything you learned in CS courses highly depends on what you want to work on. My interests happen to be in compilers, computer graphics, and cryptography and I find I need to quite often revisit stuff I learned in college.
Even when I was writing 8-bit games and demos on the Commodore 64, I sometimes found a need for these skills, like replacing expensive functions with approximations.
I'm not saying interviews need to question you on these things; I'm saying these things are useful to study, learn, and stay abreast of. Perhaps I'm biased because I kind of love math and find an inherent beauty in these things, and so I find these kinds of puzzles fascinating. (The Fano plane is a particular example of a beautiful construct that I just adore for reasons I can't explain.)
I can't speak to Google in terms of the alpha mentality, but it's still irresponsible of the world's most effective computer science company to support it, advertise it with study guides, etc. I expect more self-awareness than that, and less lazy thinking that you need such time-constrained trivia to maintain an artificial prestige barrier (because it very sincerely is not about detecting engineer effectiveness). Take some responsibility for the cargo-cult copycats you are fueling. The same could be said about stack ranking and various other policies as well.
Regarding interviews, everybody's a comedian. You say GitHub, but some of the best engineers I know do not participate in open-source projects or are prevented from doing so by their employers. I say just treat software engineering like every other professional field on the planet and conversationally interview people in a technically detailed way: focus on their past projects, work history, and ability to get specific about their work. And of course that would not be as good for people just out of school or who have forgotten ancient work-project details.
I would be fine with a holistic average approach, where each of these different ways is applied, but each counts for little weight individually. So you can be forgiven for a code test flub if you’re sharp on your work experience technical details.
But we exist in a state far away from that... where things are overwhelmingly based on isolated, time-sensitive trivia given 100% of the decision weight.
Calculus probably isn't the best example, because you really don't need it. Real engineers need it, but if software engineers got a different sequence in discrete math I think they'd be better prepared for the job market. I ended up doing a math concentration so I got both Calc 1-3 and some nice discrete courses like number theory, combinatorics, and abstract algebra that were super useful, and in my opinion we could definitely stand to replace memorizing integration techniques with studying some of those topics.
This really is not true. Using a little calculus to derive a closed-form solution to a problem can reduce computational burden. A team where I work just had to do exactly this for a geographic distance optimization problem. At first glance it seemed like you needed dynamic programming, and it was expensive, but drawing it out and modeling it with equations, we actually set up a Lagrange multiplier formulation of the problem and found it had an exact solution. This was not a team of mathematicians or anything, just database engineers working on a specialized in-house search optimization problem.
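A toy stand-in for that kind of derivation (not their actual problem): minimize squared distance subject to a linear constraint, and the Lagrange-multiplier system yields an exact closed form with no search at all.

import sympy as sp

x, y, lam, a, b, c = sp.symbols('x y lam a b c', real=True)
L = x**2 + y**2 - lam * (a*x + b*y - c)  # Lagrangian: min x^2+y^2 s.t. ax+by=c
sol = sp.solve([sp.diff(L, x), sp.diff(L, y), sp.diff(L, lam)],
               [x, y, lam], dict=True)[0]
print(sol[x], sol[y])  # a*c/(a**2 + b**2), b*c/(a**2 + b**2): exact, no search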
If you do work in statistics, then calculus matters quite a bit, especially for transformations of variables and deriving the formulas for the probability distributions you’re working with. In Bayesian methods, even if you use autodifferentiation tools to automatically sample from your model, you still often need to do a bit of calculus here or there.
Likewise, if you ever find that you need to implement your own model training algorithms, you’ll probably need to calculate gradients (for example, the gradient of custom log-likelihood functions) analytically before writing them as software functions.
Calculus is pretty pervasive in professional software engineering. You don’t need to be an expert, but should have working knowledge and ability to dig into references and work through it.
Yeah the statistics and machine learning applications are a bit hard to get away from. But I would still replace the calculus sequence with discrete math, assuming the number of math credits had to stay the same. At my school (University of Maryland, College Park) most CS majors just took calculus 1-3, statistics, and one basic discrete math course from the computer science department. The size of calculus seems massive (3/5 of the total math load) compared to the benefit. That precious time could be spent digging deep into linear algebra or combinatorics or graph theory etc, and that's useful for all developers who write algorithms, not just in specific applications' niches.
Why do you think they are mutually exclusive? Generating functions, geometric series, difference equations, integrals over discrete probability distributions, elliptic curve cryptography, general linear programming and optimization, eigenvalue analysis in graph theory, stochastic optimization, all these things involve calculus to solve discrete algorithmic questions.
In fact I’d argue it makes calculus more important than what is typically understood as “discrete math” because in order to have the mathematical maturity to really understand the meaning of major discrete math results and theorems, it requires understanding calculus first.
If you “just” study discrete math mistakenly believing “that’s where the value is” then you’d end up with a shallow understanding of discrete math, lacking ability to extract value from it except in some narrow sense from rote discrete math application-specific coursework.
> … applying semantic indexing … to the sparse data in a way that was both fast and let us store hundreds of millions of documents in memory on fairly normal hardware.
Fascinating. Would you mind sharing more, some links / white papers?
We did something similar with ScaleText.ai (as a step-up from Gensim), I'd be curious to compare notes and perf numbers. Drop me an email at radim@rare-technologies.com if interested. Thanks!
The underlying issue is that programmers/developers/engineers need to be split up in terms of technical difficulty.
By not doing this, companies don't clearly know how to assess your skills because as we've found, there's not an accurate and fair test that tests from the lowest level of putting HTML on a web page to cutting-edge AI and distributed systems that fits in an interview format.
There is ample, ample work to be done that doesn't concern much of CS at all.
This is the same for a lot of other more mature industries that have been around for decades or centuries -- people pushing the limits of agriculture science are different from people who run farms are different from people who work the fields. They shouldn't get the same interview, but here we are trying to test everyone as a CS graduate.
This split would be separate from salary. You can deliver $millions in value by never touching anything beyond CS-year-one and you can subsequently charge clients for that as long as you can demonstrate it.
Did your work on that project occur under a 45-minute deadline, on a whiteboard, with someone watching you?
I see "It's the process not the result" all the time, but almost always from people who already work at Google and have for some time. From candidates and fresh hires I usually see the opposite.
tl;dr: sometimes algorithmic puzzle thinking comes in useful.
I don't think anyone is debating that. You don't need to write an essay's worth of anecdotes. And your anecdotes don't really address the complaint: these algorithmic puzzles just don't present themselves in the majority of everyday work for the majority of everyday people. Not even at Google.
Preparing for a Google interview is like preparing for an exam on an arbitrary subject that's only loosely related to your field of work.
What this kind of interview actually tests for, is whether the candidate has the time and desire to do all the prep in the first place. It's heavily skewed towards younger individuals, especially recent graduates.
> The gist of that article is that by using edit distance (Levenshtein distance) to 'cluster' JS code fragments, you can improve the efficiency of gzip, either by bringing more similar code into its 32K window or by optimizing the Huffman codes it uses. However, edit distance doesn't know anything about the LZ algorithm's mechanism.
That sounds really cool, and even though I couldn't understand the technical details I understood the gist of it. But, because there is a "but", the issue with tech- and algorithm-focused companies like Google is that they don't look at the grander scheme of things, the same way as some other less tech-focused companies/people would have.
To go back to your example, that solution meant that Google could "serve" even more JS code to clients' browsers, and I assume other Googlers came up with different but related solutions, which meant that even more and more JS code would "make it" into Google's web products. That's how we got to things like the latest Gmail interface, which is a mess in terms of JavaScript, because Google just couldn't say at any one point "No more JS!", presumably because it was getting easier and easier to "input" JS into that thing. All of this could have been prevented if at some point a Google higher-up had said "YAGNI!" or "KISS!", which are also very valid programming principles, but ones I didn't notice being emphasized on this Google Tech Dev Guide page.
Everything has gotten bulkier in Gmail, not just the JS, but the CSS, the DOM, the fonts, the assets overall, because Gmail is an app that's over a decade old and features have mostly been added, not removed.
This is the fate of all legacy software, OSes, native apps, they all eventually burst under their own weight, and are rewritten clean. Inbox was that dream :)
In particular, Gmail has been engineered to mostly be served as a single page app that functions offline as well. SPAs were a bad architectural trend now being somewhat reversed by code-split PWAs.
The same JS tools like Closure Compiler that enable large SPAs also enable efficient code splitting into tiny pieces (look at photos.google.com as an example)
A hammer is only as good as what you build with it. Web apps got native app envy after installables and app stores took off. I think we've woken up from that now and you'll gradually see it improve.
Happy to hear that and happy to hear that in the end there were people at Google that managed to impose a KISS-like view on one of their most used products.
> Web apps got native app envy after installables and app stores took off. I think we've woken up from that now and you'll gradually see it improve.
Yeah, the hype was big with SPAs, I also hope that Google diverging from that path will set an example for the rest of the industry and things will become more "normal" again.
Funny, I use this kind of thing all the time when I'm programming. I wrote a comment citing the pigeonhole principle to justify a test case not thirty minutes ago. And I'm not even doing anything wizardly or revolutionary, just improving some concurrency code in a random worker binary.
As for your comment about "diversity of thought": these problems seem sterile when they're presented in the most general possible form, with all the formal trappings. The subsequence problem is at the core of diff(1), for example, and it feels more interesting in some way to dig into that, beyond just reading and applying your diffs as usual. But if you try to present it as "it's diff(1)", you will only get people who already know what that means. "It's a simple spellchecker", "reassembling DNA", "disentangling two books that have been mixed together", etc. -- these specializations all introduce additional dependencies and restrict the availability of the lesson. Anyone can read "a string S and a set of words D". It may not be optimized for you, but who cares about what you think is boring or not? You already know this math. Personally, I gleefully soaked up everything my undergraduate algorithms classes offered, even the AVL trees and all that other random circles-on-a-whiteboard stuff, because I was learning something in its purest form. And if I needed more meat? Whatever I was learning from would have mentioned, as an aside, that it was also diff and DNA and spell-checking, and I could pick and choose from those as appropriate.
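(For the curious: the kernel of diff is the longest-common-subsequence computation, and the textbook DP fits in a few lines. A sketch; real diffs use smarter algorithms such as Myers'.)

def lcs(a: str, b: str) -> int:
    # dp[i][j] = length of the LCS of a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a):
        for j, cb in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if ca == cb else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

print(lcs("abppplee", "apple"))  # 5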
Finally, would you have reacted anything like this if the page had been spat out by your favorite Ruby super-hacker wizard greybeard? If it'd been an artisan webpage hand-crafted in the hipsterest coffee shop in Oregon? If it'd been on Microsoft's recruiting page?
I have a degree in pure mathematics from a highly ranked institution. I published research as an undergraduate. I went to multiple REUs. I participated in the Putnam and similar competitions. I read CLRS in 9th grade and didn't even bother taking an algorithms course in college. But even I find that these sorts of questions are more gatekeeping than anything else. That's not a perspective that I had when I was coming straight out of college.
The revelation for me came from working at a successful company that manages to hire great developers and build great teams without needing to ask these sorts of questions. It turns out that these sorts of problems are relatively rare in real software development work, and many developers who wouldn't necessarily ace them in 40 minutes under tightly controlled conditions can nevertheless come up with perfect solutions in a normal work context, should the problem actually arise.
The reality is this sort of question selects for personality just as much as it does programming ability. It selects for the sort of person who likes puzzles and brainteasers and spends a lot of time immersing themselves in that culture. It reinforces a lot of cultural stereotypes deeply embedded in the programming world.
There's also a lot of perspective to be gained by working outside of a major technology hub. If developers are rarer to come by, if you're in any sort of hiring position you realize just how careless it would be for you to throw away a candidate because they didn't ace this sort of question. It'd sort of be like a family without much food throwing out all their bread because they saw a fly land on it. Relying 100% on this sort of question reflects an overabundance of applicants more than it does any underlying universal quality in the interviewing strategy.
Beyond this, I think there's a little irony in your comment. You mention diff(1), but who's to say the author(s) of that tool would've necessarily solved this problem on the spot? You're actually making that assumption, which I find to be unjustified. There's an entire class of thinker and problem solver you're ignoring -- the deep thinkers, the great ponderers who soak their mind so thoroughly in a problem that they come out with something wonderful. But you might need to give them some space and a day or two.
I interview for my company, too, and don't ask these kinds of algorithm questions because the reality is these problems aren't typical of day-to-day work for the positions we are hiring for.
I'm interested to know what your interview process is. If you wouldn't mind sharing, how does the interview process work at your company?
We have the candidate sit at a workstation and write a functioning program to solve a relatively realistic but self-contained problem using the language of their choice. It usually involves extracting and working with some data from a few big CSV files -- we have pretty legitimate datasets set aside just for the interview. There really aren't any restrictions during this: the candidate can use any resources they would normally use, including Google or Stack Overflow.
We don't have the interviewer watch everything they're doing over their shoulder. They're allowed to just work on the problem on their own.
>>You mention diff(1), but who's to say the author(s) of that tool would've necessarily solved this problem on the spot?
I think this is the gist of all the problems facing the current interview process.
There is a big difference between inventing a solution to a problem and learning the solution to a problem which someone else invented.
The latter shows nothing about the ability, or even the way, in which a person thinks or works. It just shows the person was able to scale multiplication-table memorization skills to other things as they grew up.
It would be interesting to know how much time and practice it took the inventors of many of these algorithms to actually create them. When you think about it, that's basically what candidates are being asked to accomplish in these kinds of interviews, unless they already have the algorithm (or a very similar one) memorized.
Would we be surprised to learn that most of the smart guys behind these algorithms probably didn't come up with them in 40-ish minutes on a chalk/white board?
I ran into this case studying for interviews a while ago. The "maximum subarray problem"[1] is a common interview question; you are generally expected to be able to come up with the O(n) solution.
There's a great passage about the origins of the problem and how multiple excellent algorithmists could not improve on an O(n^2) run time[2]:
> Grenander observed that the cubic time of Algorithm 1 was prohibitively slow, and derived Algorithm 2. In 1977 he described the problem to Michael Shamos of UNILOGIC, Ltd. (then of Carnegie-Mellon University) who overnight designed Algorithm 3. When Shamos showed me the problem shortly thereafter, we thought that it was probably the best possible; researchers had just shown that several similar problems require time proportional to N log N. A few days later Shamos described the problem and its history at a seminar attended by Jay Kadane (a statistician at Carnegie-Mellon University), who designed the linear-time Algorithm 4 within a minute.
The solution is elegant and seemingly "obvious" after looking at it, but good luck coming up with it yourself :)
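For reference, Kadane's linear-time Algorithm 4 is tiny once you've seen it (the sample input is the classic one from the Programming Pearls column):

def max_subarray(xs):
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)    # extend the running sum or restart at x
        best = max(best, cur)
    return best

print(max_subarray([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))  # 187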
It's much more important to explain your thought process, work through the problem, and come up with any solution than it is to come up with the optimal solution in these interviews. I think that's what people who rail on standard coding interviews don't understand.
It's not about memorizing the answer to algorithm puzzles. In fact - if a candidate seems to know a problem by heart, that interviewer's feedback will often be discarded. It's about putting the candidate in a position where they have to think through a problem critically and seeing what trade-offs they make, what constraints they assume, and how they communicate their intent.
I have recommended hires for candidates who don't find the canonical solution many times.
I've had coworkers at Google tell me that they've informed candidates verbatim "don't bother giving me the O(n^2) solution" right when they start to go down that path. Hard to get much more of their thought process when you shut them down immediately.
(Also a Googler, not GP commenter. Speaking just for me.)
Yes, I have. Generally this was a case where the candidate had a good grasp on an approach to the question by the end of the interview but wasn’t able to write it due to time constraints.
I see interviews as a dialogue between me and the candidate where I give them hints in the right direction when required. Very few candidates require 0 hints.
Counteroffer: if you know that it's being asked in an interview, it's much easier to come up with a neat and elegant answer.
The test-taker within me pattern-matches this problem to "interview question with a single-pass linear-time constant-space algorithm". Single-pass constant-space algorithms are so heavily constrained that they're much easier to find. The big hurdle for the maximum subarray problem is knowing that a single-pass constant-space algorithm exists; finding it under those conditions is not that hard. The original inventors of the algorithms you mention had a much harder job than an interview candidate does: they did not know what optimum to aim for.
What does the above problem have to do with the pigeonhole principle? I'm all for CS foundations -- I learned the pigeonhole principle as an undergrad, never used it (at work or during my PhD), and have since forgotten it, but I'm glad I did it. The above problem, though... it just feels like a Googly interview question, whether it is just diff or not.
What really gets me riled up is that these companies don't care about my specialty or actual experience at all during the interview, even if that's why they want to hire me in the first place. Nope, it's all brain-teaser coding questions that have nothing to do with what they would actually want me to do if hired. I get why they do it (a one-size-fits-all interview process), but I don't think it is at all better.
There are now whole undergrad courses at universities like Stanford devoted to gaming the interview process at Google, Facebook, and other big tech companies. I don't think this is desirable or sustainable.
This seems related to Goodhart's law: "When a measure becomes a target, it ceases to be a good measure."
If the hiring process for some tech companies has become so dysfunctional that they actually consider brain teaser coding questions to be meaningful then we should expect that competitors with more results oriented hiring processes will eventually beat them in the market.
Perhaps the tenacity and work ethic needed to study a couple of months for an interview selects for people who would do the job well anyways? Maybe that is what it means to be Googly as a culture fit?
Anyways, practicing solving clever small programming problems at least isn’t boring (though I’m beginning to burn out on it), it reminds me of prepping for a high school or ACM programming contest. As long as it eliminates enough false positives, they’ll keep up with it.
It isn't just whether you can solve these problems, but can you also write straight lines of text on a whiteboard? Also, do it while talking and standing on one foot at the same time.
I sometimes wish people would explain what they mean by that, because I'm confused by the phrase. I assume it's some Americanism with a broad range of meanings. Cramming trivia for interviews doesn't sound like the "work ethic" I see defined when I google the term, but then again, I recently had a German student explain to me that they understand it to mean hard (but dumb) work.
I agree it can be hard to put your finger exactly on what it is.
My best sense of it is that it is something approximating your ability to follow through on some task that a) was never, or has ceased to be, interesting or otherwise stimulating, or b) is sufficiently difficult that many people would allow themselves to quit. Often related to the idea that there is some form of delayed gratification to be had.
Stripped of the religious connotations, the phrase as commonly used (e.g. "having a good work ethic") essentially means "feeling an obligation to consistently work hard for your employer".
Yes, I would take this as behaving ethically at work, e.g. not sexually harassing co-workers, not stealing from the company by abusing the expenses system, not hiring hookers on the company Amex (which happened at one company I worked for in the UK), and so on.
To use an example from the military "officers eat last".
That would be my default interpretation too, but I've seen this phrase used plenty of times to mean "working hard", or "enduring work drudgery" (without any relation to ethical issues you mentioned), so I'm confused about the correct meaning of this term, and every time I see it, I have to infer from context what it is that a commenter is talking about.
The American meanings of work ethic relate to working hard or diligently. They don’t have anything to do with ethics. That would be under “work values” oddly enough.
Maybe I should have put scare quotes around it (work ethic)? It means whatever the hiring company wants it to mean; it isn't a very well-defined concept.
Translation: have enough free time to spend months preparing for an interview after work hours, or have no actual work ethic and do it at your current job.
Basically, fuck people with families, commitments, or already mentally demanding jobs.
>Finally, would you have reacted anything like this if the page had been spat out by your favorite Ruby super-hacker wizard greybeard? If it'd been an artisan webpage hand-crafted in the hipsterest coffee shop in Oregon? If it'd been on Microsoft's recruiting page?
You work for Google and you want to defend the org, we get it. Get over yourself. This really detracts from the rest of your post so I would consider removing it if you can still edit.
The entire point was that testing for these low-level fundamentals really has nothing to do with software engineering. It would be like hiring civil engineers based on being able to do some equations from memory that they would normally reference a book (or the Internet) for in real life. Not only does it fail to measure their ability to engineer, letting terrible engineers in, but it also rejects amazing engineers who simply haven't memorized the optimal solutions to things you would almost never use in day-to-day engineering.
I got pissed off at the "Google this, Google that" sentence because it's bad faith debate technique. The point could have been made just as well with a comment about company towns or cyberpunk megacorporations. The hysterical repetition contributes nothing to the part of the argument that matters in real life and serves solely to incite unthinking, involuntary anger.
And, yeah, the complaint probably should have gone in its own post, and had some more explanation. Double-posting still feels wrong somehow.
> The entire point was that testing for these low-level fundamentals really has nothing to do with software engineering. It would be like hiring civil engineers based on being able to do some equations from memory that they would normally reference a book (or the Internet) for in real life.
You don't ask those questions when interviewing a civil engineer because a licensed civil engineer has passed the FE and PE, eight-hour open-note tests that permit only basic scientific calculators. So not only are civil engineers tested on their ability to handle the basics from memory - you know as well as I do that writing the notes for a serious-business open-notes test is half the difficulty all by itself - they're tested in conditions which impose far more constraints and pressure. I don't think I've done a single interview where I didn't go back and forth with the interviewer to get hints and clarify requirements. Eight hours locked in a room with pencil and paper, fuck that.
I agree that Google's interview is stupid, but I've gotten into this argument dozens of times over the last decade and every time I try to think of an alternative I come up with something that has its own weaknesses. For example, one of the reasons I went for a CS degree instead of a software engineering degree is that it's absurdly difficult to pick up formal CS theory on the job but you'll be learning software engineering every day. Take-home tests are unscalable and derided as insulting wastes of applicants' time. Contract-to-hire is too easy for employers to abuse. I can't even imagine how badly a licensing test for CS/software engineering would go.
The best I can come up with is to do the same style of interview but on a computer. Which I think Google might even do these days, but I went with the whiteboard one because I don't want that much riding on my laptop working that day and I run at a greater disadvantage coding on a loaner laptop than I do on the whiteboard.
FWIW I work at Google and despise these kind of interview questions and think they have 0 relevancy to my job -- until they do, and then I look them up and research them -- which BTW is not a skill that this interview style tests for.
It's to the point where I've done interview training here twice and both times thrown up my hands and chosen not to interview, because I can't imagine giving an interview that I myself would just walk out of.
So don't give that sort of interview; give one you wouldn't walk out of where, if it goes well, you'll be able to write a compelling argument about why the candidate should work at Google, and if it doesn't go well you'll be able to write about where the gaps are.
Feel free to ping me on IM (username is in my profile) if you'd like to talk about how to do this in a way that helps hiring committee make informed decisions.
In case you think these questions are not actually asked in Google interviews, I should add that I was asked this very question in a Google SDE interview just 6 days ago. I failed to answer it and was consequently rejected. Also, the interviewer asked me nothing other than this question -- nothing about the breadth of work that I have done in different sectors, my interests/passions, personal projects, etc. I was so upset after the interview was over and I cried a lot after reaching home. :(
>I was so upset after the interview was over and I cried a lot after reaching home. :(
Yep, this is a side effect of the image Google portrays combined with the idiotic interview process. They cultivate an image of being prestigious both internally (the NIH syndrome is extreme) and externally. They claim that all external experience is irrelevant once you start at the great big G (which is bullshit, but that's beside the point), so you will just be interviewed on "fundamentals".
So you go in and do you interview, and all of the interviewers ignore your 20 years of developing guidance systems for rockets at NASA and have you write algorithms on a whiteboard to find palindromes or overlapping rectangles while the interviewer emphasizes that the code will be photographed (so don't fuck up the syntax!!).
At this point, an org you think is prestigious has convinced you that your only relevance in the big leagues is these whiteboard questions, which you fumble on because you haven't been grinding on leetcode for the last 2 months. REJECTED. A snap judgement based on your public performance skills has deemed you incapable of handling the very fundamentals of engineering. So what is it you've been doing your whole life?
Wow, an interview should never be so traumatic... Your experience shows how broken, one-sided, and blind to a person's value and potential the process is. I know it's no consolation, but recently I saw a site with many brilliant people sharing their rejection stories: https://rejected.us/ It just goes to show: if a company cannot see your real worth, don't let it get you down; keep going and prove them wrong.
The market for IT folks is excellent, and there are many, many great companies out there, most of them not very famous. If you've gotten an interview at Google, you will definitely find work elsewhere.
If you couldn't solve that question, maybe you shouldn't work at Google...
def valid(S: str, word: str) -> bool:
    # Greedy check: is `word` a subsequence of S?
    s = S
    for l in word:
        pos = s.find(l)
        if pos == -1:
            return False
        s = s[pos + 1:]  # consume S up to and including the matched letter
    return True

if __name__ == '__main__':
    D = ["able", "ale", "apple", "bale", "kangaroo"]
    S = "abppplee"
    matches = [w for w in D if valid(S, w)]
    print(f"longest is {max(matches, key=len, default=None)}")
Yeah, whenever I see a question like that one, I think it's pretty straightforward, but then I wonder whether my straightforward solution is the "brute force" solution and the interviewer is looking for some tricky "elegant" solution. Still, if I were asked that in an interview, I'm sure I'd put together something like your solution; I can't think of a "better" way to solve it.
I was trained by FB for interviewing -- the software part of the phone screen. You start with a simple enough problem and see how fast the candidate can come up with a solution. Then you complicate the problem to find out more: what if you have unlimited memory? What if you have very little memory? Etc. You are supposed to look for signals. Can the candidate improve the solution based on some random constraints/hints? Is the candidate good enough with the language they picked (e.g., append vs extend in Python, or why I copied S into s)? We were not looking for tricks, and the entire idea that big companies are looking for tricks in their interviews was spread by people who either have never been in such an interview or were bad enough to be rejected.
That's certainly your opinion, but the subsequence problem is a very general, abstract question that offers the candidate a number of ways to arrive at progressively better solutions by following their intuitions. It's not the greatest interview question, but it's certainly not the worst. This website appears to be for people who already know how to code, and it offers many different paths as well.
It doesn't. This seems like a typical Google interview question. If you didn't know how to solve it before the interview started, you aren't going to figure out anything other than a brute force solution in 45 minutes. And a brute force solution will not get you a good grade in a tech interview at Google. It's idiotic: once you do get hired by Google, easily 80% of your work is copying one protocol buffer into another, and the remaining 20% is mostly yak shaving and figuring out how things work in the absence of documentation. Dynamic programming should be the least of their concerns.
That's probably the issue: algorithm solving is not software engineering, and if you mostly select for the first, you do not necessarily get people who are good at the second, broader and more important area. Then you end up with a terrible system to work with.
It's like hiring construction engineers based only on their performance in structural analysis. Sure, it's important that the house doesn't collapse, but that alone won't get it built.
> If you didn't know how to solve it before the interview started, you aren't going to figure out anything other than a brute force solution in 45 minutes.
I'd never heard this problem before and it took a few seconds to think of something better than brute force. Many colleagues I've worked with would too, and could probably improve on their first non-brute-force idea over the span of 25 minutes.
The problem here is that the only non-bruteforce solution worth pursuing here involves DP with less-than-straightforward memoization rules, which the engineer is unlikely to actually use before or after the interview. So she has to waste a month studying _specifically for the interview_ and "refreshing" skills she won't actually need on the job. It's like you're hiring a welder, but you want them to be good at juggling as well, just for the interview.
Don't get me wrong, I'm not saying DP is useless; it came in handy a couple of times over the course of my 20+ year career. I'm just saying that you could always look it up, and the amount of signal you get out of someone solving this problem is very close to zero, because it amounts to "this person solved this problem before the interview", whereas the question you really need to answer is "can this person solve the problems we need solved?"
Actually, looking at their "brute force" solution, what I was referring to as brute force was their "greedy" algorithm, and the first optimization I came up with was the optimal solution. I'd call that a very natural strategy to take -- try doing the words simultaneously instead of separately, and you see it.
I don't even see that as DP.
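For concreteness, a sketch of that simultaneous approach (the names are mine, and it assumes non-empty words): one pass over S, with each word parked on the next character it needs.

from collections import defaultdict

def longest_subsequence_word(S, D):
    waiting = defaultdict(list)  # next needed char -> [(word, chars matched so far)]
    for w in D:
        waiting[w[0]].append((w, 0))
    best = None
    for ch in S:
        for w, i in waiting.pop(ch, []):   # every word waiting on this char advances
            if i + 1 == len(w):            # fully matched
                if best is None or len(w) > len(best):
                    best = w
            else:
                waiting[w[i + 1]].append((w, i + 1))
    return best

print(longest_subsequence_word("abppplee", ["able", "ale", "apple", "bale", "kangaroo"]))
# apple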
Not everybody will be as quick as me, but there are plenty of people that don't spend a month practicing for a Google interview. If somebody is practicing solving data structures problems for the first time in their life, that's fine -- maybe they went to a substandard school. But if they have to spend a month refreshing any time they want a job, it means their brain drops skills they learn that are closely related to the job they do. That means their experience is worthless -- they aren't retaining it, long-term.
If the skills were closely related to your job, then it doesn’t make sense that you’d need to refresh those skills - as you would have been applying them at your job...
Sounds like the skills are the useless things, not the work experience.
That's not the point. I just automatically assumed a subsequence problem would involve DP. I only know this because I studied for interviews in the past, not because I ever had to solve a subsequence problem at work, and I've done some _very_ advanced stuff.
I posit that ability to solve such problems is completely irrelevant to one's job performance, at Google or anywhere else. Google's own test of this hypothesis (hiring a control group of people irrespective of their interview scores) seems to bear it out.
The point is you are claiming that people need to study up a month for a Google interview. My point is that there are many people who do not need to study up for a month, or at all. They walk in, get asked some fun little data structures problems, and solve them.
This idea that some people like to push, that these interviews are all about memorization, is a false one.
>But if they have to spend a month refreshing any time they want a job, it means their brain drops skills they learn that are closely related to the job they do. That means their experience is worthless -- they aren't retaining it, long-term.
DP isn't used the vast majority of the time. That's not retaining "experience".
> only non-bruteforce solution worth pursuing here involves DP
What's wrong with starting with a hashtable of substrings of fixed length? For inputs with low autocorrelation, that'll get you a good average-case speed up.
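Something like this (my sketch): for a fixed k, throw every length-k substring of one string into a set and probe with the other; since a common substring of length k implies one of every shorter length, you can binary-search over k.

def has_common_substring(a: str, b: str, k: int) -> bool:
    grams = {a[i:i + k] for i in range(len(a) - k + 1)}
    return any(b[i:i + k] in grams for i in range(len(b) - k + 1))

def lcs_length(a: str, b: str) -> int:
    lo, hi = 0, min(len(a), len(b))
    while lo < hi:                     # find the largest k with a common k-gram
        mid = (lo + hi + 1) // 2
        if has_common_substring(a, b, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

print(lcs_length("xabcdy", "zabcdw"))  # 4 ("abcd")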
Have you studied Cracking the Coding Interview before? If so, then I think you've probably seen similar problems.
I find these problems very annoying, and I code a lot of clever algorithms in my research. But none of it is of the string-scanning variety -- except, I mean, lexing in a compiler.
And that’s how I would solve this problem in any case: I would just construct a scanner for D that would simply add to the state set rather than transitioning. That isn’t even in the canonical solution set, and anyways, would be considered too trivial as a test of compiler construction knowledge.
I've never specifically studied for programming interview questions, or read that book. Of course, when somebody asks a fun problem, online or something, such as here, I'll try to figure it out. And I also find "string problems" to be the worst, I think they're a particularly bad genre of algorithms question.
My claim was a very weak one, too. It was that somebody who has never heard the problem before can get better than the brute force solution in 45 minutes. As m0zg intended it, that means better than 2^n*|D|, which is easily true. As I intended it, it means people could do better than the "greedy" algorithm in 45 minutes; my image of a canonical developer is somebody who gets the greedy algorithm right away, and then has 45 minutes to improve on it. And I think they'd be able to accomplish that.
I guess if you are used to solving these problems, they become easy. Anyways, I would never have seen the greedy solution to this problem, and am just lucky that it is basically a compiler problem if you look at it hard enough.
But if you aren't in the mindset at all, these problems can be complete WTFs. For example, I just started interviewing recently, and in one of my first interviews someone gave me the house-painting problem. I was completely stumped, didn't know where to start, wasn't familiar with the structure for solving these kinds of problems, and anyway these never really came up in my area of expertise. Ask me to write an interpreter, compiler, text editor, live programming system, or reactive programming framework -- I know all the tricks -- but ask me to minimize the paint on a bunch of houses...
I'd imagine you might want a bit of practice to get refreshed. I know what you mean. But I'm guessing it's not a month of practice, and you're not thinking you have to memorize the problems beforehand.
For me, it's a similar background (interpreters, compilers, text editors), and in any algorithms interview, what has always gotten me is dynamic programming problems and remembering that hash tables exist.
Edit:
Funnily enough, one time on a phone screen, fresh out of college, I got asked directly what data structure I'd use for a text editor. The next words out of my mouth were "a pair of finger trees," and it didn't go well: I didn't get a follow-up interview.
Maybe not a month, but many have done it for a month to get that dream job at Google. The possibility is there, and that could be the competition you are calibrated against.
Ya, I think we are sometimes actually better off with the generic brain-twister questions, because the more specialized questions are harder to evaluate. Like, my idea of writing a code editor might be completely different from yours because I've dealt with different requirements. And if the interviewer is junior enough (or not versed in the specialty), they might not be ready to have a serious discussion about trade-offs and design spaces.
It does seem to me, reading online the past few years, like there has been an inflation in difficulty of a certain category of interview questions, as a result of people studying.
Usually, when interviewing people in an algorithms interview, I've asked a straightforward warm-up question, followed by a more interesting second question, that might be difficult for somebody that hasn't thought about anything interesting in 10 years. I don't think there was ever a situation where somebody got through the first just swimmingly, and then hit a wall on the second. The pain always starts on the warm-up question.
> that might be difficult for somebody that hasn't thought about anything interesting in 10 years.
Coming from doing research for the past 10 years and now looking for a dev job, I have the opposite feeling: all these interview questions (even the hard ones) are way too boring.
>> I've never specifically studied for programming interview questions, or read that book
You've also never been through a FANG interview, according to your LinkedIn profile. I've been through several (successfully), and conducted well over a hundred. It is very unlikely anyone can pass them without extra work to prepare.
As someone who's done a ton of tech interviewing for a blue-chip Silicon Valley firm, viewing this as a homework problem to which there is one correct answer is exactly not what I'm looking for. I want to see your problem-solving skills and I'm curious about your knowledge base, but if you don't know any of the specific techniques and/or don't get to the specific optimal solution I'm looking for, I really don't give a shit.
This is the reassuring story every interviewer has to tell themselves to feel good about their decisions... and it's the story every interview consultant tells the tech firm. But I tend to think it's probably a just-so story, most of the time. All this work is done in the tech world to eliminate the subjectivity from interviews - this is how the subjectivity is re-introduced through the back door.
> this is how the subjectivity is re-introduced through the back door.
Exactly. You'd think that the large tech companies would insert "mystery shopper" candidates into the pipeline to uncover and measure the objectivity/subjectivity/bias of the interview process.
Everyone says "it's not about the solution but how you go about solving it", but it's never been my experience as an interviewee. Every time I've been interviewed with one of these godawful trivia questions I've been denied even though I do my best to work with the interviewer to solve it. On the rare occasion I get feedback it has been because I didn't solve the problem optimally or some other nonsense.
> figuring out how things work in the absence of documentation
That would be a great interview test, actually. Given a bunch of unfamiliar code, add some simple functions without breaking it. Keeping the style (not only syntactically) would be a bonus.
Yes, I actually implemented such a thing at a company I worked at after Google. We'd give the candidate a fairly simple and self-contained codebase, and ask them to add a simple, incremental feature in the span of an hour. No limits on legitimate internet access, Stack Overflow, etc., and using their favorite text editor (we had several already set up). This worked great for us, as it selects for people who can solve practical problems in code they're not familiar with, which is the bulk of the work of any software engineer.
While I'm not in a position to test it, I've thought for a while that companies should save their bugs, make an app that is similar enough to the real work that these saved-off bugs can be put into it, and then compare candidates' solutions with what their existing people came up with.
> If you didn't know how to solve it before the interview started, you aren't going to figure out anything other than a brute force solution in 45 minutes
You're assuming that it's important to actually solve the problem. I'm not sure that's the case - it's really asking whether or not the candidate understands that brute forcing is a poor solution and determining whether they're capable of even looking for a better solution. A lot of developers find it very hard to look past the obvious first solution they find to a more efficient algorithm instead. Specifically companies like Google want to hire people who can do that deeper work.
The real issue with problems like this is when much smaller companies, hiring a dev who'll be bolting together APIs and won't ever need to know the answer, start to copy the technique believing "that's how you hire devs".
Yes, but Google wants the right answer, regardless of the way you think, and regardless of the fact that you came up with a proof of P=NP in the interview.
I've asked a few different interview questions in my time, and the number of candidates who have gotten the "optimal" solution currently sits at 0, to any of the questions.
Despite this, I've suggested that we hire some of those "wrong" answer candidates. And given the interview feedback I've seen, I'm not the only one like this.
I agree with you that no significant number of developers needs this kind of problem-solving ability for their daily work, but I would disagree that they never need it. And when they do, it can have a huge impact.
There are many applications of these fundamental problems in system building. They are usually not as obviously in-your-face as in the coding interviews, and it is often not possible to let a library do that stuff for you. That's why it's beneficial that every developer who might have to do this stuff is capable of doing it.
E.g. it's a common problem to have a list of objects A and another list of objects B and you want to find all elements in B that match an element in A.
The naive approach is to think of 2 boxes with balls of different colors. You pick one from the first box and compare it one by one with each ball from the second box. This is equivalent to nesting two loops and has a runtime complexity of O(n*m).
But this problem is absolutely solvable in O(n+m) with added memory of O(n) (or m, pick the shorter list), by creating a hashmap and then iterating once.
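A minimal sketch of that in Python, with hypothetical key extractors standing in for whatever the real objects expose:

    def match(xs, ys, key_x, key_y):
        # Index the shorter list once, then stream the longer one:
        # O(n + m) time and O(min(n, m)) extra memory instead of the
        # O(n * m) nested loops. Pairs come back as (indexed, streamed).
        if len(xs) > len(ys):
            xs, ys, key_x, key_y = ys, xs, key_y, key_x
        index = {}
        for x in xs:
            index.setdefault(key_x(x), []).append(x)
        return [(x, y) for y in ys for x in index.get(key_y(y), [])]

Something like match(translations, entries, lambda t: t.key, lambda e: e.translation_key) (both accessors made up for illustration).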
The difference in real-world performance is huge. I optimized a small system dealing regularly with about 20 MB of translation data from a 20-minute runtime down to a few seconds. Not for a tech giant, but for a small webshop with about 100 employees.
My thought process: 1.) this is an intersection problem, 2.) what does a quick search have to say about efficiently solving an intersection problem in $language? That thought process does not require any algorithm knowledge. That knowledge is already out there in abundance, and there are enough people smarter than me focused on those problems that I am wasting everyone’s time by implementing a solution myself. (Btw the answer in C# is LINQ, if you wrote an algorithm yourself I’d call you a bonehead and tell you to rewrite it. I suspect the answer is similar for any other language that has a standard library.)
You have two lists of objects and the thing you want to compare against is a property nested two levels deep in one kind of object and one level deep in the other object. Are you sure you still want to pass this to a library and not write this simple and basic approach yourself?
1. Pick shorter list
2. Create hashmap comparable -> object
3. Iterate longer list and check for each item: comparable in hashmap?
Done.
That doesn't need a library. Sure, let's say in Java, you could maybe pass your custom implementation of comparable to the library, but is this seriously your interpretation of better code? In fact in many cases the library can't help you, because your structures are not as basic as the library would need them and transforming is a huge and unnecessary overhead.
That's the thing: you usually don't need (and don't want) to implement an algorithm on basic strings. But the kind of problem where you need the same approach, just without strings, appears a lot. Especially if you enjoy working in system infrastructure like I do.
And what if you don't have a standard library? You're working on a new device, or in a new language? Frameworks breed programmers with less adaptability; foundations breed problem solvers who can adapt across problems.
The fraction of engineers at Google to which these conditions apply is tiny. It's absolutely absurd to interview based on some fictitious need. The real reason Google interviews the way they do is not because they need a high bar. Their interviews are structured to preserve the egos of the interviewers, and because, like the gaming industry, they can get away with it due to the sheer number of applicants.
Ego abounds everywhere. However, your point about applicant volume motivating the setting of a high bar for entry is spot on.
This is happening throughout the corporate world. Too many candidates for too few high-paying and elite positions.
My girlfriend works in finance and recently interviewed for a post-MBA position at (arguably) the top company for the work she does. She talked with other candidates, and discovered it was not uncommon to pay thousands of dollars on interview preparation (books, coaching sessions, etc.), and to spend, say, over 100 hours prepping in total... for interviews with a single firm... which you weren't even certain you'd be interviewing with. This trend is only accelerating.
The less pessimistic stance is to just say this is a consequence of greater specialization. The less optimistic stance is to say we are seeing professionalization for the sake of professionalization.
Google's requirements also allow easy mobility between teams. If all I knew was one framework, I'd always have to prove myself again when I move teams. The hiring bar ensures that most candidates can adapt due to their strong CS base.
No, these interview questions do not signal for that kind of adaptability. They're a signal that someone studied (crammed for a few weeks, often) a small slice of CS. They're a poor signal for the work engineers do there, or anywhere, in almost every case.
"Their interviews are structured to preserve the egos of the interviewers "
You just stereotyped about 100k people. You have no idea about how many people work to make the interview questions as interviewee friendly as possible. You may hate the company but don't judge the people without knowing them.
I'd argue that CS fundamentals don't necessarily tell me how adaptable someone is. Many times, the depth isn't there to strongly indicate one way or another. Adaptability is a difficult thing to try to assess.
I say this as someone working for another FAANG whose HQ is not too far away from Google's, and as someone who spent 4 years in a math PhD program.
And what proportion of HN readers are in that situation?
Not that where I worked would not have built things from scratch -- I still wish I had had the budget to use ML for one project, but at £250,000 just for the hardware in the early '80s, it wasn't cost-effective.
The problem with "just use a damn library" is that software is full of abstractions and they leak. If all you know is libraries, and you don't invest in knowing how underlying technologies and algorithms work, you're going to get destroyed and look completely lost at some point. Sooner than you think.
I'm 15 years into my career and have yet to be destroyed for not taking the time to learn the details of every abstraction I utilize. That seems like a massive wasted effort when I could be busy adding value to the products I'm building vs becoming an expert at all things.
People litigate this endlessly. You're still at the second tier of the Expanding Brain Meme of this problem (the first is to do the problems and get the job).
What does it say about a person who's smart enough to know that the problems are meaningless but they study for them anyway? Why might that particular kind of conformity be valuable to a huge corporation?
>currently I'm building a distributed collaboration system with event sourced data in Microsoft Orleans
I know it's easy and fun to use frameworks that abstract all the gory low-level details (the equivalent of calling a built-in sort function on a list of numbers), but somebody has to write those frameworks at some point. Don't get me wrong, I'm certainly impressed with the work that the Orleans team did and it would be great to read a write-up on the design of this framework. I suspect a write up on your 'distributed collaboration system with event sourced data in Microsoft Orleans' would be less interesting because the best part would read 'and here I used Orleans to do all the hard parts'.
That's precisely my point. There's a handful of guys at Microsoft building this awesome framework. Anyone else interacting with it needs to understand some details of the framework but it's mostly abstracted away from them. And those developers can be exceptionally well suited to their jobs without understanding the underlying algorithms that make Orleans work. Calling the algorithm design "foundational" is like saying it's foundational to understand how an intake manifold is designed in order to build a car. You could have the best intake manifold designers in the world and still build a shitty car around it. Foundational programming advice and tutorials should focus on making developers the best at building cars well, and leave the "manifold design" to the 1% who need to do it.
I think my issue is an attitude that celebrates ignorance.
>Calling the algorithm design "foundational" is like saying it's foundational to understand how an intake manifold is designed in order to build a car.
Here's a big picture view, the most talented developers I know have a very good mental model of low level details, and also tend to be pretty good at algorithmic questions. I don't think that's a coincidence.
I know that day-to-day programming tends to consist of solving rudimentary problems and worrying more about high-level code organization and architecture rather than micro-optimizations of some function. But it has also been my experience, that even in those cases there will be times when you will run into a problem that will require you to put together an algorithmic solution with a data structure you learned way back in uni that is analogous to the kinds of puzzles you disparage.
As a challenge, why don't you try to do one of those puzzles per day for a month - I guarantee you will find value there.
So you like the algorithms and data structures used to build a distributed collaboration system with event sourced data in Microsoft Orleans, but not the algorithms and data structures used for string search.
Honestly I don't care about the algorithms and data structures used for string search, or event sourcing for that matter. I use that code to build stuff. There's a vanishingly small portion of the population devoted to these problems, and indeed most of those people are probably working on search at Google or something similar. Calling them "foundational" is a joke. They're important, absolutely -- we all use them daily. But expecting every developer to be able to implement a search algorithm? That's the kind of dumb interview question I'd walk out on. Programming is much more than algorithms. Algorithms are the tools we use to build things. You don't expect a tradesman to build his tools himself, do you?
Algorithms aren't foundational, in my opinion. Once you understand the syntax of programming, you can start learning about the applications of that syntax, which for 99% of developers rarely ends up in the shape of an explicit algorithm. It's not all bad, debugging is foundational for sure.
I'd place a person's debugging skills, their ability to predict bugs, system design, knowledge of common (applicable) libraries (more so knowing when to use them, not method signatures), and perhaps even ethics above algorithms. Probably many more but there's a couple to start. Having never needed to build my own sorting algorithm in 14 years of coding, I'm pretty confident I don't need an engineer who can do that, either.
Foundational means the core set of principles/theorems/abstractions underlying everything else. Saying that debugging is foundational is like saying that knowing how to use a TI-89 is foundational for math. You don’t need a calculator or a computer to do math or computer science.
In the end you choose what you spend your time learning and if you decide that all this computer science is not worth it then that’s fine. But you will objectively become a better programmer if you understand these things. Even if you never use any of it.
> Having never needed to build my own sorting algorithm in 14 years of coding,
Neither have I. What I have had to do is recognize when I could do what I needed to do without sorting the array, understand various requirements when I'm writing comparison functions, understand why std::list::sort exists when std::sort is right there, debug a stalling mapreduce job, recognize when a library I'm using has done a stupid and written an intrusive data structure that sorts in N^2, etc. Are you seriously claiming that you'd be able to do any of that if you didn't know how sorting algorithms worked? If those things aren't in the documentation, or if you run into the intersection of that thing and some other issue, you're hosed unless you know the math.
For that matter, why are you so focused on sorting? You know what I have to write all the time? Tree and graph traversals. Why does it matter if I use a stack or a queue if my traversal will visit every node anyway? Why can I get away without maintaining a set of visited nodes in one case or the other? Hell, I have to implement something very much like a topological sort once a year or so. That's not something you'll ever learn by groveling over for loops in fifty languages.
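To make the stack-vs-queue point concrete, here's a generic sketch (mine, not from any particular codebase): the same skeleton visits every reachable node either way, and the visited set is exactly the part you can drop when no node is reachable by two different paths (i.e. a tree).

    from collections import deque

    def traverse(start, neighbors, breadth_first=True):
        # Pop from the left for BFS, from the right for DFS;
        # everything else is identical.
        frontier = deque([start])
        visited = {start}
        order = []
        while frontier:
            node = frontier.popleft() if breadth_first else frontier.pop()
            order.append(node)
            for n in neighbors(node):
                if n not in visited:
                    visited.add(n)
                    frontier.append(n)
        return order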
I agree that you rarely need an explicit "algorithm". What you do need, regularly and consistently and on a basic level to write correct code that is usable in the real world, is knowledge of algorithms. In a perfect world everyone would know from reading the docs and from staring at stack frames that their code gets slow when they use particular data structures and libraries. We do not live in a perfect world. The engineer that cannot write a single sorting algorithm is not the one that just uses library functions for everything and is fine in the end. The engineer that cannot write a sorting algorithm is the one that writes a jumble of for loops and you improve runtime from eight minutes to eight seconds by replacing it all with hashes. The engineer that cannot write a sorting algorithm is the one that designs an API that fundamentally requires server-side session state that grinds to a halt at ten QPS. These are people that I have worked with. In every case the fundamental issue was that they had "started learning about the applications of the syntax" and never studied formal CS theory.
> The engineer that cannot write a sorting algorithm is the one that designs an API that fundamentally requires server-side session state that grinds to a halt at ten QPS
I've come across more than a few "can write a sorting algorithm" engineers that nevertheless design APIs requiring server-side session state, or that have gone ahead and implemented a sorting algorithm embedded in the server-side HTML template.
Of course it's not sufficient. I don't even think it's necessary. But, generally speaking and in my experience, formal algorithms knowledge is incredibly useful, and you are at a significant disadvantage if you do not have it in your toolbox.
I was pointing out that a specific correlation you mentioned wasn't particularly robust, and certainly didn't amount to the causation you were implying.
Perhaps I should have made my point more explicit by noting the existence of counterexamples - engineers that design and implement good APIs without any particular knowledge of CS or the implementation of sorting algorithms.
You actually are conceding my point when you admit that knowledge of algorithms isn't even necessary.
Depending on your work, you may never need to use it again. I have not used it for a decade. I somewhat remember the theory. For an interview I can just memorize it.
You seem to have a nonstandard definition of "foundation". It doesn't mean "introductory" or "beginner"; it means "what everything else is built upon".
Ok. I'll tell you that I want my API to never take longer than 100ms as part of our service level objective. Code you contribute to our system is consistently failing to meet our objectives. Make it faster.
We're designing a game engine. Our objective is a smooth 50 frames per second. We want to do dynamic lighting, destructible voxel terrain, the works. Code you're contributing is consistently dropping our frame rate. Make it faster.
We make financial trades in some secondary market. Our goal is to ensure all transactions are atomic and take less than 100ms to complete. Code you're contributing is consistently failing serialization of the transactions leading to failed trades. Fix it.
These are but a handful of examples where algorithmic knowledge is useful. There are more. Without algorithmic knowledge you'll simply be rediscovering everything on your own, making up new things that don't make sense, and struggling to stay useful to your team.
There are programming tasks that clearly don't require it. Most web development jobs for SMEs are competing on price with other contractors for a commodity resource: time. For those jobs a pragmatic knowledge of the programming language, tools, and libraries is sufficient: the problem you're solving is valuable but low-effort where time is of the essence. The faster you can pop out a web app and move on to the next one the better.
These things belong to the same class of skill: programming. However I think algorithms are still foundational because even in the latter case performance and correctness still come up in less critical areas: time to first paint, real-time updates, etc. Even a cursory understanding of the difference in magnitude between O(n) vs O(n²) is useful if you understand that using the right data structure will solve 90% of your problems.
> Once you understand the syntax of programming
Syntax is trivial. Semantics are what matter. Take Hy as an example: it's a Lisp syntax, but a Hy program is just a Python program. ReasonML takes this idea even further, allowing programmers to work in whatever local syntax they find useful.
And yet "I open any textbook and look for the matching algorithm" is not an accepted answer to this question. You're supposed to either invent the algorithm from scratch or memorize interview problems and be lucky.
That is a classic textbook dynamic programming (or suffix tree, depending on the size of the set) problem. Behind the puzzle is the idea of dividing one bigger problem into smaller ones, and hence solving them efficiently with code. Maybe that doesn't sound too far from your day-to-day coding.
The site probably should have presented these puzzles from a different perspective, so that it is easier for people to recognize the message about what kind of programmer skills and mindset it is looking for.
I would argue your perspective boils down to what is the best standard way to teach programming skills. No one should stop learning from as many valid sources as possible. And it's not an easy problem for anyone to solve.
The underlying issue with the problem is that there is probably no one size fits all approach. Individual variability might as well be immeasurable. The best you can do is put out as much valid information and schemas as you have in approach to a solution to the problem. Tricks of the trade, heuristics, best practices. A full fledged scientific understanding of how the brain assimilates coding knowledge hasn't been established. It may not even be known if our knowledge of the brain is adequate enough to begin that research.
As the history of computing has changed over the decades, the nature of coding changed as well. More and more abstraction has been layered and now it's easier to code than it ever has been. It's like all these macros have been set up so that the programmer only has to work at one level.
Google is naturally going to promote its own system, but it is not Google's responsibility or role to claim to present some final statement on how to approach it.
Regardless, so much knowledge that used to be stored only in people's heads has been offloaded onto software, and the nature and requirements of coding have changed with it.
On top of it all there may simply be too much of it (computer science, mathematics, programming languages) for any single individual to learn it all in a strict completist sense.
I'm glad you do cool stuff day to day and don't need this. But I would disagree that what you do is foundational, whereas string matching is absolutely foundational.
Because APIs and languages change every few years. Concepts and foundations don't. If I move to a company using a new language, I am good enough to pick it up, because the design of my software is more about the data structures and their interactions.
The point is that simple exercises like that develop your algorithmic and computational thinking. You can't start out building an ecommerce site, you have to learn to program first. It's like... Karate Kid "paint the fence", it wasn't because Daniel's martial arts career was going to involve fence painting, heh. But because it developed certain skills.
It would be a problem, I guess, if people found it so boring that it deterred them from learning (I don't know what the Karate Kid analogy here is; I think Miyagi MEANT to make Daniel do something boring to prove himself, which is probably not appropriate for learning to program); then more interesting exercises might be beneficial. Everyone learns differently, but when I was learning, I found exercises like that fun myself. Because I enjoyed programming, the same way some people enjoy crossword puzzles (I don't). Of course it was a long time ago; perhaps "kids these days" require flashier puzzles, I don't know.
I'm a better programmer for having taken a 'compilers' course, but I've never written a compiler since and probably never will again (but who knows).
There's a reason "foundations of programming" are foundations. It's not because you will do those tasks again in your career. But because they will teach you concepts and methods and skills you will use forever.
Citation needed, please. Give me data. I may sound like a prick, but honestly, making such a claim without data just becomes a "his word versus mine" situation, which is anecdotal.
>>If I came across this problem in my "Foundations of Programming" course equivalent when I first started to learn how to code, well I'd probably be enjoying my life as a woodworker right about now.
This. Many times.
When I first started learning programming, I picked up 'Learn C in 21 Days' by Peter G. Aitken. Most of my friends at college picked up more technically intense books.
Admittedly, it took me a good 2 months to get through the book. But I was the only one who travelled those 2 months with the subject. By the end of the 2 months I had written several programs and gained a lot of confidence that I could code. Many people quit much earlier with their books.
These days, when I want to learn something new, I use the same approach. I pick the dumbest possible books and solve the dumbest possible problems. And then, when I'm months deep into the subject, I move towards enlightenment.
This is the CS equivalent of "maths are useless since I'll never use them in real life". Even if you are a software engineer who works exclusively in JavaScript, you need to understand the basics of how things work.
Remembering the chain rule is useless, sure, but concepts like how integration works, Newton's method, Bayes' rule, and a comfort with calculus-y concepts help me in my job all the time when reasoning about problems and understanding systems.
So tell me how learning how to make a hammer (or the physics behind it) is beneficial to the average woodworker or construction worker's daily job. Sure, using prior knowledge of physics to optimize your hammer swings might be worthwhile for the guy swinging a hammer a million times a day, but most of us are happy just picking the hammer up and using it to get the job done.
I think it depends. Are you looking for someone who knows, say, the "perfect" answer? Or do you want to measure someone's analysis and problem-solving skills?
It's not always about the ends, sometimes it can be about the means. Maybe not the case in the context of Google, but still worth mentioning. I would think.
To give a counterpoint, the lack of basic algorithmic knowledge of a developer in my team cost us a nasty production performance bug.
I agree that whiteboard algorithms are not exactly our typical day work, but I'm fine with them being "foundations", things at back of your mind and you can conjure when needed.
If a single developer was able to cause a 'nasty' anything, your engineering process is completely wrong.
It should have been caught at architectural discussions.
It should have been caught during performance benchmarking.
It should have been caught at code review.
Didn't do any of those things? Don't blame the developer. No single individual encompasses all knowledge required to adequately address all of your tasks. Someone with the foundations you want could just as easily have fucked up something else.
The retort to this is usually along the lines of “we are moving fast and need people to write 100% correct code”. Perhaps that’s the case, but the solution is to put guidelines down (api review, unit testing, etc) instead of demanding people be infallible.
In this case, a reasonable test case beyond "Did it compile?!?" would have caught this.
In this case the two points where we should have caught the issue would have been:
* Integration testing with a specific data set that was different from actual production data.
* Code review (they're not systematic yet, but we're working on that), because the faulty code had a sketchy section and the conversation around it would have led to a better version (that's what actually happened when fixing the bug).
My intent was not to blame the developer; I'm aware of the context they work in, and as you have guessed correctly, our engineering process is fucked up in many ways.
The developer writing a correct implementation was just the earliest point where the issue could have been avoided in this case. I didn't want to frame it as the developer's fault, but simply wanted to say that some algorithmic knowledge still comes in handy.
My original comment wasn't clear enough about this, I confess.
My first job out of school was writing a spell checker/auto-completion system. If you want to weigh predictions by the expected number of keystrokes saved, this is a bit like the opposite problem: find the longest word in D that is a supersequence of S (you want to find them all, to weigh by probability), but it would essentially be solved with the same data structures. I assume that this sort of problem is not particularly rare, but unlike sorting it's not quite as universal, and the one-line function doesn't live in the standard library, or in any package that I could find. The fact that the algorithms were still fresh in my mind helped immensely.
You sound like the kid in school that always asked "are we ever going to use any of this in real life?" in math class.
Truth is, I never needed to take a derivative outside of school or college. Was learning about derivatives in high school a waste of time?
I can't explain why, but I want to say no, it wasn't.
Also, many times these puzzles are useful in tech interviews to see if the candidate has good analytical skills, which is what is needed.
These are pattern-matching questions. They test how well one remembers a pattern and applies it. It is a skill, but I'm not sure how useful it is in the age of Google. This is how the IIT JEE (the entrance exam for the IITs in India) was cracked by brute-force pattern matchers who start in kindergarten. The resulting product does not bode well, from my anecdotal experience interviewing and working with them.
> I make a lot of cool stuff day to day, and usually that requires a lot of code and knowledge about programming and topics that are rather advanced
I'm not entirely sure who this guide is written for. But if we're playing the 'nobody does this in real life' card, I wonder which would be a tighter filter for new CS grads: these dynamic programming questions, or writing a regex with capture groups.
Huh. I was thinking it seemed like a fairly reasonable question. You sort D by length, and then iterate through it doing a kind of string matching. Both of those operations are pretty "foundational" in programming, right?
I'd be curious what you think a better question would be. It's not for brand new programmers - it says the pre-req is a couple of previous computing courses.
Maybe I'm off here, but sorting D should not be necessary. You add O(|D| log(|D|)) to the runtime complexity (with |D| being the number of elements in D), while a single linear run through D is sufficient. The complexity of checking whether a single word d in D is a subsequence of S is linear, thus O(|S|).
While sorting might work out in the best case for a long string and a few words of different lengths, so you can terminate early, you just turned a problem that is solvable on average in O(|S| * |D|) into O(|S| * |D| * log(|D|)). If you don't realize this and its implications, I would assume you just failed your interview.
Sorting D is a one time operation of n log n, so the overall complexity is n log n + n * m, which reduces to O(n * m) where n is number of words in D and m is length of S.
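For reference, the linear per-word check being priced here is just a two-pointer scan, and the whole unsorted O(|S| * |D|) solution fits in a few lines of Python (a sketch, not the guide's official answer):

    def is_subsequence(word, s):
        # 'ch in it' advances the iterator until it finds ch,
        # so s is walked at most once per word.
        it = iter(s)
        return all(ch in it for ch in word)

    def longest_word(S, D):
        return max((w for w in D if is_subsequence(w, S)),
                   key=len, default="")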
On another note: I absolutely do not understand why we substitute the international mathematical symbol for cardinality (|A|) with variables we have to explain.
Computer science has a math background, so we all know set theory. At least I also learned using cardinality with Big O notation in university. But for some reason industry prefers to use variables here.
I'm not sure my math classes ever actually mentioned that particular notation. I agree it's more elegant to formalize the analysis, but it's not particularly elegant on the webpage -- the kerning feels off.
What is a subsequence then? I thought you are looking for words that show up in the long string. I visualize sliding each word across the string until it lines up with the same characters.
The subsequences of a string X are any strings Y such that Y is X with zero or more characters dropped, with the condition that the remaining characters of X must stay in the same order. As an example, the string "help" has 16 subsequences: ["", "h", "e", "l", "p", "he", "hl", "hp", "el", "ep", "lp", "hel", "hep", "hlp", "elp", "help"]. Note that the set of subsequences is in bijection with the power set of character positions (treating repeated characters as distinct), so a string of n distinct characters has 2^n subsequences.
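You can sanity-check that count mechanically: every subset of positions yields one subsequence, and repeated characters merely collapse duplicates.

    from itertools import combinations

    def subsequences(s):
        # One subsequence per subset of positions, deduplicated.
        return {"".join(c) for n in range(len(s) + 1)
                for c in combinations(s, n)}

    print(len(subsequences("help")))  # 16, i.e. 2^4: all characters unique
    print(len(subsequences("heel")))  # 12: the repeated 'e' collapses some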
Okay gotcha. So for each word in the length-sorted set, walk the string "picking up" characters as you go. If you complete the word, that's your answer.
I'm sure there's a faster or more mathematical answer. But that would be my napkin python.
You probably don't need a dict; just a list of candidates. And to clarify: you'd sort your list by descending word length and keep another list of indexes into each word.
EDIT: You shouldn't sort the list beforehand. You should compile the list of matched candidates after walking through the whole string and pick the longest match. (Or just represent the matched candidates in a list of bools). It's possible that your answer only happens to match the last few characters of your haystack, so you must walk the entire length of the haystack.
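That edit, as actual napkin Python (my own sketch): progress[i] is how far into word i we've matched, and the winner is only picked after the full walk.

    def longest_match(S, D):
        progress = [0] * len(D)  # next unmatched index into each word
        for c in S:
            for i, word in enumerate(D):
                if progress[i] < len(word) and word[progress[i]] == c:
                    progress[i] += 1  # word i "picks up" this character
        matched = [w for w, p in zip(D, progress) if p == len(w)]
        return max(matched, key=len, default="")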
EDIT2: This solution also makes a bunch of assumptions, and you can find a better runtime in certain cases. For example, if you have many more candidates than the length of your haystack, it might be worthwhile to generate all O(2^n) subsequences and store them in a hashmap/Bloom filter. Then you can do (almost) O(1) lookups for each word. This would also lend itself well to a distributed compute approach. You could have k machines sharded in this manner: the first machine computes and stores in your hashmap/Bloom filter the first 2^n/k subsequences, the second the second 2^n/k, etc. Then you can either 1) in the case of hashmaps, fan out each request to each machine, succeeding if any machine succeeds, or 2) merge the Bloom filters on a central machine (assuming the same hash functions and modulus constants) and do O(1) lookups. (Copying hashmaps is O(data stored inside); merging Bloom filters is O(Bloom filter size).)
I guess it depends on profiling real world behaviour. If you walk the list only a few times because long words are present, that's better than walking the list once and for each character, walking your set of words. Or if most words aren't a match early on, you'll cull the candidate set pretty quickly.
Yeah I'm convinced I wouldn't bother with something beyond my basic idea until I profile.
I think it would have been much more useful for me if I had learned to code this way. I've compared the CS curriculums at top schools and lower tier schools like mine - the difference in content and breadth is very real, and a significant part of why folks from top schools have the careers they do.
I have been putting off a Google interview for a whole year now.
I have real experience building Cluster Filesystems, Distributed Caches, TCP/IP Control plane software and low latency Ad platform for more than a decade.
Sounds good on paper, but I can guarantee I cannot solve most of these puzzles without having solved them beforehand. Why? Because I do not have the cycles or time to solve these on my own. I have a system to build, engineers to help, and a product to ship. Family obligations do not help.
Seriously, is this normal?
What does this tell me? I should not appear for Google interview and waste their and my time.
Some of the replies here suggest that Googlers just push protobufs? Damn, that is one hell of an interview to push protobufs around :(.
I'm glad they are at least upfront about the puzzles, so that's good.
I don't work for Google and I haven't interviewed with them. In our company (low-level systems engineering) we ask similar questions, with the following rationale:
I don't want to test whether you can build systems; your resume already says so, and there isn't enough time to assess this properly anyway.
But, can you be bothered to try to solve problems outside of your comfort zone? Will you humor me as an interviewer for one hour with something that is maybe new to you?
If you rise to the challenge and attempt to solve these problems, I extrapolate to the systems engineering part.
I find this amusing, because enough people put bullshit on their resumes that one has to filter them out.
I already have my hands full trying to filter out the posers and assessing for culture fit. I don't have time to ask a candidate questions that aren't related to the role at hand.
Fair enough. Maybe we (a rather small company in central Europe, but with high-profile customers) are an outlier here, but in my hiring experience there weren't really any imposters. Usually candidates come from companies we know well, or they are new grads, in which case the question of experience is moot anyway.
There are no "puzzles", just problems to solve. I expect that things like this string search are an entry-level / new-grad kind of question. Someone more senior with more experience should get different kinds of problems to solve.
I certainly didn't have a bad experience when interviewing at Google and Facebook. One question was not very concrete, but it was interesting to walk through and solve.
I didn’t just “know” the answer, I had to explore a few directions before getting the right one, but I’m sure that explaining my thought process, interacting with the interviewer, validating and debugging my algorithm in a principled way and fixing my bugs were all the elements they are looking to see.
Googlers push protobufs around the same way you just push packets around with all your expertise. It's very easy to be reductive when you decide to ignore the complexity that pushing around packets / protos hides in the real world.
You've put your finger on it right there. These type of interview questions are biased toward candidates with plenty of free time and few obligations. This effectively has ageist, sexist, and classist[0] consequences.
[0] Racist as well, to the extent (which is considerable) that class is entangled with race.
That reasoning seems somehow biased (I'm trying not to use a worse word) at its root. Hiring non-white non-men does not inevitably lead to worse hires. To imagine so is to paint entire demographics as inferior.
I checked out the "cloud infrastructure" section and there's no challenge questions or code snippets. It's less of a "Tech Dev Guide" and more of a "Here Are Some Bookmarks I Found, Check Off The Ones You Read". It's not a terrible idea, but more of a suggested reading list than a "guide".
There's at least 10 years of such articles and videos piled up on the internet. Going through them all will take a long time, and even if it covers working with Google products the way Google wants you to, it's not necessarily the way you're going to use Google products at your company.
What I really want is a single place where all of this can be documented, first as overviews, and then deeper knowledge that isn't exposed by blog posts or books, or even official documentation. I want an authoritative tech encyclopedia built on knowledge shared by a community of experts.
I'm trying to cobble something like that together right now, but it's surprisingly hard to organize in a way that isn't just a list of quick start guides for tools. And then there's what you present first, and how. Do you need to know Git merging strategies before you learn build pipelines, or can you just make some assumptions and wait til later?
I've just checked the Cloud Infrastructure section and couldn't see your point there. There are readings for the fundamentals, then basic practices, then online courses on Udacity and kube-bootcamp, then step-by-step guides to deepen the knowledge.
I think they have accomplished a good thing here: a curated list of sources to get into the cloud computing/infrastructure world.
I don't think they can talk about every step in the development lifecycle; it would be very, very opinionated, and it would leave the reader/developer no space for flexibility and freedom of choice in what to learn and practice. After acquiring the fundamentals, we should all choose our own path through discovery.
I'm impressed by the coding interview resource guide [0], which for once seems to include questions of realistic difficulty for a new grad interview (if slightly on the easier side).
Indeed, and even stranger arrangements. I know one professor who comes to work for us every summer (with hilarious HR boundary condition consequences every year).
Why? University faculty isn't all that impressive; they are usually just people who made the choice to stay in school longer while others wanted to do something useful right away. Many Google engineers are PhDs, but once again this doesn't mean anything.
On a similar note, Google used to have a "Guide to Technical Development" page for students. It now redirects to this link, but you can still find it archived [0].
Patterns and paradigms can be taught easily on the job, and can be overly specific. The "advanced programming" course teaches genuine problem solving: breaking down a problem into its mathematical components to develop an optimal solution. It builds you into a good engineer, not a programmer who can spit out patterns to fit a problem.
Tried out cloud computing.
Went to the first lesson and clicked the link for the YouTube video of the first class, described as: "What is the Cloud? Watch this lively video for a quick overview of the Cloud landscape. Though it's a couple of years old, it's a great intro."
But I got hit with a "this video is not available" YouTube page.
A YouTube search for the same title brings up lots of possibles.
All MOOC sites have videos that play well in every browser. Google makes money by selling something else, so apparently it doesn't care much about the quality of this site.
Is withgoogle.com a new subset of educational resources from Google? Going directly to withgoogle.com just redirects me to google.com. My first instinct was suspicion when I saw the link.