Show HN: SkillUpper – practice coding interview questions step-by-step (skillupper.com)
143 points by rhc2104 on Nov 19, 2018 | hide | past | favorite | 33 comments



Your posting is already 13 hours old but I hope you still see this message :)

First of all, being sometimes involved in teaching CS to people, I like your website. You have obviously spent a lot of time on the question of how to teach people programming (the latter in the sense of "How to write a program that solves a given problem?"). I have seen textbook authors with less dedication.

However (there is always a "however" or "but"), I am wondering whether you are using the right approach for the N Queens problem. You are following a bottom-up approach. For example, on the first page you let people write a helper function "validateQueenPlacement", followed by another helper function "nextRowColumns" on page 2. For somebody who doesn't know the complete solution, it's very hard to see why you would need those two functions. Of course, you know that you will need them but you have already seen the final solution!

You are probably familiar with the top-down approach by Wirth (1971) called Stepwise Refinement. I think (but I cannot prove it) that most CS educators see Stepwise Refinement as the superior approach because it encourages people to think about the problem and not the code. Why don't you let the reader write first the high-level (pseudo-)code for the backtracking:

    http://sunnyday.mit.edu/16.355/wirth-refinement.html#3
Once the reader has written the code for the highest layer, it becomes obvious what the helper functions should be, what their parameters are, and what data structures are needed. I know that Stepwise Refinement is unpopular nowadays because it forms a kind of psychological hurdle (bottom-up gives you directly small functions that you can unit-test) but I think it really helps people to guide themselves to the solution.
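To make the point concrete, here is a rough sketch of what that highest layer might look like (hypothetical names, not the site's actual helpers; `is_safe` plays the role of something like "validateQueenPlacement"):

```python
def solve_n_queens(n, placed=()):
    """Return one placement of n queens as a tuple of column indices,
    one per row, or None if no placement extends `placed`."""
    row = len(placed)
    if row == n:                       # all rows filled: a full solution
        return placed
    for col in range(n):
        if is_safe(placed, row, col):  # the helper falls out of this layer
            result = solve_n_queens(n, placed + (col,))
            if result is not None:
                return result
    return None                        # no column works: backtrack


def is_safe(placed, row, col):
    """A square is safe if no earlier queen shares its column or diagonal."""
    return all(c != col and abs(c - col) != row - r
               for r, c in enumerate(placed))
```

Once the reader has written the top function, the need for `is_safe`, its parameters, and the choice of a per-row column list all follow naturally.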


Thanks for the feedback! I'll think about adding more of a high level overview in the first step, with rough pseudocode of the end result.

That said, I don't think I want to have a step where "they write pseudocode", because a lot of people would just skip that step without thinking about it.


Feels very weird to me to spend time and energy building tools for a type of interview process that is increasingly derided by the community.


The community isn’t asking the questions, the majority of companies hiring developers right now are. Thoughts on HN != the status quo in companies today.


I created SkillUpper because I was thinking about how Computer Science learning could be more interactive.

Codecademy pioneered a more interactive REPL style of learning for basic programming.

I felt that this interactive step-by-step approach could be used to learn other parts of Computer Science more efficiently.

Because algorithms are heavily represented in software engineering interviews, they seemed like a good place to start. But this step-by-step format could also be used to teach other parts of Computer Science.


Any SkillUpper for salary negotiation?


My business partner and I are professional negotiators for software engineers. Happy to chat further: inquiry@dangoormendel.com


Have you worked with Project Euler by chance? If not, you might find aspects of it valuable for this project.

https://projecteuler.net/


Are you aware of https://exercism.io/ (volunteer run)?


Like zybooks?


I haven't personally used zyBooks, but they seem to have a similar vision. Although from what I can see on the Internet, reviews of their materials seem to be mixed.


Being more derided by the community doesn't mean it isn't worth playing the game for some.

I personally loathe these types of interviews and don't administer them myself. Nonetheless, it isn't going away anytime soon and I am willing to practice for it if needed.


Employers still ask them, so we still gotta answer them. Reality for most devs. Many complain but only a few can realistically opt-out.


They ask those questions when they have to compare many candidates because they can't think of anything automatic that scales. It's the street light effect.

https://en.wikipedia.org/wiki/Streetlight_effect


I've found that creating a 1-1.5 hour test that is deliberately designed to be a realistic depiction of the work the candidate will do works and scales just fine.

In practice this means creating a small example project with all of the tedious boilerplate ready and in place and asking the candidate to implement 1-3 relevant stories on it.

It's not the streetlight effect that got the industry into this sorry state. It's a combination of cargo culting and laziness.


If it helps you secure a job that you think will improve your life and career, I would argue that it is time well spent.

Unfortunately, the fact that the community generally considers it a flawed process doesn't mean this kind of interview is no longer required or common.


The only thing that's relevant is if these questions are standing in the way of a job you want.

That HNers hate them provides a nice circle-jerk, but doesn't help you get the job.


Agreed, sites like this only perpetuate the nonsense. The only way to win is not to play.


Agreed, I have refused a few places. Skyscanner keeps on contacting me. Their choice was a "full stack test" entirely in JavaScript, or a Systems engineer test in HackerRank. I wasn't wasting my time with either of those.

It's amazing how many people phone me "for a chat" after seeing my LinkedIn profile, then expect me to jump in and do their coding test. You have to convince me that I actually want to work for you first. If I haven't spoken with anyone technical about the job, then it's a definite no.


I must admit that I wonder if part of the success of these types of tests is that they are bullshit, and what is actually being tested is how compliant you are as an individual, not anything to do with technical competence.


Not to say asking puzzle coding questions is an ideal approach, but I think the motivation is somewhat the opposite: if you base your hiring decision on how awesome the applicant tells you they are, a certain number of the people you say "hire" to are going to be bullshit artists who can't code. When the interview is over, the interviewer is judged, explicitly or implicitly, on how well they conducted it, and there's probably nothing worse they could do than say "hire" to someone and have the next interviewer find out the candidate lacks basic coding skills. So they have to ask some kind of coding question, and it has to have some meat to it, but you probably can't make it too OS- or framework-specific, so you're left asking about overlapping rectangles.


Why not just ask applicants to bring in some code they have written to discuss?

If they have written it they should be able to discuss it and answer questions easily.

If someone else wrote it and they have understood it enough to discuss it, you probably have an even better developer, especially if the job involves working with some legacy code.


> Why not just ask applicants to bring in some code they have written to discuss?

That can be really difficult for a lot of candidates. It's predicated on the idea that they've written code they:

a) Feel comfortable sharing (not just a one-off weekend hack project)

b) Are allowed to share, legally. This is usually the real problem for most developers.

The truth about hiring: There really isn't any one-size-fits-all that works for everyone. Any hiring process you come up with is going to be an incredible obstacle for a sizable minority of people.


But why the fixation on algorithmic trivia questions - why not at least come up with examples that are relevant to the role?


I mean, I agree with you.

Honestly? Because companies view those kinds of questions as proxies for IQ tests (which are mostly illegal to use as a hiring method). Notice the defense of them largely revolves around "seeing how you solve a problem" and "how you communicate your problem-solving ability". They're seen as measures of your intelligence, which is what they're actually interested in--not your experience or expertise (outside of specific areas).

Those interviews are, of course, something you can easily (with a serious time commitment, but fairly easily nonetheless) prepare for and game, which of course makes them bad proxies for intelligence. And of course, I don't think algorithm questions necessarily measure someone's actual intelligence at all. And of course, it's unclear how highly intelligence is correlated with your ability to succeed as a software engineer.

But that's why. I have little doubt that they would actually try IQ tests if they thought they could get away with it. I'm fairly certain at least one major company tried asking candidates for SAT scores.


> Those interviews are, of course, something you can easily (with a serious time commitment, but fairly easily nonetheless) prepare for and game

Assuming a certain level of intelligence, that is.


Thanks, for what it is worth I also have problems with the idea of IQ tests! ;-)


Let's say the actual role involves, for example, writing a health and monitoring system for a Windows-based service, and you're hiring for a long term FTE position. Do you ask something Windows+C# specific and eliminate the very talented guy who's been focusing on Linux+C for the last few years?


>If you base your hiring decision on how awesome the applicant tells you they are, a certain number of the people you say "hire" for are going to be bullshit artists who can't code.

If you base your hiring decision on how awesome they are at leetcode hackerrank style algorithm questions you're selecting for bullshit artists who memorized "cracking the coding interview".

There's a systematic industry-wide bias against realism in developer tests. It's depressing how many people think realism either isn't necessary or isn't possible within typical interview time constraints.


Compliant, with lots of “free” time...


For me, completing Spiral Matrix, the hint for the third and final section did not relate at all to my solution.

Essentially, it assumes a bit too much about the shape of the resulting solution.

That's not to say I didn't get value from the hint, or that I think others will not - on the contrary, it is a valuable and clear hint.

The unfortunate part is just the feeling of being taken completely "out of it" while solving the problem with a slightly different approach, and wondering momentarily if one is somehow wrong, even though one is passing all the tests.
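For illustration, one common shape for Spiral Matrix is the boundary-shrinking traversal (a hypothetical sketch, not necessarily my solution or the one the hint assumes); a hint written for a different shape, say a direction-vector walk, wouldn't map onto it:

```python
def spiral_order(matrix):
    """Read a rectangular matrix in clockwise spiral order by walking
    the current outer boundary, then shrinking all four bounds."""
    result = []
    if not matrix:
        return result
    top, bottom = 0, len(matrix) - 1
    left, right = 0, len(matrix[0]) - 1
    while top <= bottom and left <= right:
        # top row, left to right
        result.extend(matrix[top][c] for c in range(left, right + 1))
        # right column, top to bottom (skipping the corner already taken)
        result.extend(matrix[r][right] for r in range(top + 1, bottom + 1))
        if top < bottom and left < right:
            # bottom row, right to left
            result.extend(matrix[bottom][c] for c in range(right - 1, left - 1, -1))
            # left column, bottom to top
            result.extend(matrix[r][left] for r in range(bottom - 1, top, -1))
        top += 1; bottom -= 1; left += 1; right -= 1
    return result
```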

The recap is also fairly lacking in content, but I imagine that is clear to you already.

Nice job on the site, generally. I only had a couple of hiccups with submissions and some weird aliasing in the editor when I edited a line before it redrew.

Edit: add some dropped words


Thanks for the feedback!

I'll think about what can be done about the last-step hint for Spiral Matrix. I guess a reason why this tutorial format is not more widespread is that it's easy for the hints to be "off".


Aye, I don't know what the solution might be, and I know the language was already couched in terms of "probably you did something like this", so it wasn't completely prescriptive.

Maybe all it will take is another sentence that basically says "or maybe you are doing something else, and that's okay too"?



