Hacker News | rootsofallevil's comments

>I wonder if tech is unique in having an implicit mistrust for everything on the resume. I've worked at several large companies where people don't even bother looking at them before interviews. I blame the FizzBuzz blogpost (https://blog.codinghorror.com/why-cant-programmers-program/) for making everyone deathly afraid of imposters. It's a masterpiece in ego-stroking - of course you can write FizzBuzz, look at all these sorry suckers who can't. Resumes are lies, only a 45 minute brainteaser on CoderPad can reveal the truth (ignore existence of Leetcode).
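(For context, the FizzBuzz exercise the quoted blog post refers to asks for roughly this; a minimal Python sketch:)

```python
def fizzbuzz(n: int) -> str:
    # Classic screening exercise: multiples of 3 -> "Fizz",
    # multiples of 5 -> "Buzz", multiples of both -> "FizzBuzz",
    # everything else -> the number itself.
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)
```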

While I agree with you on the whole, I have interviewed candidates who couldn't code.

One of them basically said that they would learn (CS degree but 4 years as a PM), which was fine but not what we were looking for.

Another simply refused help and proceeded to bang their head against the brick wall of simple syntax. They didn't show signs of nervousness, panic, etc. ... they just seemed unable to code.

In short, I think they exist (programmers that can't program) but they are the exception not the rule.

>In reality if I'm considering a senior candidate from FAANG (like, actually senior, not 4 years out of school), simply talking about past experience is perfectly fine. The work is the fucking same, everyone is migrating from legacy system X to a distributed system with zookeeper and kafka, they'll fit right in. It would take a truly psychopathic liar to fake their way through all the mundane details you could ask about a project like this. And if one slips through the cracks, they'd probably make a great PM anyway. (As long as they're on your side.)

I agree with you that an interview probing the experience of the candidate by getting them to talk about what they've done seems like a far better proposition. You will get a better idea of what the candidate knows and doesn't know.


> If I'm doing something complex enough to really benefit from "everything is an object" then why not just use a more capable language like Python?

I've seen this line of reasoning mentioned a lot but never justified.

A few reasons to stick with Powershell over Python:

Team might not be familiar with Python (and the benefits of learning might be outweighed by the learning curve, shallow as it might be)

Hard to judge what the cutover point is (namely at what point does a script benefit from being written in Python?)

A lot of (some?) stuff that PowerShell does out of the box requires third-party libraries in Python, which need to be installed, and that usually cannot happen inside the script itself (I think, at least; I've not done that much Python)
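To illustrate the last point: Python scripts that lean on third-party libraries often have to check for them up front, since they may not be installed on the target machine. A minimal sketch using only the standard library (the module names here are just examples):

```python
import importlib.util

def has_module(name: str) -> bool:
    # Return True if a module of that name is importable
    # on this machine, without actually importing it.
    return importlib.util.find_spec(name) is not None

# 'json' ships with Python; 'requests' (a rough equivalent of
# PowerShell's built-in Invoke-RestMethod) must be installed separately.
for mod in ("json", "requests"):
    print(mod, "available" if has_module(mod) else "needs installation")
```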


I don't know, maybe things are different in the UK but I'd say that about 50% of the job ads I see have a salary range.

Is there any point in applying for a job without a good idea that they can match your salary expectations?


Talk to people in places like Germany and you might get a different take on what makes unions good or bad


This didn't happen in Germany, this happened in the US. For whatever reason unions behave differently here, and it's the union model in the US that people rightfully evaluate against.


But does your manager? Or do they actually only see the pizza guy delivering the pizza?


"My manager doesn't recognize my hard work" is a totally different statement than "performance is subjective".


Good point. But the managers make most of the decisions, and they usually can't recognize performance. So if you have a 10x developer who is admired by all coworkers but disrespected by management, they probably won't get a raise (unless they change jobs).


...does that happen?

The situation you're describing is going to be far more about the specific personalities and power dynamics than anything related to accurate performance estimation.


I have repeatedly seen companies treat their contractors with a lot of disrespect, such as repeatedly paying them late, or extending contracts for the next year at the very last moment (in one case the papers to sign were delivered literally on the afternoon of December 31st). Some of those contractors were the best developers in the company. The employees who did the paperwork simply didn't care.

In the most absurd case, the best developer got a contract offer from another company, with a deadline to sign at the end of November. He wanted to stay at the current company, and the managers kept telling him they wanted him... but he was unable to get the contract extension on paper. So at the end of November he said, "okay, screw it, I wanted to stay here, but I am taking the other contract, to avoid the possibility that I miss that offer and this company still doesn't extend my current contract". He signed the other contract (starting in January), and when, towards the end of December, this company finally offered him a contract extension, it was "sorry guys, but I already signed with someone else".

He left, and the managers were all surprised: "but we told him we wanted him, why couldn't he wait?" During the following year a chain reaction started: his closest coworkers left too, then more people, most of them reasoning "I was already considering a change of job, but I stayed mostly because of good friends at the workplace; now that the friends are gone, I might as well leave too". The company lost most of its developers... because it was not willing/able to extend its best developer's contract two months before expiration instead of the usual one month.

Another example, this time an internal employee: there was a super talented guy, freshly out of university but incredibly experienced in some technologies. Also super helpful; whenever someone in the company had a technical problem, they usually asked him, and he always helped. The whole company changed its technology stack based on his recommendations. Then one day another company approached him and offered double his current salary. He wanted to stay, so he went to management, described the situation, and asked for a 50% increase (which would still be only 3/4 of what the other company offered). The managers refused; as I later learned, their conclusion was that "a person this young does not deserve such a high salary". So the guy left.

Maybe this is "specific personalities", but those personalities seem quite frequent.


>There will be false positives, but the rate is low

What do you base this on?

My experience is that if you are perceived to be good, you can get away with being a low performer for a lot longer than if you are not. I have seen it quite a few times.


Wouldn't that be a false negative?

A false positive would be when a high-performer is perceived as being a low-performer, and terminated for that reason despite their superior productivity.

I can certainly imagine this occurring, especially to employees who are socially deficient, but then you start to get into the quagmire of politically- and socially-motivated terminations, which I think is outside the scope of this discussion.


You're right. Edited the comment accordingly. Both false positives and false negatives exist.


>One never knows what the future may bring.

Exactly. Make the changes when you need them; otherwise you are relying on winning the abstraction lottery.


That's an interesting take. What would you consider strawman arguments in the article?


For example, the open-close principle. The author blames this advice on tooling of the 90s and proposes instead “Change the code to make it do something else”.

This has nothing to do with tooling, but with the fact that pulling the rug out from under an established code base can have very unintended effects, compared to simply adding and extending functionality without touching what is already there.

By doing as the author suggests you’ll end up with either 500 broken tests or 5000 compiler errors in the best case, or in the worst case an effectively instantly legacied code base where you can’t trust anything to do what it says.

I once had to change an entire codebase’s usage of ints to uuids, which took roughly 2 whole days of fixing types and tests, even though logically it was almost equivalent. Imagine changing anything and everything to “make it do something else”.


What's the alternative here? If you had to change a codebase's usage of ints to uuids, should the original author have used dependency inversion and required an IdentifierFactory that was ints at the time so you could just swap out the implementation? And if they did - why wouldn't they have just used UUIDs in the first place? You're betting on the fact that the original author anticipated a particular avenue of further change, but also made the wrong decision for their initial implementation, which seems like the wrong bet. If they made the wrong decision, they almost certainly didn't anticipate the need for another one, either.

And how long would it have taken for the original author to use an IdentifierFactory instead of ints and write meaningful tests for it? Less than two days?


In the uuid case, the person had no choice. Remember, these are principles, not laws, and at some point your system is making concrete choices. Choosing UUIDs isn't necessarily an OO design problem. He was just highlighting how expensive it can be if you require changes to your fundamental classes to extend or change behavior. In the identifier-type case, it's rare that folks abstract this stuff away. Though I do know a LOT of systems that use synthetic identifiers for this exact purpose, as larger enterprises tend to deal with many more different identifier types from different integrations, with DB types that can't hold new identifiers, because IDs need to be sharded/distributed, etc. So yeah, it's a principle, and one should decide whether it's worth the upfront cost for its benefits.

OCP, though, more commonly refers to:

1. Building small, universally useful abstractions first.

2. Extending the behavior of that abstraction or system by writing new code rather than changing published code directly.

This is trivial when you have a few patterns under your belt: template factories, builders, strategies, commands. I mean, while it's not the best idea in most cases, even just inheriting a parent class and giving a new concrete class the new behavior is still better than changing something fundamental to the system.
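One of the patterns mentioned, a strategy, shows the OCP idea compactly: new behavior arrives as a new class, and nothing already published changes. A hedged Python sketch (the `Discount` names are hypothetical, not from the article):

```python
from abc import ABC, abstractmethod

class Discount(ABC):
    # The small, stable abstraction: closed for modification.
    @abstractmethod
    def apply(self, price: float) -> float: ...

class NoDiscount(Discount):
    def apply(self, price: float) -> float:
        return price

class PercentOff(Discount):
    # Adding this class is "extension": no existing code was edited.
    def __init__(self, pct: float):
        self.pct = pct

    def apply(self, price: float) -> float:
        return price * (1 - self.pct / 100)

def checkout(price: float, discount: Discount) -> float:
    # Callers depend only on the abstraction.
    return discount.apply(price)
```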

Like has been said 999 times in this thread, software isn’t black and white. You have to make choices about where you go concrete and where you abstract, and gauge that against risk factors. A somewhat complex class you expect to go away in a couple months? Make it a god class that anyone who wants to can scan through. A fundamental class that will be used by hundreds of developers and underpin most operations in a production system where 5 minutes of downtime costs tens of thousands of dollars? It’s worth the upfront cost to build with these standards.


Changing ints to UUIDs is a classic example of the Primitive Obsession smell, and the solution is to wrap primitives in a type representing their semantic meaning, such as “ID,” or “PrimaryKey,” not to use a factory. That way, when you need to change the underlying type from int to UUID, you only need to do it in one place.
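A minimal sketch of that wrapping, assuming a hypothetical `UserId` type: the rest of the codebase handles only `UserId`, so swapping the underlying representation from int to UUID is a change in one place.

```python
import uuid
from dataclasses import dataclass

@dataclass(frozen=True)
class UserId:
    # The underlying type lives only here; it used to be `int`,
    # and switching it to uuid.UUID touched just this definition.
    value: uuid.UUID

def load_user(user_id: UserId) -> str:
    # Callers never see the raw primitive, only the semantic type.
    return f"user:{user_id.value}"
```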


Indeed. Unfortunately, in some languages – like Java or C# – it is harder to do without incurring a significant cost (boxing/unboxing) than in languages that allow type aliases/typedefs.


In theory, yes, but in practice performance is dominated by network and (less often) algorithms. The cost of boxing/unboxing doesn’t even register except in rare cases, which can be specifically coded for.


It has a fair bit to do with tooling. For example, C++ suffers from the fragile base class problem and some changes can cause long compile times. Nowadays, we have tests and deployment pipelines that are explicitly designed to let us make and deploy changes safely.

Honestly, if you cannot change your code, you have a problem.


The OCP is imo poorly named, but it has far bigger implications than the post acknowledges. For one, it implies the concept of abstraction layers. In particular, base libraries should provide abstractions at the level of the library. In this way it's able to achieve being "closed for modification but open for extension".

https://drive.google.com/file/d/0BwhCYaYDn8EgN2M5MTkwM2EtNWF...

Flipping it around, if base libraries were to be always open for modification instead of extension, then instead of writing your feature logic in your own codebase, you might be tempted to submit a pull request to React.js to add your feature logic. That sounds ridiculous but that's the equivalent of what I see a lot of new engineers do when they try to fit the feature logic they need to implement anywhere it makes sense, often in some shared library in the same codebase.


>This has nothing to do with tooling, but the fact that pulling the rug under an established code base could have very unintended effects, compared to simply adding and extending functionality without touching what is already there.

That's still a matter of tooling. With type checking, static analysis, and test suites, changing code doesn't have "very unintended effects".

Back in the day, without those, things were much more opaque.


Not the person you asked, but I would say it's the expectation that SOLID provides, or should provide, unambiguous guidance.


His entire screed on single responsibility principle spins around on semantics.


It would be more accurate to say:

Would I bet $20/day that something will happen to this car that the rental car company would want to be reimbursed for?

Depending on the rental car company, the thresholds for scratches, dents, etc. can be very low.


And those scratches, dents, or even more could be entirely not your fault. Heck, they could even happen when you're not in the vehicle.

So effectively it's a bet against you, other people and more generally the world.


> 1) The speed limit is obviously an arbitrary choice - the chance of death is decent at 50 km/h (the standard "town" speed limit in Slovenia) (so why not 40 km/h or 30 km/h), and e.g. Germany has no speed limit on the highways (in some places), so there's really no good reason to set a limit of 130 km/h (why not 110 km/h or 150 km/h)?

As are most limits.

I get that driving on a motorway at, say, 85 mph is probably not that much more dangerous than at the speed limit (70 mph), but 100? 120? 140?

I know about Germany, but would you say that Britain's or Slovenia's roads are designed and kept to the same standard?

> 2) The easiest objection to (1) is "society has jointly decided about acceptable risk limits" but that clearly isn't true. Speed limits are simply a bad idea. A much better idea is speed constraints - roundabouts, speed bumps, chicanes [0] - i.e. actually forcing people to drive slower or else they ruin their cars.

how do you constrain speed on a motorway?

> The fact that governments default to traffic cameras etc. is proof (revealed preference) that they don't actually care about safety and people driving slowly, but what they want most is making their citizens live in fear, punishing them and extracting money from them. This is particularly obvious when there's a hidden police / traffic control on some section of the road where it's obviously safe to drive fast, but it's just technically within city limits so it has a low speed limit.

In this country (the UK) there is a myth that the cameras are money-making devices, yet during the 2008 crisis some constabularies turned them off to save money. This is the same type of argument.

