A Developer Culture Test (pragmaticengineer.com)
107 points by gregdoesit on May 25, 2020 | 41 comments



One of the strengths of the Joel test is that it is very objective. There is no wiggle room on "Do you use source control?" or "Do you have a bug database?".

Many of these are completely subjective. Do you have "Healthy oncall" and "Technical managers who build trust"? For most companies, you could argue it either way.


Agreed. I'm not sure I see any clearly definable measures here.

> Celebrating people taking initiatives. Are people taking initiatives to make things better celebrated or rewarded, or is this seen as an unnecessary distraction?

What does 'celebrate' mean? Yay... colleague introduced a new initiative for doing XYZ, but it violates the current standards and now requires upgrades of other legacy stuff. It's not providing any measurable or substantive benefit, but... if I don't "celebrate" it, am I part of a 'destructive' culture?


It's the difference between catching the initiative early enough that it doesn't cause a cascade of problems, while being nice about it... and saying: you idiot, you did something you thought was good, now I'm angry because I have more work, and what you did wasn't even good.

The question is a bad management detector.


A quick skim and I agree.

However...! The things that are most valuable are often[0] the least quantifiable. Sure, "Technical managers who build trust" is probably completely unquantifiable, but if you work for people who are willing to mess up your work, it's very bad indeed.

I worked at a place that used source control (we all do now) but oh hell, the boss could not help pissing about weekly with the project's aims, growing it every time, no clear direction, etc.

It's a bit like code comments then, the value of which can be indisputable. Perhaps require 33% of your code to have comments? Seems like a metric! Now how valuable is that when enforced? Some programmers will do little more than paste code lipsum. We want quality comments - and now we're back to subjective. The best stuff is.

[0] That needs some thinking about which I haven't done.


Subjective things are indeed important!

But if you're in a job interview, and you ask your interviewer whether their technical managers think trust is important, even the most toxic employer would agree it is.

The same way a prospective employee, when asked "what's your greatest weakness", wouldn't admit to any truly great flaws.


Well I'm looking at it in the sense of a platonic ideal whereas you are being realistic. Reality wins, as always.


Came here to say this. I was sorta shocked at the hubris in the title, then I read the article and was disappointed by the rife subjectivity.

The claim that the Joel test “feels outdated” had no evidence presented to support it.


I agree with the points re: subjectivity, but I do feel the Joel test is somewhat outdated, if only because everything on there has become table stakes rather than genuine differentiators.

A 12/12 on the Joel test doesn't mean you're a great company, it means you're not a total garbage fire.

I'm not going to presume I can come up with a whole new test on the spot, but I'd expect a "modern" Joel test to have questions like:

- Do you use CI?

- Do you deploy changes individually?

- Is your code coverage > (some amount)? (a rough sketch of that last check follows)
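
To make that last question concrete, here is a minimal sketch of the kind of gate a CI job could run. The pytest/coverage.py stack and the 80% bar are just illustrative picks, not anything from the article:

    # Fail the build if line coverage drops below a (hypothetical) threshold.
    import subprocess
    import sys
    import xml.etree.ElementTree as ET

    THRESHOLD = 0.80  # illustrative minimum line-coverage ratio

    def main() -> int:
        # Run the tests and write a Cobertura-style coverage.xml
        # (pytest + pytest-cov assumed purely as an example stack).
        result = subprocess.run(["pytest", "--cov=.", "--cov-report=xml"])
        if result.returncode != 0:
            return result.returncode  # tests failed, no point checking coverage
        rate = float(ET.parse("coverage.xml").getroot().get("line-rate"))
        print(f"line coverage: {rate:.1%} (minimum {THRESHOLD:.0%})")
        return 0 if rate >= THRESHOLD else 1

    if __name__ == "__main__":
        sys.exit(main())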


>I agree with the points re: subjectivity, but I do feel the Joel test is somewhat outdated, if only because everything on there has become table stakes rather than genuine differentiators.

This might be, but there are a whole lot of companies in the world that can't even make table stakes.


Some questions from the Joel test do sound a bit dated - understandably, it's a 20-year-old post at this point.

For example, if you were writing the list today, "do you make daily builds" would probably be updated to "do you build and test every merge into master".


The subjective part of the Joel test is the choice of those particular objective measures.


The original Joel Test was 12 binary and mostly answerable questions, rather than an invitation to BS candidates. This test might be salvageable by cutting it down to the hard questions underneath. For example, "Do you have CI process setup that runs before committing the code?" is one of the best questions here, but rather than "before committing" it should probably read "before deploying to production". I don't think I've ever heard of a local CI process, which is what you would have to do to run it before even committing the code (or maybe the author meant a pre-commit hook?). I would suggest something with little wriggle-room like "Do you run every change through an automated QA process before being able to deploy it to production?" That is, nothing ever gets to production unless the automated (and ideally extensive) process succeeds.
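
As a sketch of what enforcing that could look like at the deploy end (get_pipeline_state and deploy_to_production are hypothetical helpers standing in for whatever CI and deployment tooling a team actually uses):

    # "No green pipeline, no production deploy" as an explicit check.
    import sys

    def get_pipeline_state(commit_sha: str) -> str:
        """Return 'success', 'pending' or 'failure' for the commit's automated QA run."""
        raise NotImplementedError("wrap your CI provider's status API here")

    def deploy_to_production(commit_sha: str) -> None:
        raise NotImplementedError("wrap your deployment tooling here")

    def main(commit_sha: str) -> int:
        state = get_pipeline_state(commit_sha)
        if state != "success":
            # The point of the question: a failing or still-running pipeline blocks the deploy.
            print(f"refusing to deploy {commit_sha}: pipeline state is {state!r}")
            return 1
        deploy_to_production(commit_sha)
        return 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1]))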


I guess the word "culture" in the name of the test should have been a warning sign for upcoming vague borderline BS-bingo.


The main point of running a CI process is to stop you breaking the build, so you need it to run before the point where your first build happens. For most teams that will be on a merge to develop, because develop is deployed to a test or QA server. It'd be odd to run CI locally but you definitely want it to happen a long time before anything is ready to go to production.


> It'd be odd to run CI locally

Oh, I would if I could. Conceptually, not very different from running make and fixing any compiler warnings or failed tests before making a commit.

I think that's how a lot of CI gets started: locally runnable tests are automated to make sure they're always run before the commit lands.

Unfortunately, automating CI often pulls in a lot of server-side infrastructure and platform-specific mechanisms that break the ability to run it locally. I find it very frustrating to commit, push, wait for the server to nag me, fix, commit again, push again, and hope it works this time. It just creates a lot of noise and IMO doesn't flow very well. I'd prefer to be able to test everything locally as far as possible.
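
A minimal sketch of that kind of locally runnable CI: one script that runs the same checks the server would, so it can be invoked by hand or symlinked to .git/hooks/pre-commit (ruff and pytest here are illustrative choices only):

    #!/usr/bin/env python3
    # Run the same checks locally that the CI server would run.
    import subprocess
    import sys

    CHECKS = [
        ["ruff", "check", "."],  # lint
        ["pytest", "-q"],        # unit tests
    ]

    def main() -> int:
        for cmd in CHECKS:
            print("$", " ".join(cmd))
            if subprocess.run(cmd).returncode != 0:
                print(f"check failed: {' '.join(cmd)} -- fix before committing")
                return 1
        return 0

    if __name__ == "__main__":
        sys.exit(main())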


One of the nice things about the project I work on is that things like eslint, stylelint, and the unit tests all work locally inside of Docker, so I can be sure I'm pushing code to the repo that won't be rejected by CI for code hygiene reasons. CI runs all that stuff as a check, but it's really there for the integration and end-to-end tests, and the build and deployment processes. If you can get your project to work that way it's really nice to have.


Sure, but the overwhelming majority of dev teams will run CI on remote branches. That's well after "commit". I think a pre-commit requirement that tests pass would be a turn-off for me (I routinely rely on CI to tell me what's wrong with my code as long as it's on a private branch, so I can move on to something else while tests run).


I'm not sure what this is responding to in my comment. I certainly never meant to say anything contrary to what you're saying.


I like this list. I’m at a Fortune 500 now after being unceremoniously dumped by a company that was trying to get themselves sold to private equity (If you watch daytime television in the USA you’ve probably seen the old company, and likely never heard of my current employer).

Being able to correct code on other teams with a pull request if we're blocked, versus having no idea who owns the code and being actively stonewalled and ignored by people at the old place. Having documentation and business requirements written up vs being given source code after the old developer rage quit. Having a BA who is competent enough to read code vs having a BA whose job is literally only to ask me to update my tickets and badger me to write up the requirements to hide their own incompetence. Having good conversations about the tech we use vs feeling gaslit when mentioning a potential solution to a technical problem and having nobody respond or make eye contact.

Realizing that everyone who had been there a long time had the personality of a whipped dog. My coworker had letters from his wife encouraging him, letting him know he could do it if he just worked hard! Which I suppose worked until the entire dev team got fired a year after I did.

I was really scared, after leaving a great job and moving to that terrible one, that I'd made a terrible mistake and that most dev shops were just awful. Maybe they are, but I'm glad I stayed where I am; I got an offer to make way more, but I'm 100% sure I'd be laid off now due to covid-19 if I'd jumped ship. Based on my company's response to Coronavirus, I think I might stick around for a long time.


What I like about the Joel Test is that it's not a test about whether your company has good working environments. Good is extremely subjective. Good depends on what you want. The Joel Test is a test about whether your company has adequate working conditions. Many of the points are no brainers. If you don't use source control and you don't have testers and new candidates don't write code, well shit, that's pretty damn bad.


I find that what keeps me at a workplace are the relationships that I form there. When I enjoy the company of the people that I work with, then I enjoy coming to work, even if the work itself leaves a bit to be desired.

I guess that this perhaps implies that communication is good and so all the desirables (code ownership, reviews, transparency, clarity of responsibilities etc) fall out as a result of a group of people relating well.

So, it's the relationships, more than anything else, I think.


Of all the updates to the Joel test I've read, this is by far the most... nebulous. Maybe even asinine, because even if you were to ask these in an interview, the interviewer could reply to half of them with a bland "yes", and there's really no way to confirm or deny it during the interview since it's all based on opinion.

In the best scenario, the affirmative answers were straight-up lies, and you find out the organization is not for you (but only after you join). In the worst scenario, the "yes" was actually honest, and it turns out you disagree with the company on what makes for good "celebration of initiative" or "trustworthy management", and suddenly you're the bad guy for disagreeing.

I like this one much better: https://myers.io/2017/04/04/the-joel-test-for-2017/


This is not a test, it is a list of paragraphs with opinions.


That in itself does not make it wrong or useless. What would you suggest in its place?


Either write a test (like Joel's test) with clear questions that afford clear answers, or don't call it a test. The current title is quite misleading. I have nothing against these opinions; they are quite reasonable, if a bit vague. But a test they are not.


This reads like science fiction to me - all quite different from the work culture that I have experienced; are there any examples of real companies that practice a significant subset of these principles?


I'm not going to be popular by saying this, but Amazon and AWS thrive at every single one of these.

The only one that I know is not universal is "Healthy oncall".


Not only FAANG; a smaller-scale startup where I'm working now also checks most of the boxes.

Going somewhere where most of the boxes are not checked, and where the leadership thinks that's OK, only looks reasonable to me if the pay is exorbitant, or the experience is huge and unique, so that you can leave in a year and land a better position at a good company.


Most FAANG companies. Definitely Google and Facebook.


Yes. Is there any reason why in particular it reads like science fiction to you? What kind of work culture have you experienced?

I can elaborate further.


Pretty much all of the companies I've worked for in the last 20 years have had most of these boxes ticked. They definitely exist. To be fair, at most of these companies I've had a hand in the processes ;-) However, I would say that it's actually a pretty low bar, mostly.

The main sticking point I experience is "Cross functional collaboration". There are entire categories of work where this isn't really easy or feasible. The actual users are not available, so you must use a proxy. The opportunity for improvement is in how much say the developers have in how the proxy acts (can everyone see the same data they work on, is there an opportunity for feedback, etc.).

Things like "functionally complete" vs "ready for production" require a strong management presence for the development team. IMHO, the easiest way to solve this is by not having a distinction between the two. Don't make it functionally complete before it is ready for production. This requires breaking down the work differently than a lot of organisations like. You need a strong lead/dev manager who will take high level items in the backlog and break them down into vertical stripes of functionality that can be delivered incrementally. It can be quite a political problem, but this is probably the place where a lot of organisations are going to run into trouble.

Another common problem (especially with smaller teams) is the lack of career progression. This should be obvious when you start. If the group is small and not geared for growing rapidly, you will not likely get a promotion. It's simple math :-) No spots for promotion means no promotion.

There are a couple things on this list that I would dispute, though. Celebrating taking initiatives: there are pluses and minuses here. Some developers just do crazy stuff. Other developers do nothing for fear of criticism. It's not really the case that you want to have people indiscriminately taking initiatives. It's more that everyone should have opportunities and everyone should be encouraged to take those opportunities occasionally. There is always a balance -- going too far one way or another is really bad for everyone.

CI and CD deploying directly to production: sometimes a good idea, often not. In reality, you want a human (or some humans) making the call as to whether or not something should go out. If you make it an automated part of the process, then people will put blocks earlier in the process to ensure that customers are not surprised, stakeholders get a chance to OK the final product, deployment meets marketing schedules, or there is a final quality check for risky work. What often happens is that these blocks become ad hoc rather than part of the process. Usually it is better (in my experience) to always have manual deployment (although doing it should be easy to the point of triviality).
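
A rough sketch of that "always manual, but trivially easy" deploy, where the automated checks still gate the release and a human makes the final call (run_automated_checks and deploy are hypothetical stand-ins for real tooling):

    # Manual deployment kept trivially easy: checks run automatically, a human confirms.
    import sys

    def run_automated_checks(version: str) -> bool:
        raise NotImplementedError("wrap your automated QA pipeline here")

    def deploy(version: str) -> None:
        raise NotImplementedError("wrap your deployment tooling here")

    def main(version: str) -> int:
        if not run_automated_checks(version):
            print(f"automated checks failed for {version}; not deployable")
            return 1
        # The deliberate human step: nothing ships without an explicit yes.
        answer = input(f"checks passed -- deploy {version} to production? [y/N] ")
        if answer.strip().lower() != "y":
            print("deployment skipped")
            return 0
        deploy(version)
        return 0

    if __name__ == "__main__":
        sys.exit(main(sys.argv[1]))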

I think how I would take this list is more as a set of questions you can bring with you on your job interviews. How does the company handle these things? What are the opportunities for improvement? How eager does the organisation seem to implement these improvements? If you join a team and you have significant friction in many areas, it's basically too late. You should probably start looking for a better place to work.


"The Joel Test": 12 concrete questions that can easily be assessed.

"The Developer Culture Test": 12 vague statements.

How is this a test?


Every single one of these lists has an unwritten clause buried deep in it:

This has nothing to do with culture, because it's not like everyone who joins a company is somehow a magic promoter of all these things.

Rather, it's usually one or more people at top of the hierarchy who set the tone. An enforcer saint. Everyone else is just following orders.

Pray you work underneath one of these saints.



> Is paying attention and acting on microaggressions and unconscious biases part of the culture?

So this is a woke culture test.

The conservative or libertarian half of developers are excluded.


I think you need to be more charitable with your interpretation. Sure, "microaggression" is an incredibly loaded term which I don't think is used outside woke intersectionality politics (which, I will declare up front, I think is a terrible thing). However the ability to identify unconscious biases is an important skill — especially in a hard science — regardless of a person's political outlook.


> However the ability to identify unconscious biases is an important skill

I very much agree with that.

But the woke have no interest in exploring that. They already "know" the bias everyone has. It's part of their belief system, and not open for debate or reflection.


Please don't construct a strawman like this. Also, please don't accuse others of doing the same thing you're doing yourself.


p(woke | rationalist) is like pretty high dude


"Is paying attention and acting on microaggressions and unconscious biases"

I understood this as aggression and bias with regard to work-related technical discussions: for example, ignoring or talking over people in a video call who suggest taking a deeper code dive when debugging, or saying "no, I disagree, you are wrong" on a technical point without saying why or making any effort, or starting a call by dropping f-bombs, or shouting "but the code is the documentation!!" when someone politely suggests that issue comments by a library author are also worth paying attention to when trying to understand the meaning of a rabbit hole of code call chains... especially where these people are from different teams or companies with different incentives.

These are just recent examples of aggression and bias in a technical context that I have personally witnessed.


I agree that fighting the jerk behavior you describe is essential for a healthy team.

It might even be part of what the writer means.

But by using leftist sectarian language they send a strong signal that people like me are not welcome to work in their team.



