I find it much easier to deal with someone who has no relevant experience but cares than with someone who has 10 years of experience but doesn't care. Someone who has 10 years AND cares is rare, but pure gold.
But in general, I would encourage folks in computer tech industries to be hesitant to assume such a binary approach to evaluating prospects. "Did you code for fun in high school?" might be a useful question now, because software development as a field is so young that high school students can try out significant work easily.
But in most high-end professional fields, that's not true. Imagine asking a prospective medical resident "did you treat any diseases in high school?" or "how much surgery do you do in your spare time?"
I think we should take it as a positive sign that people are approaching software and computer technology as a career they can pursue professionally. That approach can produce great work too, and a growing field means greater diversity in how people find and display their passion. Getting a CS degree is not easy, and requires some base level of interest and commitment to complete. Even a bootcamp is not free, and takes some focus and work.
Anyone above a certain age (like me) came into the industry sideways, as it was developing, as a result of personal passion and interest. Let's not mistake that for an inherent property of a good tech employee... every industry on Earth started that way at some point.
Computer technology industries are maturing, just like railroads, oil development, aviation, telecommunications, and dozens of other industries have over time. That's not bad, and we will have to take it into account as we consider the model employee.
Hiring people who are "not proven" might also mean hiring people who are well trained, but maybe have not yet proven their passion in a way you recognize.
As a doc: you can't become a doc for love of the profession, because it's a profession you can't test drive. You can become a doc for love of the fantasy of the profession. It's awesome that programming is different in that way: you can't produce an OS in high school, probably, but you can certainly take a run at making simple stuff in Python or simple iPhone games.
That’s the fantasy. And, hey, when it pops up, awesome - it’s a very energizing moment. However, most medicine has absolutely nothing to do with that day to day. The day to day is subject to Pareto’s Law. A surgeon may occasionally get to “take care of the sick,” but 80% of the time they get to do a five minute chart review of an obvious gallbladder passed along from the ED, a cookie cutter GB removal, and an uneventful recovery that involves a daily stomach poke and the same handful of questions they ask every other post-op pt on the floor. Medicine is a technical profession.
It's a Nobel laureate's lab. They work on biochemistry. They work on producing drugs that might one day cure cancer. Everyone that works there gets to say, "all the small, day to day, technical things I do ... are opportunities to help advance the fight against cancer!" And it's plausible! They're rockstars!
The principal investigator? He still has to chair like six goddamned committees because that's the institutional politics of his job. But it lets him do his job, so it gives him a chance to cure cancer! Surely that somehow makes all those committees less tiresome and boring. Every time someone spends half an hour arguing the merits of switching what brand of coffee pod they want in the faculty lounge (read: closet), he can think to himself, I'm doing this to cure cancer! Certainly that makes all the boredom just zip and go away.
His senior PhD student? When he's up at three AM writing a last-second response to a peer review of his latest publication of a boring and predictable iteration of their last study (but needed, to juice his pub count and help him land a job FIGHTING CANCER!)... when that response makes it abundantly clear the reviewer didn't bother reading his damn paper and just wants the student to revise it to cite the reviewer's last paper (to juice their pub count)... well, that student can rub the grit out of his eyes, pour himself another cup of discount-brand pod-coffee, and say, this is awesome! I'm helping to fight cancer!
When the janitor comes in in the morning, and gets pissed because the water has turned blacker than the faculty's discount coffee but the nearest closet with a hose is on the other side of the goddamn building, well... hey, that's okay. Because he's keeping this lab clean, which helps the lab workers do their jobs, which means he's helping FIGHT CANCER!
None of that is untrue. All of that helps people get out of bed in the morning. But just because your job, big picture, has a noble end doesn't mean the everyday misery of everyday work is somehow magically awesome.
If I were hiring a doctor, I'd want the person to understand that. Because if they didn't, they'd be a goddamn train wreck once they found out that hours of paperwork hoop-jumping isn't any more exciting just because it's medically related.
I don't mean to go ad-hominem here, but honestly: are you a college student or something? If you've held down a job, you should fully understand that the "mission" of the job is separate from the day-to-day tedium of ... work. Work is work.
You can't even become good at flipping burgers without being interested in doing it.
Boredom has never been an enemy they've faced. I've thought about it, and over the last decade of programming, I get it: boredom hasn't been an enemy I've met. And looking around at my friends, it's not an enemy they know either.
I think the cynicism just misses the joy most people get from performing their craft right.
But, hey, if you want to buy the hype, go for it.
A lot of the documentation is an attempt to create "quality standards". I like that in theory, and it's independent of what kind of payment mechanism is used, but ... docs will have a riot if we're held accountable for final outcomes ("This guy has had 30 docs in 20 years, smokes like a chimney despite my repeatedly trying to get him to stop, and I'm getting my wallet drained because he had a heart attack?").
Alternatively, process measures ("Did you put everyone with high cholesterol on a statin?") kill autonomy, require documentation, destroy nuance (there's a good reason I don't want this patient on a statin) and also calcify medicine (advances in medical knowledge occur faster than Medicare updates its performance metrics).
Unfortunately, "quality care" is also a PR move to cut costs. Create enough metrics over enough things docs have no control over, and a documentation slip-up becomes a good reason to ding our reimbursement. This documentation is also a way to cut government and insurance budgets: they want the information for their programs, but don't want to pay for anyone to convert unstructured medical notes into structured data. So it becomes an unfunded mandate plopped onto physicians' heads. That's not going to go away unless all insurance - public and private - goes away.
All-cash payment removes documentation because there's no one to be accountable to. There's no central party trying to track your outcomes. But... I like the idea of tracking performance. I like the idea of encouraging quality care. I wouldn't mind if we could divorce quality metrics from centralized payors, and put the cost burden of data entry onto the party using and benefiting from that data rather than physicians.
Some of the paperwork headache won't go away regardless. Primary Care docs spend most of their time doing bullshit paperwork tasks. As often comes up on HN, an employer won't let you bring your own chair to work - unless you get a doctor's note. Family med guys write stupid notes every single day, and it occupies a significant percentage of their workday. It's disheartening, and it's not going anywhere.
Another big thing that's happened is social work. Every issue that society doesn't want to deal with rolls down to healthcare: the homeless, the mentally ill, the uninsured eventually land in an emergency room. That means we're the central clearing house for social services. Psychiatrists - especially ER psychiatrists - probably spend as much of their time (or more) networking with social work and the state, trying to secure Medicaid and housing for patients, as they do addressing their mental health needs. It's work that needs doing, but man, is it heartbreaking to train to be a physician just to spend your day trying to arrange housing.
Lastly, even a single payer system has incentives to deny services. That's still a cost borne by the system. Accordingly, docs will still find themselves fighting paperwork battles with the insurer to justify a course of treatment - they'll just be doing it against a single bureaucracy instead of several.
My outside knowledge is that a medical facility focuses on: diagnosis, confirmation of diagnosis, selection of treatment along with annotations about EXCEPTIONS to standard treatment, and finally actual treatment. I'd like for doctors to focus more on the keen observation and decision parts, and I would not mind automated transcription of doctor/patient interactions to be reviewed, with a summary (supplementing, not replacing, the actual data) possibly added by other staff. That might be an opportunity to hire and train other types of staff and let them gain experience in a more concrete way, much like the source article wants to make it easier for potential experts to grow into a job.
If there's a typical outcome given an input, it's important to document the decisions that affected the selection of non-generic courses of action - exceptions are things that should be known in the future. That's something any worker should do.
The NTSB, as seen in a recent Hacker News-linked article, produces excellent postmortems, even for incidents that only came close to being disastrous. A cascade of failures and a lack of good decision processes seem to be the typical cause, and a review with recommendations on how to prevent them from recurring is good. An honest mistake, or poor circumstances befalling otherwise good people, is worth forgiving and then guarding against in the future. Lack of training can be identified, and refresher courses or other supplementary training can improve the situation for everyone. It's much like making sure someone is addressing problems in their job and growing to accommodate the required work.
There might also be a bad fit for a job: someone not able to do the expected work of that position, a job that's poorly defined and/or not broken up into manageable units of work, or a worker who is a bad actor to some degree. All of these defects are situations that review and recommendations for remediation should address and resolve.
In your specific case, I believe having a single payer system would improve the outcome related to the above considerations. Affected individuals would still be covered by 'the system', good doctors would not be burdened by specific negative outcomes that happened to occur under their care, and bad workers of any type would be removed.
The actual outcome of individual patients shouldn't factor into compensation. However, addressing that in detail is clearly off the main topic.
I think it is both ethical and practical to recognize and classify cases that are bad fits for a given worker and to attempt to route them to someone that is a proper fit; while providing the best intermediate care and transition possible.
Also of note: in a 'single payer' system, the costs SHOULD be divorced from the actual treatment, though they might be a consideration when a given standard of treatment is selected.
The patient history we track isn't a literal transcript: it's a record of what we find pertinent from our clinical interview and observations. The word "pertinent" there is key; it's intimately and inseparably attached to our decision-making process and diagnostics. Think of it as a persuasive essay. The facts and the deliberation are what a medical history is, not just a list of data. Med students spend half of med school learning the very basics of this.
> That might be an opportunity to hire/train other types of staff and gain experience in a more concrete way; much like the source article wants to make it easier for potential experts to grow in to a job.
In learning hospitals, we already have residents and med students doing this. And then an attending will come and do it again, because we're better, and this is a learned skill built around our clinical acumen, not a literal transcription.
> If there's a typical outcome given an input it's important to document the decisions that affected the selection of non-generic courses of action
The combinatorics of medicine are too huge for "typical input." That said, we justify all of our decisions, so that someone reviewing our actions can decide whether our behavior - the outcome - was justifiable given the input. The "reviewer" tends to be someone in our own specialty, though - replacing this with something standardized and codified would require, literally, encoding the entirety of medical reasoning. It's a bit beyond modern EMRs.
> The NTSB...
We have what are called "morbidity and mortality conferences." If something goes to shit, the doc responsible gets to take the stage in front of their own and all related departments the next week, and explain the entire course of the medical episode and the decisions taken at each step, while being Monday-morning quarterbacked by every doctor they're even vaguely familiar with. The episode is also forwarded to Quality Improvement, a hospital-led group looking to address systemic and process errors. And, lastly, malpractice suits are the final inspection.
When docs fuck up, there isn't a shortage of post-mortem. None of that does anything to shield physicians from malpractice liability.
(An exception: if you operate in an FQHC - federally qualified health center - for the underprivileged, and you maintain a QI program that meets government standards and audits, the government assumes your facility's liability risk. But physicians are still fireable at the end of the day as part of the QI process, so the incentive for Cover Your Ass medicine remains.)
"Bad Doctors" are a rarity, in my experience. The real divide is between doctors good enough to practice good medicine under modern time constraints and those who aren't. Not everyone can manage a complex patient in 3 minutes. In fact, most can't. But with everyone squeezing down hard on reimbursement, that's become a necessity. No one wants to pay for the time that good care requires. So docs default to shotgun medicine - throw all the tests at the patient so you can't be accused of overlooking something, and hope that something comes back unambiguously positive. Next patient.
> In your specific case, I believe having a single payer system would improve the outcome related to the above considerations. Affected individuals would still be covered by 'the system', good doctors would not be burdened by specific negative outcomes that happened to occur under their care, and bad workers of any type would be removed.
I think I should clarify what a single payor is. It's often abused in popular literature to mean something like "government monopoly on healthcare." It's more literal than that, though: it's a single payor. So that can mean things like:
a) A government monopoly on healthcare, where all healthcare facilities and providers are owned by the government, paid by the government, etc. Healthcare is distributed as a utility, and people assume it is covered by their taxes (UK) or pay a nominal fee (Canada, if I'm not mistaken).
b) Government monopoly on health insurance, but healthcare facilities and providers remain private competitive entities. Healthcare provision remains fragmented as a competitive market, but at least these facilities can expect uniform negotiations and documentation across all their patients, since they're all coming in with the same insurer. Patients expect their care to be covered by their taxes, premiums, or some combination of the two. This is closest to "Medicare for All."
c) Regional monopolies on health insurance. As per "b", except that inter-state entities continue to see some heterogeneity in payors. This regional monopoly might be governmental (e.g., Medicaid For All) or private (such as areas where only one private insurer is available.)
None of these things change the liability landscape directly, although in "a" malpractice liability is usually assumed by the government, since healthcare providers are its employees. This doesn't eliminate CYA concerns, but it does shift them from "do everything the patient wants, whether or not it's best for them" to "follow local policy and guidelines, whether or not it's best for the patient."
> Also of note is that for a 'single payer' system the costs SHOULD be divorced from the actual treatment; though might be a considered criteria when a given standard of treatment is selected.
Why is that? Regardless of who the single payor is, they have budgetary constraints. The appetite for healthcare is infinite compared to resource inputs. Someone is going to be squeezed to make those resource allocations. Currently it's the physicians, but if not physicians, someone else.
* Everyone is covered by one pool
* The pool is funded externally
* Absolutely no incentive to defer detection
* Absolutely no incentive to defer treatment
* Absolutely no incentive to defer care, because everyone will be covered by the same system in the future
* Competition can still occur as far as offering services /to/ the pool.
I'm going to have to give my response in a couple of posts, since HN says it was too long.
> My outside knowledge is that a medical facility focuses on: diagnosis, confirmation of diagnosis, selection of treatment along with annotations about EXCEPTIONS to standard treatment, finally actual treatment
The first thing to clarify is that there are a number of different types of medical facilities, ranging from private primary care to massive regional specialty-care hospitals, and the modifications to the above really depend on which type we're discussing. I'll pitch my answer to small-to-mid-sized secondary care (bread-and-butter specialty care like cardiology; general surgery and some onco surgery; little or no sub-specialty care), because that's the most commonly encountered facility. That's with the caveat that, again, the answer is different for other facilities (e.g., your family care practice) that are just as important to discuss.
Your list of things facilities focus on is correct except for your idea of annotations of exceptions. Our documentation focuses on the entirety of the patient encounter, all of the physical and laboratory exam findings we consider pertinent, our treatment choices, and often some degree of our treatment rationale. Outside observers often think "well, don't you just give a standard CHF treatment to someone with CHF, unless there's an exception?" A large purpose of our standard documentation is to provide an outside observer the chance to recreate how we came to our conclusions regarding diagnosis and the best course of treatment. In short, we document to cover our asses from malpractice.
Second, we document so that the hospital can bill insurers. Insurers create increasingly specific requirements for what must have been done or detected before a service can be provided - and those things must be in our note (or else the insurer assumes it didn't happen), and must be linked in our writing (Patient had finding X therefore we did Y). Increasingly, if one doesn't link it, they argue that they couldn't infer that Y was because of X. (That comes up more with performance metrics - oh, you told the patient to lose weight? We didn't realize that was meant to be an intervention for being overweight. We can't just assume what you mean to be treating.)
Third, we document for government- and insurer-mandated performance metrics. For instance, I need to do a depression screening for all over-65s annually. So a helpful person working on our EMR built in a reminder tab - did you do a depression screening today? I have to go through a drop-down to select "No", and then another for the reason why ("Already Performed", "Patient Not Eligible", "Patient Already Diagnosed with Depression"), about 30 times a day. That's our simplest metric, and one of dozens (because there's not a consistent set of metrics across all insurers.) You're about to suggest a way that this can be automated to suck less. I can suggest that, too, but as you may have noticed, this program is paid for by the hospital, to benefit the hospital's performance with insurers and the government. Physicians aren't the customers. Dev time is committed to making it suck less for us only enough to keep us from storming the hospital with pitchforks and catapults hurling ICD10 printouts.
And, lastly, something I truly didn't understand when I worked in health insurance but I do now: there's absolutely no such thing as a standard patient, plus or minus exceptions. That's because there's no such thing as "a patient with CHF". There's "a patient with history X, which leads me to believe they have CHF subtype 2C, with complications X, Y, Z, and complicating factors 1 and 2." Good doctors keep all diagnoses provisional, because the evolution over time will absolutely change your understanding of the patient - whether to CHF subtype 1Zebra, or because what you thought was Complication Y and Z was actually parallel disease Ampersand. This is why we constantly communicate the story of the patient's history to one another, and why every doc takes their own history. Accepting a diagnosis from someone at hand-off is called a "chart rumor," and making a habit of it is a fantastic way of mis-treating patients. I cannot possibly tell you how many times I've improved patient care just by starting over from zero rather than accepting a chart rumor.
The only people I see claiming everyone needs to be passionate all the time are business owners and management who then channel that passion into unpaid overtime
oh, I don't know about that. It's like a 6 week community college course to become an EMT. There's a test drive for you. There are 2 year programs that will get you a nursing license.
You can get a taste of the medical career path without going the full MD route. Maybe not as accessible as coding, but it is accessible.
As my kids are just now graduating high school, this is something I try to drill into them. College can be like an assembly line that spits you out saddled with the equivalent of a 30-year mortgage, trained for a job space that you literally have no idea if you'll even want to work in... or it can be like a Baskin-Robbins of careers, where you can try every last one of them until you find what you really like.
passion and knowledge of self are everything. the rest is a commodity.
EMT and nurse aren’t “mini doctors,” any more than doing video game QA is “mini programming.” It gets you near the profession, it doesn’t put you into the shoes.
I don’t know how to articulate this. The job that requires about eight years of post-grad training, including four of them as heavily supervised on-the-job training with slowly increasing responsibilities for 80-100 hrs/week, is wildly different than the job that you can start doing in six weeks. Working in the same setting as a physician is no more “test driving” what it’s like to be a doc than being a secretary at a hedge fund is test driving what it’s like to be a hedge fund manager.
The other part it leaves out is the hours. "What do I do with this patient?" is a very different thought process at hours one, eleven, and eighteen respectively, of what should have been a twelve hour shift. One hospital I know of has its trauma/SICU surgeons do 5 days on and 5 days off where they pull 12 hour shifts daily, and they're on call every night. But trauma/SICU doesn't really sleep, so these folks are making critical care decisions at hour one-hundred-and-twenty, of which maybe eight hours involved sleep.
All of these things occur in the internal landscape. Shadowing is ... not effective.
Most med students have done clinical research, shadowed doctors, volunteered in hospitals, etc. Back in the day, I did hospital volunteering, clinical research, hospital QI, and I worked in health insurance. I'd seen medicine from pretty much every vantage point before I second-careered into being a physician. And every senior med student and resident and physician will tell you, "holy shit, I had absolutely no idea what it would be like." Even the occasional nurse who decides to go to med school - and who most commonly thinks they're halfway to being a doc already - will say, "omg, I had no idea how much I didn't know, and how much you guys have to do." We had two in my med school class back in the day. When we were in didactics, they were shocked by how much docs had to know. When we got to clerkships (the second half of med school, where you work in hospitals), they were floored by how much was involved in being a physician that simply wasn't visible to nurses. And that's... you know, nurses. Folks who work in our vicinity on a daily basis.
Given the current state of medicine in the States, you touched upon the topic of insurance companies. The endless paperwork seems to be a side effect of physicians being beholden to insurance companies to supply a steady stream of patients that affords an income to offset the steep debt and decades of opportunity cost spent in school. This seems unique to America from what I can tell, and it's only getting worse, along with what I'm told about physicians (MD/DO) competing with nurse practitioners and physician assistants, government oversight, etc., over scope of practice.
Finally, the topic of burnout and physician abuse (lack of sleep, working overtime and being on call), is truly disgusting. This was a tough read, previously posted on
I sincerely hope you and all overworked physicians take care of mental health and avoid burnout. I think private practice and limited hours for certain lower specialties might be the answer for my significant other if we plan on starting a family anytime soon.
The simple truth is this: pay has been dropping like a rock, every public mention of doctors is about how much we suck, regulators and bureaucrats are telling us how to practice medicine (while we continue to carry the liability), we're given 5 minutes to see patients when we should be given 20 (and when we rush out the door, patients think it's because we don't give a shit), and on and on.
The worst of it is: everyone else has a "career" - they're allowed to worry about work/life balance, about trying to get paid for their time, about trying to build a nest egg. When physicians do that, well, medicine is a /calling/. You're not allowed to worry about paying for your kids' schooling, or paying off your debts, or etc. That stuff is for programmers and accountants; you're just working with "sick people in their worst moments," so you're not allowed to be anything but self-destructively selfless. No one is allowed to discuss physician misery (I hate the word "burnout" - those docs aren't a resource that came to the end of its useful lifespan, they're human beings in desperate misery) except when residents are throwing themselves off the roofs of hospitals. I've lost -two- friends in the last year. TWO in the last YEAR.
And the only people that pay attention to that are the residents who have to carry on and the attendings that go, "well, it was still better in my day, when residents didn't expect to sleep or ever go home. It was better for patient care continuity if their doc never went home."
Private practice is dead or dying for most specialties as well. The healthcare field is heavily concentrating into large regional networks.
I knew this all going in. So I can say to your wife what I said to myself: The only reason to become a doc is if you cannot, for the life of you, force yourself to become anything else. It has to be a fire in your goddamn marrow.
And for all that, I recommend choosing a residency in psych. They work 9-5 even in residency - call tends to be 9a-8p or 9a-11p (rather than 24-hour shifts like the rest of us) and every other weekend they tend to work 9-9 Sat and Sun. It's the lightest residency on the planet, and they still make - per hour - the same money as IM and FM. It's the best thing I've ever heard of for people that want to have work/life balance and a family. Unsurprising, as they're the ones who spend every day seeing stressors break people's minds in half.
It remains that there are young people who understand this and still long to study arduously to be doctors, and then put in the work. It seems there is something appealing about the work that can be done, even if it’s imperfect. Is there any kind of enthusiasm you could accept as beneficial?
So I want to clarify: what I said was that people who aim at medical school because "they're passionate about medicine" are mistaken. They're passionate about a fantasy of what medicine is, because you don't really know what it's like until it's there (and popular depictions of it are as unrelated to actual medicine as 1980s hacker movies are to actual programming.)
Many people are deeply hurt by the gap between fantasy and reality. They don't complain about it openly, but inside the doctor's lounge... oh yeah.
Some find a new passion, for what medicine actually is. Sometimes this is closely related to their original ideas, more often, it's only tangential. But they're on fire, and that's great.
Most just grow up, and find that they do a difficult but worthwhile job. They don't necessarily have a "passion" for it, but they appreciate the importance of what they do, and concentrate on doing it well. They work to take care of their patients, but also to avoid liability and to earn their colleagues' esteem. They're normal physicians.
I'm discussing the fact that what people think medicine is vs. what medicine is has a huuuuge gap. You can't be passionate for a thing when you've only seen its mirage. That doesn't mean enthusiasm is inherently bad. It's misplaced.
>it remains that there are young people that understand this
No, there pretty much aren't. That's rather the key point. I've never met a student, resident, or practicing doc that said, in retrospect, yeah, they had anything resembling an accurate clue about what medicine would actually be like.
I agree that public perception of a field lags behind reality, but it's only a lag. Medicine has been a tough job for a while. There are students who understand this well enough to handle the adjustment (since they don't have first-hand experience yet). It's not a failure of "passion" if it crystallizes into tangible goals and better-developed motivations and principles as the student matures. It's also not fair to equate surprise at some of the realities of a job with regret.
Finally, I know people who pursued medicine from childhood and are doing well in it. Granted, most of them had doctors for parents, but they were still quite excited.
And then you go home while they stay behind on call...
I didn't even know my career existed until after my freshman year of college, but no one I work with would say I'm not passionate about what I do.
As an aside, the advantages of knowing what you want early are huge, assuming that you continue down that path for a long while. If we consider something like graduate school, knowing as a freshman that you want a PhD means growing your network early, cozying up to professors to write recs, and doing REUs. Someone could figure out toward the end of undergrad that they want to continue on and really struggle to put together a compelling package, even though they might be just as passionate or have just as much potential.
Sorry that was kind of a rant.
Software development is more easily compared to things such as cooking, playing an instrument, or even writing.
There's some knowledge and training involved but that can be learned very quickly and a lot of stuff can be done on intuition.
There have been a number of genius composers who wrote songs at very young ages; there's potential for that in software as well.
That doesn't mean more experience doesn't matter - whatever someone develops at an early age is bound to be worse than what they produce after more years of practice - but you can start programming very early.
However, humans are complex and you can have boy geniuses and late comers that are equivalent.
But, I think it's not taken as a necessary attribute within the medical field. There are a lot of great doctors who didn't decide to pursue medicine until college, or maybe even later, and as a whole the field seems fine with that.
Med schools are increasingly selecting for that, in fact. Second careerists tend to do better (because they're adults), and have less burnout (because they're more likely to have made a knowledgeable adult decision about what they're getting into, rather than being disappointed reality didn't live up to fantasy.)
Eventually became a GP (family doctor) in Georgia
Yeah, for some reason, when you're interviewing as a forensic pathologist, mentioning that you're cutting up partially decomposed bodies in your spare time does NOT count as a positive.
I still think there is value in looking for passion in people, but maybe passion for craftsmanship in a field other than software should count equally well.
Obviously, we don't limit candidates to those who are also musicians. Heck, we don't even tend to ask about it in interviews. But, it serves as a general "we probably did ok with this hire" shortly after on-boarding.
There was a time some while ago (as I blow mental dust off my college literature studies) when "passion" was not a word that carried wholly positive connotations. It came with implications of things like irrationality, moodiness, strong emotions, even violence.
Passion can be an asset if a person must maintain momentum in the face of hardships and competition. It can be the base for tremendously hard work (which is one reason I think employers like it). But, it could potentially be a liability too, if it drives a person to create conflict or inhibits teamwork. Depends on the role and the team.
* Discussing and ultimately making decisions
* Learning about how things work from people with a different set of knowledge than you
* Doing things which optimize for the understanding of other people after you, as opposed to optimizing for your own productivity in this second
Software engineers who do these things with low empathy end up contributing to the stereotypes about engineers who are unpleasant to work with because they don't consider other people.
If you mean CS in the academic sense, e.g. algorithm research, probably not much.
If you mean CS as in "occupation that involves designing and/or implementing software" then I'd argue empathy is an integral part of being an effective professional. Empathy facilitates effective communication and ethical/moral decisions.
I think the idea that "all that matters is your code" is flawed. It's very short-sighted and narrow-minded, and it does nothing but artificially limit oneself. The world is bigger. You interact with other people and can have either a positive or negative effect on them.
Some of those other people might not know as much about CS, but might have other skills like design, marketing, management, etc. Empathy is a trait that helps one to appreciate and adapt to these differences constructively.
I think it is a shortsighted strategy, but not one without basis. The supply problem is real.
Saying that you shouldn't hire people because they're only in it for a paycheck and not passion is just shorthand for "I want you to make this your life, but I won't pay you to make it your life."
Subsequently, what I've seen is that most problems in the field of k-12 education in the states can be tied to this sentiment.
One of those places, however, did slide after a few years into the nastiness described (I left when the slide started, and it reportedly continued).
So while "Passion" CAN mean "we want huge levels of uncompensated work without complaint", that has not been the norm for me, and I think it's a mistake to assume that is what is always meant. Certainly I've been at some great places to work that I'd have avoided if I thought asking for passion was a negative.
My usual retort is why should I care about a founder’s vision if I don’t have a substantial amount of equity in the company? They find that highly insulting.
Nothing wrong with believing in the product or the company, but there is something wrong with thinking that would be enough. I don't think the entitled executive syndrome you're discussing is really the norm for companies, but yeah, I've encountered it and it's annoying.
Come on, you can't leave us hanging like this... What happened there? :)
A more charitable interpretation of "passion" would be... I'm nominally a Python & Scala programmer (according to my CV), but I can also competently talk about OCaml, Haskell, Rust, instruction scheduling, tracing JIT compiler implementation, propagating type inference, TCP stack, C memory model, register allocation, hardware concurrency primitives and ring 0 privileges.
Compare that to another Python programmer that once tried to convince me that when a C++ program tries to access unallocated memory "the computer just throws an exception".
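To make that misconception concrete, here's a minimal Python sketch of where the intuition comes from; the C/C++ contrast lives only in the comments, because no such exception machinery exists there:

```python
# In Python, an out-of-bounds access really does raise a catchable exception:
data = [1, 2, 3]
try:
    data[10]
except IndexError as exc:
    print(f"caught: {exc}")

# In C or C++ there is no such safety net. Reading unallocated or
# out-of-bounds memory is undefined behavior: the program may crash
# with a segfault, print garbage, or silently appear to work.
# "The computer" throws nothing.
```

The runtime-managed languages train you to expect a tidy error at the point of the mistake; in C the mistake and its visible consequence can be arbitrarily far apart.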
But would you consider someone passionate if they said that they only care about learning a language/technology/framework that is marketable, and don't believe in learning for its own sake?
Unless the Python developer is actually writing C++, I don’t think he would ever need to learn the intricacies of C to be a good developer.
But take that opinion with a large grain of salt. I spent 12 years bit twiddling in C/C++, 10 in C#, and have been doing Python for a grand total of 6 months so I definitely haven’t done anything complicated with Python.
The end result is that (1) my knowledge and skills extend way beyond any of my past or present job descriptions, and (2) (this is probably even more important for employers/productivity) I'm able, and curious enough to actually want, to learn quickly/deeply and solve hard problems that might not have an obvious instant-knowledge-based solution.
I think that anyone who has ever had an interest in going "beyond" (in any way) the obvious, or beyond what's immediately necessary in his/her day job, would build such a knowledge base - though it could of course be completely unrelated to mine (e.g. networking specialist, cryptography geek, hardware/embedded engineer, ...).
Wants to stay competitive and be able to earn near the top of local market => interest => knowledge.
There's no immediate reason to expect a company to help train up employees (not just juniors, but moving people from every level to the next step)....unless those companies want to improve the outcomes of their staff, want to improve morale, and want to resolve their perpetual difficulty in finding people senior enough for their needs.
Don't do it out of social responsibility, do it for keeping your business functioning long-term. I've worked too many places that have almost exclusively senior devs with more work than they can handle. They focus on hiring MORE senior devs (but don't have much success), meanwhile their senior devs are spending a lot of time doing work that junior devs could accomplish.
There are numerous side-effects too - mentoring (if you like it) can help improve your own skills and outlook. Fresh perspectives are always good, and once someone gets their first few years in, they will occasionally have insights that seniors can learn from. At the best places I've worked the more senior devs have wider perspectives and deeper experience, but the "who is right" between junior and senior becomes more of a statistical model than a certainty.
(In STEM. Obviously you presumably don't apply to Juilliard because you decided you might want to give this music thing a try.)
Take the craft trades - e.g. woodworking, painting, music, writing, whatever. Those often attract people who just love the work, who love creating with their medium. I feel passionate tech people are similar.
The example of a medical professional was... unfair. Yes, in some fields (like medicine) free-time practice is a terrible measuring stick for passion, but I'm not legally prohibited from woodworking in my free time.
Note that I'm not at all speaking about whether you should be binary about hiring (passionate people vs non-passionate). I'm simply saying that in some crafts, crafting outside of work-time is a decent indicator of passion. Another indicator, I feel, is knowledge outside of one's professional experience. I'm not a frontend programmer, but I enjoy learning and following tech trends, so I've used frontend frameworks and can speak somewhat comfortably about them, my preferences among them, and so on.
There are other areas where "engineer" is thrown around pretty liberally as well in the US. But title inflation in software is probably more pronounced than just about anywhere else.
At ten years you should be absolutely expert in the areas you've worked in. This is staff or senior staff level, if you also have some business acumen, ie the ability to see beyond engineering requirements.
Doctors, and most other professions, cannot be practiced without a lot more investment. If we lived in a society where access to the resources required for practicing those careers were easily available to teenagers, we would see young high school students do that too. I don't think you're making a fair analogy here; there is basically no way a high schooler could even _attempt_ treating diseases, but it's very possible for a high schooler to program 6-8h a day and learn.
This is not true at all. I was fascinated by bugs, cells, and other bio stuff as a kid, and a decent microscope is not that expensive. The fundamentals of being a doctor - biology, chemistry, observation - are well within reach of teenagers. A lot of medical knowledge is also freely available. Obviously no one expects a teenager to successfully perform surgery, but teenagers also aren't expected to architect operating systems used by thousands of people.
> I don't think you're making a fair analogy here, there is basically no way a high schooler could even _attempt_ treating diseases but it's very possible for a high schooler to program 6-8h a day and learn.
Again, not true. Treating a little sibling's cuts and bruises is well within reach of most people. It's really a question of being around people who get hurt more than average - having an energetic, younger family member provides a lot of opportunities for treating small injuries. As does volunteering as a coach/mentor for little league sports.
The more interesting distinction is that doctors generally don't build anything. They diagnose and fix. So, while a teenager into programming can show off a webpage or a game they've been working on, a precocious youngster who knows how to treat wounds, set breaks, inject insulin, and identify a stroke doesn't have a cool portfolio of projects to show off.
I've seen teenagers submit great projects to HN and get feedback from the knowledgeable crowd here, that really seems to be impossible in any other field (unless you have connections).
This is maybe not out of reach for an average teenager, but it is definitely out of reach for quite a substantial portion of the population, the poorest percentile might just not have the time to dabble in programming. They might have to take care of siblings, work for a living or to pay for college later.
What you want to measure in the interview is passion. What you can see is their accomplishment. But accomplishment is roughly passion × means. A rich person can accomplish more for the same effort because they have more power at their disposal.
But you can't easily cancel out means in the interview because you sure as hell can't ask any direct questions about their socioeconomic level, for good reason.
I grew up in the country, got the internet in grade 9 on a 28.8 connection over a single landline (so my time was very limited), had a computer that couldn't load the QBasic help files (it didn't have enough RAM), and our town library was painfully out of date. I never got past basic "if/else" as there were no resources available to me, so I quickly gave up. It's become much easier to get access to all these things, but my answer to "Did you code in high school" has to be a "no". It's funny to me that we assume that developers are young enough to have had YouTube in high school.
From a different angle, a good friend of mine who is now an excellent engineer was denied a computer with internet access for totally different reasons during her high school years. Her fundamentalist parents thought the internet was immoral. She couldn't even have a game console, because her parents said those were for boys.
But yeah, asking people that question basically assumes that the answerer had a degree of privilege that isn't nearly as "average" as the asker would like to believe.
My father grew up, went to school, and finished university in the 70s and 80s, in the Soviet Union, where none of those things were within the reach of an average person. He only started programming towards the end of his physics PhD. Since moving to Canada, he's done ~20 years of programming.
Biasing hiring towards people who programmed in high school is incredibly ageist.
Sure, a computer is within the reach of most teenagers now. No, a computer was not within the reach of most teenagers 30 years ago - or even 20 years ago.
Why not go with something safe - like asking people to implement fizzbuzz and merge-sort on a whiteboard, or something else that has nothing to do with their age, gender, or ethnic background?
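For reference, the kind of "safe" screening exercise meant here really is tiny; a minimal FizzBuzz sketch in Python (the whiteboard version would look much the same):

```python
def fizzbuzz(n):
    """Return the FizzBuzz sequence for 1..n as a list of strings."""
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # divisible by both 3 and 5
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out

print(fizzbuzz(15))
```

It tests nothing about a candidate's background, only whether they can turn a trivial spec into working code.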
On the other hand, many students I know who are studying MechE right now did work on lego-kit robotic & Arduino projects while at school. I'm not sure if it'll play a role in their job application, but in my opinion there are genuinely very few fields (only CS comes to my mind) that a teenager can do at their home at a level equalling a professional.
And if you have family that is in those sorts of heavy equipment fields, chances are good that as soon as you are big enough to hold a flashlight or run and pickup wrenches, you are going to get drafted into helping out.
Nobody is telling their kids to become a doctor by reading aimlessly about medicine.
Many of the top people in other fields also have a strong passion for the field, which came early.
Ex. top lawyers usually love the law and many had experience with related areas (ex. debate) from high school. I know plenty of high school students who are very excited about medicine and do things like volunteering at a local hospital or helping a professor with research.
Early passion shouldn't be a requirement, but it's certainly a good indicator. I've never found someone who is only in tech for the money to be particularly good at it.
Those people certainly exist, so this just sounds like a bias on your part.
Don't get me wrong, I believe most who pick this profession for its perks tend to be worse than their computer-inclined counterparts; but there are lots of examples of this just not being true.
I don’t see why programming is any different. There’s not much difference between a crude drawing of a house in crayon, and a crude Pong clone in Scratch.
To a first order approximation, someone who has been into computers their whole life has that “spark” as second nature. Someone who sees programming as “just a job” is probably not going to explore any more than necessary to fulfill the requirements of the task at hand. Sometimes less. This makes the curious more effective at approaching harder problems, and puts them in a position to spot potential improvements even on mundane problems.
I see this firsthand at work. My careerist colleagues are lovely people and capable of doing the work assigned to them, but it’s the lifelong computer nerds who solve the really knotty problems and push us forward.
People want to scream about CS degrees being trash and yet all the passion in the world doesn't seem to convey an understanding of memory leaks, garbage collection, and VM/bytecode/etc. hijinks (like JVM, MRI, etc.) to the degreeless people in my team.
A CS degree program worth its salt will dip you into enough low level OS programming that you can bring with you to any language. It also should expose you to multiple programming languages so you can quickly separate universal ideas (pointers, references, pass-by-value, pass-by-reference, various models of multithreading) from syntax.
Meanwhile, degreeless bootcamp people just think Ruby Symbols are nifty immutable strings and have no idea why symbolizing dynamic string keys in hashes results in a memory leak.
I'm not against bootcamps, but polyglot is still my primary requirement for hiring for this exact reason. Bootcamps need to at least integrate a week of Computer Org and a week of doing what they just did in another language into the curriculum to really be a valid way to bypass CS degree programs.
Meanwhile the degreeless people on my team wrote their own operating system that's older than your degree and still used in production today. We're not all the same.
It's still relevant knowledge.
Sure, degreeless self-taught programmers who know their way around this knowledge exist. I never said they didn't. But when you've got 500 applicants and your resume says you were flipping burgers two years before you got your junior position...I don't have time to interview you.
There's so much work to do in both fields and we need all kinds of people, but there is certainly a difference in work environment when the team is dominated by one type or the other.
I think software engineering is more like writing than medicine. Most writers wrote in high school. Many writers write in their spare time. They have blogs, zines, and long posts on Facebook.
I agree that not everyone has an opportunity to code in high school, nor should jobs exclude people who learned later in life; however, I think as the field matures more people will have more opportunities to code earlier in life.
Not a fair comparison because the practice of medicine is not accessible anywhere to the same extent as the practice of programming. You need a license to practice medicine. You don't need one for programming. Similarly, surgeons need facilities and equipment to perform surgeries. The only thing programmers need is a computer.
I think the OP has a point. I know several brilliant mechanical engineers for instance, and every single one of them used to tinker with cars and gadgets and whatever else they could get their hands on when growing up. Indeed, such a background is a strong signal for curiosity, which is an important factor in success and professional growth.
Doctors, lawyers, accountants, architects, and engineers can easily destroy or end lives in the course of their work and the certification process tends to weed out people who are in it for the money but not actually capable.
I really don’t think your average CS grad can be said to be filtered in the same way.
I personally feel like the field is too broad to say yes to that above question but in some domains of our field, it is definitely a yes. We might need a more regimented continuing education requirement similar to the legal field's CLE requirements to remain part of the state bar. As we see more profound effects from software we as an industry will eventually become a target.
Just to be clear: That's not what I meant with "caring".
My professional career has been mostly engaged in implementation, customization, solution design and administration. I'm not a practitioner of Computer Science, I'm more like a hybrid of an engineer and a tradesman. I define an engineer as someone who applies known knowledge/methods and a scientist as someone seeking novel knowledge/methods. There's a spectrum between scientist, engineer, tradesman, technician, etc.
Technology is different because you can enter as a tech or even a finance person and move into some engineering roles without formal education. Only a few corners of tech are off limits without formal knowledge.
Good programmers (esp. the ones the OP is referring to) are usually curious and like to tinker. So it's a valid check whether they've liked hacking from a young age (I'm not saying those who haven't are not good, but those who have most likely are).
Maybe a better analogy is an inventor (is this even a job?).
With all the caveats about dividing people into types, I think you need the curious / tinkerer types, and also the solid / bureaucrat types. At least, this is true if your business has reached a size where scaling is an issue.
I'm of the curious / tinkerer type, but putting a bunch of people like me on a project where the customer expects a 20 year service life, and operations expects to maintain their sanity, is not recommended.
In my view, starting from scratch in college and learning programming in a classroom may be harder than learning it by trial and error in your spare time.
-- one word: QUALITY
Passion is nice, but like the author above, I'd prefer a person who cares about what they're doing over a merely passionate person.
Unfortunately, finding a "caring" person is even harder than finding a passionate one.
In a sense I feel that this current trend of promoting software development as "just a job" is yet another step into the ditch that is forming between the ideal of "software engineering" and the reality of "software development". It's the way things are going, and I have no strong feeling about it one way or another, but it seems obvious to me that employers would be more attracted to someone with passion for the craft, and that fellow passionate people would want to work among a crowd like them.
This just makes the same kind of mistakes that the article is saying to avoid.
In a way it might weed out people who are reluctant to try something new: someone who coded in high school (a somewhat antisocial hobby), went through college, and still only wants to code maybe never tried anything new.
Lately I have been thinking that passion is something learned.
Building simple, solid, maintainable software that does what it is supposed to?
Or cares about chasing every new fad, has ten frameworks listed on their resume and is currently midway through the machine learning / blockchain hype?
I care about the former, but plenty of people would see that as having little "passion". For example, I don't have any experience using NoSQL databases on my CV, because I haven't had a use case for them and I could tell that they weren't all that they were hyped up to be 5 years ago when those were peak hype cycle. (I can design a relational database properly and know how to index it and optimize queries).
Ten years is plenty of time for this industry to kill any passion you started with. I still care about producing good work.
If I weren’t working such long hours currently, I could see myself creating a webpage or writing an app, or playing with arduino for fun. But as it is, I’m pretty much tapped at the end of the day. I don’t really have the mental energy (or solid blocks of time) to accomplish much of anything.
I could definitely see myself doing it if I were in a situation where I didn’t have to work. (If I won the lottery or I had a rich wife/parents or whatever).
I’ve been doing this for about 10 years.
That has happened once in the last 15 years that I can actually remember.
I guess that's the question which most commenters actually missed :-)
If I had to define things I'd say that the former is actually "care" and the latter is a bit closer to "passion".
This is a legitimate, but deeply opinionated, "why". You employing me and me working for you is a business transaction. Why should I care about your mission beyond giving enough 'care' to produce great work - work that helps you run your business well, so that you'll want to keep the relationship going, and that provides me with some learning opportunities and skills so that if we part ways, I don't starve looking for the next opportunity?
I don't care about whatever your mission was when you started the company, I don't care how you think you're saving the world by creating another conference call platform. In my 22 year career I have deeply truly cared about one company that employed me. One. And even then, it was still a transaction: we need your skills, I have those skills, let's trade and do good work together. That's all I need to 'care' about, IMO it's all the employer needs to 'care' about.
Maybe I'm jaded looking at a tech industry that doesn't seem to have a damn clue what it's doing when it comes to hiring, disaffected by all of these bullshit job requirements that are as far removed from reality as Earth is from one of Jupiter's moons, and offended by these job interviews that feel less like conversations and more like I'm trying to outwit a power hungry DM during a game of Dungeons and Dragons.
I don't care if you care about the mission of the company. It probably helps but I don't care that much either as long as the company doesn't do anything illegal or something I find ethically objectionable. I care whether you care about your craft and want to improve. I care if we can have interesting discussions about work and tech or if you just want to be told what to do next and never contribute anything interesting.
I spend around 40 hours a week at work so I want to make this as enjoyable as possible. To me that means mental stimulation and enjoying working with my team. From my experience people who respect each other and enjoy each other are vastly more productive than people who don't.
However, I think there are three big reasons that I don't want to hire people who are just clock-punchers.
One is the users. When I start in a new domain, I work hard to understand the people the product is meant to serve. And when I'm running teams, I work hard to make it so that everybody can develop empathy for the users. I don't believe it's possible to make great products without it.
Number two is the team. Google's research on good teams matches my experience: you don't get good teams without people who care about their colleagues and what they're doing. I get the desire to think like a plumber, who just comes in to fix this one sink and then goes away again: that's easier and less risky. But software is a team sport. So I want people who care about their team members.
Number three is the bigger scale: infrastructure, process, culture. Anybody who just wants to focus on their contribution does that at the expense of paying attention to broader impact. I want to work with people who notice and care about that, who are interested in making tomorrow better than today. For themselves, for their colleagues, for the users. Because that shit doesn't happen on its own.
So yes, bullshit job requirements are bullshit. And any requirement can be bullshit given the context. But there are contexts which really caring is vital, and those are the places I want to work.
I can dig this, it absolutely is. It's definitely supplemental (emphasis critical).
As someone who started programming from a young age and has almost 10 years of commercial experience, I have to say that being in this industry is horrible.
If they promoted people using a random number generator, it would be a step up from what we have now.
The most horrible thing is that in spite of your passion and experience, your ideas are constantly undermined by money-oriented snake oil salesmen who make extraordinarily optimistic claims to become liked by management and get promoted quickly within the company; then, just before everything falls apart, they jump ship for a higher position at a different company and let other people deal with the consequences of their past actions. That's why it's really hard to find people with over 10 years who still care. Caring is a weakness.
I explained... that was because my work was so thoroughly tested that I didn't have any major production issues requiring critical client meetings or mitigation plans. The success criteria there were almost encouraging poor delivery!
Implicit in this was an assumption that a certain number of defects were there to be found in the first place. As a result, if you were too thorough and conscientious before your reviews it actually reflected badly on your reviewers, because they weren't finding anything. As a result, we started pushing out intermediate WIP that we knew had problems so the defects could be "found" and "fixed".
For every manager who grew from the rank and file, understands what their teams are going through, and shields their teams from the BS, there will be another who went through business school and was indoctrinated that you should ask for more than what you want, to hell with the consequences and the corner cutting, and the rank and file will figure it out somehow to deliver what you want.
I like programming as much as the next person here, but I'm not going to code 80 hours a week. I also like going to the gym and working out, riding my bike, playing soccer, and hanging out with friends.
Companies these days expect you to have side projects or contributions to popular open source libraries while holding a job. Oh, and did I mention asking you questions about algorithms and data structures that you most likely haven't used in years and won't be using for the foreseeable future?
Also I think knowing data structures is essential cos you are using them ALL the time you just don’t know it cos you never bothered to learn them. But they are corner stone of every program.
On the other hand, imagine how much you could get done if you worked 16 hours a day. Why even take vacation? Imagine how much time you are wasting when you are off on vacation. Oh wait, studies have shown that the number of hours sat in the chair doesn't translate directly into quality work. It might work as an occasional one-off, but it won't work every day.
>Also I think knowing data structures is essential cos you are using them ALL the time you just don’t know it cos you never bothered to learn them. But they are corner stone of every program.
Did I say that? I know what a graph, BST, tree, linked list, stack, and queue are. Do I know fancy algos off the top of my head? No. But I also know how and what to google, and how to convert that pseudo-code into actual working code.
Probably a lot less. Exercise is associated with better mental acuity and reduced risk of mental deterioration with age.
Edit: Oh, and if you instead trade sleep for learning you're shooting yourself in the foot as well.
You'd reach burnout extremely quickly. You'd literally go insane. Your body would break down rapidly. None of these are good for anyone involved.
Most data structures fall into four categories:
1. Basically a list/array/set
2. An association (map/hash table/dictionary)
3. A struct/record
4. A union/enum/choice amongst finite options
It is only in specialised algorithms and when strictly necessary that one must deviate from these (and the second two aren’t even really data structures). In such situations it can be useful to have other things known but even then I think a lot can be done with some combination of:
1. Binary tree
2. Hashing and Bitvector magic
I don’t think much is lost by getting stuck and needing to research in such cases.
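As a rough illustration, those four everyday categories map onto most languages directly; here's a Python sketch (all the names are purely illustrative):

```python
from dataclasses import dataclass
from enum import Enum

# 1. Basically a list/array/set
primes = [2, 3, 5, 7]
seen = {2, 3}

# 2. An association (map/hash table/dictionary)
ages = {"alice": 30, "bob": 25}

# 3. A struct/record
@dataclass
class Point:
    x: float
    y: float

# 4. A union/enum/choice amongst finite options
class Color(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

p = Point(1.0, 2.0)
print(p.x + p.y, ages["alice"], Color.RED.name)
```

The day-to-day job is mostly composing these four shapes; the specialised structures (trees, tries, bloom filters, ...) come out only when profiling or the problem domain demands them.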
- started programming in 1986 in AppleSoft Basic and 65C02 assembly language.
- graduated with a degree in CS
- I can bit twiddle pretty well and spent the first 12 years of my career doing C.
- I read blogs, post in technical discussions, listen to podcasts, etc.
- I have up to date AWS certifications.
But I don’t think I have a “passion” for development. I do it to pay my bills and remain competitive. I don’t do side projects but I will work late to learn a new to me technology or framework. I definitely don’t spend time learning frameworks or technologies that aren’t marketable.
While it might have seemed good back then, we now have a massively cobbled-together web ecosystem. I know it's a cheap shot, but those same passionate people built the crap we have today. I don't think it speaks very highly of that time or the merits of passion.
(many individuals had passion back then and would be strongly against today's web practices -- I do get that)
When you start peeling away the layers of the career, the actual craft is only a portion of today's work. If I could code 8 hours a day, I would. But it's hard to spend a day in the workshop with that passionate, quiet, productive focus.
I am really passionate about the information collection system I work on. I do my best to ensure data isn't leaked, everything is secure and encrypted. If you would like to offer a solution to the fact that the vast majority of people in this country now expect software to be free (save, some AAA video game titles), I am all ears. Until then, I am going to be passionate about what I do, but realistic as well as I have a family to provide for.
This is critical. I think it even goes beyond this: most developers expect software they use to be open source. People have gotten used to the software from Google/Facebook/etc that they can develop and release as open source due to their essentially infinite revenue stream.
Subscriptions? I pay for Netflix, Amazon Prime, Github, MS Office, a VPN, and probably a few others.
If you're B2B, use price per user/customer/developer? e.g. Highcharts was worth the money vs. free options.
The problem is there's no guarantee you don't also collect info or decide to play ads even if I pay for a subscription.
Good to hear. The next dev, manager, or owner on the team is not likely to be as conscientious, however.
Once information is collected it is rarely deleted, so trust in random third parties is still not wise.
I just genuinely don't understand how, if someone gives personal consent to use their provided data in a specific way, the company is still acting immorally.
Society has moved forward over the last few decades in favor of people using their body however they like as long as consent is given and no one is hurt, so why is it not the same with information? Why is personal consent to use my information not enough, to the point where we want to force companies into a payment/subscription/no targeted ads model that may not actually work for their business?
1) Do you have to repeat consent every time the ToS or another policy changes?
2) What about repeating consent every time there is a feature change?
3) Should your consent remain after controversial events involving security (FB using 2FA phone #'s for ads; data breaches like Equifax)?
4) What about internal company changes like leadership changes at the executive level?
Unlike consent involving intimacy, there is no big red line where it clearly becomes non-consensual. You can technically delete all of the data, but no one has built something that does that and is trusted. There isn't a right-to-be-forgotten law in every country, so archiving sites can still retain this information anyway.
Plus, it could always be on some flash drive that a malicious employee passed to other companies. Large companies like Facebook have pockets where people can essentially operate with impunity and without oversight for periods of time. The core issue here is that consent to one site proxies affirmative consent to share your information out to other sites.
As a hypothetical, would you give consent to Facebook knowing that they will then share all of that information with anyone who asks (every site, every 3-letter agency, every stranger from anywhere in the world who likes your bikini pictures)?
What if they say they won't share that information, but they do anyway? Tech companies move too fast for law to keep up, and once the information is duplicated to multiple parties, it's almost impossible to track down every copy with certainty.
It's like the friend that swears they will pay you back if you would just lend them a substantial amount of money. Said enthusiasm drops 95% once the transaction has completed.
I don't think most older ecosystems were any better. For example, Unix is a cobbled-together mess, Autotools is a terrible mess of a build system, etc.
Big "professional" ecosystems aren't much better either. Just look at all the jokes like "fizzbuzz enterprise edition".
We are always open to junior engineers, but lately it has been rough interviewing them. It's common to see applicants with no real world experience asking for $100/hour. About half didn't have a portfolio anywhere, but said they could get a recommendation from their bootcamp instructor.
Only one of our last ten junior applicants had a passion for software; the other nine were motivated by money and asked for ~$100/hour rates. We've stopped actively interviewing juniors because of it.
We live in a very affordable city, too.
He was regularly contacted for years afterwards by people wondering how they too could get a high paying job on Wall Street.
Since I took up gardening I’ve developed a more nuanced view on this. For perennials, there’s a way a plant “should” be but the plant is organic. It has a life of its own. Trying to force it will kill the plant. There is a tempo to bending it to your design and for some plants you need a five year plan to get there, and the plan changes several times.
For software I don't have that much patience. I change jobs more often than houses, and I expect a team to listen to reason, so it's more like a 2-year plan.
To get A you often have to give up on B or C for a while. Pick priorities that are constructive and try not to grind your teeth while the wheels of progress grind into motion.
Manager says "let's have a weekend pizza party hackathon!!!" and the old-timers are the first to recognize and decline the opportunity for unpaid overtime fixing bugs.
I mean, every company wants you to work as hard as possible and turn things around... but now I feel the ones that publish it want to put you through the wringer.
Yes. And if I were to pursue a job with such a description, I'd have only myself to blame when the insane conditions started.
Never happened. The company simply limped along in this now-familiar "zombie mode", neither succeeding nor failing, until it was gobbled up by a bigger mediocre company.
Did you have a part in writing the Office Space screenplay? That's Milton's wish.
As a dev you usually stumble upon it for the architecture / coding part. But those are just details. The Domain part is all about the business: where do you add value, what things you can outsource because they're not at the heart of your strategy, which projects should get your better people etc.
So I think it can be a good bridge for people who come from tech and want to dabble in the business part of their company.
Pros: I learned how to listen to customers, create proposals that tend to close, and upsell... all while being able to build the end product. This has helped me to close a good number of side projects after I moved back to working for larger companies.
Cons: The challenge working as an engineer in these small firms is that your resources are limited to the budget of the customer, which can be too small to do top-notch engineering work. You can generally find the balance without sacrificing too much quality, but you will never be completely satisfied with your work-product.
Another aspect is the meaning of life: if your answer is work, then either you have that 0.01% type of work which really matters (and that is hardly ever IT; more like Red Cross/MSF-style work), or, much more probably, you are just lying to yourself.
Life has so much more meaning outside of work, outside of sitting in front of computers. The older you get the more clear this should be, even if you started as a hardcore IT geek. I know I did, but boy am I glad I moved on.
I still enjoy programming, but it must be purposeful, there must be a goal and a plan. I won’t just habitually open a terminal and start programming something “because programming.”
All this can be done in a 40-hour week. No need for overtime or starting at the age of 3.
During a one year project I see a lot of people who barely grow during that time. Others grow a lot because they put in a little extra mental effort. That's the people I like to work with.
One way to attract those people: choose to build your product using a rather exotic language (e.g. Clojure or Elixir). You take a substantial risk because the language is rather unproven and you will have a hard time finding experienced developers. BUT if someone with 10+ years experience makes the effort to learn a language that does not have an immediate payoff that is a good sign you found someone who cares.
And that goes both ways. I want to work for a company that chooses language X instead of ruby-python-java-and-the-like because they care.
Mind you, this is not meant as language bashing, not at all.
Edit: obviously, those companies chose Clojure first of all because of technical reasons, but the hiring part is another perk
I've been working on my first Clojure/ClojureScript project for a couple of months now. Alone, with no previous real-world experience on any lisp. I dare say I'm more productive now than I would be with React+Redux and a Spring/ASP.NET/Rails backend (all of which I have experience with in more than one project).
I agree with the article on not hiring with the big tech companies' processes.
I wanted to add the book 'Unlocking the Clubhouse' regarding hiring by finding candidates who make programming/CS their life. There is nothing wrong, imo, with those who are passionate about computer science, but when you hire based on uber-interested-in-CS candidates, you end up with a non-diverse team (not hiring people who didn't come into CS through the same route and didn't have the 'fortune' of being introduced to CS when they were kids). I'm not doing the book justice.
Instead it becomes more useful to be comfortable with the work and maybe not care too much. Eventually people will see how those who breathe this stuff are far more comfortable with the work, and teams that include them produce far more pleasurable results.
As programming has become mainstream, far more companies look for coders, even outside the startup space, which also creates many more opportunities for people who know their stuff.
In law, pro-bono work is at least somewhat similar to open source work for programmers: It doesn’t directly make money, is considered morally good, and possibly a way to widen one’s experience. However in law, lawyers in a firm are typically required to spend a certain amount of time doing pro-bono work (during working hours) whereas the same does not happen with programming.
I'm considering going back to school for tropical paleoclimatology, largely because I never cared much about the career prospects.
Maybe there's nothing wrong with doing it if you want a good career? Hobbyist programming is substantially more rewarding, in my opinion.
Also, people who have stayed in this industry a long time generally do care.
There is also something good to be said about people who understand the importance of boring tasks, negotiation, planning, and process, and who do those boring tasks.
Pertinent to the article, most of those folks are unhireable today because they can't code with a gun to their head (à la Swordfish).
The otherwise ridiculous film Swordfish has a similar scene with higher stakes.
I'd go so far as to ask: "can this candidate learn to do the job?". In our recent job postings, I've started adding a specific paragraph after the desired qualifications stating more or less "if you don't tick all the boxes above but are motivated to learn and grow, please apply. We'll teach you what you need to do the job"
> 2. Will this candidate be motivated?
This. Even more than question 1. I've had a lot of good surprises with motivated candidates who didn't have all the expected qualifications. Some of my best hires were people whose CV didn't line up with what I was looking for but who demonstrated impressive motivation to get the job. On the other hand, I've often been disappointed with "perfect" candidates who didn't have the right mind-frame for the job. (Note: I'm not saying I'm expecting slavish devotion and "giving it 200%" every day of the week. But if your personal, intrinsic motivations don't align with the job's responsibilities, it won't work).
> 3. Will this candidate get along with coworkers?
Duh. But also, how do you actually test objectively for this without introducing bias? And how do you ensure you're not creating a monoculture that will eventually harden into navel-gazing and dogma? I'm really of two minds on this topic.
> 4. What will this candidate be in three, six, twelve months from now?
Related to the rephrasing of 1. Can the candidate learn and grow? And will we provide the right environment for them to grow?
One of my favorite quotes is from an ex-manager who hired me when I was far from ticking all the boxes on the job description. "If you hire someone who has all the necessary qualifications for the job, they'll be bored in 6 months."
> Duh. But also, how do you actually test objectively for this without introducing bias?
You can't because it's a bs metric like "culture fit". After about a dozen people, you can no longer ensure people will get along or like each other. I only have five brothers but I don't even like all of them.
People have quirks, and it's easy to find reasons to pass on candidates because of them (reminds me of Seinfeld and how the characters ended relationships because of man hands or toes). I see this a lot in non-technical hiring where marketing folk exclaim "I like the candidate, we have great rapport" only to discover the candidate was subpar.
> I'm really of two minds on this topic.
I used to be, but suffering through a bout of mental illness that rendered me an anxious mess, when I was once the life of the party, really changed how I evaluate this dimension. Be kind and assume the best, which we can all agree is what we'd want if the tables were turned.
When I look for personality, I'm looking for three things:
* How they ask questions for things they don't understand
The problems I give are directly applicable but I leave a few slightly vague. Someone really experienced could fill in the missing pieces easily, but usually this doesn't happen. I then ask them if the problem makes sense or if I missed anything. If they can't tell me that the problem isn't clear, or they need more information, then they'll have trouble working with a team trying to solve and communicate problems.
A mediocre candidate will say that the problem isn't clear with "I don't understand". A good candidate will explain what they don't understand. A great candidate will ask for clarification on the vague piece without much fuss.
* Reaction to not knowing something.
I have a hard problem that I give at the end. I clearly tell them it's not expected to be solved and that I just want to talk about how it might be approached. If they get angry, say "oh I know how, just give me 5 more minutes" then stand there blank, etc, then they're not going to fit in a team that is trying to solve hard problems together.
A good candidate will stay calm and provide any sort of input, ask any sort of questions, or show any sort of interest. Sometimes people get nervous here, so I keep this very very lighthearted.
* Are they full of themselves/assholes
This is pretty evident within the first few minutes, and really rare (and almost always accompanied with a stream of buzzwords after every sentence).
A good candidate won't be an asshole.
> Are they full of themselves/assholes / This is pretty evident within the first few minutes
No it's not. People can act awkward during interviews; it's a stressful situation. Some people need to peacock to feel self-confident - I may not like it, but I'm not going to fuck with their future because of an emotional reaction I had to how they present themselves. I've hired plenty of "assholes" who just needed the benefit of the doubt and a chance to grow.
We're not psychologists or therapists, so we should stop trying to "figure people out" and decipher complex human interactions & behavior by bucketing people into checkboxes for whether or not someone is good.
Be kind, give the benefit of the doubt. This is someone's career on the line, not a first date. Treat it with the respect and gravity you'd like someone to give you.
The problem is that we're too quick to be offended and we look for excuses. I'm in the middle of hiring a business analyst, and my COO didn't want to hire him because he didn't send her a thank you note after an interview (but he did send one to me). She thought he was an asshole. See the problem with that train of thinking?
Everyone is an asshole to someone.
Life is stressful, work is stressful. My manager and his manager are reasonable people but after my second assignment, they said I missed a requirement and that it was a “big f’ up”. Guess what? I found that refreshing after dealing with managers that beat around the bush and you had to constantly try to figure out what they were thinking - or even worse, they didn’t tell you anything until your review.
I’ve had to whiteboard architecture in front of CxOs, be interviewed by potential investors, etc. It’s when things get stressful that you really need people who can keep their wits about them.
It's free to say things, doesn't make it true. Too often we assume because someone is in a position of power - or is wealthy - that whatever they say must be true. It's not. Sure, we have to nod our heads and pretend it is to keep the job but that still doesn't change the reality.
Shit rolls down hill with exponential momentum. I've seen many instances where a CEO says something innocuous like "I'm a bit disappointed in X" but by the time it gets to someone that can fix it the management in between transformed it from a simple comment into a condemnation. People like to exaggerate things to make something seem more important or impactful than it really is.
There's also a big difference between an interview and dealing with work every day. I know plenty of people who shine under the pressure of interviews but not under the job itself (and vice versa). You cannot determine these things about a person from an interview. Period. Performance in an interview has very little correlation to job performance (if any).
> you really need people who can keep their wits about them.
I need a diverse group of people that I can collaborate with to get things done. If they can't keep their wits about them, it's my job to deal with that issue, get them back on track, and protect them from organizational crap. Might as well expect every girl or guy you date to be a model with a PhD.
Well, when the requirement was in big bold writing as one of the key features on a PowerPoint slide...
> I need a diverse group of people that I can collaborate with to get things done. If they can't keep their wits about them, it's my job to deal with that issue and get them back on track.
That’s not a luxury you have as you move up the ladder - even if moving up the ladder is just being a real senior developer/architect (by knowledge if not by title).
My first job at 22 was as a computer operator, trying to get my foot in the door; within 6 months I had built a custom, networked data-entry system used to support a completely new department and a new line of business. Working at small companies, you don’t get the luxury of hiding within the bureaucracy. The one time that I did work at a large company, it was suffocating.
Amazing, I'm stealing this, thanks.
Also, by the by, great to read someone writing openly and candidly about mental illness.
I'd disagree. An indirect metric is "Can I, as the technical lead, communicate effectively with the candidate?" If the leadership is stable, then a "yes" to this question likely implies a "yes" to the "getting along" question. If all "subordinates" can communicate effectively with the "hub" (lead), it's not unreasonable to expect that they'd be able to get along with each other using the same mode of communication they use with the lead. That's what my intuition tells me. May be wrong though.
Presumably, the technical lead takes part in the interview process.
1) Can this person DO things? This doesn't even have to be the kind of things we need done. A diversity of experience or interests is mostly what I look for.
2) Can this person LEARN things? Again, some diversity of experience goes a long way.
3) Is there enough interest in learning to DO what we need them to, and enough education/experience to get started reasonably well.
I start by looking for verbs on the resume. It's amazing how many people say "I was on the team that XXX for system YYY, which was a type of ZZZ with technologies ABCD" and never say what they actually did. I say, "I understand there's this push for being a team player, but I don't care about that; I want to know what you did." Sometimes this shifts them into useful discussion, while other times it reveals that they didn't really do much, if anything. One of my co-interviewers once started crossing out whole lines of a guy's resume right there in front of him, which was a little cruel, but it made the point to the candidate.
Again, a diversity of things done and an interest in learning to do what we need is almost everything. I leave the personality evaluation to other people in the process but will vote "no" if something about them really bothers me.
This doesn't just apply to technical jobs. There are things managers need to DO. Letting your people handle the tech does not absolve someone from adding value to the organization through what they do.
Another relevant point here is that women are less likely than men to apply for positions with a predefined list of requirements, even if they are only missing one of them. So why are we shooting ourselves in the foot with all of these requirements?
Dev team cohesion and harmony is hugely important for us. We'll pass on a highly skilled engineer any day if we believe they'll sow division and conflict on our dev team.
In the sense of hard metrics, you can't really test for a toxic personality, but here's what we do:
- We have engineering candidates come on-site for a couple hours before formal interviews and chat informally with several of our engineers. Our engineers show them what they're working on, what technologies we're using, etc. You can learn a fair bit about someone just based upon less formal interactions with a variety of different people. Are they showing interest in what we're doing? Are they eager to tell us about something similar they've done? Are they a respectful attentive listener while a junior is showing the candidate something? Can they communicate and express themselves easily? Obviously candidates are on their best behavior when they come in for a visit, but you can still pick up on certain behaviors that might indicate a problem.
- During formal interviews, we ask candidates the following questions: Tell me about a team project where you had to work with someone difficult. How did you work through that? Could you tell me about a time that you disagreed with a rule or approach? Tell me about a time you made a mistake that you learned from, and what you’d do differently the next time? The answers to these questions can be very telling. We've had candidates who have been unable to think of a mistake they've made, as well as candidates who thought of a mistake they made but then spent the next 5 minutes explaining how it really wasn't their fault. We've seen some people describe some truly awful approaches to dealing with difficult teammates.
I'm so tired of investing in my work and my workplace, only to get obscure management "advice" that I must fix my character flaws (e.g. being too soft / too emotional) to have any chance of growth, transforming into something they want.
Part of that is on me: picking and choosing battles is a real problem for me. I care a lot about maintaining high standards for code quality with automated linting and formatting, for instance.
I like having soft skills and as I tell anyone I'm interested in working with- I want to work with nice, intelligent people; in that order.
Totally agree. I was involved in the hiring process (new for me) in the last two hires and this was my main motivation. I primarily looked for passion in the field. I didn't care if they explicitly knew what we were programming in, or what frameworks we were using, or w/e. I wanted to know if they were interested in the area, if they wanted to learn (if needed), and if the workload areas (backend, frontend, etc) were areas they wanted to work in.
I wanted to hire a bright person, passionate in what they do, and interested in the problems we'd give them. I felt like if those three were true it didn't matter what they knew.. within reason, of course.
Note: I'm not saying what I did is right or wrong, just explaining what I did.
Note: By interested, I mean they want to work in X language with Y workload. Not explicitly that they were personally invested/interested in the product domain as a company. I wouldn't expect that of anyone.. rarely, imo, do companies inherently do such interesting work that people should be personally invested, i.e., curing cancer or feeding the homeless. A job can be a means to a paycheck. I just want it to be enjoyable for all involved, as much as possible at least.
Give them a simplified version of a class that represents what you do everyday with failing unit tests. Have them pair program with another developer to make the unit tests pass.
Then once they do that...
Give them another set of unit tests for the same set of problems and add more requirements. They have to make the second set pass without breaking the first set.
It’s realistic, you can see how they think through a problem, and you can see how well they work with others.
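A minimal sketch of what the two-round exercise above might look like, assuming Python (the `ShoppingCart` class and its requirements are hypothetical stand-ins, not from the thread):

```python
class ShoppingCart:
    """Simplified class standing in for everyday work."""

    def __init__(self):
        self.items = {}

    def add(self, sku, qty=1):
        # Accumulate quantity per SKU.
        self.items[sku] = self.items.get(sku, 0) + qty

    def total_quantity(self):
        return sum(self.items.values())

    # Round 2 requirement, added later: removal, never going below zero.
    def remove(self, sku, qty=1):
        self.items[sku] = max(0, self.items.get(sku, 0) - qty)

# Round 1 tests: the candidate implements add/total_quantity to make these pass.
cart = ShoppingCart()
cart.add("apple")
cart.add("apple", 2)
assert cart.total_quantity() == 3

# Round 2 tests: new requirements must pass without breaking round 1.
cart.remove("apple")
assert cart.total_quantity() == 2
cart.remove("apple", 10)  # removing more than exists clamps at zero
assert cart.total_quantity() == 0
```

The candidate would start from a version where the method bodies are stubs (so the assertions fail), and the pairing session is spent filling them in; round 2 then probes whether they regress earlier behavior while layering on new requirements.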