Hacker News
The code I’m still ashamed of (2016) (freecodecamp.org)
306 points by zdw 13 days ago | 172 comments





I think the most important takeaway from this essay is that questionably ethical decisions won't arrive with a big flashing red siren; there won't be any forewarning. Your company won't have a long internal debate on the ethics of it.

It will be passed off as routine, just a normal part of the process. It'll be "just the way things are done". It'll still be up to you to understand and appreciate the ethical consequences of your choices.

In this case he was "lucky" enough to become aware of it, thanks to heavy news coverage followed by an immediate family member being directly impacted. It will almost never be that blatant.

That's the lesson to take away from this. As part of your profession, you need to understand the consequences of the code you write and the choices you make - and advocate for and protect the people who won't be in the room, but who will be on the receiving end of the consequences.

Your day will be a subtle Milgram experiment, and you need to be the one to say something.


This is one of the reasons I was thankful to attend a university that provided a broad spectrum of courses and curricula, among them a healthy dose of ethics courses, including ethical computing.

I was really thankful that I had the same opportunity, and I personally loved my CS ethics course - unfortunately, it seemed like the value and necessity of such a course was lost on a fair number of classmates, who vigorously argued that the CS ethics course shouldn't be required.

My university had one, but it seemed more like a cover-your-ass move by the university than something that'd actually get people to think about these issues if/when they occur in their career.

I think the awkward thing about ethics courses and training is that they only set the initial state of the pipeline; they get "overridden" by incentives and selection whenever possible. The trade seems to be a slower start-up time, as the inculturation is gradually undone in favor of unethical standards, in exchange for ass covering. Even with, say, yearly ethics course requirements, the actual incentives would dominate in practice.

You don't solve persistent corruption with ethics courses; you do it by removing conflicts of interest, changing incentives, and enforcement. Those things may not be easy to do, free of cost, or even within your capability to change.

Ethics courses could still be useful, of course, but the institutions need to care about ethics, and the incentives need to be changeable accordingly, for them to be more than just a fig leaf.


Some occupations - usually professions such as medicine and engineering - seem to have a relatively strong ethical tradition. Perhaps it comes from the self-regulatory aspect of being a member of a profession. These ethics courses in software education are an attempt to move towards that goal. I think ultimately you can’t bootstrap ethics into a population unless that population feels like they owe something to one another. The computing field is just too broad for that.

In my case, the ethics course we had to take was the same one engineers take.

My university is a mixed bag. We had an ethics discussion in a 101 course that seemed beneficial. However, the other "ethics requirements" seem like a way to shove political propaganda down the throats of students who would have otherwise avoided it by their choice of major.

Requiring CS, CoE, and EE majors to take courses on the humanities - without making any statement about the merits of those courses - is the sort of thing that the university decides based on its political and financial goals, not based on its desire to create stronger CS professionals. A course with a title like "CS Ethics" sounds much more relevant and beneficial than "African American Womyn's Studies", which is the sort of thing I would have to take to satisfy my university's ethics requirement if I hadn't satisfied it with AP testing.


I was shocked at the number of classmates I had in mine who had no prior understanding of the ethics involved in software. It was clearest when it came to presentations and the like; people would take clearly grey areas and try to present them as black and white. So something like DRM wasn't "it's complicated, and companies should weigh the pros and cons carefully, as at best it will inconvenience their users while only delaying a release, and at worst it will lead to the pirates having the best experience and drive their own customers to third-party patches to remove it"; it was "it's a necessary evil that all companies should use".

My CS ethics course consisted almost entirely of things I already knew, or exploring scenarios where there was an obvious correct choice. At the time, I thought anyone who didn't already have a grasp of ethics would ignore it, and anyone who did would have a better moral compass than the course could provide.

Today, I'm not so sure. People do tend to completely ignore ethics when there's social pressure to do so, but maybe the author of this essay could have benefited from it.


Paper investigating whether ethicists are more ethical:

http://www.faculty.ucr.edu/~eschwitz/SchwitzPapers/BehEth-14...


Could you maybe state the point you are trying to make and the position you personally hold? Do you have to be "more ethical" to argue that ethics are necessary and a good thing to teach?

I feel that if a career teaching ethics doesn't make someone more ethical, then a class is unlikely to do so.

Personally - I haven't taken any classes. I once wrote a template mailer that financial institutions used to mass mail customers, which I felt icky about, but I still did it to the best of my ability.

Halfway through my career, I took a pay cut to leave finance and work in cancer research, mostly for the interesting work, but I do feel better about it as it's more positive-sum for the world.

I would not begrudge someone doing something dodgy but legal to feed their families. Maybe they should feel a bit bad and look for other jobs, but I can understand.


The ACM has an "Ask an Ethicist" blog analyzing ethical case studies. They're fictitious but still illustrative of the gray ethical areas in real product development. Unfortunately, it hasn't been updated in a long time. I would love to see more case studies.

https://www.acm.org/code-of-ethics/case-studies


But the example in the article is clearly unethical based on the client's requirements alone - it doesn't take an ethicist to see that.

In fact, a world where companies are so very ethical it takes a professional ethicist to spot unethical practices sounds nearly like a utopia.


https://news.ycombinator.com/item?id=23965787

> you need to be the one to say something

This implies a moral duty to learn rhetoric.


I'm ok with that.

Sure, but it is a moral duty with so few resources to learn how to meet it.

I was hired by a company that made client on-boarding software for investment banks. It did all the Dodd-Frank, EMIR and KYC stuff. It was built around SQL Server and table and column names were quite long e.g. CustomerReferenceForGovernmentCompliance. We got a client from Massachusetts, they used Oracle 8 and wanted us to use Oracle 8 in our app. Oracle 8 has a 30 character limit for column and table names, so we had a problem.

I had Oracle experience and was hired to develop middleware to translate long DB object names to short ones in Oracle (no refactoring of the app was allowed), plus I had to convert a shedload of T-SQL to PL/SQL. I told my employer this would take months.
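The kind of translation layer involved can be sketched like this - a hypothetical name-shortening map, not the actual middleware, assuming truncation plus a short hash to keep the shortened names unique:

```python
import hashlib

ORACLE_MAX = 30  # Oracle 8's identifier length limit


def shorten(name: str, max_len: int = ORACLE_MAX) -> str:
    """Map a long SQL Server identifier to an Oracle-safe one.

    Names that already fit pass through unchanged; longer ones are
    truncated and suffixed with a short hash so two long names that
    share a prefix don't collide after truncation.
    """
    if len(name) <= max_len:
        return name
    digest = hashlib.sha1(name.encode()).hexdigest()[:8].upper()
    return name[: max_len - 9] + "_" + digest


# A two-way mapping lets the middleware rewrite queries in both directions.
long_to_short = {
    col: shorten(col)
    for col in [
        "CustomerReferenceForGovernmentCompliance",
        "CustomerReferenceForGovernmentComplianceArchive",
    ]
}
```

The awkward part in practice isn't the mapping itself but rewriting every query, view, and stored procedure to use it consistently - hence the months-long estimate.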

Two weeks into the job I was told, "You're off to Boston to demonstrate your work to the client." "Err, it's not ready," I said. "Don't worry, it's just a demo to get our second payment from the client, it will be okay," my CTO said.

So on the flight from Dublin to Boston, my CTO leans over and says "I'm not asking you to lie, but..."

Long story short, they rigged the application to make it look like it was connecting to Oracle, but was actually saving data in SQL Server data files. I had to convince the client it was working fine. A low level techie kept giving me the stink-eye as he could clearly see this was a scam.

On my return I resigned and called the security department of the bank concerned. My wife is a US citizen so I really don't want to get in to trouble with the US authorities. The bank took the product, but they insisted on a massive discount to not take legal action.

Okay, it wasn't my dodgy code, but I felt the experience was similar.


For once it was the bank that was going to be ripped off and you had to tell the truth ...

I am so glad to see this.

Basically, I am in despair, at what I consider to be a complete ethical and moral collapse of the entire software development industry.

It's a long story, but I spent about 27 years in a "silo," where not all was The Sound Of Music, but where I was never presented with ethical dilemmas.

Then, I left that company, and returned to the world that I had come from before I joined it, as an idealistic, optimistic, energetic young engineer.

Oh.

My.

God.

It is now exactly like the finance industry has been for decades. Profound moral collapse. That means that it will never get better. There's just too much money sloshing around.

It does annoy me, when folks treat me like I'm either an idiot or a chump for insisting on living a life of Personal Integrity. I don't go around, trying to get other people to change, but I feel as if I'm punished, whenever I mention my own point of view.

I won't write dark patterns, and I won't write code that runs against my Personal Integrity. I'm quite grateful to have the luxury of choice.


It’s not like the software industry was a bastion of societal morality in the first place. It was created chiefly to win wars and delabourize industry.

Kraft 1977 is my favourite Marxist analysis of the software industry.

https://www.amazon.com/Programmers-Managers-Routinization-Pr...

That being said there is an ethics of the software industry. It does not necessarily encompass an ethics of marketing but there is an ethics of privacy, security, authentication, authorization, identity, and decision making.

It would seem that extending the domain of computer science ethics into the ethics of its subject domains would be an overreach. Does it make sense for computer science to apply Kraft's idea that it should overpower all other industries, and thereby decide the ethics of pharmaceutical marketing, simply because it controls the means of production?

As I said, it is a Marxist analysis that holds computer scientists should properly have this power.

However it is not wrong that if you are working on a project and don’t agree with its ethics that you should speak up and work to improve it. Everyone has that responsibility.

It isn’t about computer science or software engineering. It’s about professionalism.


Well, I don't really get that analytical about it.

The thing about me is that I never did the "boiling frog" thing.

I went into a company with a fairly insular culture in 1990, and came out in 2017. It was sort of like being transported in a time machine. I learned the tech, but not the prevailing culture.

In 1990, it was [finally] cool to be a programmer, but it was just beginning to be "lucrative." Bill Gates was sort of the only real software mogul most of us knew. Larry Ellison, and a couple of others were just beginning to flex their muscle, but they hadn't really hit their stride, yet.

It is now crazy lucrative. A lot of people make a lot of money, and they also act as cultural bellwethers for their employees and fans. Their values become the values of the workforce. Fairly standard human nature. Nothing surprising about it. Good old-fashioned "Monkey see; monkey do." As I said, I live in New York, and have been watching this happen with the finance industry since the day I moved up here.

It was just a shock to encounter the same ethos in the software industry. In 1990, most of the folks that I hung out with were tech enthusiasts. We enjoyed the tech, and were happy to make a decent living off it, but most of the people I hung with considered it a vocation of love; not avarice.

We were dorks, but happy dorks. Working in a team was fun, and we didn't feel the need to have the crazy competitiveness that we have today (but it was beginning, back then; I just didn't encounter it in my circle).


I also feel Microserfs by Douglas Coupland is a quaint anachronism in 2020. (https://www.amazon.com/Microserfs-Douglas-Coupland/dp/006162...)

That was an amazing moment in time and I miss it. I barely touched it but I was too young to really hold onto it myself.


I've never had such a serious ethical issue in front of me. I have code I'm ashamed of in a "Oh god, we could have done so much better, what a fool I was at 22" kind of way, but so far my ethical conscience is largely clean. I can think of really only one ethical close call in my career, and still not as serious as the linked article:

"Hey developers, customer account manager here, just sold an upgrade to the customer on product X, they need capability Y"

"It already has that in the version they have, it's probably just misconfigured"

"Oh... well I've sold it now, can we just increase the version number and pass it over to them anyway?"

We laughed that guy out of the room, thankfully.


I wonder how big of a computer it would take to calculate all of the sweat and tears spent by software developers so that some salesperson or manager could save face.

I once designed and built an entirely new video integrated article format because a salesperson sold something based on a concept we had no intention of building.

My boss's response was basically, "Welp, we just got $200,000 to build this. We should probably figure out what the fuck it is."


I think I've got you beat.

My first job out of university, the sales guys would desperately throw every feature they could dream up into the contract to try to close a deal, before the customer had even committed to purchase. Then we'd be on the hook to build it based on their whiteboard fever dreams, only to find out after it shipped that the customer needed just 2% of the whole thing.


I used to work on a team that made flash ads at a giant company. The salespeople didn't say no to anything.

These companies were paying $500,000 for 12 hours of exclusive time on our home page, so they got whatever they asked for. If they wanted a car to drive out of the ad and smash into the text below, and have each character morph into 5,000 butterflies that then reformed into the car, they got a car driving out of the ad and smashing into the text below, and having each character morph into 5,000 butterflies that then reformed into the car. And they got it within a week.

I swear, the wizardry those flash devs pulled out of their asses was some of the best hacking I've seen in my life.


I miss ActionScript, and Flash in general. A hacker's delight; I've yet to see such a great mix of coding and visual editing/timeline systems.

Hi there, do we work at the same place?

Sincerely,

help.


I once worked on a system with small modular pieces of hardware. But the hardware was expensive, so there was a plan that you could buy less hardware but pay more for software that would cover for the hardware. So we literally wrote more software so that we could get paid less.

Sounds like somebody's manager will be calling that customer back, voiding an invoice, and putting them in touch with the correct staff to help them with the configuration.

And somebody is leaving the sales team, either shunted into a make-work job until they quit, or straight-up fired if there's less politics involved.


IIRC he hit the road soon after, yep. Turned up later at another company I contracted for in a related industry, as one of the upper management. I didn't stick around there long!

Most of us are ethically in the right in that regard.

Like: change the API and forget to update the version number.

:)


Never confuse Sales with Implementation. :P

> We laughed that guy out of the room, thankfully.

If the customer can be made happy and the company can make money by just making an increase to a version number with no additional changes, why not just do it?


You'll make way more money in the long run if you own up to mistakes like this.

It builds trust in your company, so that when they think about renewing their contract they remember you have integrity and aren't trying to rip them off. Instead, you believe in your product and are actually trying to sell it on its merits.

It also becomes a funny story they tell their friends / others in the industry, which could lead to further business.

There are no downsides to owning up (I'd even claim the upsides above), but there are some possibly gigantic downsides to not owning up.


This is a reasonable consequentialist argument, but I think you could also argue that it's unethical and wrong to lie/mislead someone this way even if the consequences weren't potentially worse for you in the long run - even if by lying you might come out ahead.

It's fraud. Accepting money for delivering nothing yet pretending that you give them something is fraud.

Even if it was perfectly (or at least technically) legal, I'd still argue that it would be wrong to do it.

Because selling someone something they already own as a new feature is unethical at best and illegal at worst?

Don't you think the client would be happier with a refund and an explanation?


Because it's morally wrong to charge someone for a product they already have, and then to cover up what you've done.

It's likely also criminal, as fraud is illegal, but even if not the deceit should be something that makes your conscience bristle.


Many products with embedded systems ship with the same firmware regardless of whether it's the basic/mid/pro model, with features enabled/disabled by a flag set in the EEPROM. So to charge a customer $XX more, the company just has to flip a bit at production time. Is that ethically OK, do you think? I'm not sure I have an opinion either way, so just curious, but it's not a million miles from the scenario above.
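The flag-in-EEPROM scheme being described can be sketched like this - a hypothetical feature bitmask, not any particular vendor's firmware:

```python
# Hypothetical feature bits packed into one EEPROM byte; the firmware
# image is identical across models and only this byte differs,
# written at production time.
FEATURE_BASIC = 0x01
FEATURE_MID = 0x02
FEATURE_PRO = 0x04


def has_feature(eeprom_byte: int, feature_bit: int) -> bool:
    """Check whether a feature was enabled when the unit was flashed."""
    return bool(eeprom_byte & feature_bit)


def enable(eeprom_byte: int, feature_bit: int) -> int:
    """The paid 'upgrade' is literally a single bit flip."""
    return eeprom_byte | feature_bit


mid_tier = FEATURE_BASIC | FEATURE_MID  # shipped configuration
pro_tier = enable(mid_tier, FEATURE_PRO)  # what the $XX upgrade flips
```

The point of the question: the marginal cost of the upgrade is zero, since the pro code is already sitting on the device.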

It's certainly not the same. In one case, the client had already paid for the feature but wasn't aware of it, and a salesperson told them they needed to upgrade to unlock it (a lie). In the other case, they simply didn't pay for the feature.

Is it ethically wrong to ship chicken bits like that? You can make arguments either way (e.g. a device might not have been QA'd or tested for the disabled features, and the feature itself might be faulty, e.g. that's extra support labor they didn't pay for). But that's a different question to whether the salesperson should have sold them a feature they already had access to.


It’s totally fine. They’re not paying for the bits burned into the EEPROM, they’re paying for the effort it took to develop the capabilities.

Those features cost money to develop, but if someone is paying you for a feature and isn't using it, it's probably unethical to charge them.

Because the customer already paid for those features and you're accepting that you'll dick them over for a few pennies.

They should get a refund if they already paid for the feature. If you want to make money off of the situation then say you'll send a support technician or someone to look over their configuration and set it up to do what they want. That's the only legitimate and fair thing to do.

As others have said, it's effectively fraud at that point.

The customer has already paid for that feature. Best case we rip them off and take their money for nothing, which is scummy. Worst case they figure it out later and sue.

Either way it's the wrong thing to do.


Did you actually read what the person was asking them to do? It could be illegal in the worst case.

Yes, I did. In the best case it's a happy customer and more money for the company. I'm a glass-half-full guy.

I'm gonna rob a bank. In the best case, I'm an instant millionaire and I never get caught. I'm a glass-half-full guy.
jfk13 12 days ago [flagged]

Thanks for letting us know we should avoid becoming your customers.

I work in the public sector; part of what we do is automate tasks. No one gets fired by this directly, but over time positions disappear: when people retire, no one gets rehired, and stuff like that.

Sometimes it makes sense; some tasks shouldn't be done manually. But sometimes you automate something that was better for both the employees and the citizens/patients the inefficient way, because it's cheaper. After a long time doing this, the thing that gets me the most is how I used to buy all the corporate bullshit about how things like citizen/patient comfort, cooperation, and employee happiness mattered more than money, because they never did when it came down to it.

Still there are perks to the job, you get to genuinely improve people’s lives, sometimes even build things that save lives, but the automation thing, meh.


This is something I've struggled with in the past too, my job is basically to put people out of a job by replacing them with software, and those people might then suffer because of it.

I've come to the conclusion that it's not a moral failing on my behalf that people might suffer as they're replaced by code, but a failing of society. There's this belief that it's a sin to work less than full time from the cradle to the grave (minus a few years either side), that if you're not working then you're a sloth, you're lazy and you get what you deserve. More and more work is being automated, but there's still this pervasive belief that everyone needs to be gainfully employed. Instead, as manual jobs become automated, we as a society should be working fewer hours. The 40-hour week should become 30 hours, 20 hours.

What's the point in making somebody dig holes and fill them back in (or the clerical equivalent), just to prove that they're "working" and useful? Menial unskilled labour for the sake of it is just wrong.

My goal in life is to make myself redundant, to automate my own job away. If the programmers are out of work, then maybe we've reached utopia.


> This is something I've struggled with in the past too, my job is basically to put people out of a job by replacing them with software, and those people might then suffer because of it.

I spent five years working for a small company that was a scrappy startup, especially in how they thought about technology--everything was duct taped together and barely worked--but was cheap!

At a high level, my job was to replace people with software. What I am proud of is that we didn't let anyone go; we just found something more valuable for them to do. When I started, they had 4-5 people whose job was to spend 40 hours a week copying and pasting tracking numbers from one system to another. This was automated, and they were able to move to roles that actually helped customers.

I think my (apologies for taking a while to get there) point is that you can look at digitization and automation as a cost cutting exercise to improve the bottom line or instead view it as a way to invest in and improve the customer experience. Information Technology as margin-defense is a short-term benefit, while IT as a value-driver is a long-term one.

"offensive operations, often times, is the surest, if not the only (in some cases) means of defence" --George Washington


> you can look at digitization and automation as a cost cutting exercise to improve the bottom line or instead view it as a way to invest in and improve the customer experience

Also as a way of 'getting rid of the boring bits' of a task allowing system users to spend more time on tasks that add value to the business and / or the customers.


This is the consequence of production being oriented towards people who are not the producers. In previous economic systems, what you produced was yours and the more you produced in a given interval, the fewer hours you had to put in to reach subsistence. In our mode of production, incentives are geared towards producing more and more with less and less with no natural endpoint.

> my job is basically to put people out of a job by replacing them with software, and those people might then suffer because of it.

If you're gonna dig a hole, you can have 1 person use a digger, or you can have 100 people use spoons.


Won't someone think of the spoon operators?!

> I've come to the conclusion that it's not a moral failing on my behalf that people might suffer as they were replaced by code, but a failing of society.

I mean no disrespect, and I’ve felt/feel this too, but I’m scared it falls in under the old punk saying of “the guilty don’t feel guilty, they learn not to”.


I worked at a company last year (specifically their RPA sector) where one of the projects we got was to create a "robot" to automate certain tasks for a client.

Later, after we delivered it, we learned that project alone was the reason the client cut 700 low-level positions. A single "robot" could do in an afternoon what 700 people did in a week. (It was/is a pretty large company.)

The words from my manager still echo in my mind: "If we think of the "ethical" aspect of it, we wouldn't have our own jobs."


The problem is not that jobs get automated away. Efficiency in and of itself is a good thing.

The problem is that the gained efficiency is often not used to improve the lives of all (former) participants: There is little responsibility towards employees and customers. Businesses are not seen as communities, neither by employers nor by employees.

Leaders, owners, investors, employers and other powerful actors profit disproportionately, because their decisions are not tied to a holistic responsibility but only to financial metrics (which are also directed by them; a whole other problem).

There are also actors with higher (or sufficient?) ethical standards that will invest efficiency gains like you describe to educate and train their employees or at least give them the financial means. This inspires loyalty and trust.

I'm longing to hear more about such cases.


I don't have a moral dilemma about my work cutting those kinds of jobs; it's busywork, their existence doesn't improve humanity. We're better off with something like UBI than paying people to do boring stuff that they don't really need to do.

I'm not saying it won't be the right move eventually, but that was 700 people who were getting paid for busywork who now aren't. Today I'd say there's still a dilemma.

You can stop any investment or operational improvement with that mindset though.

Should we give our gardeners a lawnmower? Nah that would result in redundancies, leave them with their nail scissors.


If you apply the reversal test, the question becomes, in a world where these busywork jobs didn't exist, should we create them?

Or, taking a step back, if you ask "how many busywork jobs should there be?", it would be surprising if the answer were "exactly the number we have right now". So it seems you should either want to eliminate busywork jobs or create more of them.

To me the "dilemma" smells like status quo bias.

I will say though, status quo bias is not all bad, there is some value to stability, but I'm not convinced it is the role of businesses to provide stability, that seems like a role for government.


> that seems like a role for government

But then you'll have to deal with all the people who complain about government interference in the free market.


It wouldn't be surprising if the dynamics of human society pushed the number to the current number as the optimally stable one for society. Too few busywork jobs and you have large crowds of protesters; too many busywork jobs and the sectors of the economy that are growing in response to new opportunities are starved of labor. I'm not really stating a belief, just pointing out that in complex homeostatic systems there are often dynamics pushing certain numbers to where they indeed are. That's certainly true of body temperature and blood pH, and there's no reason in principle it couldn't be true of certain things in economics too.

Society and people's jobs / skills can be shifted rather more easily than bodily systems, though. Just look at the difference in the average day's tasks from 1820 to today.

Provide the right kinds of support, retraining, or yes, UBI, and we're no longer talking about people going hungry when they lose a menial RSI job. We're talking about people whose struggle to make ends meet can change into doing something that feels like a step up in the world.

Having the means to choose your employment is a HUGE thing for a lot of people. Been there, and I can feel the huge weight off my shoulders knowing that if for some reason my current job goes away, that I am certain I can find something comparable.


Think about how many secretaries and clerks that Microsoft Office put out of work.

On the other hand, isn’t it better that everyone can type their own reports and check spelling and grammar themselves via code?


It is, and that's why it's complicated. I'm not saying it's the wrong move, but there's still a dilemma there.

>> isn’t it better that everyone can type their own reports and check spelling and grammar themselves via code?

> It is...

Is it really? Seems like an awful lot of stuff just doesn't get checked any longer. The "checking" done by MS Word et al is no substitute for the eye of a skilled human.


> We're better off with something like UBI

I strongly agree... but we don’t have UBI. So for me it’s still a moral dilemma. That person is still out of a job and might be out of a job for a very long time if the economy is weak. Yes, there are answers to this problem like UBI, but as someone living in the US I can’t honestly say I can see it being implemented here any time soon. So my work has the potential to devastate someone else’s life.

(and yes, I know, I know, if I quit someone else will take my job and it’ll all happen anyway. Doesn’t mean it isn’t still a moral dilemma)


But UBI won't just happen by itself. It needs a few things:

- Work actually getting done (through automation) - i.e. the "supply side" needs to be there

- People that demand it/ see it as necessary solution to _some_ problems (it won't just appear if there's no problem to be solved)

For me, it's simple: does it move society in the right direction? Yes, it creates some transient problems - but progress always does.


That’s a very easy thing to say when you’re not the one on the receiving end of these “transient problems”, though.

Would you tell a homeless person to their face that their poverty is a shame but it’s the price we need to pay for progress? And that you don’t know with any certainty when positive change will happen?


I have another argument that might be more convincing for you: automation makes things cheaper, so by definition it creates wealth for society. Now, you can argue that said wealth is unfairly distributed - and indeed wealth distribution itself is a thorny subject. But it is also a completely orthogonal one! I don't think we should stop creating wealth until we find a "satisfactory" way to distribute it. That feels like it would be a very bad idea to me.

There's a difference between being needlessly cruel and believing something is necessary and good, despite it creating some problems for some people for a while.

> We're better off with something like UBI

Okay - but are we replacing them with UBI?

It feels analogous to ripping a person off of life support while saying "some day, you'll get an organ transplant. I prefer that to keeping you alive mechanically, strapped to a bed."


Automation makes UBI more politically and economically feasible. I.e. no-one could reasonably call for UBI when 95% of the population were farmers.

I wonder sometimes, in particular with smaller construction jobs. Work that can be done either by an overweight guy in a Bobcat, or a shovel crew of five to ten, in about the same time.

I wonder if the Bobcat is really that much cheaper. And the construction workers in photos from a hundred years ago always look much healthier & happier than the guy in the Bobcat.


If the client could cut 700 low level positions with automation then they were probably treating human beings like robots.

Yeah, imagine a world where each employee had a team of highly trained specialists ensuring they had absolutely everything they need to do their job, and if they became sick at 2am on a Saturday, multiple people get woken up to take care of them. When the CEO parades investors around on the assembly floor, those employees are shined up, their capabilities are demonstrated, and they are showcased as the pinnacle of why their company is the best.

Low level employees would be lucky to be treated like robots.


If low level people were as productive as 700 other low level people they wouldn't be low level people.

No, they'd be superhuman.

Society was rich enough to support those 700 people, and now it's richer. Should be good news for them. It's a huge challenge getting to that point as a society, but there are also very rapidly evolving attitudes towards the problem (much more readiness to acknowledge that it is a problem, to start with.) I feel optimistic that this could be solved in the next hundred years.

Curious about how you feel about the number of people needed to farm 100 acres of, say, corn, today vs 200 years ago. Was all the automation applied to that problem unethical?

When you consider what it's doing to the land and environment, maybe so.

This story hits very close to home for me.

Back in 2001 I worked for a 'Medical Communications' company, building marketing websites for various drug companies. I was young, it was my first coding job, and I desperately needed the money.

I even ended up building an internal marketing website for use by sales reps pushing an antidepressant. It was an awful PoS, technically, but it had a basic CMS system and I was flown to the US (from the UK) to show the very friendly team at the drug company how to use it. I was fairly blasé about the ethics of it - they paid a lot of money after all. The drug in question was later removed from the market after it was found to increase suicide risk.

14 years later, my mother had maxed out on the same kind of antidepressant. She kept relapsing and the doctors kept upping the dose until they couldn't prescribe her any more. She took her own life shortly after.

Not so long ago, I interviewed a candidate for a data engineer position at the 'FinTech' I was lead engineer at. He turned us down because he didn't like the CTO's attitude towards the credit data we were collecting about our customers. Until that point I hadn't really considered the ethics of what we were doing, again. I started looking for another job the same day.


With this issue and most ethical issues I've encountered, it's important to realize that the work is still going to get done by someone. Don't fool yourself into thinking you're going to prevent Pfizer from marketing their drug. But that's not why you do the right thing.

You do the right thing because you get to choose whether the blood is on your hands or not. You get to sleep and go to your grave with different amounts of guilt and peace. You get to tell your kids these stories and help them make better decisions. Through all of that, you might make the cost of deceptive websites a little bit higher and the message a little bit less effective. Or you might not, but that's not the point.

In short, don't make your ethical choices on the impact it will have on the world. Make them for the impact it will have on you. (The alternative will mean you'll struggle a lot more making the right decision, knowing that Pfizer will just get someone else to make the site and maybe you should still take the work and then donate some of the money to a good cause.)


> it's important to realize that the work is still going to get done by someone.

Not necessarily; saying "whoah, this is really wrong and I'm not doing it!" also forces others to consider what they're doing, and perhaps adjust their position. Would it in this case? Probably not, but it's not inconceivable that something would have changed to be less bad.

Either way, just rolling over because it's the most convenient thing sounds lazy, apathetic, and quite selfish.

> In short, don't make your ethical choices on the impact it will have on the world. Make them for the impact it will have on you.

So you should "just be following orders" if that has a positive impact on you? I hate to Godwin things here, but that's really how your entire comment is coming across...


> So you should "just be following orders" if that has a positive impact on you?

If having blood on your hands has a positive impact on you I doubt you're reading a HN thread about ethics, so you weren't my intended audience. I was writing to people who are trying to do the right thing, even when it's hard to know what that means in every situation.


> So you should "just be following orders" if that has a positive impact on you? I hate to Godwin things here, but that's really how your entire comment is coming across...

You came up with a reading that was completely opposite mine. I had thought that the gp referred to the ideals of the pinnacle of proper behavior, not self-gratifying behavior.


"the work is still going to get done by someone" - this assumes an infinite supply of devs willing to do unethical work. Sure, in this specific instance they probably will - but I don't think we should completely dismiss the impact of decreasing that supply.

And even if that work would be done anyway, I still would not want to be a person who designs a death camp.

[flagged]


Not sure where the disconnect is exactly, but I feel like I'm saying the exact opposite of your characterization.

I'm trying to give people the mental clarity to easily reject unethical work without worrying about whether the choice is going to affect anyone else. If you only choose to do the right thing if it makes things better for others, you'll have a lot harder time making the right choice.

And if enough people do this, the impact on the world for good is unlimited. And that is my hope.


Think of Google and Project Dragonfly: to the business execs it's a perfectly reasonable thing, but to the employees it's evil. But you work for Google, and if they tell you to jump you say "how high" or you get fired; that's how jobs/capitalism work. Following your moral code WILL be difficult...

President Kennedy was shot. Martin Luther King Jr. was killed. Mandela went to prison and still found a way to forgive every single former oppressor. Yet all these people advanced societies. Look at Gandhi, who freed an entire nation; that was difficult.

Finding a job or starting a business outside of Google is not the same level of difficult.

If the employees thought something was evil, either they are evil for doing it, or they didn't actually think it was evil.


I think you've misread the intent of that post.

I'm certain that ballenf meant to follow the dictates of one's own inner conscience to do what is commonly called "the right thing." I fail to understand how so many people can instead ascribe evil intentions to what was a wonderful comment about improving one's actions.

Reading the title, I was expecting an article about poor quality code. I'm sure all of us have looked at code they wrote in the past and thought "What the hell was I thinking?"

Didn't expect an article on ethics.

I haven't had to write code that went into a project I considered unethical, but I have performed penetration tests on DRM systems. While I don't consider the idea of DRM to be unethical, I really don't like it. While we were in the design review meeting and the PM was describing the project, they avoided using the term "DRM", and I really wanted to be like "So... is this whole thing just DRM for X?"


Do you think, after reading this article, that if you were in a situation like that again you would feel more responsibility to speak your mind?

No, because I understand why the DRM exists. It's a case of competition between corporations causing a race to the bottom on pricing, and so they need to ensure an alternate revenue stream.

I can't say more because I don't want to violate an NDA, and I don't want to self-doxx and say who I work for, especially because I still work for them.


I appreciate Bill Sourour for acknowledging the issue in the industry and coming to terms with what seems like a heavy burden taken on at a young age.

When I was taking my CS Engineering UG course (IN, ~2008), there was an elective subject (it had to be chosen by the entire class) called 'Engineering Ethics'. It was a preferred elective as there was no course work, and I think there were no tests either.

I remember the professor starting the class as,

>"If a Structural/Civil Engineer builds a bridge and it goes down, he/she will go to jail; lucky for you guys there are no ethics for computer science".

Now that code can easily manipulate the life of an individual, I think we need to bring accountability into CS/programming, along with universal whistleblower protections from the UN for reporting unethical behaviour at work.


"If a Structural/Civil Engineer builds a bridge and it goes down, he/she will go to jail"

How true is this? I've never heard of such a thing happening. Of course catastrophic accidents of that sort are thankfully very rare (maybe because of the accountability)...

Does it apply in cases of negligence or clear-cut corner-cutting, with proven intent? Or just in any case of structural failure? In which countries?

I'm not calling you a liar, for the record, it just gets bandied about a lot, and I would like some context.


We had a local incident a few years ago where the roof of a shopping centre collapsed during renovation, killing multiple people. There's not much info in English about it to quote, but one of the legal consequences was the conviction of the structural engineer who signed off, without proper verification, on changes to a key construction element that made it insufficiently sturdy for the load. He was sentenced to six years in jail for violation of construction regulations and manslaughter by negligence.

In general, if a bridge/building goes down due to an honest mistake, that would not result in criminal charges; however, the planning and approval process is designed so that simple mistakes by a single person don't result in such failures, and it takes some clear violation of the process - not doing a mandated re-verification of calculations someone else made, faking signatures of approval, intentionally skipping parts of the process or change documentation, etc. Those are unethical acts which make you criminally liable if they result in actual harm to people.


As someone who was on the Civil Engineering track prior to ending up in tech, jail time seems unlikely unless the breach of ethics was particularly egregious.

The consequence of poor decision making in a Professional Engineering setting that we were most often warned against was ejection from the profession. If your work results in the loss of life or property, you may find yourself unable to continue working in the field due to loss of license or a steep increase in insurance premiums.

As a software "engineer", the consequences for poor professionalism are not as sharply defined.


I don't like the comparison though. If you build physical structures, the cement mix you've been using for 10 years isn't suddenly changed.

I'm not trying to imply that writing software is harder, but it's a lot more finicky and shifting. Also, you're the architect, the bricklayer, and the person taking out the trash - all by yourself, and usually getting allotted time for a third of it.

I'm actually often surprised software works at all..


I have replied to another similar comment in this thread.

Just wanted to say thanks for your replies, everyone, that's interesting info, and it looks like it's limited in some countries to negligence, particularly wilful negligence. Though it may vary by country.

No, software engineers don't often have the same levels of responsibility.

Personally I do have indemnity insurance of various forms which could pay out in the event I am sued over this sort of thing, but that's because I'm operating as an independent consultant with my own company. It appears, in the UK market, to mostly be understood by all parties (my clients, the insurers, myself) to be a formality, and criminal liability is very far from my mind. I can't think how we would get there.


>How true is this? I've never heard of such a thing happening.

Maybe it's more common in India - it could very well be the norm, because every time a bridge goes down a related engineer gets arrested the very same day, or soon after, under 'Causing death by negligence'; perhaps a practice from the colonial era still being used to pacify the public.

Read:

Another BMC engineer arrested in Mumbai bridge collapse(2019)[1]

Bhubaneswar flyover collapse: Engineer, director of construction firm arrested(2017)[2]

4 Engineers Arrested In Kolkata Flyover Collapse Case(2016)[3]

IIT Roorkee: Two professors arrested in bridge collapse case(2015)[4]

SMC engineer held in bridge collapse case suspended(2014)[5]

Gammon, Hyundai officials arrested, probe ordered(2009)[6]

I'm sure you can pull up such cases going back at least 200 years.

[1]https://www.deccanchronicle.com/nation/current-affairs/02041...

[2]https://www.hindustantimes.com/india-news/bhubaneswar-flyove...

[3]https://www.ndtv.com/kolkata-news/4-engineers-arrested-in-ko...

[4]https://www.indiatoday.in/india/story/iit-roorkee-two-profes...

[5]http://timesofindia.indiatimes.com/articleshow/38044181.cms

[6]https://www.ndtv.com/india-news/kota-bridge-collapse-30-dead...


> "If a Structural/Civil Engineer builds a bridge and they sign off on it and it goes down, he/she will go to jail"

Fixed.

For all intents and purposes, you can still work as an engineer without having to take responsibility for your work.

I don't think it's a great system because the person who signs off on it becomes the scapegoat if something goes wrong.


I have replied to another similar comment in this thread.

Some background, I work for an ad-tech startup. We provide workflow and performance optimizations on top of Facebook and Twitter ads.

A few years back we managed to draw the attention of a pretty big agency who at the same time was in the news because one of their employees had killed themselves over the work pressure. Now, this company has tens of thousands of employees, so we never gave it much thought, but later we learned that the account we scored was actually the account of the employee who had killed herself. The employee was of a similar age to me at the time, and the company approached us the day after the news of the suicide broke.

Although it's not in a similar vein, it made me feel like shit for a good few days, and I still think about it whenever the agency is in the news or someone mentions it.


Sometimes, even if it isn't about your code killing people or supporting morally questionable things, just making sure the right thing is done is a matter of ethics.

I saw a job posting on my university's IT department's job board. Another (non-IT) department wanted an application implemented in a specific way that was completely unsuitable. Not impossible to implement, just the wrong tool for the job, and it would have been a nightmare for the users.

I wasn't particularly interested in doing the job, as I wasn't short on money and had a "real" job lined up, but I did reach out to the department, asked whether there was a specific reason for the requirements (there wasn't), explained the better alternatives, and offered to help them write a better ad, and if they really couldn't find anyone else, implement it (for a fixed price that would result in an above-average hourly rate for me, which I was transparent about).

I ended up implementing it, and it is an implementation that at that time would have been controversial, but still seems to be the right choice even with many more years of experience and hindsight.

Had I simply ignored it, the most likely outcome is that thousands of students would have had to suffer with a really bad system. That code I'm still proud of.


My boss was telling me some stories about his time at Facebook. During the 2012 election cycle Facebook was making bank on election ads. There was a problem though: when someone clicked the x to make an ad go away, they would never see an ad from the same source again. Eventually, Facebook wasn't able to show enough Romney ads because so many people had x'ed them out. My boss was tasked with turning this feature off, and he claims it was never turned back on.

I'd argue there are ethical issues in letting them turn it off. It makes the user less aware that the site they're using is accepting money from that ad source. It reinforces their bubble. And it overall makes them feel like the customer, when they're really not, which is deceptive.

If FB let users click that button, but secretly broke the functionality, then that's a different issue.


I had a similar experience early in my career. I worked at a company tracking social media posts for companies to analyze conversations online about their products and services. Coincidentally, everything we built worked brilliantly for tracking people. At its worst, our product was very Orwellian.

I built the NLP features in Arabic, because Saudi Arabia was having a hard time tracking their people as well as they would have liked since Arabic wasn't supported. A few months after I shipped the features, Jamal Khashoggi was killed. Who knows what other atrocities I contributed to.

I was just so damn excited as a junior engineer who didn't even speak Arabic to work on something so cool.


I worked on Credit Default Swap calculators and infrastructure to support real-time valuations.

Just before and during the 2008 meltdown.

At least 3 big banks were using the software I helped make.

To paraphrase, the things people were doing with CDSs were analogous to taking out an insurance policy on your neighbors home, and then burning the home down and collecting the insurance.


Very interesting work, what do you do nowadays? Did you have your own firm?

I once made a multi-level marketing scheme website, for an acquaintance who wanted to launch one. Realized halfway through that it wasn't a good idea and backed out with excuses about not having enough time. The acquaintance gave up on the idea as well, so nothing came of it, thankfully.

What was interesting was that it was a very fun problem to model on the DB side, especially with the kind of constraints around payouts, and it kind of sucked me into saying yes.
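The payout structure really is a neat recursive-modeling exercise. Here's a minimal sketch of the kind of query involved, assuming a made-up schema and commission rates (a `members` table with a `sponsor_id` self-reference, and a commission that halves at each level up the chain - none of this is from the original project):

```python
import sqlite3

# Hypothetical schema: each member has at most one sponsor, forming a tree.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE members (
    id INTEGER PRIMARY KEY,
    name TEXT,
    sponsor_id INTEGER REFERENCES members(id)
);
INSERT INTO members VALUES (1, 'alice', NULL), (2, 'bob', 1), (3, 'carol', 2);
""")

def upline_payouts(conn, seller_id, sale_amount, rate=0.10, decay=0.5, max_levels=3):
    """Walk the sponsor chain with a recursive CTE and compute each
    upline member's cut of a sale (illustrative rates, not real ones)."""
    rows = conn.execute("""
        WITH RECURSIVE upline(id, name, level) AS (
            SELECT id, name, 0 FROM members WHERE id = ?
            UNION ALL
            SELECT m.id, m.name, u.level + 1
            FROM members m
            JOIN upline u
              ON m.id = (SELECT sponsor_id FROM members WHERE id = u.id)
            WHERE u.level + 1 <= ?
        )
        SELECT name, level FROM upline WHERE level > 0 ORDER BY level
    """, (seller_id, max_levels)).fetchall()
    # Commission decays geometrically with distance from the seller.
    return [(name, round(sale_amount * rate * decay ** (level - 1), 2))
            for name, level in rows]
```

The "constraints around payouts" (caps, qualification ranks, breakage rules) would layer on top of a walk like this, which is presumably where the modeling gets interesting.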


One of my high school teachers framed a few math questions around calculating probabilities for unusual combinations of items. Took me a couple of questions to realize he was fishing for lottery / scratch ticket optimization methods.

Learned the next year that he was arrested over the summer.

Never did actually answer more than just basic stat course questions.


Figuring out and exploiting mathematical weak points in lottery systems is perfectly legal. What was he arrested for?

Possession with intent to sell cocaine on school grounds.

One of my early-career jobs was working for an ISP that old fogies may remember called Slip.Net. This was 1994, just before everything took off in the commercial Internet space. There were a couple gray area things that resulted in me being eager to leave that particular company.

The first was that they were a "free" ISP. The way they made money was that all their POPs were in facilities taking advantage of the rules around long distance termination fees. Effectively you could set up a CLEC in a corn field in Iowa and as long as you could get a lot of people to call you, you could make money from the Bells. Free Internet access, but a long-distance call. It was probably a good deal for people who didn't have a local POP anyway. However, it led to a weird incentive -- you wanted to keep people connected.

In order to sign up for the service you had to dial into it. In order to create an account (these were regular shell accounts on SunOS systems mostly -- totally insane by today's standards) we had to have an entry in /etc/passwd, and of course you needed their login to get an entry there. We had worked out how to take their details and create an account, but you had to disconnect to log back in as that new user, and we lost money every time someone disconnected. The solution was obvious -- modify logind so that you could essentially sudo from the "setup" user into their new shell right there in the existing session. Boy, that was fun, and it felt a little dirty.

It turns out the other thing people liked to do besides Internet dialup that would keep their computer on the phone for a long time was to download porn (you could argue they were doing that with their Internet dialup too). So of course the company ran many Wildcat-based porn BBSes out of each of their POPs. In fact that business came first. They flew me out to (somewhere outside of Toronto I think) to upgrade the access for a particular POP and asked me to load some CDs in the jukebox at the same time. I didn't think much of it, but when I got there I realized that these were all hardcore porn. I don't think there's anything inherently unethical about porn, but it clearly also didn't check the box on things I'm going to tell my mother about my job.

I left that job after about 6 months, but I liked it better than the one before it, where they weren't paying their phone bills and would regularly tell their customers that they were having "technical problems" while they got SBC to turn their access back on. I think I lasted most of 3 months there.


I've been on the receiving end of a few shady product requests - popup ads, url hijacking, spyware. Even though the ethical questions involved were not that huge, it was always difficult to come through unscathed and it cost me a fair bit of sanity and work relationships.

I can only imagine the burden when it actually involves human life or wellbeing, and am thankful to have had the luxury of choice. This gets me to wonder if having a universally agreed code of conduct (like the one from ACM mentioned somewhere else in this thread), and the backing of some kind of union could make a difference - at least developers would have some comfort that they wouldn't be fired for raising ethical issues, or even refusing to do work, without the company having to face legal action. What's the equivalent for other lines of work?


In the 1990s I was working on a Ph.D. in robotics, funded by a huge DARPA grant. The task was to create software for multi-agent robot coordination. The text of the grants always said things like needing to gain the capability to send thousands of autonomous robots into a town in order to map out and investigate all the buildings, etc., all for "defense". To verify that a hospital is actually a hospital, for example. It didn't take a rocket scientist to see that you only had to put a bomb in their belly and game over. I wonder why they even tried to make it sound like anything other than wanting to be able to drop thousands of robots from a plane and take over and destroy any city.

I'd guess they planned to do what they said: see if a hospital is a hospital. Then they'd drop bombs from planes, the old fashioned way, making sure to avoid any off-limits buildings.

Of course, it's still tech for killing, so the moral quandary doesn't change.


When I was 20 I was working, as the sole developer, for a startup connecting people who wanted consulting in different areas (e.g. photography, IT, school subjects, law, etc.) with paid consultants ("experts"). They would both call the service, which joined their calls and billed appropriately. I'm not very proud of the code I wrote then (PHP without version control + Asterisk scripts), but it worked well enough.

Anyway, the business model didn't work out, so the founder pivoted into something that was sure to work: phone sex and telephone fortune telling. It was a bit too much for me :)


Yep. The age of the engineer doesn't matter. Do take ethics into consideration. Be aware that dual-purpose systems can have long lives and change owners.

I have a job that involves making DRM stronger.

It's certainly legal, but I'm still undecided as to whether it is moral or not.

Personally I am opposed to the use of DRM, but I enjoy the technical challenges involved.


Why do you do it? I’m sure inventing chemical weapons is a fun challenge to a skilled chemist, but if you’re morally opposed to it shouldn’t that matter more?

A moral objection to DRM is hardly on the same level as a moral objection to chemical weapons.

I wouldn't go so far as to say that I'm morally opposed to DRM, it's more that I find it annoying.


I remember interviewing for a job out of college (circa 2005) at a company that will remain nameless. They were an oft cited internet marketing company and were able to track the popularity of most websites. I remember understanding that they basically just proxied all traffic through their servers behind the guise of various free software installs, plugins, etc.. spyware.

I declined their job offer on the grounds that I didn't feel comfortable with that duplicity.

The common thread, I think, with the article's author is that most marketing is about crafting a story and maybe hiding the true origins.


> As developers, we are often one of the last lines of defense against potentially dangerous and unethical practices.

> We’re approaching a time where software will drive the vehicle that transports your family to soccer practice. There are already AI programs that help doctors diagnose disease. It’s not hard to imagine them recommending prescription drugs soon, too.

> The more software continues to take over every aspect of our lives, the more important it will be for us to take a stand and ensure that our ethics are ever-present in our code.


This will never happen. Software developers work at companies like Facebook, Uber and Airbnb, where they help cause massacres, indebt their fellow citizens, or create real estate crises in the real world, because they get paid.

Software will only be regulated by government action, not individual action.

Edit: I am not innocent of this as well. I got paid to deliver addicting software while working for Blackberry (just a radio tech), I got paid to build compliance tools for a bitcoin company, and I get paid to optimize the amount of ads and how much profit they bring to my current employer.


The first rule of ethical programming is to avoid working for fundamentally unethical employers. It can be hard because the more unethical a business is, the more money it has access to.

I spent my whole career not working for Microsoft, Lockheed, Goldman Sachs, Big Oil, Monsanto (now called "Bayer"), FAAG (Netflix seems OK).

You can't usually keep unethical people from using your software, but you don't need to specifically enable them. Little evil leads by insensibly small steps to big evil.


This is a hazard for any engineer or builder, a potential side effect of the profession due to its inherent conflict of responsibility. As an engineer, one is responsible for the immediate result, the built code/system, yet the context of use and its effects are often out of the engineer's control. A seemingly innocuous component may be used in the heart of a vile machine. If you know of such use, you have a choice: test the strength of your own values, or just write it off on lack of control.

I remember a contemporary theater play where, at some cruel moment, an audience volunteer is called upon to assist, just to be a stand-in witness. Reluctantly, a few spectators walked onto the scene, then passively stood through the subject's torture. None dared to walk off or ask them to stop... I left my seat feeling disturbed. Later I found out that the play had an alternative flow based on the action of the witness.

It's hard to step into someone else's shoes; too easy to say the right thing post factum. In OP's story, the testing lady took the most sensible action: she verbalised the unspoken. This is akin to historiographers or news reporters - just describe things as they are, no judgement. It gives a chance to eventually place the responsibility where it's due.


I’m quite interested in that play: what’s the alternative flow, what do they do to trigger it?

> ... what’s the alternative flow

Well, as I mentioned, I left after that scene had concluded (conventionally, I guess). The prisoner got "killed", and the witnesses returned to their seats with a mixture of puzzlement and amusement on their faces.

As for the alternative flows, I only learned that the scene held some kind of chance for the prisoner, should any witness have intervened. Basically, the witnesses' participation was a vote (a silent compliance in that case). Obviously, none of this had been advertised ahead of the show (it was part of a festival). The play and the company were from South America (Chile or Argentina)... it's been a while.


Sounds like an attempt at somehow incorporating the Milgram experiment [1] into a play. I guess we can't learn too much if the people in the experiment actually know it's all show.

[1] https://en.wikipedia.org/wiki/Milgram_experiment


The last place I worked for was a university. I wrote the mobile app that they used on campus. I was pretty pleased with it, as it allowed students to do all the things they wanted access to: class lists, add/drop, email, news and events. I put in a slew of analytics to track user behavior across the application. Shortly after I had released version 2 of the application, my old boss went on to better things. One of the things my new boss asked me to do was write in a way to track where students walk around, using Bluetooth beacons, and have it report to the backend DB. I said, hell no. He basically got me pushed out of my job by being a complete dick to me after that. I moved on to another job. Recently I saw this:

https://www.theverge.com/2020/1/28/21112456/spotteredu-degre...

Apparently we weren't the only uni planning on doing that.


Great article, it should be required reading for all developers. It's important to remember that just because your employer asks you to do something, you are not obligated to. You can't rationalize away your part in it later, you will be directly responsible.

I'm with you, but the pressure is there: do it or lose your job, and maybe not get a good reference to help you get a new one. You can get stuck somewhere very difficult if you have a conscience.

I wrote about this here:

https://drewdevault.com/2020/05/05/We-are-complicit-in-our-e...

In short: in general, software engineers enjoy a very good job market and lots of maneuverability, much more so than our peers in other fields.


There is a talk by Uncle Bob addressing this topic. [0] He suggests that programmers should take an oath, similar to doctors or other types of engineers. The first point says that you shall not write bad code, where "bad" refers not only to the quality but also to the ethics behind the code. I really recommend watching the full talk; it's a real eye-opener.

[0]: https://youtu.be/17vTLSkXTOo


So, the author takes one anecdotal news story of a person taking the drug and ending up in suicide, and immediately concludes that the drug is harmful. If anything, this irresponsible jumping to conclusions is the most unethical thing in the whole story.

Isn't drug prescription under the control of doctors in Canada?

Mostly but not completely. If you take "doctors" broadly (e.g. including dentists and psychiatrists) then mostly yes. Some prescriptions are available more widely (e.g. birth control pills) and pharmacists also have varying powers to generate repeat prescriptions (they can provide emergency prescriptions almost everywhere; and they can provide unlimited prescriptions for "maintenance" drugs like thyroxine in some provinces).

That doesn’t stop drug advertising in the US. Patients often come in asking for a drug, and doctors are influenced by ads as well.

And pharma reps. I have an extended family member (a Dr., no less), that does this for a living. A walking, talking, knowledgeable shill for one particular company.

Thank you for posting this. It took guts to post it. You have my respect. Hopefully others will read it and learn the easy way rather than the hard way.

As I read a long time ago and always repeat, Tron gave us the rule zero for every developer: Fight for the users.

This seems like a failure of the Canadian counterpart of the US FDA. Drug trials are generally extensive enough to surface side effects, and severe side effects lead to cancellation of trials or at least label warnings.

Not by a long shot.

The FDA is the definition of regulatory capture.

There's a long line of examples. They're still holding to the line that glyphosate (Roundup) doesn't cause cancer and that marijuana has no proven medical value.


If you're looking for worse examples (not to excuse the FDA), there are many.

I've a special place for the Texas Railroad Commission, for example:

https://independentleaguetx.org/legislature-fails-texas-rail...


Sounds like the anti-acne drug maybe, I can't recall its name, but it actually works and has probably saved far more young people from committing suicide than have completed it as a result of taking it.

That drug, accutane, was never prescribed to women who could possibly get pregnant. (I took accutane and it was a life-changing drug for me.)

My daughter was put on the pill first so she could take it.

>it actually works and has probably saved far more young people from committing suicide than have completed it

Please don't say things like this unless you know. People say the same thing about SSRIs when pretty much every clinical trial says the exact opposite (hence the FDA black box label for both drugs).

Accutane causes crazy hormonal changes. I personally know multiple people that ended up in a mental hospital after taking that drug (including my brother).


I'm sorry about your brother, but my family's experience was different, and our dermatologist was of the opinion that the risks were exaggerated. I suffered with acne from age 12 to my early 20s. As someone who was pretty awkward anyway, having an acne-related nickname in school was pretty much the icing on the cake in terms of my negative self-image. I never went to parties or discos at all in school, anything at which girls might be present. My kids inherited my skin and Roaccutane was transformational for them. Even after the acne dried up I thought they'd be left with scarring, but it all went away eventually. I was 100% behind their decisions to take the stuff, risks and all. I think that for a boy you can get past acne and its damage eventually, I did, but for a girl the scarring it leaves is devastating.

> when pretty much every clinical trial says the exact opposite

No, this is incorrect.


...

Please don't be coy.

One emotion I don't understand is "shame." You made a mistake, just like everyone else on the planet. Get over it. You obviously have a good moral and ethical fabric, so at the end of the day that is all that matters. "Shame" is just a disgusting wet blanket that coats everything, and it stinks.

A few years ago, thinking of emotions, their ubiquitous appearance in human cultures and history, and their many analogues amongst other animals, I realized that these must represent some deep evolutionary role and adaptive benefit. I then wondered if there were anything in the evolutionary biology literature that addressed this.

As it turns out, one of Charles Darwin's last published books is The Expression of the Emotions in Man and Animals. Chapter XII addresses self-attention: shame, shyness, modesty, blushing.

https://archive.org/details/expressionofemot1872darw/page/31...

You might care to reassess your casual dismissal of shame's relevance.


Maybe it's semantics (I haven't read your link yet), but shame isn't similar to shyness, modesty or blushing to me. The latter three are things I think are good. Shame to me (maybe my definition isn't correct) is equivalent to punishing one's self for a mistake, usually over a long period of time, and usually in an overly dramatic way (like the OP's article). It happened when he was a kid just trying to do honest work, he felt wrong about it, something terrible happened that wasn't his fault... so as harsh as it is, I still stand by "get over it."

I also said by the way "you have a good moral fabric" which goes along with that statement. If someone is say, a rapist, then I suppose shame is a great emotion to have, because they're upset that they lack what other people have, which is a working control of emotion and a sound mind. For most people (who are good people), I feel like shame is something that does nothing but inhibit life.


The common thread is that all these behaviours relate to Darwin's grouping concept of self-awareness, and further serve to act in an inhibitory fashion. There might be other forms of positive or reinforcing senses or emotions also reflecting self-awareness: pride, confidence, self-assurance, patience, say, though some of these move from emotion to personality.

I'd also caution against presuming Darwin had this all right. He was the first (AFAIK) to write on emotion as evolutionary adaptation, and certainly made errors or omissions. Classification of emotions remains very inexact.

But the discoverer of natural selection deduced the evolutionary role of emotion, suggested a deep significance and innateness to it, and specifically names and discusses the emotion you've very lightly dismissed.

You should probably hear him out.


Thanks, I'll check it out then.


