Twitter was down (twitterstat.us)
694 points by idlewords 5 days ago | 484 comments





Ok, this is too many high-profile, apparently unrelated outages in the last month to be completely a coincidence. Hypotheses:

1) software complexity is escalating over time, and logically will continue to until something makes it stop. It has now reached the point where even large companies cannot maintain high reliability.

2) internet volume is continually increasing over time, and periodically we hit a point where there are just too many pieces required to make it work (until some change in the infrastructure solves that). We had such a point when dialup was no longer enough, and we solved that with fiber. Now we have a chokepoint somewhere else in the system, and it will require a different infrastructure change.

3) Russia or China or Iran or somebody is f*(#ing with us, to see what they would be able to break if they needed to apply leverage to, for example, get sanctions lifted

4) Just a series of unconnected errors at big companies

5) Other possibilities?


#4

I work at Facebook. I worked at Twitter. I worked at CloudFlare. The answer is nothing other than #4.

#1 has the right premise but the wrong conclusion. Software complexity will continue escalating until it drops through either commoditization or redefining problems. Companies at the scale of FAANG(+T) continually accumulate tech debt in pockets, and those pockets eventually become the biggest threats to availability. Not the new shiny things. The sinusoidal pattern of exposure will continue.


Yep, this also matches what I've heard through the grapevine.

Pushing bad regex to production, chaos monkey code causing cascading network failure, etc.

They're just different accidents for different reasons. Maybe it's summer and people are taking more vacation?


I actually like the summer vacation hypothesis. Makes the most sense to me - backup devs handling some things they are not used to.

So, a reverse Eternal-September? It'll get better once everyone is back from August vacations?

No, because it’ll only get better until next summer.

These outages mean that software only gets more ~fool~ summer employee proof.

I recall someone saying that holiday periods actually had better reliability for their services, because fewer people were pushing breaking changes...

I do wonder if it's that the usual maintainers of particular bits and pieces are on vacation and so others are having to step in and they're less familiar or spread too thin.


I'm more partial to the summer interns hypothesis.

I agree with this, but to be clear, the "summer interns hypothesis" is not "summer interns go around breaking stuff," it's "the existing population of engineers has finite resources, and when the interns and new grads show up, a lot of those resources go toward onboarding/mentoring the new people, so other stuff gets less attention."

Pretending that junior engineers are the problem is the problem.

Just checking what your objection is. Is it that you think experience is overrated, or is it just that he was speculating without any evidence?

Can't speak for OP, but I can tell you what mine is.

If you have an intern or a Junior Engineer, they should have a more senior engineer to monitor and mentor them.

In the situation where a Junior Engineer gets blamed for a screw up:

1. The Senior Engineer failed in their responsibility. 2. The Senior Engineer failed in their responsibility.

A Junior Engineer should be expected to write bad code, but not to put it into production; that's on the Senior. If I hit approve on a Junior Engineer's PR, it's my fault if their code brings the whole system down. If a Junior Engineer had the ability to push code without a review, it's my fault for allowing that. Either way it's my fault and it shouldn't be any other way. It's a failure to properly mentor. Not saying it doesn't happen, just that it's never the Junior Engineer's fault when it does.


I'd caveat that slightly: only if the senior engineer is not also overburdened with other responsibilities, and the team has the capacity to take on the intern in the first place. I've been on teams where I felt like we desperately needed more FTEs, not interns. But we could hire interns, and not FTEs.

(I agree with the premise that an intern or junior eng is supposed to be mentored, and their mistakes caught. How else should they learn?)


The amount of time that summer interns / new grads eat up of seniors' time is the problem. Tech debt that does not get addressed in a timely manner because of mentorship responsibilities is the problem.

If you don't train new and capable engineers, you'll eventually lose talent due to attrition and retirement. Talent can be grown in-house; engineering companies are much better environments than universities to learn how to build scalable platforms. The cost of acquisition is low, too, because junior engineers can still make valuable contributions while they learn to scale their impact.

If interns are able to take down your infrastructure, then it is the fault of the senior engineers who have designed it in a way that would allow that to happen.

Rule one of having interns and retaining your sanity is that interns get their own branch to muck around in.

Rule one of having a useful intern experience is to get them writing production code as quickly as possible. They check in their first change? Get that thing into production immediately. (If it's going to destabilize the system, why did you approve the CL? You two probably pair programmed the whole thing together.)

I completely agree- even if it's something small.

I'm an intern in a big company with an internal robotics and automation group, and I recently got to wire up a pretty basic control panel, install it, and watch workers use it. That was so cool, and made me appreciate what I was doing a lot more.


Sure. The interns have their own branch, but it doesn't stop them from being disruptive to the human in charge of mentoring them.

All changes should be in a new branch.

I used to believe this. Having solid lower environments which are identical to production, receiving live traffic where engineers can stage changes and promote up removes some of the “all things should live on a branch” business. I know that sounds crazy, but it is possible for teams of the right size to go crazy on master as long as the safety nets and exposure to reality are high enough in lower environments.

Yes, but it always seems to come down to a very small change with far-reaching consequences. For this ongoing Twitter outage, it's due to an "internal configuration change"... and yet the change has wide-reaching consequences.

It seems that something is being lost over time. In the old days of running on bare metal, yes servers failed for various reasons, then we added resiliency techniques whose sole purpose was to alleviate downtime. Now we're at highly complex distributed systems that have failed to keep the resiliency up there.

But the fact that all the mega-corps have had these issues seems to indicate a systemic problem rather than unconnected ones.

Perhaps a connection is the management techniques or HR hiring practices? Perhaps it's due to high turnover causing the issue? (Not that I know, of course, just throwing it out there.) That is, are the people well looked after, and do they know the systems that are being maintained? Even you, who've 'been around the traps' with high-profile companies, have moved around a lot... Were you unhappy with those companies, and is that what caused you to move on? We've seen multiple stories here on HN about how people in the 'maintenance' role get overlooked for promotions, etc. Is this why you move around? So, perhaps the problem is systemic and it's due to management who've got the wrong set of metrics in their spreadsheets and aren't measuring maintenance properly?


I remember all these services being far less reliable in the past. The irony of us talking about the bygone era of stability in the context of Twitter is particularly hilarious.

I do think that internet services in general are much more mission critical, and the rate of improvement hasn’t necessarily kept up. It used to be not particularly newsworthy if an AWS EBS outage took out half the consumer internet several times per year, or if Google’s index silently didn’t update for a month, or when AOL (when they were by far largest ISP in the US) was down nationwide for 19 hours, or the second-biggest messaging app in the world went down for seven days.


Which app was down for 7 days?

I don't see the value in lamenting the old days of a few machines, when you could actually name them after Middle Earth characters, install each one individually, and log in to a single machine to debug a site issue. The problems were smaller, and individual server capacity was a meaningful fraction of demand. Now demand is so high, and the set of functions these big companies need to offer is so large, that it's unrealistic to expect solutions that don't require distributed computing.

Distributed computing comes with "necessary evils", like but not limited to configuration management--i.e. the ability to push configuration, near real time, without redeploying and restarting--and service discovery--i.e. turning logical service names into a set of actual network and transport layer addresses, optionally with RPC protocol specifics. I refer to them as necessary evils because the logical system image of each is in fact a single point of failure. Isn't that paradoxical? Not really. We then work on making these systems more resilient to the very nature of distributed systems: machine errors. Then again, we're intentionally building very powerful tools that can also enable us to take everything down with very little effort, because they're all mighty powerful. Like the SPoF line above, isn't that paradoxical? Not really :) We then work on making these more resilient to human errors.

We work on better developer/operator experience. Think of automated canarying of configuration, availability-aware service discovery systems, simulating impact before committing these real-time changes, etc. It's a lot of work and absolutely not a "solved problem" in the sense that a single solution will work for an operation of any scale. We may be great at building sharp tools, but we still suck at ergonomics. When I was at Twitter, a common knee-jerk comment on HN was "WTF? Why do they need 3000 engineers? I wrote a Twitter clone over the weekend." A sizable chunk of that many people work on tooling. It's hard.
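
To make the "automated canarying of configuration" idea above a bit more concrete, here is a toy sketch in Python. Everything in it (the stages, the thresholds, the helper functions) is made up for illustration; it is nothing like any particular company's actual tooling.

  import random
  import time

  # Toy stand-ins for a config store and a metrics pipeline; purely illustrative.
  _percent_on_new_config = 0

  def push_config(percent):
      """Pretend to apply the new config to `percent`% of the fleet."""
      global _percent_on_new_config
      _percent_on_new_config = percent

  def error_rate():
      """Pretend fleet-wide error rate; the new config is subtly broken."""
      return 0.002 + 0.05 * (_percent_on_new_config / 100) * random.random()

  def canary_rollout(stages=(1, 5, 25, 100), max_error=0.01, soak_seconds=1):
      baseline = error_rate()
      for pct in stages:
          push_config(pct)
          time.sleep(soak_seconds)           # let the change soak at this stage
          if error_rate() > max(2 * baseline, max_error):
              push_config(0)                 # automatic rollback
              return f"rolled back: canary tripped at {pct}%"
      return "fully rolled out"

  print(canary_rollout())

The real systems described above layer availability-aware service discovery and impact simulation on top of this before anything is committed; staged rollout plus automatic rollback is just the skeleton.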

You're pondering if hiring practices and turnover might be related? The answer is an absolute yes. On the other hand, these are the realities of life in large tech companies. Hiring practices change over the years because there's a limited supply of candidates experienced in such large reliability operations, and industry doesn't mint many of them either. We hire people from all backgrounds and work hard on turning them into SREs or PEs. It's great for the much-needed diversity (race, gender, background, everything) and I'm certain the results will be terrific, but we need many more years of progress to declare success and pose in front of a mission accomplished banner on an aircraft carrier ;)

You are also wisely questioning if turnover might be contributing to these outages and prolonged recovery times. Without a single doubt, again the answer is yes, but it's not the root cause. Similar to how hiring changes as a company grows, tactics for handling turnover have to change too. It's not only that people leave the company; within the same company they move on and work on something else. The onus is on everyone, not just managers, directors, and VPs, to make sure we're building things where ownership transfer is 1) possible and 2) relatively easy. With this in mind, veterans in these companies approach code reviews differently. If you have tooling to remove the duty of nitpicking about frigging coding style and applying lints, then humans can give actually important feedback on the complexity of operations, the self-describing nature of the code, or even committing changes along with updates to the operations manual living in the same repo.

I think you're spot on with your questions but what I'm trying to say with this many words and examples is, nothing alone is the sole perpetrator of outages. A lot of issues come together and brew over time. Good news, we're getting better.

Why did I move around? Change is what makes life bearable. Joining Twitter was among the best decisions in my career. Learned a lot, made lifelong friends. They started leaving because they were yearning for a change Twitter couldn't offer. I wasn't any different. Facebook was a new challenge; I met people I'd love to work with and decided to give it a try. I truly enjoy life there even though I'm working on higher-stress stuff. Facebook is a great place to work, but I'm sure I can't convince even 1% of the HN user base, so please save your keyboards' remaining butterfly-switch lifetime and don't reply to tell me how much my employer sucks :) I really hope you do enjoy your startup jobs (I guess?) as much as I do my big company one.


Not sure where you’re going, but my take is that yes, the times for calling servers individually are over.

But we’re still touching the belly of our distributed systems with very pointed tools as part of the daily workflow. That’s how accidents happen.

The analogy is clear IMHO; just as we’ve long stopped fiddling daily with the DRAM timings and clock multipliers of the Galadriel and Mordor servers, we should consider abstaining from low level “jumper switching” on distributed systems.

Of course, this also happened thanks to industry introducing PCI and automated handshaking...


Those days of yore are when computers did things and we wrote programs that satisfied immediate needs. There was also a social element to it when there were multiple users per machine.

[flagged]



lol yes, what's the quote on "Don't assume bad intention when incompetence is to blame"?

After seeing how people write code in the real world, I'm actually surprised there aren't more outages.


Well we have an entire profession of SRE/Systems Eng roles out there that are mostly based on limiting impact for bad code. Some of the places I've worked with the worst code/stacks had the best safety nets. I spent a while shaking my head wondering how this shit ran without an outage for so long until I realized that there was a lot of code and process involved in keeping the dumpster fire in the dumpster.

Which do you prefer? Some of the best stacks and code I’ve worked in wound up with stability issues that were a long series of changes that weren’t simple to rework. By contrast, I’ve worked in messy code, complex stacks, that gave great feedback. In the end, the answer is I want both, but I actually sort of prefer “messy” with well thought out safety nets to beautiful code and elegant design with none.

One thing that stands out from both types of stacks I've worked with is that, most of the time, doing things simply the first time, without putting in a lot of work guessing what other complications will arise later, tends to produce a stack with higher uptime, even if the code gets messy later.

There are certainly some things to plan ahead for, but if you start with something complex it will never get simple again. If you start with something simple, it will get more complex as time goes by but there is a chance that the scaling problems you anticipated present in a little different way and there's a simple fix.

I like to say, 'Simple Scales' in design reviews and aim to only add complexity when absolutely necessary.


Hanlon's Razor: https://en.wikipedia.org/wiki/Hanlon%27s_razor

"Never attribute to malice that which is adequately explained by stupidity."


I always thought that this cause should also include "greed". But then, greed is kinda one step closer to malice, and I'm not sure if there's a line.

Ah, but that's a lot of big corps being more stupid in the last month than last year? If it's two or three more, that's normal variation. We're now at something more like 7 or 8 more. The industry didn't get that much stupider in the last year.

I will observe, without asserting that it is actually the case,

that successful executions of #3 should be indistinguishable from #4.

(And this is maybe a consequence of #1).


I've also worked at a couple of the companies involved.

This is the correct analysis on every level.


How does the fact you worked at those companies relate to #4?

Edit: I misread the parent and my question doesn't make a lot of sense. Please ignore it :)


> How does the fact you worked at those companies relate to #4?

For Facebook, I worked on the incident the previous Wednesday. 9.5 hours of pain...

And for my past employers, I still have friends there texting the root causes with facepalm emojis.


Do tell

Turned out to be #1: "The outage was due to an internal configuration change, which we're now fixing. Some people may be able to access Twitter again and we're working to make sure Twitter is available to everyone as quickly as possible."

Can you clarify what redefining problems would mean (with an eg)?

Think of computer vision tasks. Until modern deep learning approaches came around, it was built on brittle, explicitly defined pipelines that could break entirely if something minor about the input data changed.

Then the great deep learning wave of 201X happened, replacing dozens/hundreds of carefully defined steps with a more flexible, generalizable approach. The new approach still has limitations and failure cases, but it operates at a scale and efficiency the previous approaches could not even dream of.


That's not redefining the problem, so much as applying a new technology to solve the same problem. Usually using the flashy new technology decreases reliability due to immature tooling, lack of testing, and just general lack of knowledge of the new approach.

Also deep learning, while incredibly powerful and useful, is not the magic cure-all to all of computer vision's problems and I have personally seen upper management's misguided belief in this ruin a company (by which I mean they can no longer retain senior staff, they have never once hit a deadline, every single one of their metrics is not where they want it to be, and a bunch of other stuff I can't say without breaking anonymity).


FAANG(+T)(-N)(+M)

I think we 'bumped heads' at Middlebury in '94, and I think you are in store for an "ideological reckoning" w/in 3 years.

Pinboard is a great product, so thanks for that. I am surprised you don't have your own Mastodon instance (or do you?).


Since all of them happened during high-profile business hours, I'd guess either #1 or #5.

For #4 to be the actual cause, outages out of business hours would be more prevalent and longer.


Of course it went down during business hours, that's when people are deploying stuff. It's known that services are more stable during the weekends too.


FAANGT = Facebook, Amazon, Apple, Netflix, Google, Tesla?

Gmafia

Add slack to the list

Edit: and stripe


Twitter, not Tesla

[flagged]


Former employees and current employees talk via unofficial online and offline backchannels at many companies.

Ok, so maybe I overreacted

Geez, tough crowd. Do you want a ten dollar hug?

I was just polishing my bit. Not in a bad mood today so much as a bored mood. You seem like you know what you are talking about (yes, I was bored enough to stalk you, too)

If you are bored one day and around Menlo Park, come have a coffee or ice cream at FB campus. You can troll me in person.

Isn't it interesting where this is going? We all want to meet our accusers? I don't care for FB myself, but I appreciate what you all are doing in the larger sense. Cloudflare is my fave of your former employers (since you shared that in this discussion).

Could you please stop posting unsubstantive comments to Hacker News?

Life in tech is like a Quentin Tarantino movie.

...except everyone is sitting at desks typing, there's no blood or surf rock or chases or self-indulgent soliloquies, and the cursing is much less creative?

Maybe you're doing it wrong?

> cursing is much less creative?

I beg to differ.

Only one thing to add:

Tech debt is accrued in amounts that would make every VC fund wet their pants if tech debt were worth dollars paid out.


I've still never seen this much downtime on these systems so it's weird to happen all at once.

It's possible that they're related without requiring any conspiracy theories or anything. Maybe these companies are just getting too big or too sloppy to maintain the same standard of uptime (compared to the past few years)? Or maybe there's some underlying issue that they're all rushing to fix which justifies the breaking prod changes within the same timeframe.

But it was weird when it happened to two or three of them. Now we're going on something like 5 massive failures from some of the biggest services online within a little over a week...


Write a script to fire random events and you will notice they sometimes cluster in ways that look like a pattern.
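
To make that concrete, here's a minimal sketch of such a script; the parameters (10 big services, each with an independent 2% chance of a notable outage on any given day) are made up for illustration.

  import random

  SERVICES = 10
  P_OUTAGE = 0.02   # assumed daily outage probability per service
  DAYS = 3650       # simulate ten years
  WINDOW = 7        # look for clusters inside any 7-day window
  random.seed(42)

  # Count how many of the independent services have an outage each day.
  outages_per_day = [sum(random.random() < P_OUTAGE for _ in range(SERVICES))
                     for _ in range(DAYS)]

  # Slide a 7-day window across the decade and find the worst cluster.
  window_counts = [sum(outages_per_day[i:i + WINDOW])
                   for i in range(DAYS - WINDOW + 1)]

  print("worst 7-day window:", max(window_counts), "outages")
  print("7-day windows with >= 5 outages:",
        sum(c >= 5 for c in window_counts), "of", len(window_counts))

Run it a few times: even with fully independent failures, the worst simulated week tends to look an awful lot like a pattern.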

You know, it would be cool if you found stats on the downtime metrics of these various high-profile recent outages and calculated the odds of having such a cluster. Statistics is hard, though, and avoiding a "Texas bullseye" (the sharpshooter fallacy) would be hard.

"Celebrities die 2.7183 at a time": http://ssp.impulsetrain.com/celebrities.html

So the only take away is that now the population at large notices tech companies outages as much as they notice celebrity deaths?

"population at large"

This thread is linked to a status page run by Twitter, on a programming and technology news site. I'm not really seeing how most people that exist in the western/1st world are noticing this. Is there a CNN article, or FoxNews segment on how tech companies are having outages?


Yes, Fox News even suggested it was part of a large coordinated censorship effort against the POTUS :D

https://www.foxnews.com/tech/twitter-suffers-widespread-outa...

quote from that url: "The outage came as President Trump was hosting a social media summit with right-wing personalities and tech industry critics who've accused Twitter and other websites of having an anti-conservative bias."



Sure does look like we are way out there at the tail end of the probability distribution, by those numbers.

I mean, we can assume the downtime variance follows a normal distribution. It should be pretty easy to calculate P<.05 with just a little bit of data.

What you say could be true, but I don't know that we can assume it. If downtime requires several things to happen (cascading errors), but those things interact somehow (problem with one makes another more likely), I could imagine it might not be normally distributed. Disclaimer: I Am Not A Statistician.

Oh, sure. But Apple, Google, Cloudflare, Stripe, Slack, Microsoft, we're getting to more than five even...

The logic of the GP still applies though. Sites have outages every day so it is inevitable that some large sites will fail around the same time. Also, we know that Cloudflare and Twitter outages were attributed to configuration changes, probably others have benign explanations as well.

Sure, but "configuration changes" does not exclude several of these options. For example, is it harder to predict/deal with the consequences of configuration changes than it used to be?

Well, the options above cover pretty much every possibility, including the one I'm suggesting.

Reddit went down this morning too

Reddit goes down a lot though in my experience.

Reddit being up for 24 hours or generating pages in less than 3 seconds would be noteworthy.

Reddit goes down pretty frequently. It's been that way for years.

And now Discord is down!

no loss

This. I have first hand experience in this phenomenon multiple times. Complexity helps this effect too.

First, I think our general uptime metrics are trending upwards. Recovery times tend to be much shorter as well.

Big services are bigger, more mission-critical parts can fail.

Continuous development culture is designed with failure as part of the process. We don't spend time looking for obscure issues when they'll be easier to find by looking at metrics. This is fine when a staggered deployment can catch an issue with a small number of users. It's bad when that staggered deployment creates a side-effect that isn't fixed by rolling it back. Much harder to fix corrupted metadata, etc.

Automated systems can propagate/cascade/snowball mistakes far more quickly than having to manually apply changes.

We notice errors more now. Mistakes are instantly news.


> We notice errors more now. Mistakes are instantly news.

Heck, just look at Twitter itself from its original "Fail Whale" days where there was so much downtime, to now where even this relatively small amount of downtime is the top story on HN for hours.


So, when it went down, was there a Fail Whale displayed during this most recent incident?

I think they retired the fail whale some time ago.

I looked it up: in 2013, because they didn't want to be associated w/ outages.


5) Operational reliability is both difficult and unsexy.

The fancy new feature, increasing traffic, or adding AI to something will generate headlines, accolades, and positive attention. Not having outages is something everyone expects by default. This goes double for work that prevents outages. No one wins awards for what doesn't happen.

How many medals are pinned on the guys installing fire sprinklers?


Corollary: Work that prevents outages--or safe work--is SO unsexy it does not get noticed, but work that causes outages is postmortem-ed to death (pun intended).

Or maybe it's because the internet is tendentially becoming just a few companies' data centers? Afaik Twitter moved to GCP a few months ago. Maybe this is another Google outage?

Less likely, since it looks fine from the GCP status page.

Hmm, it seems that Twitter already figured it out, configuration change issues again.


Probably because we all use Kubernetes and YAML files and 100% of configuration failures are "oh shit, I used two spaces instead of 4, we're fucked".

Something like this is my bet too, there was a recent post somewhere called something like "why all outages are due to a configuration change". There are monocultures in site reliability ops for big companies, "configuration over code" but with heavy automation too. From my outside view it seems there's a tradeoff when you do that between more frequent smaller issues and less frequent bigger issues. Also reminds me of Google's move away from eventual consistency because with their infrastructure they can make a CP system highly available in practice... except when it isn't, due to a botched configuration change.

> tendentially

Is this a word? You don't mean tangentially? I'm having a crisis right now.


https://www.thefreedictionary.com/tendentially

Probably meant tangentially anyway.


"Tangentially" would make less sense. More likely, they meant to convey a present-participle form of "the internet tends to be consolidated."

Is it not? Sorry if I got it wrong, English isn't my first language.

dict.cc (my source of truth for English vocab) says it's a word: https://www.dict.cc/?s=tendentially


It's apparently a word but I'd say it's quite uncommon. I played around with google ngram viewer and had a hard time coming up with a word that is less common. But I finally came up with "astrophotographic".

E: "unsurpassingly" is way down there too


It's common in German, so I figured it wouldn't be uncommon in English. Oh well :)

It's very common in Biblical criticism (transliterated from German).


I (don't) like how you exclude Russia, China, Iran and somebody from your definition of 'us'.

His definition of "us" seems to just be "Americans". Which is fine because he's talking about American companies...

The assumption is that Russia, China, Iran are less dependent on Google, Twitter, etc., in part because some of them aren't allowed to operate in those countries, in part because some of them are much less dominant in those markets. 'Us' means 'people who might care that Twitter (or whoever) is down'.

Google, Twitter, Reddit, Facebook, etc, all legally operate in Russia.

But most have regional replacements. WeChat in China, VK (and some Telegram, though it's now blocked?) in Russia. This makes them less reliant on the American originals, which is why governments often encourage home-grown knock-offs.

Yes, I have also been hit by the same bad feeling. Thanks for pointing it out.

Lots of people on vacation this time of year. Would be interesting to see if there is a seasonal component to the reliability of these services.

"Don't forget to occasionally do that thing I mentioned in passing 2 weeks ago, under my breath, during a klaxon alarm test. Otherwise the errors will DDoS the whole cluster. See you in a week, goodluck!"

Nah - that would never happen.


#1. I think the art of keeping things simple is being lost. These days people will mush together ten different cloud services and 5,000 dependencies just for a Hello World.

One possibility on 5): too many KPIs and quarterly goals to be reached, too many corners cut.

Obligatory to watch with this comment:

"Let's deploy to production" https://youtu.be/5p8wTOr8AbU


You know, I've watched a few of those memes over the past, but this one was especially well done, and timed perfectly with his gestures even!

The only possible way for me to make it more than 20-30 seconds into that was to mute it. That guy’s laugh is multiple orders of magnitude worse than nails on a chalkboard. Funny story (albeit too real), but man, mute before clicking everyone.

No idea how I haven't seen this, but it totally made my day.

This hit close to home. Hilarious. Thanks.

1/2) These are web apps. Big web apps, but web apps nonetheless. We know what can go wrong; there's nothing really new here. How would you quantify "too many pieces to make it work"? Is 1,000 too many? 10,000? There are millions of pieces of data on your hard drive and they work fine. In general, the idea of variance can be solved with redundancy. Maybe there are not enough backups at Twitter.

5/4) Incompetent people led by incompetent people, surrounded by yes men and a drug culture. Also having a company that demonizes conservatives, who are some of the best engineers (scientists are squares, naturally).

Human error is bound to happen and software is complex but so are rockets and supply chains. Things can go right and things can go wrong. Usually when they do go wrong there is a human error reason.

Does Twitter foster a place where human error can occur more frequently than other places? I don't know. I have my bias about the company and any SJW company, but that's very anecdotal.

Twitter worked yesterday and it doesn't work today. That doesn't really have to mean anything important, except that there is a blind spot in their process which they need to harden.

I guess the first person to ask is the devops engineer, then the developer. Something wasn't tested enough. That happens in commercial software; deadlines can't wait.

3) Russia / China / Iran... stop watching CNN. You are parroting talking points. If Twitter were crushed, America couldn't care less and would probably turn up sanctions, not lift them. Taking down Twitter won't cripple anything in America except for certain marketers' budgets.


Scientists are squares but they also have a brain. That's why they are usually not conservatives. Conservatives are not a party, it's a herd of paranoid people who tune into Fox News every night to be told what to be afraid of next, but it's definitely not engineers or scientists.

Brains are excellent pattern matchers.

Brains also suck at statistics.


This is the first time I can remember so much happening so close. It's statistically unlikely.

> July 11, 2019 7:56 PM UTC [Identified] The outage was due to an internal configuration change, which we're now fixing. Some people may be able to access Twitter again and we're working to make sure Twitter is available to everyone as quickly as possible.

Seems #4 is the winner


Or #1.

I work on critical infrastructure at FAANG and it's frightening how complex things are. The folks who built the systems and knew them inside-out have mostly moved on, and the newbies, like me, don't fully understand how the abstractions leak, what changes cause what side effects etc.


6) White House social media conference just started.

https://www.10tv.com/article/trump-hosts-white-house-summit-...


It's a social media troll conference. Let's call it what it is.

Really not a good look.


I've been suspecting 3) for a few months now, and I'm quite curious how our government would handle it if it _were_ the case. Only a few of these outages have had plausible post-mortems ever made public.

Operational consistency creates a hidden single point of failure.

If everybody is doing the same things and setting things up the same way to ensure reliability, then any failures or shortcomings in that system are shared by all.


It's #1. The real question is how this isn't blindingly obvious to everyone.

One possible answer: it's hard to admit that what you've worked really hard at becoming an expert in, might have been a mistake.

Because we can't all be as smart as you are.

My guess is it's a slow news time of year coupled with more usage of cloud services, which means these types of stories are higher profile.

Relating to 1: https://www.youtube.com/watch?v=pW-SOdj4Kkk (Jonathan Blow's "Preventing the Collapse of Civilization"... perhaps a melodramatic title, but well-said overall.)

Everything is made of plastic these days, even software. It's immediately put out as soon as an MVP is ready. Too many managers with zero coding experience. The marketing people have taken the browser. Time to start over.

Or we just managed to construct the biggest circular dependency ever using the whole internet and a combination of all hyped languages and frameworks.

That would in turn lead to an insanely fragile system with increasing amounts of failures that seem unexplainable/weird.


This is a pattern one might see if there were a secret, rolling disclosure of some exceptionally-bad software vulnerability, I'd think. Or same of some kind of serious but limited malware infection across devices of a certain class that sees some use at any major tech company. If you also didn't want to clue anyone else (any other governments) in that you'd found something (in either case), you might fix the problem this way. Though at that point it might be easier to just manufacture some really bad "routing issue" and have everyone fix it at once, under cover of the network problem.

so like all software has reached peak complexity this month?

It seems a bit of a coincidence, yes? Unless they are all copying each other (e.g. all using Kubernetes or what-have-you), in which case it might be less of a coincidence.

Ok, I have one to add myself:

6) We used to have many small outages at different websites. Now, with so many things that once were separate small sites aggregated on sites like FB, Twitter, Reddit, etc we have a few large sites, so we have aggregated the failures along with that. The failure rate, by this theory, is the same, but we have replaced "many small failures" with "periodic wide-spread failures, big enough to make headlines". Turning many small problems into a few bigger ones. Just another hypothesis.


Another possibility: US (or other) authorities are requiring some sort of monitoring software or hardware where disruption of service is unavoidable during installation.

Keeping that many mouths shut seems impossible.

Most people won't be directly involved in assessing or fixing the fault. "Sorry the network link went down, here is the after analysis report," seems like a reasonable cover. There are many espionage activities which are covered up, only to come out decades later.

But really, I don't have any evidence that this possibility is any more or less likely than any other.


Software is getting increasingly complex. Why? To ensure better uptime, amongst other things. The funny part is that all the complexity often leads to downtime.

A single server would usually have less downtime than Google, Facebook and so on. But Google and Facebook needs this complexity to handle the amount of traffic they're getting.

Makes me wonder why people are trying to do stuff like Google when they're not Google. Keeping it simple is the best solution.


> Just a series of unconnected errors at big companies

Except that "at big companies" is basically selection bias; problems at little companies don't get noticed because they're, well, small companies.

And the underlying issue of the "unconnected errors" is that software is rather like the airline industry: things don't really get fixed until there's a sufficiently ugly crash.


For point #3, there are a few irregularities:

1. Services all going down one after another. 1 goes down - it happens. 2 go down - it happens sometimes. 3 go down - quite a rare sequence of events. But now a large number of silicon valley companies have experienced service outage over the last few weeks.

2. A Russian sub said to be a "deep sea research vessel" somehow experiences a fire whilst in international waters [1]. It has been suspected that it could have been tapping undersea cables. Let's imagine for a moment a scenario where they were caught in the act, some NATO sub decides to put an end to it, and Russia covers it up to save face.

3. Russia announces tests to ensure that it could survive if completely cut off from the internet [2]. A few months later it's like somebody is probing US services in the same way.

4. There is currently a large NATO exercise in a simulated take-over of Russia happening in Countries close to Russia [3].

Of course it's completely possible it's all unconnected, but my tin foil hat brain says there is a game of cloak and daggers going on here. I would say that Russia's incentive for probing the US/NATO is to test its weakness while it is tied up in a trade war with China and raising sanctions against Iran. After all, Russian fighter planes regularly try to fly into UK airspace just to test their rapid-response crews [4]; this sort of behaviour is typical of them.

[1] https://en.wikipedia.org/wiki/Russian_submarine_Losharik

[2] https://techcrunch.com/2019/02/11/russia-internet-turn-off-d...

[3] https://sofiaglobe.com/2019/05/13/6000-military-personnel-to...

[4] https://www.theguardian.com/world/2018/jan/15/raf-fighters-i...


It’s #4 but caused by #1. My pet theory is that we’re pretty far into this business cycle, so a lot of new companies had the time to mature, build up complexity, shed people the most knowledgeable with the original architecture, stop caring as much about the competition, and so on. Add Apple to the mix for recent software quality issues.

>It has now reached the point where even large companies cannot maintain high reliability.

Waiting for this to be backed up by statistics.


Reddit was also partially down this morning.

Reddit's down weekly, though, so that's no big deal.

Maybe those "INSTALL OUR APP NOW!!!" banners, floating action buttons, popups and bottom/top fixed bars caused too much traffic.

NSA firmware updates requiring a reboot.

5) Some of all of the above?

Although 3) doesn't have to be the explanation, it is definitely happening all the time.


4.

I think people are too accustomed now to high availability/uptime nowadays. I started using the Internet in the mid 90s. Stuff used to break all the time back in those days. Now I can’t remember the last time I couldn’t reach a website because it has been Slashdotted.


4.

And imho all that’s really happening is people are noticing the outages more. This is a good thing. For years too much of the mental model has been “{ cloud, google, Facebook, aws, xxx } never goes down!”

That’s been unhealthy. It’s a good thing.


3) Come on man, you can't just go around opening parentheses and then not closing them.

What about rising temperatures?

I don't believe it's too complex, I believe people are getting lazy. Complexity can be handled by automation, but too often people just want to rush things out for a buck instead of planning out a good product.

Hypergrowth/blitzscaling also introduces entropy.

The more you hire, the more plentiful and diverse your bugs will be.

It stands out now because the stars aligned. But these issues have been coming and going for years in patternless form.


5) The increasing interconnectedness of things introducing new interdependences so that when one service stumbles so do many others.

I'd normally go for #4, but hypothesis #3 is starting to be a more plausible explanation for the timely "coincidence".

A friend of mine who is retired military told me there is a saying that "once is bad luck, twice is a coincidence, but three times is enemy action". Doesn't necessarily mean it's true, of course.

There's also the HN filter bubble which could be presenting a misleading picture of "outage" frequency.

6) It's summer and lots of engineers are either a. on vacation or b. thinking less clearly

#4.

When things are random, they cluster.


Could be a Tacoma Narrows Bridge type problem.

It's end of half. Everyone is entering reviews. Gotta deliver... something...

Sysadmin and DevOps engineer walk into a bar ...

all of the above!

The brain is the greatest pattern matcher in the world. While it is unlikely all of these companies would have major outages in a month, be wary that the subconscious is constantly generating narratives to explain statistical anomalies.

Interesting theories nonetheless:)


The more the conspiracy grows the faster these otherwise minor stories shoot to the top of HN and add to the pattern.

It fuels itself.


"Minor" seems inappropriate. Can you remember another time when so many high-profile websites/services have had outages in so short a time span?

No. And a year from now I won't remember this time either.

> be wary that the subconscious is constantly generating narratives to explain statistical anomalies

This comes up all the time in sports. Let's take pool for example. There are various guesstimates floating around, and I do not have access to detailed tournament statistics, but I have heard that in games where sinking a ball on the break is an advantage, for decent players there's maybe a 75% chance that a ball will go down.

So once in every four breaks, you won't sink a ball. How often do you fail twice in a row? Once in every sixteen breaks. Failing three times in a row? Once in every 64 breaks. Four times in a row? Once in every 256 breaks.

What about five straight breaks without sinking a ball? Once in every 1,024 breaks. That's a lot of breaks. But wait up a moment.

Let's ask, "If you miss a break, what're the odds of it becoming a streak of five misses in a row?" The answer is, "One in every 256 streaks of misses will be a streak of five or more misses." 1/256 is not particularly rare, if you play often enough to sink a ball on the break 75% of the time.
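
A quick sanity check of those numbers, taking the 75% make rate above as given:

  # Assumed make rate on the break, per the 75% figure above.
  p_make = 0.75
  p_miss = 1 - p_make

  # Unconditional chance of missing n breaks in a row.
  for n in range(1, 6):
      print(f"miss {n} in a row: 1 in {1 / p_miss ** n:.0f}")

  # Given that a miss streak has started, chance it reaches five or more.
  print(f"streak reaches 5+ once started: 1 in {1 / p_miss ** 4:.0f}")

This reproduces the 1-in-1,024 and 1-in-256 figures above.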

What is the point of knowing that a streak of five misses in a row is rare but not that rare? Well, if you miss five in a row, do you chalk your cue for break number six as usual? Or do you tell yourself that your break isn't working, and start adjusting your stance, aim, spin, &c?

If you start adjusting everything when you get a streak of five misses in a row, you may just make things worse. You have to pay enough attention to your distribution of misses to work out whether a streak of five misses in a row is just the normal 1/256 streaks, or if there really is something amiss.

The brain is a great pattern matcher, but it sucks at understanding statistics.

---

The flip side of this, of course, is that if you upgrade your brain well enough to understand statistics, you can win a lot of money.

If a pro misses five in a row, feel free to wager money that they'll sink a ball on their next break. Your friends may actually give you odds, even though the expectation of winning is 75-25 in your favour.


This is a great explanation of the issues we have with statistics. You see this all the time in other sports too. As a hockey watcher, fans always want “explanations” for a loss or losing streak. More often than not, it’s just bad luck, and the kneejerk reactions that coaches and GMs take often just make things worse.

Nate Silver did a writeup showing the math around how the winner of the Stanley Cup comes down to little more than random chance.


Saying that it's an illusory pattern without checking the statistics is no more scientific than saying it's a conspiracy without checking the statistics.

> The brain is the greatest pattern matcher in the world.

You have obviously never tried to model the stock market with a neural net.


The other possibility is intern season (I'm 99.99% joking)

I'm 99.99% laughing (and 0.01% thinking 'uh oh').

As much as I don't like interns, I am sure that they wouldn't even touch a system at the scale of Twitter's, in my opinion. /s

Taking down Twitter could be a huge boon for the economy, though.

Productivity skyrockets!

Software complexity escalating over time? Please! The new microservices architecture we have been migrating to over the last year or so is so stable and makes tracking down problems a walk in the park. Not to mention the NOSQL database is a dream come true, as long as you don't need to query anything other than the partition key.

It's summer time and everyone who knows how stuff works is halfway through a drink right now. Probably with their families. Is it a trend year over year for 7/4 +/- a week?

So storytime! I worked at Twitter as a contractor in 2008 (my job was to make internal hockey-stick graphs of usage to impress investors) during the Fail Whale era. The site would go down pretty much daily, and every time the ops team brought it back up, Twitter's VCs would send over a few bottles of really fancy imported Belgian beer (the kind with elaborate wire bottle caps that tell you it's expensive).

I would intercept these rewards and put them in my backpack for the bus ride home, in order to avoid creating perverse incentives for the operations team. But did anyone call me 'hero'?

Also at that time, I remember asking the head DB guy about a specific metric, and he ran a live query against the database in front of me. It took a while to return, so he used the time to explain how, in an ordinary setup, the query would have locked all the tables and brought down the entire site, but he was using special SQL-fu to make it run transparently.

We got so engrossed in the details of this topic that half an hour passed before we noticed that everyone had stopped working and was running around in a frenzy. Someone finally ran over and asked him if he was doing a query, he hit Control-C, and Twitter came back up.


I worked there at the time and ended up running the software infrastructure teams that fixed all these problems. The beer wasn't a reward, it was because people were stressed and morale was low. Nobody brought the site down on purpose.

What really made me mad was when we hired consultants and the contract would end, usually without much success because Twitter's problems were not normal problems, and then they would send us a fancy gift basket with our own wasted money.

Maciej, we are still waiting for you to ship the executive dashboard.


That dashboard supported something like a dozen people over its lifetime. One person would start writing it, then quit, and be replaced by another person who rewrote it in their preferred language, and then the cycle would repeat.

It was a VC-funded welfare program for slackers and I miss it greatly.


I lol'd at "welfare program for slackers" - That's the dream really... Find a chaotic workplace that lets you play with your favorite languages and no real tangible outcome.

To take the history of direct queries at Twitter even further back, I built a web interface at Odeo for the CEO to run direct queries against the database (and save them so he could re-run them). There were some basic security precautions, but this was totally cowboy.

That Odeo team was filled with best practices aficionados and the management (including me) was a bit cowardly about being clear that "WE ARE FAILING HARD AND FAST." Damn the practices.

So of course the engineering team freaked out, especially since the CEO managed to find lots of queries that did take the site down.

But I honestly credit that as one of the biggest things that I contributed to Twitter. Having easy SQL access let the CEO dig into the data for hours, ask any question he wanted, double check it, etc. He was able to really explore the bigger question, "Is Odeo working?"

The answer was no. And that's how he decided to fully staff Twitter (twttr then) as a side project, buy back the assets, and set Twitter up as its own thing.

I think that it really was very close--if we'd moved any slower we would have run out of money before anyone was ready to commit to Twitter. Same story about Rails--without being able to do rapid prototyping we never would have convinced ourselves that Twitter was a thing.


Just a quick note not directed at OP but for any other engineers that may be unaware, these days AWS makes provisioning a read replica painless, and you can point the CEO to up-to-the-minute data while essentially firewalling the queries from customer operations.

how?

First Google result for "aws read replicas": https://aws.amazon.com/rds/details/read-replicas/

> Using the AWS Management Console, you can easily add read replicas to existing DB Instances. Use the "Create Read Replica" option corresponding to your DB Instance in the AWS Management Console.
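
If you'd rather script it than click through the console, a minimal boto3 sketch looks something like this (the instance identifiers and instance class are made-up placeholders):

  import boto3

  rds = boto3.client("rds")

  # Create a read replica of an existing RDS instance; point the CEO's
  # dashboards and ad-hoc SQL at the replica's endpoint, not the primary.
  rds.create_db_instance_read_replica(
      DBInstanceIdentifier="analytics-replica",      # hypothetical name
      SourceDBInstanceIdentifier="prod-primary",     # hypothetical name
      DBInstanceClass="db.r5.large",
  )

Same idea as the console instructions in the quote, just scriptable.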


Why not have it run against a replicated copy? I did that in the past, works amazingly, they can f* up all they want without any implications.

This was 2005. We had dedicated servers in our own cage. I can't remember if we already had replicas. It seems plausible. But actually spinning up a new one would have required more work and convincing than I wanted to do.

It's probably easy to do if you know it's an issue to begin with. I've run into this scenario before (running sql queries to read data that turned out to lock everything) and it caught me by surprise. Why would a read query cause the database to lock anything? I thought databases did stuff like multiversion concurrency control to make locks like that unnecessary.

Doing large queries on a Postgres standby had the potential to mess up the master, depending on configuration settings.

Thanks for sharing. Out of curiosity, why was the answer no? Was the issue the downtime or something more subtle?

I think in the end he lost faith over retention. We got a lot of traffic and new users but didn't keep any of it. He was already suspicious that iTunes was going to kill us and so the stats were the nail in that coffin. He was right. We were ten years too early to podcasting.

This reminded me of something too!

I used to work (on backend) on a popular app (in my country) which had a good number of users. One day I was asked to work with some infra/sysadmin folks who wanted to fix some issues with the servers in our inventory. We happily updated kernels and even rebooted servers a few times. I came back to my team and saw them deeply engrossed in production logs. Turns out a few of the servers that were "fixed" were actually production servers. I almost shouted the F word when I listed all the IPs. This confusion happened because the server guys used data IPs and we used management IPs. It exposed serious miscommunication among our teams. But fun times indeed!


> It took a while to return, so he used the time to explain how, in an ordinary setup ...

This one was visible from such a great distance, it's a wonder neither of you spotted it as it happened! I love your post — reminds me of BOFH :)


The guy had an amazing beard, with streaks of white in it! He looked like a great wizard to me. I remember even as we noticed people were frantic, saying to one another "oh man, another outage, thank goodness it's not us!"

And now it's a full-blown sitcom scene

A true BOFH would have either disposed of any witness or made them the culprit.

A true BOFH works with what he’s got, and when what he’s got is a fool willing to do all his work for him, then it’s time to implement Plan A: sit back and enjoy the fireworks.

> The site would go down pretty much daily, and every time the ops team brought it back up, Twitter's VCs would send over a few bottles of really fancy imported Belgian beer

Never understood this mentality but have seen it at many companies. Rewarding someone or some team for heroically fixing something after a catastrophic failure. Talk about misaligned incentives! Reminds me of the Cobra Effect [1]. When you reward “fixing a bad thing” you will get more of the bad thing to be fixed.

1: https://en.wikipedia.org/wiki/Cobra_effect


Seems like maybe you want to reward fire fighters and also reward fire prevention?

boy, do i have a podcast episode for you: https://casefilepodcast.com/case-98-the-pillow-pyro/

from a complete rando: thanks for posting this — will listen to it later today.

This gives me hope that one day I will be able to run a startup. The big tech companies aren't too different than the rest of us after all...

Agreed, the only thing that's a showstopper for me is the money and talent. It is still a struggle to find talented people who want to work for a startup.

Even harder to find ones that wish to remain working for a startup!

This is hilarious, thanks for sharing. I used to work at companies like this, except they weren't worth billions of dollars.

Neither was Twitter in 2008; it didn't reach $1B until the end of 2009.

The story is most probably not true. Love the taco tunnel though :)

Edit: apparently the stories actually are true.


This is the same group of folks who wrote the infamous ranty blog post shitting all over Rails back in... '11(?), when it was pretty clear that their workload wasn't suited to an RDBMS and ActiveRecord. They wrote their own message queue twice, despite suggestions to use known tools, before eventually giving up.

That’s hilarious. Reminds me of a clip from the show Silicon Valley.

I worked there for a bit. Sometime around 2014 I dropped a production DB table (via a fat finger, shouldn’t have even been possible in hindsight). It wasn’t consumer facing but the internal effect made it look like all of Twitter was simultaneously down. Mass hysteria there for 20 min or so.

Is that beer story satire?

No, it is true.

Is it actually really true? The second part, too? I thought this can't be true and must be a (good) story just to amuse the readers - I guess I was wrong.

Can someone explain the joke (about the beer) because I genuinely don't understand

edit: pretty please


Each time the ops team brought Twitter back up, they received good beer. So it would also mean that each time Twitter went down, they could expect to receive the beer. Without idlewords' actions, they would have an incentive (good beer) to let Twitter keep going down and not do the work to improve its stability.

Under the guise of preventing the ops team from being incentivized to create outages, he was selflessly stealing all the nice beer for himself.

He took the beer because he wanted it. "Perverse incentives" are an excuse, because nobody is going to kill their production servers and all the panic that entails for like $10 worth of beer.

Sounds like the guy was bragging about his SQL skills to avoid locking the database but ended up locking the database anyway (thus, people running around)

If the ops team got beer every time the servers went down (as a reward for fixing them) then they'd have an incentive for the servers to go down.

We all understand the perverse incentives joke, I think what's confusing people here is whether there's some other hidden joke they're missing that suggests not to take OP at his word that yes, he did make off with someone else's gift, which is generally considered a dick move.

What the hell are all of you smoking? Some moderately expensive alcohol is nowhere near enough reward to take down a service.

If it was a sure thing that the ops engineers were doing that, then sure, it'd be kinda funny. Otherwise it just seems like a dick move.

The alcohol was an incentive to bring the service back up quickly, but not an incentive to prevent it going down in the first place. Twitter was going down often enough on its own that nobody needed to be motivated to help it crash (except that bringing it back up sooner gives it another opportunity to crash again sooner).

Operant conditioning is a thing and it works.

While you and I would not do this, I'm afraid it would somehow find a way to work in this case too.


Ops engineers don't get paid enough to fix dev fuck-ups as it is. No amount of beer is going to fix that.

He's taking home the special expensive beer and not telling them about it because he cares about the health and well being of his team so much, and yet they wouldn't even consider him a hero for this, how ungrateful they are!

If every time the site was brought back up (because it had gone down) the ops guys got free fancy beer, then the message pretty quickly turns into, "if the site goes down, I get rewarded."

In other words, that beer gave you the motive to bring Twitter down, which you inevitably did by asking that question.

The second story had me in tears. Especially given that I'm building a similarly scary query right now (thankfully not against live).


Woo startups.

> We got so engrossed in the details of this topic that half an hour passed before we noticed that everyone had stopped working and was running around in a frenzy. Someone finally ran over and asked him if he was doing a query, he hit Control-C, and Twitter came back up.

This would not be out of place as a scene in Silicon Valley


idlewords, the user you're replying to, was listed as a consultant on the show

For a later season. This was one of my favorite scenes on the show.

Completely unrelated, but I find myself reading your post about Argentinian steaks at least once a year. It's perfect. https://idlewords.com/2006/04/argentina_on_two_steaks_a_day....

No joke, this post was largely the reason I wanted to travel to Argentina.

The food lived up to the mental image I had after reading the post.


I just found and read that article yesterday. The writing is on another level.

As an Uruguayan, I loved it and found it entirely accurate :)

Not the best quality, but there is a scene just like that: https://www.youtube.com/watch?v=Dz7Niw29WlY

"I would intercept these rewards and put them in my backpack for the bus ride home, in order to avoid creating perverse incentives for the operations team. But did anyone call me 'hero'?"

Wait, so you stole rewards meant for a team that was spending time (I assume extra or stressful) on something you didn't do or have any part in. And you want a cookie?

I mean, I get it, the company was probably not great in its infancy. But what?


I think OP is saying the rewards were confiscated so the team wouldn't begin breaking things on purpose to get a reward when they fixed it.

Yeah, but does anybody believe the engineers would deliberately break things, forcing themselves to work in a stressful environment bringing everything back up, just to get some free beer?

If your incentives are aligned with firefighting rather than fire prevention, because management isn't motivating and rewarding the extra work that goes into avoiding these scenarios in the first place, you're encouraging fires.

Indeed, the usual payoff for being called a hero after putting out the fire you started, a title promotion with a pay bump, is much more valuable than free booze.

I don't want a cookie; I want more $24/bottle Belgian beer.

You should submit a request to the Pinboard CEO...

Wouldn't that have made you the one with a "perverse incentive"?

That explains why he walked over to the DB guy and asked him to run an expensive query on the live system ;)

That's usually called stealing, or something a little softer than that. It's interesting that you shared that experience expecting us to laugh at it. The rest of the comment was hilarious and I'm happy you shared it, but that bit is very odd. I also see where you're coming from, but your act was ethically questionable.

Just wanted to say that I enjoy reading your blog.

It's a joke. Laugh, it's funny.

It's one of those jokes where if the story isn't true then the entire basis for it being funny disappears. (And if it is true then the joke isn't good enough to make up for the actions.)

Having worked on a lot of ops teams in unstable environments, it's just really dickish.

I also have. idlewords' post is one of the funniest things I've read this week.

Yeah, as an ops engineer, that's probably the worst violation of trust I've ever heard of.

> Wait, so you stole rewards meant for a team that was spending time (I assume extra or stressful) on something you didn't do or have any part in.

The HR department in my company does this, and then redistributes the gifts to everyone in a random drawing at the Christmas party.

One year some department got a bunch of PlayStations, and a couple of them ended up in my department. The only thing my department contributed to the kitty was candy. I bet some people in that other department were disappointed.


Finally we get the long awaited sequel to One Flew Over the Cuckoo's Nest...

One flew over the dubcanada's head.


Wait what did I miss something? lol

The joke.

Hero? You’re a villain who steps on teammates. The worst part is you thought it’d be okay to share that and think we’d be on your side. Have you no shame?

My job was to make growth graphs for investor slide decks, so by definition I had no shame.

Or, if you had any shame, its growth would be up and to the right!

>> he hit Control-C, and Twitter came back up.

Monolithic architecture. When I did security work I fought this every day. Moving away from it is a nightmare of technical debt and heated debate about who should control what. I'm reminded of a story from the early days of MSN. The legend goes that in the late 90s MSN ran out of one cabinet, a single server. The server had redundant power supplies, but only one physical plug.


> Monolithic architecture.

This particular problem had nothing to do with a monolithic architecture. Your app can be a monolith, but that still doesn't mean your BI team can't have a separate data warehouse or at least separate read replicas to run queries against.
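
A minimal sketch of that routing, under assumptions the thread doesn't spell out (made-up DSNs, a made-up users table, and sqlite3 files standing in for a real primary/replica pair):

    import sqlite3  # stand-in driver; any primary/replica pair works the same way

    # Hypothetical file names standing in for real connection strings.
    PRIMARY_DSN = "primary.db"
    REPLICA_DSN = "replica.db"

    def get_connection(analytics: bool = False) -> sqlite3.Connection:
        """Route ad-hoc/BI queries to the replica; everything else goes to the primary."""
        return sqlite3.connect(REPLICA_DSN if analytics else PRIMARY_DSN)

    # In real life replication keeps the replica in sync; for this sketch the same
    # table is created in both files so the example runs end to end.
    for dsn in (PRIMARY_DSN, REPLICA_DSN):
        with sqlite3.connect(dsn) as conn:
            conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, country TEXT)")

    # Application writes hit the primary...
    with get_connection() as conn:
        conn.execute("INSERT INTO users (country) VALUES ('AR')")

    # ...while the analyst's expensive scan runs against the replica, so a runaway
    # query can never hold locks that the serving path depends on.
    with get_connection(analytics=True) as conn:
        print(conn.execute("SELECT country, COUNT(*) FROM users GROUP BY country").fetchall())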


It's not "nothing to do with". You're correct that a monolithic architecture does not imply that a single read query will lock the entire database. But it is a prerequisite.

Not really. I've seen more than one (admittedly poorly) microservice-architected system where, instead of the whole DB freezing up, just the one DB would freeze, but then all of the other microservices that talked to the frozen microservice didn't correctly handle the error responses, so now you had corruption strewn over multiple databases and services.

So, while it's true the failure mode would be different, "one bad query fucking up your entire system" is just as possible with microservices.
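
A minimal sketch of the guard that was missing in those systems, with a hypothetical inventory endpoint standing in for the frozen dependency (the names and URL are made up):

    import urllib.error
    import urllib.request

    # Hypothetical internal endpoint; the name is made up for this sketch.
    INVENTORY_URL = "http://inventory.internal/reserve"

    def reserve_stock(order_id: str) -> bool:
        """Call the downstream service and treat anything but a clean 200 as failure."""
        try:
            with urllib.request.urlopen(f"{INVENTORY_URL}?order={order_id}", timeout=2) as resp:
                return resp.status == 200
        except (urllib.error.URLError, TimeoutError):
            # Frozen, unreachable, or erroring dependency: report failure, don't guess.
            return False

    def place_order(order_id: str) -> None:
        # The cross-service corruption described above happens when callers assume
        # this call worked and record the order in their own database anyway.
        if not reserve_stock(order_id):
            raise RuntimeError(f"could not reserve stock for order {order_id}; not recording it")
        # ...only now is it safe to commit the order locally...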


And of course this is standard practice. I've contracted on largish apps before (Rails! Shock!), and of course we provided read-only replicas for BI query purposes. I wouldn't have provided production access even if asked.

Anything else is simple incompetence and the macro-organisation of the code and/or services is irrelevant.


If your website crashes because a single person ran a query, your system is too monolithic. You can have thousands of little microservices running all over the place, but a single query causing a fault proves that a vital system is running without redundancy or load sharing and that other systems cannot handle the situation. You have too many aspects of your service all tied together within a single system. It is too monolithic.
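
Whatever the architecture, one cheap guard against the single-runaway-query failure is a hard time budget on ad-hoc queries. A rough sketch, not anything Twitter actually did, with sqlite3 standing in for a database that lacks a built-in statement timeout:

    import sqlite3
    import threading

    def run_with_deadline(conn: sqlite3.Connection, sql: str, seconds: float = 5.0):
        """Run an ad-hoc query but abort it if it exceeds its time budget, instead of
        letting it hold locks indefinitely. Most databases expose an equivalent knob
        (a per-session statement timeout); sqlite3 is only a stand-in here."""
        timer = threading.Timer(seconds, conn.interrupt)  # interrupt() cancels the running query
        timer.start()
        try:
            return conn.execute(sql).fetchall()
        except sqlite3.OperationalError as exc:
            if "interrupt" in str(exc).lower():
                raise TimeoutError(f"query exceeded its {seconds}s budget") from exc
            raise
        finally:
            timer.cancel()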

I think "monolithic" and "fragile" are orthogonal concepts.

> I would intercept these rewards and put them in my backpack for the bus ride home, in order to avoid creating perverse incentives for the operations team. But did anyone call me 'hero'?

Wait, I don't understand.

Why would anyone call you hero?

Are you suggesting that the team would deliberately crash the app to receive beers and that by stealing them you stopped this from happening?

Free drinks and free food are the standard here to reward teams when they spend extra unpaid time away from their families.

All of the posts asking the same question are being downvoted. Am I missing something?

You said you were a contractor at the time. Unless you were on the management team, I fail to see how it was your responsibility to decide what happened to those rewards.


> Am I missing something?

That it is a joke.


The humor must be lost in translation, then; I don't see anything resembling a joke.

> Are you suggesting that the team would deliberately crash the app to receive beers

https://en.wikipedia.org/wiki/Perverse_incentive


Yes, the cobra effect exists. Should this mean that everyone needs to stop all forms of positive reinforcement? I don't believe so.

I doubt anyone would risk a comfortable job at Twitter for a few bottles of beer. Even if they're really fancy, that's what... $20-50?

If this had been worded as "Haha, I stole the bad team's beer," I would have laughed.

However, worded as "where is my reward for being smart and stopping the cobra effect?", it's just a humblebrag, and plain unfunny.


He’s joking.
