> The first bug was that, when using the View As function to look at your
profile as another person would, the video uploader shouldn’t have actually
shown up at all. But in a very specific case, on certain types of posts that are
encouraging people to post happy birthday greetings, it did show up.
> The second bug was that this video uploader incorrectly used the single sign-on
functionality, and it generated an access token that had the permissions of
the Facebook mobile app. And that’s not the way the single sign-on
functionality is intended to be used.
> The third bug was that, when the video uploader showed up as part of View
As -- which it wouldn’t do were it not for that first bug -- and generated an
access token -- which, again, it wouldn’t do except for that second bug -- it
generated the access token not for you as the viewer, but for the user you
are looking up.
> It’s the combination of those three bugs that became a vulnerability. Now,
this was discovered by attackers. Those attackers then, in order to run this
attack, needed not just to find this vulnerability, but they needed to get an
access token and then to pivot on that access token to other accounts and then
look up other users in order to get further access tokens.
> This is the vulnerability that we fixed yesterday, on Thursday, and we’re
resetting all of those access tokens to protect the security of people’s accounts,
so that any access tokens that may have been taken are no longer usable.
> This is what is also causing people to be logged out of Facebook, to protect
their accounts.
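Read as code, the chain might look like the following sketch. Everything here is invented for illustration -- these are not Facebook's actual function names or architecture, just a minimal model of how three individually minor bugs compose into an account-takeover primitive:

```python
# Hypothetical sketch of the three-bug chain. All names are invented.

def mint_sso_token(user, scopes):
    """Bug 2: single sign-on minted full mobile-app-scoped tokens."""
    return {"user": user, "scopes": scopes}

def render_profile_view_as(viewer, profile_owner, post_type):
    widgets = []
    # Bug 1: the video uploader should never render inside "View As",
    # but leaked through for birthday-greeting style posts.
    if post_type == "birthday_greeting":
        widgets.append("video_uploader")

    token = None
    if "video_uploader" in widgets:
        # Bug 3: the token is minted for the profile being VIEWED,
        # not for the person doing the viewing. The correct call
        # would have passed `viewer` instead of `profile_owner`.
        token = mint_sso_token(profile_owner, scopes=["mobile_app_full"])
    return widgets, token

# An attacker viewing the victim's profile walks away with a
# full-scope token belonging to the victim.
widgets, token = render_profile_view_as(
    viewer="attacker", profile_owner="victim", post_type="birthday_greeting"
)
print(token["user"])  # -> victim
```

Pivoting, as described above, is then just repeating this with the stolen token's identity as the new viewer.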
Is it just me or does this sound like a terrible idea in the first place? Guess we can't know for sure, but why would anything unrelated to authentication generate access tokens?
That said, I've heard stories of similar bugs in the industry. The difference was that they were more shallow in the effort to reproduce; deep enough to get through QA but discovered quickly in production.
But honestly, Facebook has more resources to spend on security than any online bank. Banking security should be defense-in-depth:
- Strong first-layer security
- Serious monitoring of suspicious activity, and openness to reports from users
- A certain level of manual approval for irrevocable transfers
- Some revocability for transfers that can be processed automatically
- Transfer size limits, so that no single breach can have huge consequences
And finally, a credible economic and legal system that ensures only a tiny minority of people want to rob a bank because there are much better options for making money, and banking regulations that leave the responsibility for security vulnerabilities squarely with the bank's shareholders.
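Those layers could be sketched as a toy pipeline; the thresholds, names, and policy outcomes below are all invented for illustration, not a real banking system:

```python
# Illustrative-only sketch of layered transfer checks.

PER_TRANSFER_LIMIT = 10_000          # caps the blast radius of any one breach
MANUAL_APPROVAL_THRESHOLD = 2_000    # irrevocable transfers above this queue for a human

def process_transfer(amount, irrevocable, flagged_suspicious):
    if amount > PER_TRANSFER_LIMIT:
        return "rejected"                 # size limit: hard stop
    if flagged_suspicious:
        return "held_for_review"          # monitoring layer
    if irrevocable and amount > MANUAL_APPROVAL_THRESHOLD:
        return "pending_manual_approval"  # human in the loop
    # Small, automatically processed transfers stay revocable for a window,
    # so a breach discovered later can still be unwound.
    return "processed_revocable"

print(process_transfer(500, irrevocable=False, flagged_suspicious=False))
```

No single layer has to be perfect; an attacker who beats the first layer still faces the limit, the review queue, and revocability.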
Anyone can be owned with enough effort, so it's not just about creating software that's as secure as you can make it. You need to have sound policies as well.
It's so bad that for certain systems we check the origin of your connection and will only trust you if you've come from the DMZ rather than internal.
With that said, this is a bigger vulnerability precisely because Facebook is a free service - at a bank, you need to be a customer with a real-world identity to even begin to attempt an exploit like this.
No effort needed, if you click your Facebook bookmark, or follow a link or whatever, the browser goes "Oh, this is Facebook" and traps it inside the box with the rest of Facebook without any extra steps from the user. There's a cute blue "Facebook" icon added to the URL bar so you can see it's working.
(I mean, or, stop using Facebook, but for many that isn't a reasonable option)
Set your preferences to show posts of your native language only, start poking around the timelines, and follow people who post something interesting. Follow, boost, reply, it only takes a few days before you have plenty of interesting content in your feed.
There's zero chance on Mastodon that you'll get caught up in a gigantic data breach like this. Probably less chance you get caught up in any kind of breach -- it's too obscure to be a target, plus the code is open source so many eyes on it, etc.
And you'll enjoy these guaranteed benefits, as well:
- No longer subject to the most sophisticated data vacuuming adtech in the world
- If you get bored/annoyed you can just take a break from Mastodon because it doesn't own your life the way Facebook tries to
Security through obscurity...
Open source != secure. I can guarantee that a hell of a lot more folks with a lot of security expertise have combed through the fb codebase than Mastodon.
...is not a solution by itself but is a perfectly valid part of a defense in depth strategy, for example running SSH on a port other than the default is a common and good practice.
> I can guarantee that a hell of a lot more folks with a lot of security expertise have combed through the fb codebase than Mastodon.
This is the same argument Microsoft always made in defense of Windows security back in the XP era. "We hire the best experts in the world so Windows must be fantastically secure." And Windows security turned out to be a train wreck. Now in Microsoft's defense it has improved considerably over the years, but Windows desktops still get owned far more often than Linux desktops do, for a reason that would probably apply to Mastodon today as well: not that many people use it, so it is not nearly as common a target for exploits.
I don't think I deserved downvotes for making these points btw, that button is way overused on HN.
This really depends on what kind of target you are. Are you a random person on the internet? Then making yourself a smaller target by using obscure services might help. Are you someone with sufficient value for a spear phishing attack? Not so much. “Sufficient value” might just be “you slighted the wrong person on the internet.”
There are also a lot of trade-offs involved, some of them less than obvious. For example, Mastodon servers may be run by a person or team whose trustworthiness is harder to evaluate than Facebook's. The server you're on might be run by well-meaning but incompetent people. The server you're on might have one participant who is a target of sufficient value for spear phishing, and your data might be taken and leaked just to obscure the real target.
There was a time when you could read other people's chats using this feature.
As every feature on FB needs to take "View as" into account when handling their own permissions, a lot of developers on FB's payroll get a chance to f'up. We are all humans, so the probability of this happening is very high. The impact (for the users) is also high, given that it's automated and concerns every user on FB equally.
When dealing with a very probable, high impact risk in a software project, considerable additional effort is warranted to mitigate that risk: in this case maybe taint checking and additional implementations of the same feature in different programming paradigms, to ensure the system is fail-stop.
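A fail-stop guard along those lines might look like the following sketch, where the effective viewer is computed through two independently written paths and the request is aborted on any disagreement; all names are invented:

```python
# Sketch of a fail-stop guard: derive the effective viewer via two
# independent implementations (ideally written by different teams, or in
# different paradigms) and refuse to serve anything if they disagree.

class VisibilityDisagreement(Exception):
    pass

def effective_viewer_primary(session):
    # Path 1: "View As" overrides the session identity for rendering.
    return session.get("view_as") or session["user"]

def effective_viewer_shadow(session):
    # Path 2: the same rule, independently restated.
    if session.get("view_as") is not None:
        return session["view_as"]
    return session["user"]

def resolve_viewer_fail_stop(session):
    a = effective_viewer_primary(session)
    b = effective_viewer_shadow(session)
    if a != b:
        # Fail stop: better to serve nothing than to serve as the wrong user.
        raise VisibilityDisagreement(f"{a!r} != {b!r}")
    return a

print(resolve_viewer_fail_stop({"user": "alice", "view_as": "bob"}))  # -> bob
```

The cost is real (two implementations to maintain), which is exactly the kind of spend the comment above argues Facebook should have been willing to make here.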
But in contrast to airlines and railways, the interests of FB and their users are not aligned. For Facebook, this risk is not (or was not deemed to be) high impact, so we didn't get any of this.
If they'd done a proper post mortem, corrected the fundamental issue, and made sure it couldn't recur, this should not have happened.
Nowhere on that banner does Facebook make it clear that there was recently a severe security issue that may have resulted in the loss of personal user information (Making it much less likely for the user to actually click 'Learn More'). It's misleading to title this with just "An Important Security Update" and make it seem like they've just updated their systems. No mention of the recent compromise until you click 'Learn More'.
Which is the issue at hand.
Also, it's "100%" the banner I saw for several days last week. Make of that what you will. I didn't click it so I don't know what it pointed towards.
I just checked fb, and went quickly to my first notification. I didn't really register what the banner was until maybe a second after it loaded - at which point I had already clicked on my first notification. By that point, the banner is gone forever. I can't find any way to get it back.
It's so, so easy to miss this message.
There was a conference call with reporters about the subject, so the public press release was not the first the NYT knew about it. They likely had an embargo agreement.
It's often in the interest of the reporter to agree to stuff like this since publishing security issues ahead of time can have serious negative consequences.
CNN many years ago accidentally left some of their pre-written obituaries for (living) world figures publically accessible. https://en.wikipedia.org/wiki/List_of_premature_obituaries#T...
It's not uncommon. It's how you build "friendly" relationships with the media. I scratch your back, you scratch mine.
Actually, not "common" at all.
Obituaries for famous people are often done in advance, since everyone dies. It used to be one of the things that young journalists/interns did to cut their teeth.
But not every company has a massive security breach, so this was not pre-written.
It's not uncommon for big companies to fax (yes, fax) bad news to news organizations a few hours or days before posting it on their own web sites.
In the past, there would be embargoes on the information, but in the case of bad news, those are routinely ignored.
You should probably get on that.
Source: I spent years at a national PR agency
I suspected there was a breach of some sort when my tokens expired in three places simultaneously this morning. The first thing I did was search Google News; nothing had been written yet. I wasn't sure they would ever announce it - probably depends on the scale.
Facebook wrote it. They called their friend at NYT and handed over the article - then mentioned they would be sharing it with other outlets later. [just my guess].
Noam Chomsky wrote Manufacturing Consent decades ago.
Read, you fools!
(No, it's not that it was just two devices - I had to log in four times just on my phone: once for Messenger and once for FB itself, during each of two occurrences.)
I am totally baffled in the post-fact world.
oh boy, what a mess.
Well, when it doesn't have a security hole.
The fact of the matter is... ACLs are hard to get right. It's even harder when you have various roles that can be checked against the ACL (logged in user, batch job, logged in user impersonating someone, etc.) . But in the end, complexity is what's scary, not some feature that depends on complexity.
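A toy example of why the role question is subtle; every name here is invented, and the point is only that the same "which principal?" decision is correct for one operation and disastrous for another:

```python
# Toy ACL check, default-deny. The subtle part is deciding WHICH identity
# to check: the logged-in user, a batch job's service identity, or the
# impersonation target. Picking the wrong one is exactly this class of bug.

ACL = {
    "profile:alice": {"alice", "svc:backup"},
}

def principal_for(ctx):
    if ctx.get("batch_job"):
        return "svc:" + ctx["batch_job"]
    if ctx.get("impersonating"):
        # For a "view as" read, check against the impersonated user so the
        # impersonator sees no more than that user would.
        return ctx["impersonating"]
    return ctx["user"]

def can_read(ctx, resource):
    # Default-deny: absence from the ACL means no access.
    return principal_for(ctx) in ACL.get(resource, set())

print(can_read({"user": "alice"}, "profile:alice"))    # True
print(can_read({"user": "mallory"}, "profile:alice"))  # False
# True, and correct, for a "view as" READ -- but reusing principal_for()
# to decide whose TOKEN to mint would hand mallory alice's credentials:
print(can_read({"user": "mallory", "impersonating": "alice"}, "profile:alice"))
```

The same helper is right for rendering and catastrophically wrong for credential issuance, which is why the complexity, not any single feature, is the scary part.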
This sounds similar to different distros of linux. Some are security focused where nothing is allowed until it is explicitly allowed. Other distros try to be more "user-friendly" and pretty much everything is open.
Starting from a wide-open starting point and then trying to batten down the hatches afterwards does seem to be the harder way to do it, but that's exactly where FB is. They wanted everything open, and then had to decide to start limiting that data. FB was designed as a place to share info. If you posted it, you wanted to share it. I totally get that mentality.

However, as devs, I imagine we have all built something that end users used in a way we didn't envision, and we've probably all had "you're holding it wrong" lines of thinking. Once you get to that point, you can alienate users by telling them to stop doing it that way, or embrace what's happening and make it work for them. Seems like the perfect situation for bugs to get introduced.
(Not to mention it's not really user impersonation, it's just filtering your profile page based on computed access level of one of your friends.)
I guess even the best (at secure coding) sometimes mess up.
Have Amazon, Google, Twitter, Microsoft or Apple been on haveibeenpwned? That’s what I think of when I hear “big names”.
My understanding of the "get promoted or leave" thing is "engineers hired as juniors are expected to get to mid-level in under 5 years (with a half-way milestone at 2 years)"; once you're mid-level it's up to you if you want to carry on climbing. Personally once I got there I switched to a "work more efficiently in fewer hours and keep the same overall productivity" approach instead of trying to get promoted into the senior levels, and that's worked out nicely so far :)
Train me, please.
Unfortunately, the number of lazy people far outweighs the number of hard workers.
It’s not a great practice in my opinion. But in practice only a small percentage of engineers fail to make the grade.
> What, exactly, is wrong with the expectation that people make senior level eventually?
The problem is when you base too much on promotion systems and performance reviews, which end up as a form of bias and favoritism that doesn't closely approximate the truth. Some people are doing useful work for you (like cleaning up after the people you think are the high performers) that does not surface there, and when you crap on them, pass them up, bust their morale, make them afraid of their next review, etc., you risk losing their valuable contributions.
Modus operandi in these companies is to rewrite/reintroduce whole products instead of fixing bugs left by already-discarded people. So if you lose a critical mass of worn-out, higher-paid contributors, you just make a V2 or introduce a new product with a completely fresh team that will get discarded after another 3 years. This requires a fresh supply of motivated and hungry people willing to make sacrifices, and a much smaller number of people willing to exploit that.
I can't believe people put up with this. I really hope you got paid for that time.
You see, the parent poster said:
> They got pushed to check-in code at 12a.m. for example.
This is ENTIRELY different from you, overly excited about some project, deciding to work late and pushing code at 12am of your own accord. There's absolutely nothing wrong with that.
Now, if you are EXPECTED to do it, outside major emergencies, then you have a problem.
Just because we have more "stuff" and more advanced "technology" doesn't make life more worth living. Happiness levels across society don't increase alongside productivity.
Okay. That's your choice. But having made this choice, don't complain when those of us who choose to devote more time to work receive greater rewards. There's nothing wrong with paying for performance.
I can’t stop my bosses from judging based on time spent working (which is silly, but hey, we’re all human), but I sure can try to stop my coworkers from subscribing to such insane work hours.
In fact, I need permission from my manager's manager's manager in order to stay past 7pm.
This company believes in a strong work-life balance, and this is one of the ways it achieves this.
Also, it "changes the world" in good ways, not by "connecting people" through bogus data siphoning addiction traps.
Personally I strongly prefer no fixed working hours. If you want to work at night, so that you can do things when it’s light out (especially in winter), and you still get the expected results, what’s wrong with that?
Also, lone wolves working at night are harder to manage and communicate with.
Probably not fired. But the interior motion sensor alarms go on automatically at 7pm, which would probably alert the security guards that roam the campus.
When I first started, I came in too early once and set off the alarms. People were nice about it, but I was super embarrassed because I was a n00b.
I worked at a place like that once. When I was hired I was told I could make my own hours. I prefer to work early mornings, so some days I came in long before anyone else. A couple of times around 3am. But I always worked at least eight hours, and often more.
In my exit interview, my supervisor was rabid about how I wasn't a good fit because I "come and go as [you] please." She was so full of crap about other allegations against me that I didn't even have a chance to bring up that making my own hours was part of my employment deal.
Yes my company benefits from it, but so do I. For instance, given a choice of trying to come up with an idea to learn about a feature of AWS and pay money for the resources I use, and take advantage of my work AWS (Dev) account where I am an admin, I would rather do a work related project where I have the resources and I don’t have to come up with an idea and I don’t have to pay for it.
What I don’t do is “signal”. I don’t stay at work late, I don’t send emails out after hours, and I pushback if they give me unreasonable deadlines.
On the other hand, say it would take me 50 hours and I knew I would have to work on the weekend because I’m not as experienced, but I thought I could still have it done by Monday.
I might be willing to volunteer, knowing it would take me longer but it would also be done on time. That extra 20 hours, I'm still working and committing code while trying to figure out the framework. I wouldn't have a problem doing that because I am learning a new skill.
But, I wouldn’t work weekends to finish a project because I was given an unrealistic deadline.
The first scenario, the extra 20 hours benefits me and the company. The second, it just benefits the company.
Take those excited geniuses and have them work on preventing climate change from ruining all life on earth, instead of inventing new ways to profit off of people’s data.
If Facebook's a grind, then that's something the employee has to figure out.
we're talking about Facebook here
If you've heard about the NCIX story, where they basically abandoned their servers filled with users' data (over 13 years of it) and someone scooped them up and tried to resell them on the black market, one could think that a similar fate is possible.
Source: https://www.privacyfly.com/articles/ncix_breach/
Obviously if Facebook was going under it would probably trigger a huge legal process on how to handle the data but it clearly doesn't happen for smaller businesses...
Your data is their primary asset.
And it won't matter because the data will be rolled over to drive ads on Instagram and Snap and other attention-properties.
Facebook is the IBM of social media. It's too big to die, and too big to do anything good.
Interestingly, Facebook owns your data. I believe if they wanted to, they could close the company tomorrow and put a facebook.tar.xz of everything they collected on archive.org or somewhere else.
At least, that's what's written in the ToS, or something to that effect.
(Except if a European office of Facebook did it, then the nationality doesn't matter.)
I personally did not get any explanation as to why I had to log back in. It did surprise me to be logged out this morning and was wondering why.
It linked here: https://www.facebook.com/help/2687943754764396?ref=comms
I bet they're now really regretting keeping it around.
Did anyone else experience anything like that?
But does this mean most of the time that there was no active access token and she is mostly safe? (Excluding the windows of time where she was actively using FB) Do I have to take back all of my teasing?
They also disabled "View As" which is the actual fix for the time being.
Obviously, Facebook is an extremely complicated system. But I find it hard to believe a video uploading feature would impact 'View As'.
It's intuitively straightforward that modifying code for uploading videos could (read: not should) have authorization and authentication ramifications. One of those ramifications could then result in a vulnerability chain compromising user impersonation functionality.
I have seen far, far more bewildering head-scratchers in penetration tests and code reviews. The interaction boundary between, or middleware connecting, two seemingly unrelated systems is generally a good place to start looking for a security vulnerability.
I get this part. But why would it affect only videos and not other entities (photos, status etc.)? I would think creating (or uploading) any of the entities have the same authorization and authentication ramifications. What could be different for videos? Unless the privacy models are so fine grained that you can have different privacy settings for different entities (haven't used Facebook in years, so I don't really know). Your explanation makes sense, I'm just looking for a concrete example.
The problem is often that there are multiple sources of truth for who the user is. And if you have an impersonation feature, you by definition have two sources of truth: who the user actually is, and who the user is impersonating. It would just be a matter of a single mistake of using the wrong one.
Considering that "view as" requires your page view to render every control as the impersonated user but only when it comes to your profile, but renders all controls outside of your profile as the original user, I could see any engineering team dealing with some very carefully drawn and potentially confusing boundary cases.
Edit: just to elaborate, it's not just obvious impersonation contexts where this gets interesting. For example, linking your Humble Bundle account to your Steam account, or on Netflix which user you are vs. which email address is being billed. Many apps have a function to share some document using a one-time expiring token. If you're also logged in, then do you read permissions from the shared token or from your account? If you mix them, do you make sure anything that writes to this shared view can't touch your account itself on accident? We don't think about it much but I think you can see how these subtle distinctions are important when you are thinking about access control, and that makes it a breeding ground for subtle mistakes.
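The shared-token case can be sketched like this (entirely hypothetical names; the point is that reads are scoped by the token while writes must still be scoped by the account):

```python
# Hypothetical sketch of the shared-document pitfall: a request can carry
# both a logged-in session and a one-time share token. Reads should be
# authorized purely from the token; the bug is letting the token's context
# bleed into account-scoped writes, or vice versa.

DOCS = {"doc1": "quarterly numbers"}

def read_shared_doc(share_token, session=None):
    # Correct: authorize the read from the token alone, regardless of
    # who (if anyone) is logged in. `session` is deliberately ignored.
    if share_token.get("doc") in DOCS and not share_token.get("expired"):
        return DOCS[share_token["doc"]]
    raise PermissionError("invalid share token")

def write_doc(doc_id, body, session, share_token=None):
    # Correct: writes must come from the ACCOUNT, never from a read-only
    # share token. A buggy version might accept `share_token` as proof.
    if session is None or doc_id not in session.get("owned_docs", set()):
        raise PermissionError("account lacks write access")
    DOCS[doc_id] = body

print(read_shared_doc({"doc": "doc1"}))  # works with no session at all
```

Keeping the two authority sources in separate code paths makes it harder for a later change to quietly check the wrong one, which is the "single mistake" failure mode described above.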
Along with React, GraphQL and a bunch of other technologies with various degrees of popularity https://opensource.fb.com
Along with various startups building around the projects incubated at Facebook - Asana, Interana, Phacility, Qubole, etc.
You don't think those technologies could have been developed by people at ethical companies, or even by the same people at ethical companies?
There's also financial support for building a community around improving the tech, by encouraging outside contributions via meetups, conferences, social events, better technical documentation, etc.
At a smaller-scale startup, an engineer is surely welcome to work on a skunkworks project, but justifying an expensive large-scale architectural undertaking on the company's dime is problematic. Especially if a quicker fix is available and buys the company a chance to kick the problem down the road.
With that said, it's not impossible to build a major popular piece of technology within a small company (Joyent and Node.js being a good example), it's just harder.
This discussion is mostly irrelevant to the fact that this particular company is completely reckless and unethical. The technology they accidentally produce while building a dystopia to make people click on ads does not justify anything.
Where B is the sum of the set consisting of:
-Breaking democracy in the US and the UK by being _the_ platform for disinformation.
-Disinformation assisting genocide in Myanmar.
-Use correlating strongly with poor mental health
-Manipulating behaviour to encourage poor attention spans for the sake of ad-clicking
-Constantly violating basic standards of privacy
-(I could go on..)
Oh wait, excuse my arithmetic. I forgot to add another JS framework like Relay to the LHS of the equation, that makes it a net positive from Facebook! :D
It is a problem inherent in the structure of most social media companies. And Facebook is the most significant social media company, and thus contributor to the problem.
Blaming facebook for "breaking" democracy in the US and the UK is ridiculous. I can't understand how this can continue being a claim remotely considered valid. I agree (or may agree, at least in part) on some of the other points, but not on this.
Claiming that Trump won just because of the russians putting ads on facebook is at least naive - and ignores the fears/actual issues a very big* part of the US population experience daily. Isn't failing public schooling a problem there also? Does that give us citizen more or less prepared to actually participate in democracy?
Politicians (of all sides) in the UK have accused the EU of being the root of all evil since they "joined", again and again and again: you lost your job? Blame the EU! We can't cut taxes? Blame the EU! You really want to blame facebook and NOT the politicians themselves because people voted for brexit?
If the Russians tried to manipulate (and for sure they did, oh gosh, I'm pretty sure the US and the EU states never do - or did - anything to manipulate elections abroad! Evil Putin, why you do this to us? :cry:) we rolled out the red carpet for them!
Democracy was broken because actual journalists did not do their job. Stop doing what they (may) want you to do, using social media as a scapegoat for their own (sometimes willing, for sure, at least if you read what Chomsky has to say) MASSIVE failure to be the "champions of truth" they claim (and blindly believe - I worked in somewhat close contact with them for years, I've seen it) to be.
Also, a bunch of recruiting venues exploited by Facebook are not that accessible to smaller startups.
E.g. one of the top previous employers for Facebook employees was Google (or some other outfit within Alphabet group, like YouTube). Most likely those people would've stayed at Google.
Another hiring source was university recruiting, which involves participating at job fairs at various universities, exhaustive days of back-to-back interviews, flying candidates for on-campus interviews, and eventually covering relocation costs (and potentially visas and immigration paperwork) for someone moving from Pittsburgh, Waterloo or Romania.
Would a smaller startup have the financial oomph to run a similar recruiting pipeline?
I know lots of people who feel they get and have got tremendous practical benefit from Facebook. It isn't "addictive" unless you use that term to mean anything some people make that other people enjoy.
"Our results showed that overall, the use of Facebook was negatively associated with well-being."
Naturally, even if this study is accurate it isn't definitive; the causation could go in the other direction, that the unhappy use Facebook more often than the contented. But it's still quite suggestive.
I saw this study referenced from this article:
So, yes, I believe they are trying to corner the market on the best programmers.
They may pay more, but they colluded to make sure people couldn't leave without going far outside the Bay. That's a monopolistic trait.
I agree they weren't putting a gun to people's heads but they were making the environment less available.
I hate that this has happened. The Bay Area used to be a place where working for the big, shiny company that makes your parents happy wasn't prestigious. It was safe. But taking a risk and starting something new was admired. The present state of affairs reminds me of Wall Street.
The tech industry, despite its shortcomings, is vastly superior to Wall Street in that regard. It's still a meritocracy above all else.
Plenty of smart people break into tech after doing something else for a few years. If you want to go into investment banking, you'd better come from consulting or have already been working in finance. Otherwise your last bastion of hope is to get an MBA and then join the rat race.
Outside the tech community, probably Amazon, Microsoft, and Instagram (most people don't know Facebook owns Instagram).
Are you saying the latter two don't do that?
Trusting these entities based on their noble intentions today makes no sense to me if there's no legal agreement or regulation to restrain them tomorrow, when they get desperate.
No, not at all. Their positive reputation was in many ways unearned, and it's a good thing to be glad that their own actions and attitudes are finally catching up with them.