Seems like the real point here is that this isn't really about Twitter, but about industry security in general. Which was my impression when I skimmed the initial whistleblower complaint. It clearly wasn't great, and "Twitter’s leadership was [...] apathetic, ineffectual, or dishonest" certainly resonated with my short stint there. But I've heard worse stories from bigger, more important places than Twitter.
Great question! And for me there are two kinds of good in partial conflict here: good for security, and good for getting things done. Having done contracts for places where I needed multiple levels of approval to install anything on my developer machine, both are important to me.
Banks have excellent security. But often the restrictions are so onerous that productivity suffers.
One of the banks I recently worked at flipped the model from heavily restricting what users and systems can do to allowing them to do anything (including admin access for developers) and aggressively monitoring them.
In comparison my current bank needed 4 weeks and manager approvals to get a mouse driver installed.
From the article "Nevertheless, the presence of Shadow IT in an organization speaks to some larger problems: (1) Lack of effective communication from security and IT teams about security risks (2) Employees who feel they aren’t given the tools they need and with no clear way to ask for them (3) No visibility into endpoints to reveal the presence of unapproved tools"
I have yet to work for a company that has IT security and IT management in the same org.
Which is insane.
Why are companies empowering someone to say "No" (CISO), without requiring that same individual to justify not saying "Yes"?
If a user requests a tool, IT sec/management should be leaning in and asking "What can we provide you with that will satisfy this need?"
Instead, and I assume my experience generalizes to everyone who's worked in regulated tech, the response is "No, that's not approved" and ends the conversation.
For the same reason that companies don't have their internal audit in the same org as their accountants, or their financial controllers under business unit managers: inherent conflict of interest.
The job that the security org is doing for the board/owners/etc. is controlling risk. The board doesn't just trust it when the IT org says "everything is totally fine", and considers it likely that if the decision about whether to implement some activity is left purely to the IT org, the convenience of implementation will override the level of risk control the board/owners desire. So they choose to put these controls in a separate organization, empowering someone to say "Not until you do all this unwelcome work to implement it to the standards the company has chosen", preventing the many mid-managers from saving effort or costs by cutting corners at the expense of risks to "someone else's" money.
Of course, having these functions together is much more efficient in many ways! But the principal-agent problem is very real, especially so in large organizations, so that's why these choices get made this way, designing organizational structures that act as checks and balances on each other, not expecting everyone to magically cooperate for the greater good.
> For the same reason that companies don't have their internal audit in the same org as their accountants, or their financial controllers under business unit managers: inherent conflict of interest.
That's bogus. Having the policies be in the same org as the provider just makes sense. You can run a separate security auditing department if you're keen to do that, nothing is stopping you. You can have independent oversight while you don't totally hamstring your organization.
Those auditors aren't also the ones approving expenditures, are they?
My thoughts exactly. I noticed a pattern in my personal behavior of “can I install an unapproved developer tool early in the onboarding process”. Even on enterprise machines in IT departments of firms you’ve certainly heard of, I’ve found that I can get away with my portable install / side loaded browser plug-in every single time.
This is just my rambling thoughts, not sure if I have a main point, but realizing this pattern has certainly contributed to my personal sense of disillusionment.
> Banks have excellent security. But often the restrictions are so onerous that productivity suffers.
I once worked a very short stint as an external dev at a private bank (6 weeks).
The side entrance we used had a revolving door-style airlock with enough space for a single person to stand, protected by a card reader using unlabelled RFID-style cards.
This was obviously to ensure that for every person entering, there was exactly 1 card swipe, no one could hold the door for anyone else, etc. It was real claustrophobic in there, to the point where it was impossible to step through the revolving door normally; you had to do stutter steps while the door revolved around you.
So obviously this airlock was broken at least 60% of the time. Its replacement was a normal door next to it, which was left completely open instead.
If by good you mean: the company would have no problem advertising the number the CISO knows for how much it would cost to completely invalidate their security, and their customers and investors would be happy, or at least non-livid, if they were told; then there are none in the Fortune 500.
If by good you mean: somewhere where that number is more than $1M, the smallest unit that appears on their 10-Ks (which usually use millions as their smallest unit), then probably none in the Fortune 500. If you raise it to $10M, then without a doubt there are none.
If by good you mean: "better" than other companies, but still trapped in a valley with the horde of hungry bears faster than them, then who cares, the bears are still going to eat them soon.
I worked for a company that had ALL those practices in place PLUS required annual security training PLUS they ran simulated phishing attacks to see if people would report incidents.
It's not like Twitter has sensitive personal or financial information to lose. I'm sure companies that store SSNs or financial records do a much better job /s
IT security in the healthcare and financial sectors is now generally pretty good. After a few high profile incidents, everyone is scared and systems are a lot more locked down on average. Although you can still find problems on occasion, mostly in smaller organizations.
But why should we even worry about security on social media sites? I really have zero sympathy for people who upload private data to those companies and then complain when it gets hacked. What the hell did they expect? Twitter was never under any legal or contractual requirement to provide good security for user data.
They shouldn't have to be legally obligated; having decent security is a simple business practicality.
Private data can be something as simple as DMs between people, which if leaked can cause plenty of trouble (an example that comes to mind is streamers having to deal with a lot of drama from their fanbase because leaks revealed they were acquainted with a streamer of the opposite gender).
On top of that with so many government officials and company CEOs on the platform it should be pretty obvious why access to the backend should be carefully controlled. There already was that incident a few years ago where someone got access to the backend via social engineering and tweeted out crypto scams from high profile accounts like Musk, Biden, Bezos, Apple etc.
1) I don't think having a single mono-repo and everyone having access is a major security concern.
2) Employees should only run corporate approved software.
Considering he only worked there for 8 months, I question how credible his information is.
I'm not reading the report as stating that everyone having access to the whole mono-repo is a security concern, but rather that the majority of people had unrestricted access to production systems. It's one thing to be able to read all of the code, it's entirely another thing to be able to deploy arbitrary changes to production without a review.
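For contrast, the usual mitigation here is cheap: a review gate on the deploy path. A minimal sketch of how that looks with GitHub-style CODEOWNERS plus branch protection (the team names are hypothetical, not anything from the report):

    # CODEOWNERS: nothing merges to protected branches without
    # sign-off from the owning team
    /deploy/          @acme/release-eng
    /services/auth/   @acme/auth-team @acme/security
    *                 @acme/eng-leads

    # Pair with a branch protection rule on the main branch:
    #   - require approving review from code owners
    #   - dismiss stale approvals on new commits
    #   - restrict direct pushes

None of that stops someone with raw production credentials, of course, which is why the complaint's point about unrestricted prod access is the scarier one.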
Given the amount of money involved in this issue between Twitter and Musk, and the general trend by large organizations and governments to hire PR firms to advance narratives in the court of public opinion, I wonder if this "blog post" is just that - part of a hit piece sanctioned by the Musk team?
There isn't anything groundbreaking security-wise in this article, and the headline is definitely a lot of trumped-up hyperbole.
The blog "take" could be about any tech or Fortune 500 company.
The irony here, I feel, is that you are openly wondering about narratives seemingly as part of an effort to build a narrative. Literally every sentence you have written could also be viewed in that light.
PR firms try to shape the narrative by whatever means possible. Big publications are often better for that, but it's not like they are above blog posts.
Once you've SEO'd the PR into enough blogs, the news just needs a little tip to run the PR as an emerging-trend piece. Once you get the trend publication, the PR can get into Wikipedia and become disagreement-settling science.
Hey, the tinfoil can go way deeper than that. Mudge brought up these complaints internally in January of this year, which is right when Musk started buying lots of Twitter stock.
I don't think they posit that they have new information; they just go through the existing whistleblower complaint and highlight why it's such a big deal. Also there's no reason to think this is a hit piece, and there's certainly no evidence for it.
In reading Mudge's complaint, it really paints the Twitter leadership (esp. Agrawal) as simply not caring about security enough to do anything about it. Instead you had an org with massive amounts of technical and operational debt, and leadership not willing to invest in it. There are always tradeoffs between fixing technical debt and building new features. Twitter leadership chose to ignore (and to some extent, hide) the problem rather than invest. They certainly aren't unique in having a security plan that is built around hope.
Engineers having full control over their dev machines up to and including preventing system updates is not ideal; but not out of the norm for tech. Poor data access controls, and out of date server fleets (where I'd expect updates to be pretty automated) are far more worrying to me.
I wonder if Mudge was fired for, basically, being too good at his job. He didn't toe the CEO's line, and was pointing out how the house was on fire, which is not what Agrawal wanted to hear (maybe Dorsey wanted to hear it when he hired Mudge, but Agrawal had different ideas). I suspect that most people who make it to the ultra-high-level "Head of X" roles are hired more for their organizational/social talents, which oftentimes involve capitulating to those more powerful/higher on the food chain, rather than being actually talented at X. Mudge actually has the bona fides for the role, which is why he got fired (I'm guessing).
It's worth noting that being good at IT Security is in huge part a function of your soft skills: you have to be able to sell security to the org, because your job is to make the work happen, not just to identify it and complain that it needs to be done
any amateur can run some automated scanners and issue security diktats to the rest of the organization
I mean...Twitter hired him as head of security. They ostensibly already cared about security. Or, at least Dorsey did, maybe Agrawal didn't. I suspect he wanted a yes man to offer some minor changes and say "Yup, everything's secure here". Before this, Mudge was facilitating the NSC in ultra high level briefings to provide accurate reports to POTUS. I suspect you don't end up in that position without some strong soft skills. But, as strong as they are, you can't convince someone who doesn't want convincing.
the head of security is responsible for getting buy-in from the organization on security measures, that's what makes them the head
"you can't convince someone who doesn't want convincing" is also a weak cop-out that would be totally unacceptable as an attitude of the head of anything. As head of IT Security, part of your JOB is convincing people who aren't convinced (easily played off as 'they don't want convincing' by people who fail to convince them)
if a head of IT Security came to me as a CEO and lamented "the organization isn't doing what I tell them to do", I feel like an appropriate question is, "what do you plan to do about it?" or "what options do you have in mind to get them to?" Every CEO knows security is a pain, they hire executives in order to delegate pains away
being supportive of an endeavor doesn't mean being okay with your executives laying key parts of their own job description (remember, it's the CISO's job to get buy-in, not the CEO's) at your feet and telling you that it's hard to do because "some people don't want to be convinced"
in your example, the CEO might continue to listen while the head of security explains why it's worth more than that 30% loss to secure the systems
examples might include the cost of lawsuits, the cost of regulatory action, the risk of actual harm to people (customers or otherwise), the cost of reputational damage, etc... security has to economically justify its internal projects just like every other department does
Ok, and the CEO still isn't convinced, because he knows he will be fired and his lifetime earnings potential and reputation will be greatly diminished if the stock dumps like that, regardless of the reason.
Is that still the failure of head of security?
In this scenario, I feel like you've only left room for head of security failure and not CEO failure. Maybe I did the opposite, but it's based on mudge's long track record. Agrawal doesn't really have a track record outside of being promoted at near record pace to CEO in a company.
If the CEO's personal success is appropriately tied to the company's success, the CEO will be, for the most part, incentivized to do what's best for the company
if you don't have a benefit that outweighs the stock dumping like that (in other words, in the CEO's opinion, is the probability of bad stuff happening, multiplied by the downside of it happening, greater than that 30% drop?) then your proposal simply isn't something that should be done
that's not to say the CEO hasn't failed by hiring an executive who can't do their job when it requires soft skills and persuasion
What's good for Twitter the company and Twitter the stockholders is not necessarily what is good for Twitter users. Security breaches negatively affect the users whose data is breached. They only affect the company if it takes a reputational hit because the breach was announced. But will India announce that they placed an insider in Twitter with access to all sorts of user data? Probably not. Will people swept up by India's secret police know that it was Twitter that ratted them out? Probably not.
Let's look at a CEO of a cigarette company in the 1940s. The head of health comes to him with strong evidence that cigarettes cause lung cancer and are slowly killing their users. What would the appropriate action for a CEO be? Or for the head of health? Is the head of health a failure if he can't convince the CEO that they shouldn't be selling cigarettes? I don't think so. Because the head of the company might care more about money than about giving people cancer, and that is his choice to make.
Yeah, maybe the company may hit some rough times later, but if the CEO just hides this report, then the CEO can keep making money, and maybe the shit won't hit the fan until the CEO is already retired or dead.
Instead of stopping the sale of tobacco and shuttering the business, the CEO fires the head of health. Then, the head of health goes to a newspaper as a whistleblower saying that tobacco causes cancer and the CEO knows about it. In what world is the head of health a failure here?
I agree that cases involving harming people are exceptional ones for which both quitting in protest and whistleblowing should be on the table, but again, those are exceptional circumstances
an analogy in ITSEC would be knowledge of an actual (not potential) ongoing user data exfiltration and hiding knowledge of that
most ITSEC scenarios are not this, but rather a failure to explain why the potential loss of doing nothing is worse than the actual loss of doing something, just like a CRO must explain why the potential loss of not entering a market is worse than the cost of entering it
> In reading Mudges' complaint, it really paints the Twitter leadership (esp. Agrawal) as simply not caring about security enough to do anything about it.
I've worked in 3 Fortune 250 blue chip companies. My experience is that senior management is doing just enough about security to check the boxes that the trade press -- and the consultants they say we should hire -- say we need to check to have enough legal coverage to weather a possible lawsuit.
Given that Yahoo! had their ENTIRE user database hacked, and VISA, and endless other examples of major personal data breaches, and that none of these things ever results in anything more than a slap on the wrist, I'd say that even these paltry box-checking efforts are probably a waste of money.
I don't know how this situation would be materially any different at a "FAANG" company versus a 100-year-old manufacturing company.
Definitely. Twitter seems to have not been doing a lot of standard best practices for a company of their size.
My intent was to point out that engineers having high-level access to their dev machines is pretty common in tech, not that other controls like policy enforcement are often absent as well (esp. in larger companies). Hard to know how common that is; it seems unusual at least in big tech.
I read through the actual whistleblower complaint and to be honest I was shocked as well. If true, the security on their laptops was extremely lax or nonexistent. My current and previous companies will not let me install what I want on my laptop. Even something as innocuous as Signal is not allowed and I'm too scared to push the issue. I basically assume that everything I do is being watched and keylogged which actually makes life a lot easier for me.
The idea that Twitter would allow anything to be installed is kind of shocking to be honest. It sounds like they're not taking their place of prestige in the Internet hierarchy very seriously.
The fact that Mudge couldn't lock things down after Jan 6, or that there wasn't adequate logging, etc. is also extremely shocking. I would love to hear Twitter's side of it and the reason why it wasn't a priority. Was it because Jack Dorsey had taken his eye off the ball and was a missing leader, and that led to this? Or was it Parag Agrawal's poor leadership as CTO and CEO? It certainly sounds like he was an obstacle in the way of Mudge, which doesn't sit well with me at all.
I've only had 1 employer (a DB company) so I can't speak to industry norms, but I've never had restrictions on things I could do or download on my computer. Hard for me to imagine working at a place where I need permission to download a tool or library when writing software. Seems like a huge burden.
Agree 100%, if I were not able to download the applications and tools that make me productive, I would be looking for a new job at a less employee hostile work place.
The problem is you install some random tool because you need it for a task, forget about it a few months later and then fail to keep it up to date and suddenly it exposes your device to vulnerabilities.
It’s not hostile for an employer to want to protect themselves from employee mistakes and there are ways to do this without invading an employees privacy.
I've worked at places like this (where you don't have admin access). Generally they don't mind you installing stuff that's legit or makes you more productive.
They just don't want you granting admin permissions to something from an email where you inadvertently clicked a strange link, or to some virus-laden cracking tools you shouldn't be running.
It's understandable, but super annoying. Yeah, it'd probably make me question my long-term view of the company.
Security and convenience are opposing forces. Security and reliability not so much.
My plea to all of us is: get used to it. People's peace of mind (at the least) depends on us doing our job, and this story is an example of us failing.
The age of programmers winging it should be long gone. We are professionals with responsibilities. Our jobs are not for our pleasure or satisfaction (even when they are fun and satisfying)
Would you feel the same if you were developing something intrinsically attractive to state-backed hackers like nuclear power plant control software? At some point you have to take into account hackers with the means to bribe/blackmail the developers of the "applications and tools that make you productive".
Libraries have licensing requirements that your company needs to comply with. Being able to download and use them without any oversight is a recipe for violating them.
In my experience, it’s impossible to assemble a group of 1000 programmers who are consistently able to read a license and reason correctly about whether the firm can follow it.
Do you really think everyone hired as a coder has enough intellectual property knowledge to understand all the complexities of all the various open source licenses in every potential usage?
My last large employer handled licensing and defects via periodic scans and background removals/uninstall.
There was specific software everyone could install with a service request, non-standard software could also be requested, temporary rights to usual stuff could be requested as could permanent access to those rights.
Never mind security. Maintaining a coherent approach to development strategies where anybody can "download ...[any] tool or library [they want] when writing software" is a terrifying idea. Do they let you use any computing language you want? Any OS?
While it is a burden, the point is that software needs to be audited before it is installed, and that is an even bigger burden on the company. If it isn't essential to the business, it simply isn't going to be considered. At least that is the way in many of the non-IT companies I have worked for which handle massive amounts of data on other people. In this regard, Twitter's attitude is distressing. We may think of them as an IT company, but their actual business involves handling massive amounts of data on other people. While confidentiality of that information may not be the biggest concern, since much of it is intended to be published, the authenticity of that information is.
Software generally doesn’t need to be audited by a grad in a corporate IT department who knows next to nothing about software.
We have official repos for that, why Jack-of-all-trades IT oafs think they can do a better job than distros who literally package software for a living is beyond me.
It depends on the industry. However, in a good world you would have an internal repo with all the software users need to do their job and users would just download from that.
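In practice that internal repo is often just a private package index that clients are pointed at by default; e.g. for Python, a pip config like this (the URL is a hypothetical internal mirror, not a real host):

    # /etc/pip.conf (or ~/.config/pip/pip.conf)
    [global]
    index-url = https://pypi.internal.example.com/simple

Developers keep their normal "pip install" workflow, and the security team curates what lands in the mirror.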
All of the jobs I've worked have not had any technical restrictions on what software engineers could install on their work laptops.
Most (but not all) had some sort of asset management tool and/or antivirus as standard on computers, but nearly all allowed exceptions to having it functional/installed.
I, and I expect many developers, likely select companies with IT policies that avoid potentially harming individual productivity.
If you work in financial services (banks, brokerages, hedge funds, exchanges), heath insurance, or the defense industry… your device will be locked down.
I worked at a highly regulated financial company and they gave me root in prod and permission to install anything I wanted (on any machine I wanted). However, it was clear that EVERYTHING was being tracked, including SSL MITM interception and the most invasive OS-level logging you can imagine, for compliance reasons.
Abuse of that trust was a fireable offense and someone did get fired for it.
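That "allow everything, log everything" model is easy to approximate on Linux with auditd. A minimal sketch (the rule keys and watch list are my own illustration, not that company's actual config):

    # /etc/audit/rules.d/exec.rules
    # Log every program execution (64-bit syscalls)
    -a always,exit -F arch=b64 -S execve -k exec_log
    # Watch credential and privilege files for writes/attr changes
    -w /etc/passwd -p wa -k identity
    -w /etc/sudoers -p wa -k privilege

    # Review later with, e.g.: ausearch -k exec_log

The hard part isn't collecting the logs, it's having someone actually review and act on them, which is presumably what Twitter lacked.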
I did an internship at a bank. They locked down developers' laptops to the point of being unusable.
Except all the developers just got permission to install VirtualBox, and ran another OS inside VirtualBox in which they could, and did, do whatever they wanted.
This didn't improve security over not locking down devices; it substantially hurt it. But it was also the only way anything got done.
Sadly this is often due to regulations and industry expectations.
I work in finance and we get all these audits and questionnaires from regulators, insurers, intermediaries, clients, etc. and many of them are straight out of the 90s.
For example, when we replaced the VPN with a zero trust system, we got a ton of pushback. It didn't matter that the ZT implementation provided stronger guarantees than the VPN ever did as well as doing everything the VPN did. What mattered is that it wasn't called a VPN and they were expecting me to write "Cisco VPN" or similar.
Unfortunately, developers having admin access on their machines triggers so many red flags in these processes. It doesn't matter if you install a load of auditing and remote attestation stuff; admin access is an instant ticket to bureaucratic hell.
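For anyone who hasn't run a zero-trust setup, the core idea behind the pushback above: every request carries a short-lived signed assertion of user and device identity, and every service verifies it instead of trusting the network. A rough Python sketch using PyJWT (the claim names and audience are assumptions for illustration, not any vendor's schema):

    import jwt  # PyJWT

    def authorize_request(token: str, public_key: str) -> bool:
        # Verify a per-request identity assertion instead of
        # trusting that the caller is "inside the VPN".
        try:
            claims = jwt.decode(
                token,
                public_key,
                algorithms=["RS256"],     # pin the algorithm
                audience="internal-api",  # hypothetical audience
            )
        except jwt.InvalidTokenError:
            return False
        # Hypothetical claim: device posture attested by MDM
        return claims.get("device_compliant") is True

Stronger guarantees than a VPN, but try writing "per-request signed identity assertions" in the questionnaire box that says "VPN vendor".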
I worked in health insurance for a stint and wasn't completely locked down. I was able to install brew and some libraries, BUT received a call from security to review what/why I was doing it and ultimately there were no problems.
MSFT also allows arbitrary installation of software and development systems. They do force updates, scanning, and other common security best practices.
On the machines people use for Livesite support ("Secure Access Workstations") it's a different story. Those bad-boys are locked down from the supply chain through day-to-day use.
>That’s not a diss to AWS security mind you, I’m sure it’s top of the line.
Top of the line security looks like door locks with daily changing codes and number pads with LCDs on them that scramble the order of the digits. It looks like regular searches of your effects as you leave the building, badged and guarded entrances that operate like airlocks, security that roams the building and leaves your manager a nasty note if the wrong documents are left in the open on your desk, along with a computer and network that actually won't let you install software. This was my first job and it felt normal.
There is a place for that level of security, but twitter is not it.
Good risk management means knowing when to accept a risk. Twitter was too lax, but the opposite, military-grade approach would also be the wrong choice for something like Twitter.
But top of the line security anywhere is not "meh, install whatever you want on the company laptop". And any company handling private communications and responsible for making social network decisions does need fairly good security to protect against compromised employees and compromised company hardware. That looks like solid policies for installing software, centralized management of hardware, and written policies that are well thought out, audited, and followed.
Finance, medicine, defense, ecommerce, etc. all have industry or government regulations for these kinds of security policies and compliance; social media companies seem like the wild west, and clear signs of problems and abuse have come up repeatedly.
No, they probably don't need to go through your bags with random checks to see if you're exfiltrating classified materials or warn you about wearing your company badge visibly when you go out to lunch to avoid spies.
Security engineers have a tendency to see a hole and tell you to fill it, independent of the threat model that is required to exploit it. Security is a business decision.
That being said, Twitter's stance on insider risk was a little too lax for my taste: my colleagues at a big tech company were often asked by governments for their work passwords at country borders if the border patrol noticed they worked for the company, and had no choice but to comply. Preventing insider access to systems mitigates this risk to your employees.
The 2 different FAANG level tech companies I’ve worked at both allowed me (as a developer) to install what I want on my laptop (despite being MDM controlled). I don’t think that fact alone is an indictment of Twitter’s security.
> My current and previous companies will not let me install what I want on my laptop.
I've worked for companies all along that spectrum.
I've found that the locked-down companies tend to be heavily bureaucratic in other ways as well, and in general make my life as a developer less satisfying. I gravitate towards the more liberal companies.
>The fact that Mudge couldn't lock things down after Jan 6 or that there wasn't adequate logging etc is also extremely shocking.
Remember when Twitter got real big because of the Arab Spring? Free Speech is good? How is “Twitter” supposed to know what to “lock down” and what to allow for good?
This is why you should never volunteer any information to any company (online or otherwise) that is not absolutely necessary in order to conduct business. Even if you personally trust them not to use your information in nefarious ways, they may not guard it very well from internal or external prying eyes.
A few years ago, I was part of a group buying a commercial building. Each partner needed to co-sign for their portion of the loan. The bank wanted to know all the assets and liabilities of every partner. I listed all my liabilities and also listed enough assets to cover the liabilities. When the loan officer asked if that was all my assets, I honestly replied no, and that I did not intend to list everything I owned.
When I voiced security concerns as the main reason, he assured me that his bank had a secure system that was fool-proof. I just laughed and refused to give him the information he said he needed but obviously didn't. Luckily, they wanted the loan to go through and it did.
>and all engineers had some form of critical access to the production environment
Not to excuse the lack of monitoring or fine-grained control, but most engineers having some kind of prod access is what happens at companies where you are oncall for your own services / dev and ops are not separate roles.
Has the author not seen the collective shrug from society when it comes to security? Why are they surprised? People still flock to Facebook and Google in droves. And now TikTok. Clearly people do not give any f's about privacy and security.
People flock there because the content creators they like are there. A huge portion of the Facebook and Twitter user base is into news; those users can shrug about security, but if the journalists don't, the users won't get their fix.
> But putting that aside, employees installing “whatever software they want” is already a significant eyebrow-raiser.
How do you avoid being a gatekeeper while also keeping the company secure? Vetting software installations would be a killer for any productivity at that scale, I guess. Does it include npm install too, which can run scripts, or just desktop apps and Chrome extensions?
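On the npm point specifically: the install-time scripts are the scary part, and they can be disabled with one line of config (this is a real npm option; whether it breaks your builds depends on which dependencies rely on postinstall hooks):

    # ~/.npmrc or a project-level .npmrc
    ignore-scripts=true

    # equivalent one-off flag:
    #   npm install --ignore-scripts

That doesn't vet the code you're importing, of course; it just stops arbitrary script execution at install time.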
I agree with this. Allowing people to install software is not necessarily a security risk, but it can be if your company's overall security is vulnerable to hostile software on employee computers.
In software development, you need to install software in order to do your job. I've worked at two "big tech" companies, and both of them had a policy of allowing employees (at least in engineering) to have root on their devices and install what they need to do their work. 1. We're intelligent adults and trusted to not fuck everything up, and 2. Internal systems are properly secured against fuck-ups, so a rogue compromised laptop on an internal network should not be able to screw anything up anyway.
Then again, I've worked at smaller companies that thought their perimeter firewall and VPN for employees was adequate security. Once one got onto the internal network, there was no defense in depth. A compromised laptop (not to mention a disgruntled insider) that managed to get onto the internal network could literally wipe everything out.
> “…according to expert quantification and analysis in January 2022, over half of Twitter’s 8,000-person staff was authorized to access the live production environment and sensitive user data. Twitter lacked the ability to know who accessed systems or data or what they did with it in much of their environment.”
Half of any company having access to prod is insane, but half of a giant like Twitter is unimaginable to me. It really explains how compromising single employees seems so effective there.
In the political Twitter neighborhoods that I inhabit, there have been many cases over the years of people having their accounts yanked, with the only message being "You have been banned for violating the Twitter rules. This decision is final and cannot be appealed."
When these bannings get publicized, however, the bad publicity often leads to the "permanent ban" being reversed, with Twitter simply explaining that the original ban "was a mistake". How does this sort of thing happen so often, unless random employees have access to powerful tools that they are not authorized or trained to use?
Here's the obvious counter: social media companies are by nature chasing super-linear cost:return ratios. So they can't actually accept that N users means N/x moderators. Instead, you hire Y moderators, and any time anyone points out you've shit the bed you wave your hands in the air and whisper "AI", when really what's happening is Y < N/x, so you have a shit moderation team.
If there aren't enough moderators, wouldn't the likely result be fewer rule-breaking accounts banned rather than more non-rule-breaking accounts banned?
Maybe, instead of hiring N skilled moderators for $D, they could be hiring N * delta unskilled moderators for the same price.
The irony is that I had to help a client with a security audit because they wanted to provide Twitter with services that related to employee data. Judging from this article, Twitter would not have passed their own external service provider security audit.
Solution: Statutory damages for companies who release software with grossly negligent or malicious disregard for best security practices.
This form of civil liability has given us safer cars, home products, machinery, and professional services. It's most often preemptively enforced, not by our court system, but by insurance carriers who insure against the liability.
I think the challenge here is figuring out how regulators and insurers can tell the difference between best practices and things like security theater or former best practices that are now obsolete but are written into regulations or standards.
Right now there are only ~250 car models available in the US. Each one gets incredible amounts of attention from all sorts of people, both before and after launch. And cars are pretty stable technology. But there's a ton of software out there: 1.8 million entries in Apple's App Store alone. And our tools and technologies are still evolving at a rapid clip. Having to explain each new library to an insurance company functionary sounds like a major brake on innovation to me.
A good example is the password rotation policy. It is still enforced by some of the standards (PCI DSS comes to mind), despite existing research that shows it worsens your security.
Maybe a brake on innovation is a good thing. Social understanding and effective legislation on technological advances lags adoption by quite a long period.
Possibly, but I think higher security standards may not create the kinds of lag you're looking for there. Adam Neumann and Travis Kalanick are not going to give two shits about IT standards, and even if they did, I doubt it would slow down their innovation in business models, ways of dealing with customers, and dealings with government.
I believe this is a good idea. I don't know a whole lot about law-making, but I know that a lot of ransomware cases could have been prevented by enforcing updates to various packages and operating systems.
Please note that I am not claiming that updates are a silver bullet. I am sure if someone managed to obtain credentials that allowed them to release an update for MS Windows, say, and that update installed a keylogger, it would be bad news.
But regular updates at the source code dependency, OS, firmware, networking hardware, device layers, mandatory code review, hardware FIDO/U2F tokens, training on best practices for handling data and for writing and reviewing secure code, use of modern encryption algorithms, encryption of sensitive data at rest, requirements to not store PII without a proven need, requirements to change default passwords (or even better yet - laws mandating non-default passwords at ship time), ongoing anti-phishing training, and the like will all help us stay safer. There are bad actors out there. We all make mistakes. Our industry is maturing at an altogether unacceptably slow rate, IMO - a lot of these techniques are well known in the industry, but they cost money and time to implement, and nobody seems to want to expend that effort until it's too damn late.
I am not a security expert. Just a developer who tries my best to pay attention.
In general I agree, although the scale of software would mean there has to be some pretty significant flex in this sort of thing. The Twitter/Facebook/etc. of the world can afford having civil liability hang over their heads. But we don't want to live in a world where this kind of thing destroys the incredible software innovation our field has today, mostly from startups and small companies that eventually grow to the size of a Twitter.
I feel similarly about the Section 230 issue - past a certain size, orgs are large enough to actually be able to be held to account on matters of speech on the platform - but defining that line is challenging and potentially very destructive.
Incorrect. Insurance companies do underwriting screenings and audits of clients who present certain types of risk. They can deny coverage or deny claims based on the results. On this topic (and several others) I'm something of an expert.
I feel like the extent of insurance company “audits” tends to be overstated. Most of the time they want executive attestations that X, Y, and Z are happening. Whether and to what extent X, Y, and Z actually are happening isn’t something the insurance company is likely to investigate unless and until a claim is submitted under the policy. The insurance company doesn’t necessarily care that X, Y, and Z happen they care about mitigating risk. Specifically, their risk of paying out on the policy.
No. When there is a claim the insurance company will look for every reason not to cover and stick the insured with the defense and judgement bills. If the insured fails to meet the carrier's requirements they are screwed.
Bad idea, I can think of tons of problems off the top of my head: Who defines "best security practices"? How nimble could that process be to new technologies, new businesses ideas, new threats? Would it be too costly for startups to implement on launch? How could giant complex systems of big tech possibly be audited externally? I could go on.
Agreed. In my company, "best practices" means "what I want," "what I'm used to," "what the sales guy is selling", or "what my boss wants."
There is no such thing as "best practices," and certainly no "best security practices."
Such a thing would have to be developed, perhaps by NIST. And even then, it would only exist as a baseline so that middle managers could mark a checkbox off on a list.
The only solution I can think of is to make companies financially liable in a way that actually hurts them. One of the reasons that healthcare companies run around in a HIPAA hysteria is that HIPAA violations can actually cause them great financial harm. The same cannot be said for most of the tech industry.
Exactly. Another important question is-- how often will this change? These kinds of regulations can also provide a 'moat' for larger companies by keeping smaller companies out, leading to monopolization.
I would argue that it really doesn't matter if Twitter is worse than most other companies, the problem is that none of these companies should be operating with such poor (non-existent) standards. Especially when they have as much data as Twitter. I personally "protect" my tweets so that others, like my employer, cannot see them. There are many more people with so much more sensitive information that are trusting in this system.
The best "security" is not to give data to "tech" companies. It is never too late to stop doing something stupid. It is unfortunate that governments will not help citizens in this regard by adequately regulating collection of data by "tech" companies. Laws limit data collection by governments from their citizens, for good reason. Meanwhile "tech" companies are under no such restrictions, which is beyond reason.
This "whistleblower" made me finally close one of my private accounts, turns out the official data export is much more complete nowadays - including private messages and an easy to use static HTML page. I'm surprised that such a "zero trust" system isn't established at Twitter yet, considering the size they were operating in.
Here's something you should know about whistleblowers in the US: a whistleblower who makes disclosures to the SEC can receive a share of the judgement if the SEC successfully sues the subject of that disclosure. This can be worth tens of millions of dollars.
That doesn't mean their claims are false. But you should always be skeptical.
The second issue is the trial resulting from Elon Musk walking away from the Twitter acquisition. That acquisition agreement calls for "specific performance", which means Elon pretty much agreed to buy the company no matter what (he waived due diligence). He is expected to have to pay a fortune if not actually buy the company in his trial next month.
So it's at least worth noting that this disclosure happens right at a time when it might help Elon. Maybe unrelated but it's also a hell of a coincidence.
Employees that are fired (or are heading in that direction) can make all sorts of claims to deflect away from the fact that they were or soon would be fired, possibly with cause, possibly not.
The specific claims seem a little vague, like not giving employees tools to combat bots. What does that even mean? Twitter may have crappy infra for this, but does it rise to the level of negligence or even malfeasance? That's not a slam dunk from what I've seen.
there are rewards for whistleblowers because doing so will often destroy your career and result in massive legal bills from much more powerful entities suing you.
Mudge was already an officer of the company, he had much more guaranteed money to gain from just shutting up and staying there.
That's a very short-sighted view of a career. In the short term, it is bad for your career, but being a whistleblower who is right can raise your profile significantly and make you much wealthier than if you stay at a company quietly (even if your salary is $X million). In Mudge's case, he didn't need the money, but the publicity probably helped.
Think about some of the past whistleblowers from these companies: Frances Haugen, Timnit Gebru, etc. These are all people for whom whistleblowing was a good career move.
> A whistleblower who makes disclosures to the SEC can receive a share of the judgement if the SEC successfully sues the subject of that disclosure. This can be worth tens of millions of dollars.
I'm not sure what your point here is. That the lawsuit can be worth a lot of money doesn't take away from the fact that the SEC has to prove their case in court. How does the value of the settlement raise any question about the legitimacy of the judgement?
People who lose internal political battles at companies have strong incentives to leave and then become whistleblowers. It's not that the disclosures are automatically false, but they are definitely coming from someone with an axe to grind and a strong monetary incentive to find anything possible to complain about.
Yes, but so what? If the disclosures are false, the "whistleblower" gets no reward. If they are true, who cares what the motive of the whistleblower was?
"So what" is that if the disclosures are false, there is no cost other than potential reputation damage if the thing you blow the whistle about is unpopular. For this particular whistleblower on this particular issue (a highly respected and politically well-connected person talking about security and privacy), there is pretty much no chance that it costs him anything.
That means that being a whistleblower is strictly positive expected value in this case, which means that there is a heavy incentive to find something to blow the whistle about, no matter how flimsy. Additionally, the whistleblower has a strong incentive to add color about how bad these problems are.
People treat whistleblowers as unbiased sources of reliable information, but that could not be further from the truth. If you ask a tobacco salesman if their product has health consequences, they are going to say "no" (unless there is incontrovertible evidence of health consequences) the same way a whistleblower is going to say "this is super bad" about something that happens at a company (unless there is incontrovertible evidence that it was not bad). Understanding the bias of a source is very important.
Because it's a nearly risk-free way for someone to get very rich. If you could say "my old boss, who I didn't like, was bad at his job" and have a chance to get rich doing it, you would. You would do it if you thought you might be able to make an argument that your boss did a bad job. The chance to get rich induces bias.
Also consider this: whenever someone complains about something, you're hearing one side. The complaints might be valid. The information could be real but incomplete. Perhaps there are things that person didn't know. Perhaps there are things they chose not to disclose because it doesn't paint them in a good light, or it simply contradicts the narrative they're presenting.
With the Elon lawsuit with Twitter, it presents an opportunity for people to raise their public profile. (Many) Elon stans will be inclined to believe it no matter what. You will have stories written about you. You may well get interviewed or even called as a witness.
So Mudge here has a financial incentive and may want to raise his public profile. This is selective release of information to serve both of those ends. Again, it could still all be true. But always consider the source and what they have to gain.
This is just basic media literacy, really.
So me pointing out how Mudge might (and I really do mean "might"; I'm not saying he is biased) be biased should prompt you to see if there are any flaws in his claims. If someone is disagreeing with his assertions, look at what they're saying, what their motivations are, and if there are any obvious flaws.
I find this a good rule to live by: if someone says something that agrees with your preconceived notions, whatever those might be, be doubly skeptical.
>That acquisition agreement calls for "specific performance", which means Elon pretty much agreed to buy the company no matter what (he waived due diligence).
>So it's at least worth noting that this disclosure happens right at a time when it might help Elon.
I am having one hell of a time putting these two together.
These are all unavoidable issues that all corpos have. Also, that "endpoint security" thing is a straight-up gimmick to make the boss feel like he's in control. Unfortunately, with unsound operating systems that not even pros can use correctly, there is no chance your company will do better. This article is also annoying because all the good boys who agree with it will do straight-up stupid things like accepting unknown SSH keys instead of verifying them, believing sudo does anything, and whatever other hare-brained manifestations they come up with, the latest being "dependency confusion".
How does Google handle all those mentioned problems? It would be great to contrast that with Twitter, especially how easy it is to read Twitter DMs vs. Gmail messages.
The main difference I've observed is that Google uses BeyondCorp rather than a VPN, and Santa (its macOS binary authorization agent) runs in the background, plus a bunch of scripts that try to log everything you do. Some of these can be disabled with the right exemptions.
Mudge's report seemed to show that Agrawal was engaged in a pattern of lying about and hiding the state of affairs at Twitter. That was more shocking than the security issues by themselves. The result of the lying is that the shareholders got screwed. Musk's offer was $54.20 per share. Twitter was trading at roughly $39 when Musk made the offer, and it is now trading for $38 and some change.
> And most importantly, the software on these endpoint computers was reporting dire problems. Over 30% of the more than 10,000 employee computers were lacking the most basic security settings, such as enabling software updates.
Who are these people writing articles like this? Not enabling [automatic?] software updates is equivalent to dire problems?
I wonder when the security practices Mudge pushed for got put into place at various companies. The company I worked for put them in place 5-7 years ago: limiting who had access to production, logging access, MDM, security scans on computers, security training, etc. Was Twitter behind the curve?
A company that isn't doing anything about a huge bot/spam problem, one that could largely be avoided given the obvious patterns in bot/spam behavior, isn't a company with good intentions IMHO.
What I find most wild about this story is that Mudge reported this in January, which is right when Musk started buying Twitter shares. And now both of those things are coming to a head at the same time.
>>Twitter employees were repeatedly found to be intentionally installing spyware on their work computers at the request of external organizations.”
That right there is a forking nuclear showstopper . . . and never mind the broader issue:
>>employees could install “whatever software they want”
Effectively, Twitter has zero security. For journalists, or anyone working in an environment that has kinetic consequences (opposition to authoritarian regimes, etc.), it is right up there on the frightening scale...
> Over 30% of the more than 10,000 employee computers were lacking the most basic security settings, such as enabling software updates.
It's always funny to me to see people pointing at companies for not installing updates, when there are always so many people complaining about how Windows forces them to update their computers all the time.
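For what it's worth, on a managed Linux fleet this is roughly a two-line config rather than Windows-style nagging; e.g. Debian/Ubuntu's unattended-upgrades package (a real package; the path below is its conventional config file):

    # /etc/apt/apt.conf.d/20auto-upgrades
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";

So "30% of machines without updates enabled" is less a tooling problem than a management one.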
Dumb question I'm throwing out there. What "sensitive" information do users keep on Twitter beyond their email and phone number? I mean who really cares? Unless I missed something here...
In some countries, being personally identifiable and liking or interacting with tweets which are against the viewpoints of people in power is "sensitive" as it can get you into forced rehabilitation camps. Twitter DMs are considered private and probably contain sensitive information for a lot of people. Including but not limited to journalists, human rights activists or even simple corporate support accounts.
There are DMs and private accounts associated with sub-communities.
While not sensitive in the traditional sense (ie not credit card info or SSNs) it can still be sensitive for the purposes of the specific users, like business dealings or personal conversations.
Twitter direct messages are famously not end-to-end encrypted (unlike almost every other provider at this point) and are almost certain to be absolutely loaded with juicy political and criminal intrigue.
Furthermore, many Twitter accounts are pseudonymous or anonymous and disclosure of the true identity of the account owner could be scandalous or even physically hazardous.
And, assuming the identity of a famous person on Twitter would also be a powerful tool.
Many B2C / consumer-oriented companies have this "who cares" attitude and it is very toxic to the industry as a whole when it comes to security.
I said it on the original story, and I'll say it again here: no one is surprised that security at Twitter is a shit show. We all know it's a disaster zone. It's a tech company that somehow managed to add nothing of value for almost a decade. It's 2013, the company just IPO'd, people are suggesting an edit button. It's 2022, Twitter is releasing a limited experimental rollout of the edit button for its 7 paid subscribers.
So yeah, no shit it's dysfunctional, and insecure, and bad in a million different ways. The fear with Facebook is that these incredibly smart, motivated, amoral monsters are going to figure out how to drip dopamine into our brains to manipulate us into destroying Western democracy so they can advertise more sneakers. The fear with Twitter is that someone trips over the wrong cable and the entire site goes down and never comes back.
And those people should've been ignored. I feel like I'm the only person who actually used twitter back then and still remembers its original purpose. The restrictive character limit and lack of editing were deliberate features.
The reason they're adding an edit button now is the same reason they relaxed other deliberate restrictions before: money.
I've always wanted an edit button and I don't think it detracts from the "model" of twitter. Like GitHub, it should let you look at edit history though, because there are certainly interesting (and potentially terrible) implications to letting people edit stuff after the fact.
>it should let you look at edit history though, because there are certainly interesting (and potentially terrible) implications to letting people edit stuff after the fact.
It won't though, I think we all know that. The only way it can work is if editing is restricted to within a few minutes of the tweet being posted.
I can already imagine the drama and awkward situations that will inevitably arise from things like editing tweets in response to replies.
This doesn't contradict anything you said; I just want to add that IIRC the 140-character limit came from SMS (a limitation they embraced). At the time there was all this talk about real-time/streaming platforms. Someone at an event could post to Twitter via SMS and break the news first.
To have one stagnant social media company which offers you basically the same experience for 10 years has to be a defining feature. Everything else seems to change beyond recognition trying to clone whatever the new fad is.
Twitter has had stability and given users basically the experience they wanted. In any sane world the fact they can't grow a further 10x or morph into the next media juggernaut wouldn't matter if they have hundreds of millions of users who like their platform.
You can tell Twitter (and the other techies) don't care about "security" when they force you to use SMS for 2FA, when we all know they're doing that to harvest phone numbers, because phone numbers are a pretty OK way to link identities.
I think the scheme they used was to require a phone number to set it up the first time, and after that you could switch to TOTP or whatever. Authy also harvests phone numbers.
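For the curious, TOTP is small enough that the phone number isn't needed for anything cryptographic. RFC 6238 is roughly this (a sketch for illustration, obviously not Twitter's implementation):

    import base64, hmac, hashlib, struct, time

    def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
        # RFC 6238: HMAC-SHA1 over the current 30-second time step
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period
        msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F        # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # e.g. totp("JBSWY3DPEHPK3PXP") -> a 6-digit code rotating every 30s

The shared secret is all that matters, so requiring a phone number first is a product decision, not a technical one.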
+1 for bringing up FB as a comparison. They also do it, but in my experience not as much as Twitter. I've been monitoring both Twitter and FB, and Twitter is just incredibly biased towards the left.
What's your data here? And what does "biased towards left" even mean?
In one sense, it's trivial to prove, in that Twitter bans a lot of hate and abuse, and these days targeted hate is a popular tool of the right across the globe. (See, e.g., anti-gay and anti-trans panics going on all over. Or specific ethnic hate depending on region.) So if cracking down on that is being biased toward the left, then yes, Twitter definitely is.
As you noted, the greater prevalence of right-wing violence, hate speech, and disinformation/misinformation causes the appearance of a moderation bias against conservatives. However, there has been an actual measured bias in favor of conservative voices (circa late 2021, the home timeline algorithm may have changed since then).
In other words, conservatives do more stuff to run afoul of the terms of service, and they also receive better treatment. GP is wrong/misinformed.
I have harvested data from Twitter, FB, Reddit, and some other such websites for a few years, mostly for the purpose of using it for trading.
Around 18 months ago I got involved with Covid data and started scraping data related to that as well.
But anyhow, this is getting quite complex, and in the end it's just my data, and tbh, you might be right and I'm just looking at the wrong data. On the other hand, such characters as the main people behind the Great Barrington Declaration, and others within their circles, experience interesting phenomena (e.g. vanishing likes) that I can confirm, at least based on my scraping. Consider this just anecdotal evidence; it is not in itself what triggered my above statements.
But even if you can substantiate "vanishing likes" as a problem, that doesn't prove much. Likes vanish all the time as Twitter rolls up bot networks. You'd first have to demonstrate that this was happening, and then show it was happening disproportionately to some accounts and not others.
And then once you had that, you'd have to show that the problem wasn't just excess fake engagement supporting right-wing narratives, as that's a very plausible explanation here.
That is definitely not true. I have queer and trans friends, and they have legitimate worries. The "Don't Say Gay" bill in Florida is a fine example here, but there are so many more, including the very reasonable fear of Obergefell getting overturned.
The backlash isn't surprising; most civil rights advances end up with one. But the tenor of it is very concerning.
The funny thing about those sentiments, aside from how exaggerated their proportions are (only a single Supreme Court justice remarked in passing about revisiting Obergefell, and the hysterically-named Florida bill is literally just the taxpayers telling you they don't want their children to hear about your sex life, straight or gay, in their tax-funded public schools; if you want your children to hear about sex lives, cool, do it on your own time and money; you can say gay as much as you want, just not to other people's kids), is that they are largely self-inflicted.
The Florida bill they are so terrified of would never have been drafted if public school employees had never put gay porn in the library, and people would never have gotten tired of Obergefell if not for the month-long religious festivals that LGBT movements like to hold so much. LGBT movements create the conditions of their own societal rejection.
> hysterically-named Florida bill is literally just the tax payers telling you they don't want their children to hear about your sex life, straight or gay
That right there is an insincere and obvious double standard. Let me know the minute that parents sue because a heterosexual teacher says they are married in a classroom. The problem is the gay teacher will be seen as other and potentially punished if they mention their spouse but the heterosexual teacher can speak out without repercussions.
Ah yes. You simultaneously claim there is no anti-gay panic and that gay people are at fault for "societal rejection", which is exactly the anti-gay panic I'm talking about. While also promoting an obvious anti-queer double standard of the exact sort used to oppress. A fine example of doublethink, but not one I'm going to take at all seriously.
Fred Thompson (R, TN), Chairman, Committee on Governmental Affairs: …If you gentlemen would come forward… We're joined today by the seven members of the L0pht, hacker think tank in Cambridge, Massachusetts. Due to the sensitivity of the work done at the L0pht, they'll be using their hacker names of Mudge, Weld, Brian Oblivion, Kingpin, Space Rogue, Tan, and Stefan.
…
Sen. Thompson: I am informed that, you think that within 30 minutes the seven of you could make the internet unusable for the entire nation, is that correct?
Mudge: That’s correct. Actually one of us with just a few packets.
What does this mean for the sale of Twitter? I still can't quite figure that out. Does Musk really want it at a lower price, or is he truly walking away from the deal? If he walks away from the deal what happens then? Someone else buys it?
I guess the real question is: Can twitter be saved?
I suspect it doesn't mean much for the sale. From what I understand, it's mainly down to the agreements signed with Twitter and what the Delaware courts make of them in the trial starting October 17th. Those agreements are very Twitter-friendly, so my personal guess is that Musk is going to lose.
As to what he wants, Musk is mercurial enough that I'm not sure that's a meaningful question. Did he want Twitter for a little while? Yes. Did that ever make sense given his other responsibilities? No. Did it make sense to investors? Based on Tesla's stock price slide, definitely not. Is he having buyer's remorse? Clearly. If he got a really good deal on it, might he swing back? Who knows. Is Twitter so badly run that they might do better in somebody else's hands? Yes. Is that person Musk? I doubt it.
So if I were betting on Musk being smart, I'd bet that Twitter will charge him a few billion dollars to get out of the deal, which would be better for everybody than Musk taking a $14 billion loss the instant the deal closes (that's his $44 billion bid minus the $30 billion that is Twitter's current market cap). But Musk could well be more prideful than smart, so I'm excited to see how all this turns out.
It is questionable whether he _can_ walk away from the deal. He signed all of the contracts and made legally binding promises to lenders, with no clauses about misleading numbers of spam bots or security issues. Of course, laws are more bendable when you're that rich and stand to lose billions.
If you're really interested in this stuff, it has been an ongoing centerpiece in Matt Levine's newsletter. Articles are free if you subscribe to the email, but reading prior pieces may require a subscription. e.g. https://news.bloomberglaw.com/securities-law/matt-levines-mo...