"Admins of <website> can read data on <website>" is just a tautology. It's true of everything you use on the internet where you don't own the server, and even then it's dubious.
If people don't get that about mastodon they probably don't get it about everything else they use either, so this recurring argument just seems like FUD...
[note: Edited <service> to <website> above because people keep coming at this from the angle of chat clients that run on your phone, and we're talking about websites here - a website can't have "e2e" encryption because it is both ends. That said, some of y'all believe way too hard in the perfectness of e2e in general and I addressed that in some of my replies]
> private messages aren't always necessary, as long as the platform is crystal clear about this
When sending a DM on Mastodon, there is literally a message that pops up saying: "Posts on Mastodon are not end-to-end encrypted. Do not share any sensitive information over Mastodon."
Signal is not a "hosted website," which is more the context we're talking about here. But even on those services, yes, there are ways that the owners of the service could tap or impersonate you through exploiting their own key exchange service. You are trusting that they won't do that.
This might be less true for matrix, since you could in theory be using an open source client where you have somehow guaranteed it will alert you to an attempt to add an unwanted device key to your e2e chat, but on signal you're running a binary you didn't compile, against a service you can't see.
I don't think you shouldn't trust them. But you are doing so to some extent.
In the case of signal, they would have to forge the SGX enclave signature (with an Intel-held key) or release a client that didn’t validate that sig. Definitely possible, but if I had an SGX bypass I’d want to use it on something known to be high value, and releasing a non-verifying client would at least be noticeable on android and desktop.
You don't need to release a non-verifying client. Just one that generates a key which is known to the other side. What about existing clients? "Your identity in the database became corrupted and can't be recovered. Would you like to generate a new key and continue using the service?" or just release a version which is both verifying and lying to you about which key has been verified... or a low effort "hey, new phone, key changed".
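The usual defense against this kind of silent key substitution is out-of-band fingerprint comparison (what Signal surfaces as "safety numbers"): both parties derive a short digest of the key their client is actually using and compare it in person or over another channel. A minimal sketch, with made-up key bytes for illustration:

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Short human-comparable digest of a public key (sketch only)."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()
    # Group the first 16 hex chars into 4-char blocks for reading aloud
    return " ".join(digest[i:i + 4] for i in range(0, 16, 4))

# The key Alice's client claims to have verified for Bob...
alice_sees = fingerprint(b"bob-device-key-v1")
# ...vs. the key Bob reads off his own screen, compared out of band
bob_has = fingerprint(b"bob-device-key-v1")
assert alice_sees == bob_has  # matches: no substitution happened

# If the service quietly swapped in its own key, the digests diverge
swapped = fingerprint(b"service-listening-key")
assert alice_sees != swapped
```

Of course this only helps if the client truthfully reports which key it verified, which is exactly the "lying client" scenario above.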
Sure. My point was more that - at least on android - you can compare decompiled binaries with the open source implementation, presumably making it reasonably likely that it’d be noticed and reported on.
I think the latter (manipulating the client) is far more likely than the former, and I think it would also be pretty difficult to detect in practice. But the point is less "I think they will do this" than "there is still an element of trust here, even if it is a much harder hoop to jump through." I don't think any situation where signal does anything like this is likely.
I don't know much about Mastodon, but I know that its main selling point is that it's decentralized, and it's pretty easy to assume that decentralized means there isn't anybody with special privileges who can read private messages. The same way decentralized finance (blockchain) means there isn't anybody with special privileges who can take your money.
And I would certainly assume that in 2022, any service would be built using encryption for the parts that are private, and aren't DMs private? Why would admins be able to read them? Is there a justification for that?
> it's pretty easy to assume that decentralized means there aren't admins who can read private messages.
I'm not sure why you would assume that? It's not something you run on your computer, it's still a website (or set of websites). Admins of your email can also read your email, if they want, and even with gmail in the mix it's probably one of the most "federated" systems ever built.
> I would certainly assume that in 2022 it would be built using encryption for the parts that are private, and aren't DMs private? Why would admins be able to read them? Is there a justification for that?
They could potentially be encrypted at rest, in the database, but that doesn't really help much. The owner of the site would have the keys to decrypt them, and on smaller sites it's very unlikely that there'd be any real chain of custody involved.
If you've ever sent a DM on a forum did you think that was encrypted? It wasn't. Or twitter or facebook for that matter. It's not really practical for any data stored on a central server to be encrypted in a way that irrevocably prevents the owner of the service from accessing it.
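The "encrypted at rest doesn't help" point can be made concrete: whoever holds the key can decrypt, and on a hosted service that's the operator. A toy illustration using an HMAC-based keystream (demonstration only, not a real cipher construction):

```python
import hashlib, hmac, secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Counter-mode keystream derived from HMAC-SHA256 (demo only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

server_key = secrets.token_bytes(32)  # lives on the server, with the admin
nonce = secrets.token_bytes(16)
dm = b"meet me at 6"

# The DM is "encrypted at rest" in the database...
stored = xor(dm, keystream(server_key, nonce, len(dm)))
assert stored != dm

# ...but the admin holds server_key, so at-rest encryption doesn't stop them
assert xor(stored, keystream(server_key, nonce, len(stored))) == dm
```

At-rest encryption protects against a stolen disk or database dump, not against the person running the service.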
> I'm not sure why you would assume that?... If you've ever sent a DM on a forum did you think that was encrypted?
The whole assumption here is that Mastodon is supposed to be better than those, right? Or else why are we switching? Twitter is centralized and can read all your stuff and censor it too. So isn't the point that Mastodon isn't and can't do those bad things?
We expect WhatsApp and iMessage to provide E2EE. Similarly open-source Signal and Telegram are encrypted. So why wouldn't you assume another high-profile open source project is adopting those same best practices for the private-messages part of it?
> Mastodon is supposed to be better than those, right?
Here are the ways mastodon is better than twitter:
- It can't be bought by a billionaire man baby
- It can't be coerced into hosting awful people because they drive revenue
- It doesn't require advertising in order to continue existing
- Because of that I'm not being endlessly datamined by adtech every moment I'm using it.
- It can't die just because one website goes down, and everyone on it doesn't suffer awful performance just because one instance is falling over.
- If I don't like the admins of the instance I'm on, I can move to another instance and bring much of my data with me without having to exfiltrate it with tools that violate the TOS.
- I can use whatever clients I like with it and I never have to worry about the company deciding it doesn't like third party apps and killing them slowly with api rate limits.
There are also a lot of ways it's worse than twitter, though they're mostly along the lines of "some of my friends aren't on it". Things don't have to be "better in every way" than other things to be "better for me (or you)". There are always tradeoffs.
Re. WhatsApp, Signal, Telegram and iMessage are all apps you run on your phone. And if you can read the messages on them from a website (as you can if you turn on a feature for imessage), then the admins of the service also have access to your messages.
> Re. WhatsApp, Signal, Telegram and iMessage are all apps you run on your phone. And if you can read the messages on them from a website (as you can if you turn on a feature for imessage), then the admins of the service also have access to your messages.
Not true. Web clients for Matrix are open source, and you can self-host them if you are afraid of the default host trying to inject spyware into the page.
That's true. Though I guess that when my parent comment referred to "admin", they were referring to the admin of the homeserver (the one routing the messages), which is different from the one hosting the web client.
This is why services like Matrix and Signal open source their client. Because for security minded people, securing the client is much easier than securing the server.
Decentralisation is orthogonal to privacy. Your messages can be end-to-end encrypted on a centralised system (e.g. Signal) and not on a decentralised system (apparently, here, Mastodon).
Decentralisation is about control: the Signal admins could shut down the server and kill the service. For Mastodon, if you stop one server, the others still work.
yeah, but on twitter you're probably a nobody, the staff have no incentive to read your dms. on mastodon, you're at least a friend-of-a-friend of the operator unless you're on a huge instance.
Others have pointed out that this is fallacious -- even on big services sometimes employees are creepy stalkers, sometimes they're malicious actors of other sorts -- but even if you ignore that, on twitter you are a target of advertising, and if you think they aren't slurping up all the data they can about you and storing it in a database somewhere, you're fooling yourself.
Anyways, down this logic path is an internet where we somehow put all our trust in megacorporations and absolutely none in our fellow human beings and I dunno about you but one of those sounds a lot more dystopian to me than the other.
twitter has hundreds of millions of users and 7,000 (fewer now, unfortunately) employees, not all of whom have production access. i don't know anyone at twitter, so them accessing my data would need to be a random choice from the whole userbase. they'd need to evade internal checks (which, however weak they are, are infinitely stronger than the average fediverse instance's). could it happen? could i be eaten by an escaped zoo leopard during my morning walk?
having my data "read" by a non-sentient advertising model, while irritating, is nowhere near the same as having it read by a human being. that is a false equivalence.
i made no argument about the absolute merit of twitter vs the fediverse. this is simply a downside of small communities that people excitedly migrating to the fediverse with little understanding of how it works or experience with its predecessors will soon run into—twitter is neutral ground, on mastodon you might be arguing with the instance admin's friend and find yourself retaliated against. anyone who used a forum or IRC in the 90's/00's knows what i'm talking about.
> having my data "read" by a non-sentient advertising model, while irritating, is nowhere near the same as having it read by a human being. that is a false equivalence.
I agree they're not equivalent.
One is a small harm guaranteed to be perpetrated every day against every single person using Twitter, numbering in the hundreds of millions of people, largely without those people knowing or consenting to it.
The other is a large harm sometimes inflicted on a small number of people, and when it does happen is isolated to a small community where people can find out about it and act on that knowledge how they see fit.
I've been around a long time. I've seen power abused on irc and forums and even small social networks. To me, the idea that we need supposedly benevolent megacorporations to keep us from doing harm to each other is a repulsive idea, far and away above "my instance admin might read something I wrote on their server."
If we've forgotten how to exist in community with each other, we should relearn that skill.
Forget about what instance owners can do within the confines of the common Mastodon server codebase - Mastodon in the end is a protocol, so there are NO guarantees about the behavior of individual instances.
It seems like Mastodon assumes that misbehaving nodes will be cut off and just ignored by well-behaved ones - but that assumes that abuse is detectable and that standards of behavior will be enforced even if it means cutting off potentially large communities.
Whatever your software, the person running it can read your messages. Unless you're using a non-web client that does E2EE, of which there are none right now.
The server has the bits to do it [1], but clients need to implement it (they are the "ends" in "end-to-end encryption" after all). However this hasn't seen much interest from app developers and users.
Most Mastodon instances are hosted by individuals. Granted, I would assume most people are hosting the service in good faith, but there is no binding way to ensure that. With Twitter, doing something egregious would (or at least used to) bring doom to the company and its investment, which is a far bigger balancing factor than just someone's honesty.
I'm not promoting Twitter here, but for Mastodon, something needs to be done to protect the integrity of the content posted, so the admin cannot modify it easily (moderation can still be done through deletion).
But there's also less of a reason for anyone with permission to read DMs @ Twitter to do so, possibly with logging for any audits into unauthorized access. For mastodon, chances are your instance is focused around some general interest and thus getting on an admin's bad side could mean abusing their power to extract personal information/DMs from your account.
> But there's also less of a reason for anyone with permission to read DMs @ Twitter to do so, possibly with logging for any audits into unauthorized access.
People pay employees and contractors at social media companies to delve into others' accounts and messages all of the time. Some of them do it on their own without any outside influence, as well.
It isn't really the same as email, because with email it doesn't matter if you are on the same domain as other people. You can avoid this risk by hosting your own email, but running your own Mastodon instance just for yourself wouldn't make much sense.
It's exactly the same as an email server. Whoever owns the server can send email from your address, thereby impersonating you, as per the original question.
It's the same in that way, yes: both are federated protocols where you need to trust the owner of your server. But with Mastodon there are within-server interactions like the local timeline (https://cfenollosa.com/blog/you-may-be-using-mastodon-wrong....) that have no equivalent in email, so if you decide to try to avoid this risk by self-hosting you're giving up something that a lot of Mastodon users really enjoy.
Also, instance owners can unilaterally control what you see and who may follow you (by blocking individual users and whole servers from federating with theirs). Whether or not such a block exists is invisible to you as a user. This is different from earlier, similar approaches like NNTP servers, where it was pretty clear when a particular group was not being distributed by your server. Also, unlike Mastodon, NNTP did not tie your identity to a server - using different servers for different groups was perfectly usable with the same identity (which came down to your email address).
They sell this as a feature and celebrate when "undesirable" servers get blocked.
Mastodon is a good idea turned bad by building in pretty dystopian functionality.
Calling it dystopian is a bit harsh. Some degree of moderation is unavoidable or you end up with 4chan (actually, even 4chan had moderation, I think; it's just unavoidable).
Ultimately of course you're supposed to choose a server that you like and trust. At least here you have that choice. On Twitter or Facebook you don't.
Of course it should have had end to end encryption. It sounds like a massive omission. I found a discussion about adding that to ActivityPub[0] where someone points out that if you don't want server admins able to read messages, you can't store private keys on the server, which sounds to me like it would hurt usability. Makes you wonder how unbreakable the end-to-end encryption of other systems really is. I'm not enough of an encryption guru to say how big of a problem this really is.
That breaks the idea of having a federated network, if you have no guarantees that sending a message on one end of it reaches its recipient. Now we just have all the same problems of email but with the mechanics of a multicast medium.
Eventually, you'll reach the point where a duopoly will effectively control who is allowed to be part of the larger network. And this was the original problem people were trying to solve by creating a federated network.
> That breaks the idea of having a federated network
What fresh nonsense is this? There's never been a federated system with anything like a perfect guarantee of connectivity to literally anything. There are always limits, both social and technical.
The argument is a case of begging the question: Start with your conclusion, then reason backwards to the justification.
It's the case that a federation tends to erode. Likewise, it's the case that a centralized system tends to collapse.
The question to ask is really one of backing a horse to ride on to be pragmatically useful; winning the technical argument is academic.
The fediverse has a few core principles in mind, and they tend to get centered in Mastodon's marketing. So it does make some sense that it gets attacked for being an imperfect realization of those principles, and for being unprincipled in other respects. But in a head-to-head comparison with other implementations of social media it can still claim to be more principled on those things it tried to pursue.
Having no way to know whether a message is propagated federation-wide, based on an arbitrary value judgement based on ????, renders it useless as a communication medium.
For this to work it needs to be treated as a common carrier, and not a social network.
Federation implies variation in how things are operated; it's a feature against the monoculture and fragility of centralized systems. That some instances operate with different standards is to be expected.
Dystopian would be dictating that everyone operates the software they choose to run using their own time and resources in the same exact way.
It's pretty easy to migrate your account from one instance to another. So if you don't like the policies of your current instance, there isn't anything keeping you there.
Just the pain of actually figuring out what's going on or actively monitoring for it, and the non-zero hassle of finding, choosing and transferring to another instance. And then being prepared to do it all again. This is not stuff the average user ever wants to be bothered with.
But Mastodon instances do tell you what is going on. They provide lists of the other instances they have banned from their own, with a statement on why. Here's the one I'm on right now: https://mstdn.social/about (scroll to the bottom and expand the "Moderated servers" section).
Centralized social media platforms also control everything you see, to the point that it can be very difficult to see the full range of posts from accounts in good standing that you follow within their platform, in favor of posts they think would keep you more "engaged". And they are famously opaque on how they do that. If you can guess at what they've done and decide you don't like it, you don't have any recourse. After you've spent time to build a network of people you follow and follow you, they have you locked into their platform, holding your subscriptions hostage against you.
I guess I don't understand what you want. Twitter is too much censorship, right? But we don't know how Twitter censors content, and even if we did, what would you do about it? Mastodon instances censor content, yeah. Why not? It's a person running their own site, they can let whatever they want through it. But they tell you how, when, and why they censor. And if you don't like it, you don't have to put up with it.
> scroll to the bottom and expand the "Moderated servers" section
Wow. As much as I disliked Twitter deciding who I can and can’t follow (via bans), and what people can and can’t say, at least they put some rigor into it. I don’t think I’d ever want some self-anointed little server god-king banning so many peers for such soft reasons. I thought it would be csam, armed groups - this has reasons like “Trump fanbase”, “offensive content”, and over and over again “a-holes”.
Everyone wants to return to the old internet. The old internet was run by self-appointed gods who created little fiefdoms. Some form of censorship could happen, similar to reddit groups.
To come back to my NNTP example: Back in the day, servers were not banned by admins, individual users were killfiled (essentially a filter) by other individual users, meaning that if you really disliked someone, you would get rid of their output - without denying anyone else access to their musings, related to the topic at hand or anywhere else.
It was a more civilized time.
Today, server admins ban other servers for random issues. Users do not use the built-in killfile-esque blocking method, but instead go and complain to server admins (who then are incentivized to use massive action against other servers). This obviously is not an environment that will lead to societal harmony.
There are plenty of instances that make it a selling point that they don't block anything. If they have "free speech" in their name, you can assume that's the case.
The end result, however, is what you'd expect, and the experience on those instances is very different from what you see on the more mainstream ones; most people wouldn't like it.
A few do. Mostly, though, those servers are blocked because of continued harassment from their users. https://xkcd.com/1357/ comes to mind.
Some of the "free speech", "low-moderation" instances have self-policing communities. Others are full of neo-nazis. Yet others are run by bigots and ban anyone who objects to the hate and vitriol. They are not all the same.
That was the comic that made me lose respect for Mr. Munroe, because he is too intelligent to miss the point so badly: Free Speech is an age-of-enlightenment concept, not a legal construct from the US constitution. Worse, while it is true that no-one HAS to listen, actively preventing OTHERS from listening is obviously ethically wrong.
If you feel harassed by a user, filter that user. If you feel harassed by every single account on an instance, filter that instance for yourself (this option currently does not seem to exist in Mastodon). But do not go crying to our admin and deny me - who happens to be on the same server as you and is semi-ok with some of the folks on that other server - the ability to interact with them.
This creates filter bubbles. Your users will at first love the friendliness and fluff, but it is also exactly what led to the right-wing successes we've seen all over the world in the last few years. Filter bubbles destroy democratic societies.
> Yet others are run by bigots and ban anyone who objects to the hate and vitriol.
You see the dangers of giving admins that level of control? Now you've got an instance in which there are no more discussions, only self-assurance and groupthink, and which will slowly become more radicalised. If filtering were strictly a user's issue, other users might have seen the objection to hate and vitriol (which often starts underhanded) and formed another view.
By the way, that is true no matter which political fringe a server admin belongs to: I know a Mastodon instance on which some users become increasingly radicalised against car owners right now, to the point where mass executions are being normalised by joking about them. Guess what happens when you try to become a voice for reason... But are ALL of them just evil, unredeemable beings that need to be banned from talking? Should my Mastodon admin block that instance for spreading hate and vitriol, even though the majority still talks about other things? Obviously, that's a silly idea.
> But do not go crying to our admin and deny me, who happens to be on the same server as you and is semi-ok with some of the folks on that other server the ability to interact with them.
Here's another way to look at it: Our instance has moderation policies. You chose that instance in part because of those moderation policies, just as I did. Those moderation policies are there so we can interact with each other with the peace of mind that we're not going to have, say, child sexual abuse imagery DM'd to us.
Then an instance comes along with a few hundred follow bots that do that. So, after a few reports, our instance admin defederates from the instance.
If you don't like this, you're perfectly welcome to go to another instance, with different federation policies. Thanks to federation, you can do this and still stay in contact with me! It's a win-win!
> You see the dangers of giving admins that level of control?
No, not really. Those people are probably always going to exist. Honestly, much as I hate Nazis, if they're over there doing their thing, and not trying to harass or murder me and my friends, I'm happy just to let them do that. It's just a website; it's not like their kids won't have external influences and be able to figure out that, hey, Nazism is actually bad.
Turn on showdead on HN if you want to see the shadowbanning hell that most social media networks I know of can send their users to. In general, users have no idea that they're even banned. There's a few people on HN who have been posting nonsense here for years while shadowbanned.
Not saying it's good or bad, just that it isn't a dichotomy between Mastodon restricting things or centralized systems that don't.
I believe (dang feel free to correct me) some of what you are seeing are bots. They might be OK with only those with showdead seeing the links or to your point might not have bothered to check what sites have blocked them unless they get a status 403.
For example, on my silly hobby sites I provide all bots a status 200 for all GET and POST requests and just send them to a dummy virtual host to let them play in the sandbox for years on end. If they even once looked at the output they would see my silly ASCII art.
Some may be curious about search engines, but since I only allow HTTP/2.0, only bing can even connect to my site, and robots.txt tells them to go away.
unfortunately I have. an instance was brigaded and flooded with very illegal content. and then the instance was taken down forever by authorities. all the instance users could do was try to flag everything they could but it was too much.
edit: it was called sinblr and rumors are that it was an instance called pawoo that flooded it with seriously illegal content.
When I first created an account on some instance, I saw other people’s content even though I hadn’t followed anyone yet, so that advice seems insufficient.
Content from all federated instances is added to the feed in your instance, so you personally don't need to follow illegal stuff for it to appear in your feed. This is actually the most off-putting feature of Mastodon for me.
Sure, on the federated feed; that one can be a bit of a free-for-all. Once you follow a good handful of people you can spend most of your time on the home feed and only see stuff from people you follow and the things they boost.
Have you ever tried to follow anyone from an instance that your instance refused to federate with? I am wondering how big of a problem this is. Not that I have encountered it, mostly just see this as potential a drawback of Mastodon's approach to federation.
Given how discovery in Mastodon works, by following "boosted" (think: re-tweeted) content creator or reading new people in a discussion, you wouldn't even know these accounts exist - sure, you can manually enter a mastodon id in the search bar to go directly to an account somewhere else (and thus try to access a potentially blocked node), but that's not the common use-case.
Now, if that increases or decreases the problem space is a philosophical question. I personally do have issues with blocking instances as a whole because one admin disagrees with another admin on unrelated issues.
If I get denied access to other people to organise around, e.g. building our own zen retreat because my admin and their admin disagree on political factions or because one admin nuked the connection to another instance for not being trigger-happy enough against their own users, everyone loses. Such technical possibilities and - in fact - recommendations run against the ideas of a civil society.
I see the situation you described in the last paragraph as a very real possibility. Not going to defend Twitter, but at least it can’t ban whole organized communities at once. Leaving federation decisions entirely up to instance admins is bad design on Mastodon's part.
I would prefer to use an instance that has no control over who I can follow. As an added benefit, this instance wouldn’t have to fully federate with every instance I follow users from, so wouldn’t add all updates to the federated feed.
This way the instance would only be responsible for user identities but would not have much control over who its users can follow. I still don’t fully understand why Mastodon did not choose this design.
Good question! I hadn't tried, but just found a random user on an instance that mine doesn't federate with and clicked Follow. It took me through the remote follow and that user did appear on my list and showed up in searches when they didn't before.
Unfortunately I seem to have chosen an abandoned or dormant account for my test, so I can't say for certain whether posts will show up in Home, but I would expect so given the other visibility changes. Hopefully someone else with some fedi experience can chime in.
In a sense, yes. Email contents in gmail are technically accessible to Google. But they are protected like hell via a bunch of dedicated systems that make it very difficult to access this material without an explicit auditable ticket associated with helping that user with some problem and permission to access their gmail contents. Attempts to circumvent this will get people fired.
This does rely on you trusting Google to implement and use these systems. The question is whether you trust a major tech company or whatever Mastodon server owner more to not peek at your DMs.
AFAIK if you access a user's private info at Facebook your employee's ID will be immediately flagged leading to very severe consequences (instant firing in most cases).
I really don't like that ActivityPub does not support encryption. I wanted to set up an instance of one of these platforms for friends and family to use, but hated the thought of having to tell them that, by the way, I can read all of your messages. I wouldn't, but I hate that that's even possible. So, instead I'm trying to twist Matrix to work more like a social media platform. It's janky, but totally workable, so long as you're not looking for global engagement.
(Hypothetically) wouldn't it be possible for client devices to generate key pairs, and for messages to be stored on the server encrypted in such a way that recipients' client devices could decrypt them? (I think that's what Signal does?)
Not saying that that's what happens on Mastodon instances; I don't know enough about its operation to comment.
Yes, end-to-end encryption is possible. It just needs support in clients, as well as a common protocol if you want it to work between different clients.
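In outline, that's how E2EE over an untrusted server works: each client keeps a private key on-device, public keys are relayed through the server, and both sides derive a shared secret the server never sees. A toy finite-field Diffie-Hellman sketch (illustrative parameters only; real clients use X25519 plus authenticated key exchange and a ratchet, and the key exchange itself still has to be trusted, as discussed elsewhere in the thread):

```python
import secrets

# Toy Diffie-Hellman parameters: a Mersenne prime modulus, for demo only.
# Real protocols use standardized groups or elliptic curves (X25519).
P = 2**521 - 1
G = 3

# Each client generates its keypair locally; private keys never leave the device
a_priv = secrets.randbelow(P - 2) + 1
b_priv = secrets.randbelow(P - 2) + 1
a_pub = pow(G, a_priv, P)  # uploaded to / relayed by the server
b_pub = pow(G, b_priv, P)

# Both sides compute the same shared secret from the other's public key.
# The server only ever saw a_pub and b_pub, from which it cannot
# feasibly recover the secret.
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b
```

Messages encrypted under a key derived from that shared secret can then be stored on the server as opaque ciphertext; the usability cost the linked discussion mentions is that losing the on-device private key means losing the messages.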
The chances of a centralized Twitter stealing your sensitive information are quite a bit lower than those of N federated Mastodon instances run by any number and types of actors.
If you don't own the key exchange (and you don't, even on the services most people consider secure), you're still, on some level or another, just relying on trust that this is the case.
At any rate, mastodon is a web app, not an IM client. No one who's ever raised this has even begun to explain how you could work e2e into something like it. Certainly no other microblogging platform has e2e anything, because that's not actually a thing that makes sense.
> No for micro-blogging, but Mastodon supports direct messaging, and if you support direct messaging, you should support end-to-end.
No other microblogging service with DM support has e2e anything. Because they're websites. To have meaningful e2e you need to have key exchange and device keys, and if you have a website you can look at your DMs on then the website has to have a key. If the website has a key the owner of the website can look at your DMs. This is just fundamental to hosted web services, and it's why if you use icloud messaging with imessage you're no longer guaranteed e2e, and why signal just doesn't even have a website for you to use.
LE has nothing to do with this? The key exchange I'm talking about is the end keys. User keys. LE doesn't provide those. For e2e IM systems a server has to manage user/device:key mappings, and it is a central point of trust. It can potentially inject a "listening key" into your recipient list without you knowing and tap you or even impersonate you (but only in a forward way).
E2E is not a panacea, but it's also largely irrelevant to websites.
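The "listening key" attack can be sketched concretely: in multi-device E2EE, a sender encrypts the message key to every device key the server lists for the recipient, so a server that quietly appends its own key gets a readable copy. All names here are hypothetical and the XOR "encryption" is a stand-in for real public-key crypto, demo only:

```python
import secrets

def encrypt_to(device_key: bytes, message_key: bytes) -> bytes:
    # Stand-in for real public-key encryption (toy XOR, demo only)
    return bytes(a ^ b for a, b in zip(device_key, message_key))

def decrypt_with(device_key: bytes, blob: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(device_key, blob))

message_key = secrets.token_bytes(32)

# The client asks the key server: "which device keys does Bob have?"
bob_phone = secrets.token_bytes(32)
listening = secrets.token_bytes(32)        # quietly injected by the server
keys_from_server = [bob_phone, listening]  # client can't tell which is real

# The sender dutifully encrypts the message key to every listed device
envelopes = [encrypt_to(k, message_key) for k in keys_from_server]

# The server decrypts its own envelope and can now read the conversation
assert decrypt_with(listening, envelopes[1]) == message_key
```

This is why clients that alert on new device keys (and fingerprint comparison) matter: the cryptography is fine, but the key directory is a point of trust.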
Eh, if you don't trust your masto instance admin not to read your DMs, do you really trust them not to break the "your password never leaves the client" guarantees that e.g. protonmail promises?
This is the thing about this argument: Either you trust your instance admins or not. If they promise you e2e and you don't trust them, you should rightly look at that as snakeoil.
This is meaningless if you don't trust the site admins, and the reason to use e2ee in the first place is to avoid trusting the site admins. All it takes is for them to serve you different JavaScript one time that exfiltrates your messages, and I guarantee you'll never notice.
What I'm suggesting is that the same certificate infrastructure that is used to secure the connection between a server and a client could also be used to secure the connections between users.
There's nothing specific to HTTPS about CAs and trust chains.
But for encrypted DMs you need per user keys that are stored on the users computer, otherwise the owner of the server has control over the key and we're back at square one. Or am I somehow misunderstanding you?
Go look at that PR and read the details and ask yourself who you have to trust with a list of device keys you're encrypting your dm for.
You might be surprised to discover that you're still trusting an instance admin.
It does improve some things, potentially, in terms of intermediaries being able to read things, but there are a lot of things that are still reliant on trusting your admin, or are outright unclear how they'll work in practice.
That said, I take back that "no one has begun to explain..." - they've begun. But so far they've kinda just thrown some well established protocols at it but not done much to explain how it really helps the "trust your admin" problem.