Are you saying you think the Canadian government should pay more toward fertility and less toward LGBTQ procedures such as gender reassignment? I wonder how much money the latter costs. It seems like it wouldn't be as big a cost compared to covering fertility treatments.
Which switches, enterprise or consumer, tend to support LLDP? My guess is almost none on the consumer side, i.e. Netgear, TP-Link. Cisco probably does. How about Ubiquiti?
Anything with a management interface (even web) could do it from the HW side, just a question of SW support. Netgear does support it on managed switches.
The protocol is old enough and very well established by now; even modern Windows boxes run it by default.
I know MikroTik supports this. On the higher end, most of the Dell switches I interacted with, as well as Aruba, had LLDP. Different manufacturers tend to report their interfaces slightly differently, though.
Nope, you need switch silicon with a driver that punts 01:80:c2:00:00:0e to the CPU. A lot can do this but not all (generally a driver issue, not a HW limitation).
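If you just want to see whether anything on the segment is actually speaking LLDP, here's a minimal host-side sketch (assuming scapy is installed, root privileges, and a hypothetical interface name "eth0"):

```python
# Listen for LLDP frames, which are sent to 01:80:c2:00:00:0e
# with EtherType 0x88cc.
from scapy.all import sniff

def show_lldp(pkt):
    # Print a one-line summary of each LLDP frame seen on the wire
    print(pkt.summary())

sniff(iface="eth0", filter="ether proto 0x88cc", prn=show_lldp, store=False)
```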
Long term this is good for the software ecosystem as a whole, especially open source options like Proxmox. I think Broadcom is making a strategic business mistake by not being willing to negotiate in good faith. However, this is the true cost of using closed source solutions. The more this happens, the more it gets factored into business decisions.
> The July 2022 outage was not the result of a design flaw in the Rogers core network architecture.
Talk about some grade A gaslighting here. Reading the post-mortem, they first tell you it wasn't a design flaw, then say they routed all their data through one core router (and lacked a management network). Then they say they are going to fix things by separating out the wireless and wired traffic.
Why would you fix things if it wasn't a design flaw?
Out-of-band access is resilient architecture 101. Hell, even homelabs generally have some way to do it. It's appalling that Rogers didn't have a way to access the core IP routers out of band. Yes, it might mean having to use a competitor's infrastructure, but they ended up having to do it anyway. And with the failure of the service, all the infrastructure providers are now under additional scrutiny. Rogers should be striking agreements with other providers to carry core traffic in case of an outage, i.e. a DR situation like this one. For example, Visa, MC, and Amex all have agreements in place to process each other's auth data in case the other party goes down, the thinking being that an outage for credit cards makes everyone look bad.
So this basically means that to scan for this exploit remotely, we'd need the private key of the attacker, which we don't have. The only other option is to run detection scripts locally. Yikes.
One completely awful thing some scanners might choose to do is decide that if you're offering RSA auth (which most SSH servers are, and indeed the SecSH RFC says this is Mandatory To Implement), then you're "potentially vulnerable", which would encourage people to do password auth instead.
Unless we find that this problem has somehow infested a lot of real-world systems, that seems to me even worse than the time similar "experts" decided it was best to demand people rotate their passwords every year or so, thereby ensuring that real security is reduced while on paper you claim you improved it.
Have to admit I've never understood why password auth is considered so much worse than using a cert. Surely a decent password (long, random, etc.) is for all practical purposes unguessable, so you're either using a private RSA key that no one can guess or a password that no one can guess, and then what's the difference? With the added inconvenience of having to pass around a certificate if you want to log in to the same account from multiple sources.
One of the biggest differences is that if you're using password auth, and you are tricked into connecting to a malicious server, that server now has your plaintext password and can impersonate you to other servers.
If you use a different strong random password for every single server, this attack isn't a problem, but that adds a lot of management hassle compared to using a single private key. (It's also made more difficult by host key checking, but let's be honest, most of us don't diligently check the fingerprints every single time we get a mismatch warning.)
In contrast, if you use an SSH key, then a compromised server never actually gets a copy of your private key unless you explicitly copy it over. (If you have SSH agent forwarding turned on, then during the compromised connection the server can run a "confused deputy" attack to authenticate other connections using your agent's identity. But it loses that ability when you disconnect.)
If a man in the middle relays a public key challenge, that will indeed result in a valid connection, but the connection will be encrypted such that only the endpoints (or those who possess a private key belonging to one of the endpoints) can read the resulting traffic. So the man in the middle is simply relaying an encrypted conversation and has no visibility into the decrypted contents.
The man in the middle can still perform denial of service, by dropping some or all of the traffic.
The man in the middle could substitute their own public key in place of one endpoint's public key, but if each endpoint knows and expects the other endpoint's key, then an unexpected substitute key will raise a red flag.
No, these schemes use the pub/private keys to setup symmetric crypto, so just passing it along does you no good because what follows is a bunch of stuff encrypted by a session key only the endpoints know.
If I am a server and have your public key in an authorized_keys file, I can just encrypt a random session key using that and only you will be able to decrypt it to finish setting up the session.
This is why passwords and asymmetric crypto are worlds apart in security guarantees.
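As a toy sketch of the signature side of this (not the actual SSH protocol, which is more involved; this uses the Python cryptography library with illustrative variable names), the key point is that the server only ever holds the public key and a fresh random challenge, so nothing it sees would let it impersonate the client elsewhere:

```python
import os
from cryptography.hazmat.primitives.asymmetric import ed25519

# Client side: key pair generated once (think ssh-keygen)
client_key = ed25519.Ed25519PrivateKey.generate()
public_key = client_key.public_key()   # this is what would sit in authorized_keys

# Server side: issue a fresh random challenge for this session
challenge = os.urandom(32)

# Client side: prove possession of the private key by signing the challenge;
# the private key itself never leaves the client
signature = client_key.sign(challenge)

# Server side: verify with the stored public key (raises InvalidSignature on failure)
public_key.verify(signature, challenge)
print("client proved possession of the key without revealing any reusable secret")
```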
> if you're using password auth, and you are tricked into connecting to a malicious server, that server now has your plaintext password and can impersonate you to other servers.
Why would the password be sent in plaintext instead of, say, sending a hash of the password calculated with a salt that is unique per SSH server? Or something even more cryptographically sound.
In fact, passwords in /etc/shadow already do have random salts, so why aren't these sent over to the SSH client so it can send a proper hash instead of the plaintext password?
If the hash permits a login, then having the hash is essentially equivalent to having the password. The malicious user wouldn't be able to use it to sudo, but they could deploy some other privilege escalation once logged in.
Even so, these protocols require the server to know your actual password, not just a hash of the password, even though the password itself never traverses the network. So a compromised server can still lead to a compromised credential, and unless you use different passwords for every server, we're back to the same problem.
Asymmetric PAKEs don't require the server to know your password. You and the server need to have a discussion to establish some parameters that work for your chosen password, without revealing what it is, and then in future you can supply evidence that you indeed know the password (that is, some value which satisfies the agreed parameters), still without revealing what it is. This is not easy to do correctly, whereas it's really easy to get it wrong...
> Have to admit I've never understood why password auth is considered so much worse than using a cert
Password auth involves sending your credentials to the server. They're encrypted, but not irreversibly; the server needs your plaintext username and password to validate them, and it can, in principle, record them to be reused elsewhere.
Public key and certificate-based authentication only pass your username and a signature to the server. Even if you don't trust the server you're logging into, it can't do anything to compromise other servers that key has access to.
> surely a decent password (long, random, etc) is for all practical purposes unguessable
Sadly that is not how normies use passwords. We know what password managers are for; the vast majority of people outside our confined sphere do not.
In short: password rotation policies make passwords less secure overall, because in order to remember the new password, people apply patterns. Patterns are guessable. Patterns get applied to future passwords as well. This has been known to infosec people since the 1990s, because they had to understand how people actually behave. It took a research paper[0], published in 2010, to finally provide sufficient data for that fact to become undeniable.
It still took another 6-7 years until the information percolated through to the relevant regulatory bodies and they updated their previous guidance. These days both NIST and NCSC say in very clear terms not to require password rotation.
It depends what happens to the password. Typically it's sent as a bearer credential. But there are auth schemes (not widely used these days) where the password isn't sent over the wire.
Even if you use a scheme where the password never traverses the wire, the schemes still require the server to know what your password is in order to perform the authentication. So a compromised server still leads to compromise of your secret credential. Public key authentication does not have this property.
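A rough illustration of that point, as a generic CRAM-style challenge/response (not any specific SSH or PAKE mechanism): the password never crosses the wire, yet the server can only check the response because it stores the password.

```python
import hashlib, hmac, os

password = b"correct horse battery staple"   # shared secret known to BOTH sides

# Server: send a random challenge
challenge = os.urandom(16)

# Client: respond with HMAC(password, challenge) instead of the password itself
response = hmac.new(password, challenge, hashlib.sha256).digest()

# Server: recompute and compare in constant time; this only works because
# the server holds the password, so a server compromise still leaks it
expected = hmac.new(password, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)
```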
Wow, really? Ten years ago, it was drilled into me to never send a password like that, especially since the server shouldn't have the plain version anyway (so no reason for the client to send it).
I didn't want to believe you, but man, I just checked a few websites in the network inspector... and it seems like GMail, Hackernews, Wordpress, Wix, and Live.com all just sent it in plaintext with only SSL encryption :(
That's a bit disappointing. But TIL. Thanks for letting me know!
If you want to hop into a rabbit hole, take a look at how Steam's login sends the user and pass :)
If TLS is broken, then nothing can be trusted anyway. If you can read the hash as a MITM, you can replay it as a password equivalent and log in with the hash; you don't need the original password. You can also just inject a script to exfiltrate the original password before hashing. CSP is broken too, since you can edit the headers to give your own inline script a valid nonce. In the end, I think everything relies on TLS.
I think 10 years ago, before TLS was the standard on 99%+ of sites, many people would come up with schemes like this; forums would MD5 the password client-side and send the MD5, and all sorts of variations were common. But now the trust is placed in TLS.
> Salted hash for transmitting passwords is a good technique. This ensures that the password can not be stolen even if the SSL key is broken
I'm a little confused by this recommendation.
How is the server supposed to verify the user's password in this case? By storing the same hash with exactly the same salt in the database, effectively making the transmitted salted hash a cleartext password?
Yes, the server should never have the cleartext password. In this case the salted hash is the same as a password to you, but it protects users who reuse the same password across different sites. If your entire password DB gets leaked, the attacker would be able to log in to your site as your users, but they wouldn't be able to log in as those users to other sites without brute-forcing all the hashes.
Edit: I guess the reverse is also true, that is, leaked user passwords from other sources can't be easily tested against your user accounts just by sending a bunch of HTTP requests to your server. The attacker would have to at least run the passwords through your particular salted hash scheme first (which they can get by reverse engineering your client, but it's extra labor and computation).
That page seems to be a community wiki, and I think the original authors are somewhat confused on that point.
If you salt and hash the password on the client side, how is the server going to verify the password? Everything I can think of either requires the server to store the plaintext password (bad) or basically makes the hashed bytes become the plaintext password (pointless).
But I think the point of salting + hashing the password isn't quite the same as what TLS offers. It's not necessarily to prevent MITM eavesdropping, but to help protect the user from credential re-use from leaks.
What I was taught is that your server should never have the user's cleartext password to begin with, only the salted hash. As soon as they set it, the server only ever gets (and saves) the salted hash. That way, in the worst case scenario (data leak or rogue employee), at most your users would only have their accounts with you compromised. The salted hashes are useless anywhere else (barring quantum decryption). To you they're password equivalents, but they turn the user's weak reused password (that they may be using for banking, taxes, etc.) into a strong salted hash that's useless anywhere else.
That's the benefit of doing it serverside, at least.
Doing it clientside, too, means that the password itself is also never sent over the wire, just the salted hash (which is all the server needs, anyway), limiting the collateral damage if it IS intercepted in transit. But with widespread HTTPS, that's probably not a huge concern. I do think it can help prevent accidental leaks, like if your auth endpoint was accidentally misconfigured and caching or logging requests with cleartext passwords... again, just to protect the user from leaks.
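For reference, a minimal sketch of the salted-hash step itself, using only Python's standard library (salt size and iteration count are illustrative); whether it runs on the client or the server is exactly the design choice being debated here.

```python
import hashlib, hmac, os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256 with a per-user random salt; iteration count is illustrative
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

# At registration: pick a random salt and store (salt, digest), never the password
salt = os.urandom(16)
stored = hash_password("hunter2", salt)

# At login: recompute with the stored salt and compare in constant time
assert hmac.compare_digest(stored, hash_password("hunter2", salt))
```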
It doesn't actually do anything, because if SSL is compromised then all of the junk you think you are telling the client to do to the password is delivered via JavaScript that is also compromised.
If you’re worried about passive listeners with ssl private keys, perfect forward secrecy at the crypto layer solved that a long time ago.
For browsers at least, sending passwords plainly over a tls session is as good as it gets.
It's not to protect against MITM but against credential reuse. It offers no additional security over SSL but what it does protect against is user passwords being leaked and attackers being able to reuse that same password across the user's other online accounts (banks, etc.).
No. Anything you do on the client side can also simply not be done at all.
You can think of the salted and hashed password in your scheme as "the password", because the server will still know it and could use it to log in somewhere else (it just has to skip the salt-and-hash step).
On that last point, I wouldn't pass around the certificate to log in from multiple sources; rather, each source would have its own certificate. That is easy and cheap to do (especially with ed25519 certs).
Ah right, that's useful, thanks. Presumably if you need to login from an untrusted source (e.g. in an emergency), then you're out of luck in that case? Do you maybe keep an emergency access cert stashed somewhere?
That's a very good question. It likely depends on the circumstances. I don't know of any ways to use untrusted sources safely. Maybe something with temporary credentials (say 2FA), or the likes of AWS's EC2 Instance Connect, but there's always the problem that _something_ has to be on an untrusted location, I guess?
Having some emergency access certs in a password manager might be a good backup (and rotating it after using it on an untrusted source?).
The best way is, however, removing the need to access a machine in an emergency at all (i.e. more of the "cattle vs pets" way of thinking). But that's hard for sure.
> ...rotating it after using it on an untrusted source?...
> ...the "cattle vs pets" way of thinking...
Good points both... To the former, of course you're right that once used, an emergency cert should be replaced, which could be onerous either from the point of view of having double the number of certs to manage (rather than one master key), or else having to rotate the master key on all servers. To the latter, I'm definitely thinking about pets, so I hadn't considered just throwing away the VM and starting again; that neatly sidesteps the issue.
A lot of it has to do with centralizing administration. If you have more than one server and more than one user, certificates reduce an N×M problem to N+M instead.
Certificates can be revoked, they can have short expiry dates and due to centralized administration, renewing them is not terribly inconvenient.
On top of that, they are a lot more difficult to read over someone's shoulder; to some degree that can be considered the second factor in an MFA scheme. Same reasons why passkeys are preferred over passwords lately. Not as secure as a HW key, but still miles better than "hunter2".
It might be possible to use timing information to detect this, since the signature verification code appears to only run if the client public key matches a specific fingerprint.
The backdoor's signature verification should cost around 100us, so keys matching the fingerprint should take that much longer to process than keys that do not match it. Detecting this timing difference should at least be realistic over LAN, perhaps even over the internet, especially if the scanner runs from a location close to the target. Systems that ban the client's IP after repeated authentication failures will probably be harder to scan.
According to [1], the backdoor introduces a much larger slowdown: without the backdoor, 0m0.299s; with the backdoor, 0m0.807s. I'm not sure exactly why the slowdown is so large.
The effect of the slowdown on the total handshake time wouldn't work well for detection, since without a baseline you can't tell if it's slow due to the backdoor, or due to high network latency or a slow/busy CPU. The relative timing of different steps in the TCP and SSH handshakes on the other hand should work, since the backdoor should only affect one/some steps (RSA verification), while others remain unaffected (e.g. the TCP handshake).
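A rough sketch of the relative-timing idea (placeholder host; it only measures the coarse connect vs. banner steps, not the RSA verification step itself, so it's a baseline-gathering tool rather than a detector):

```python
import socket, time

def probe(host: str, port: int = 22):
    t0 = time.monotonic()
    s = socket.create_connection((host, port), timeout=5)
    t_connect = time.monotonic() - t0   # time until the TCP connection is up (incl. DNS)
    banner = s.recv(256)                # SSH identification string from the server
    t_banner = time.monotonic() - t0    # time until the server first spoke
    s.close()
    return t_connect, t_banner, banner.decode(errors="replace").strip()

# Comparing the two timestamps across many probes helps separate
# per-connection server-side cost from plain network latency.
print(probe("ssh.example.org"))
```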
However, only probabilistic detection is possible that way, and really, a 100us variance over the internet would require many, many detection attempts to discern.
The tweet says "unreplayable". Can someone explain how it's not replayable? Does the backdoored sshd issue some challenge that the attacker is required to sign?
What it does is this: RSA_public_decrypt verifies a signature on the client's (I think) host key by a fixed Ed448 key, and then if it verifies, passes the payload to system().
If you send a request to SSH to associate (agree on a key for private communications), signed by a specific private key, it will send the rest of the request to the "system" call in libc, which will execute it in bash.
So this is quite literally a "shellcode". Except, you know, it's on your system.
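A toy model of that mechanism (not the real backdoor code: the key pair here is generated locally purely for illustration, and the "act on payload" step just prints instead of calling system()):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed448

# Stand-in for the attacker's fixed key; in the real thing only the public
# half is embedded and the private half never leaves the attacker.
attacker_key = ed448.Ed448PrivateKey.generate()
EMBEDDED_PUBKEY = attacker_key.public_key()

def handle_payload(payload: bytes, signature: bytes) -> str:
    try:
        EMBEDDED_PUBKEY.verify(signature, payload)  # only the key holder can produce this
    except InvalidSignature:
        return "fall through to normal authentication"
    # The real backdoor hands the payload to system(); this toy just reports it.
    return f"would execute: {payload.decode()}"

cmd = b"echo hello"
print(handle_payload(cmd, attacker_key.sign(cmd)))   # valid signature: acted on
print(handle_payload(cmd, b"\x00" * 114))            # bogus signature: ignored
```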
That sounds replayable, though. If I did a tcpdump of the attacker attacking my system, I could replay that attack against some other system. For it to not be replayable, there needs to be some challenge issued by the backdoored sshd.
Of course since the backdoor was never widely deployed and is now public, I think it's unlikely the attacker will attempt to use it. So whether it's replayable doesn't have a practical impact now. I'm only asking about replayability because I'm curious how it's unreplayable.
“Any sufficiently advanced technology is indistinguishable from magic.”
Absolutely insane the level of wizardry being applied here to turn a lump of blackened, charred scrolls into readable text.
Having only cursory knowledge of machine learning: are some of the techniques used in the article only recently discovered, or have they been around for a while?
Is it because we have reached an inflection point with these types of algorithms that they have become more popular, and thus we are seeing new ways to apply them to old problems?
There has definitely been a virtuous cycle between GP-GPU processing capability, algorithms, libraries and software that use that hardware, and researchers working with those tools.
The part that I do not understand there is: why would anyone/everyone else want you to be alive again? And don't get me wrong, I would very much be interested in talking to people from various ages. Specific ones, but also commoners. But why should someone want to restore thousands of random people from (e.g.) 1284 at their own cost? That only works if those people have big stashes of money that are legally still theirs. And while I understand that some may want to keep ownership after death, I think it is super dangerous to have something like that. Just imagine 10% of the world today still belonging to Genghis Khan. That cannot be good.
But that'll depend on things going smoothly and non-evil, non-dictators winning in the end. It'd be horrific if evil and malicious entities won and decided they just wanted to fuck with everyone.
Because death is the enemy, and the Jean le Flambeur series by Hannu Rajaniemi touches on this in pretty good ways. Won't spoil the plot, though.
It does put into perspective, though, how distracted and selfish our species can be; oh, let's fight each other over skin colour, sexuality, ethnicity, and religion, and fight wars over resources, etc. Meanwhile people are getting cancer, have disabilities like blindness and paraplegia, and are generally just... dying, especially when it comes early, after a hard life. It's just so sad and disappointing that we have the resources to give everyone a pretty decent life while we work on solving these bigger problems... but we just don't.
Well, that could go lots of ways. Maybe some rich trillionaire buys you and spawns you into an endless horror simulation. They might be into torture and get off on it.
("No real humans harmed.")
But if the future can reverse the light cone, nobody is immune to that fate.
Who knows what the future holds. These are just sci-fi flights of fancy.
I remember learning about ancestor simulations by the Vile Offspring in Accelerando, but reversing the light cone is quite chilling. Is there any sci-fi novel dealing with that which you would recommend?