
That's monkey patching, and it actually would've worked fine. There isn't enough context in the write-up to say for sure, but presumably he was just doing it too late, after the third-party library was already imported. At that point the third-party library has its own reference to the original function(s), so patching the reference(s) in the source module doesn't do anything. If the source module had been patched first, though, it all would've worked out.
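
A minimal sketch of the failure mode, with made-up module names:

    # third_party.py -- binds its own reference at import time
    from random import random

    def roll():
        return random()

    # main.py, variant 1 -- patches too late
    import third_party
    import random
    random.random = lambda: 0.5  # rebinds random.random...
    third_party.roll()           # ...but this still calls the original

    # main.py, variant 2 -- patches before the import, so it works
    import random
    random.random = lambda: 0.5
    import third_party           # binds the patched function
    third_party.roll()           # 0.5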

I think he was saying something else was calling it and that was busting other things. Gevent did some crazy antics to get the whole TCP interface patched up. https://www.gevent.org/api/gevent.monkey.html#gevent.monkey....

Right, but the reason something else was able to call it was that he patched it too late. The same thing can happen with gevent. From the docs:

> Patching should be done as early as possible in the lifecycle of the program. For example, the main module (the one that tests against __main__ or is otherwise the first imported) should begin with this code, ideally before any other imports:

    from gevent import monkey
    monkey.patch_all()

> A corollary of the above is that patching should be done on the main thread and should be done while the program is single-threaded.

It's possible to patch later on, but much more involved. If you patch module A after you've already loaded module B, which itself loads module A, then you have to both patch module A and track down and patch every reference to module A in module B. Usually those will just be global references, but not always.
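
A rough sketch of the late-patch dance, for hypothetical modules A and B where B does `from A import f`:

    import A
    import B

    def fake_f():
        return 42

    original = A.f
    A.f = fake_f  # covers callers that go through A directly

    # B grabbed its own reference at import time, so hunt that down too.
    for name, value in list(vars(B).items()):
        if value is original:
            setattr(B, name, fake_f)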


This assumes there are no calls to random functions from C extensions. Still, I would have started with the above.

Less so that, since he says he knows the sources of randomness, but it does assume esoteric import methods aren't used. If for some reason the third-party library is e.g. loading modules with importlib, all bets are off.
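
For example, anything loaded straight from a file bypasses the patched copy in sys.modules (a contrived sketch):

    import importlib.util
    import random

    random.random = lambda: 0.5  # patch the cached module

    # Load a second, pristine copy of the same source file.
    spec = importlib.util.spec_from_file_location("random_fresh", random.__file__)
    fresh = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(fresh)

    random.random()  # 0.5 -- patched
    fresh.random()   # real randomness -- the patch never touched this copy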

> Rust was literally invented to solve the security and concurrency issues inherent in using C/C++ for a browser engine.

Are most browser vulnerabilities not still found in engines like V8? Rust can help with something like last year's buffer overflow in libwebp (although that's overkill when a project like https://github.com/google/wuffs exists), but I'm unclear on how it gets you a better JIT.


You're objecting to the premise, not the conclusion*. The deduction is valid for the premise (the part in the 'if'). Well, assuming you accept that an idea that can be "more complete" isn't "fully formed", but I'd say that's definitional.

* Although it's not really right to use this kind of language here (premise, conclusion, deduction). It's a casual statement, so I suppose people can somewhat reasonably argue about it, but the assertion is tautological ('if something is incomplete, it isn't fully formed').


These are the main threat models to be aware of:

1. Credential phishing. Where you store your TOTP secret doesn't matter.

2. OAuth phishing. Where you store your TOTP secret doesn't matter. (This also runs right through FIDO and has been growing in popularity for about a decade now: https://www.trendmicro.com/en_us/research/17/d/pawn-storm-ab...)

3. Data breaches. Where you store your TOTP secret doesn't matter unless your password manager backs up online, that's what gets breached, and the data isn't properly encrypted. Which has happened.

4. Malware. Where you store your TOTP secret matters. This is why U2F and FIDO2 have a user presence test, but the real-world value here is overstated. Malware can always just steal your tokens.

5. Physical access. Where you store your TOTP secret matters. If you care about security, though, you'll have other measures in place to keep someone with physical access out, which is enough against basically anyone except governments.

Summary: A "true" second factor doesn't really matter. What matters is deciding which scenarios you care about and making sure you have security you'll actually use given those constraints.


I’d add another risk here because I think it’s the main one TOTP actually ends up protecting against in this case: Credential Stuffing.

Since the TOTP secret is generated by the service and not the user, it prevents credential stuffing. It can’t be reused between sites. If you’re still reusing the same password on every site this is a huge increase in security given how many services are breached.

… but using a password manager and a unique password for each service already does that.

So storing your TOTP secrets alongside your password in your password manager is a _very_ marginal benefit. Prevents a replay of the login if it got captured in the clear and… not much else I can think of.

Whereas a “true” 2FA adds protection against malware, data breaches, physical access, and other risks. That seems like a pretty big benefit for minimal work.

I think for most people using TOTP to avoid the “this shady download/keylogger controls my entire life now” scenario is a pretty clear win on effort versus risk.

So I mostly agree with what you wrote here except the summary. I’d say everyone should be using “true” TOTP where they can, and if you’re going to store them alongside your password they’re little more than security theatre, so don’t bother.


For the most part, 2FA is just a hack to mitigate the fact that passwords are a weak form of authentication: people reuse passwords, and websites leak them.

They're better than nothing, but outside of enterprise contexts (e.g., enterprise-managed CAC cards and other hardware tokens), I think expecting 2FA to defeat malware, physical access, etc. is too much.


> For the most part, 2FA is just a hack to mitigate the fact that passwords are a weak form of authentication: people reuse passwords, and websites leak them.

Yep, and "using a password manager with a randomly generated password for each site" already mitigates that just as well.

Barring a user using 1Password and still reusing the same password on every site (why are you using 1Password then?) the only use-case I can think of for storing TOTP tokens alongside your password in 1Password is "the service requires me to use TOTP and I don't care about security".

That said, full disclosure, I've kinda got a bone to pick here because I was a long-time 1Password user who was forced to switch off when 1Password removed the ability to have multiple vaults because, as they said on their support forums, "It's called 1Password and our users only want one password." That was a... mind-boggling response from a company that should be focused on security. And this blog post really doesn't instill much more confidence than that response did.

These days I have my passwords in a KeePassX database. I have my TOTP secrets stored in two places--on my phone in an authenticator app where they're on a very secure platform and practically unrecoverable, and in an entirely separate KeePassX database that has nothing in common with my password database. The only time the KeePassX TOTP database is opened is when I set up a new service and want to add my TOTP secret. It's only there to "break glass in case of emergency" and allow me to recover access if my phone were to become unavailable. It's the right balance of risk _for me_, and in the most practical sense two-_factor_ authentication.


When did they remove the ability to have multiple vaults? I'm assuming this is really recent as I'm currently using it with 2 accounts and each has 5 or 6 vaults in it.

Unless they added it back in, you have 5 or 6 "vaults" that all open with a single password?

For a single user, they're equivalent in almost every way to just having 5 or 6 folders in one vault, so the "separate vault" distinction is pretty much gone.

Basically, what I want is two separate, independent bank vaults. Opening one should not, in any way, help you open the other. I want to keep my spending money in the main vault, and my stack of gold bars in the other vault that I only open in case of an emergency. That way, the risk of anyone seeing the combination over my shoulder or sneaking in when I've got the vaults open is minimized.

What 1Password gives you is the ability to easily create other vaults, but every time you build a new vault, they put the combination for that vault on a sticky note inside your first vault and give you no way to remove it. The security of your second, third, fourth, etc vaults is irrevocably linked to that of your first vault. All your gold bars are, no matter what you do, forced to always be exactly as secure as your spending money because as soon as someone gets into your spending money they just need to look at your sticky notes and they've got access to everything else.

There's no real security purpose for having a second vault in this system. The only purpose is to more selectively control access to what other people can access (i.e., vault sharing / team features). If you want to make some of your spending money available to your wife but not all of it and not your gold bars, you can build _another_ vault, put some cash in it (and the sticky note in your first vault so you remember how to get in!) and then also give her a sticky note she can stick in her vault with the same combination. Now you can both easily access the shared vault, but she can't access your main vault or your gold bars and you have no way to know what she has in her vault at all.

Things like this and the enforced cloud sync are decisions that weren't in any way a trade-off (the path they chose does not preclude the more secure path). The fact that they keep removing the more secure option regardless really seems like a security company that's more focused on market and revenue growth than security now.


So you don't like 1PW - but what's the alternative in your opinion?

I don't like 1Password _for my use case_.

If your use-case aligns with what 1Password is selling, I'm sure they still make a great product.

If sharing's a huge part of what you're looking for, then 1Password or Bitwarden are probably your best options. If it's not and you're comfortable managing your own sync, then I don't think KeepassXC can be beat.

Pretty much the only thing I'll say concretely is Don't. Use. LastPass. Like, if you've got the option of putting all your passwords in a plain text file on your desktop or in LastPass... pick the plain text file.


Yeah I view different vaults as a way to selectively share access and somewhat organization as opposed to 2 different independent password stores. I can understand the need for your use case and it makes sense. I think the only way to do that with 1Password now would be multiple accounts which makes no sense from a user perspective. It would be nice if they went back to supporting your use case as well.

> ... but the real-world value here is overstated. Malware can always just steal your tokens.

Stealing a TOTP secret means being able to generate all the tokens, which allows an attacker, for example, to take over a user account and lock the legitimate user out. MITMing a single U2F/FIDO2 exchange allows you to... log in? From one of the systems the user uses to access his account (the one that's compromised).
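
(The codes are a pure function of the secret and the clock; a minimal RFC 6238 sketch in Python:)

    import base64, hmac, struct, time

    def totp(secret_b32, digits=6, period=30):
        # HMAC-SHA1 over the current 30-second counter, per RFC 6238.
        secret_b32 += "=" * (-len(secret_b32) % 8)  # restore base32 padding
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, "sha1").digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)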

I've got sites where the FIDO2 key used to log in and the one used to change any security setting are two different security keys. And even when it only requires one security key, the procedure to take over an account requires several taps on the physical key. A savvy user won't be tricked into tapping his security key for no reason.

Let's take, say, a brokerage account. There's a security key for withdrawals. Without that security key, an attacker MITMing me by stealing one token could... what, see my balance and trade for me!? But he still wouldn't be able to steal a cent (and my account ain't big enough to move the market).

An HSM sitting on a device with a tiny attack surface has a lot of real-world value.

The very reason they exist is that compromised computers are a thing. Compromised Yubikeys are not.


> MITMing a single U2F/FIDO2 allows you to... log in?

You mean token theft? Most sites let you add factors without step-up auth once you're logged in [1]. That's what attackers prefer to do, vs. locking users out, which just gives away the game.

To be clear, end-running around hardware keys with stolen tokens isn't hypothetical. See for example the fallout from the last Okta breach at Cloudflare: https://blog.cloudflare.com/thanksgiving-2023-security-incid.... I make my team use YubiKeys, but there's no upside to being naive about what they are and aren't good for.

Anyway the kinds of sites you're describing are vanishingly few. What people mostly have to deal with are sites like Fidelity's. If you want something other than SMS or push notifications, you have to call Fidelity on the phone, and then all you get is Symantec's bloated wrapper around TOTP.

1. It's even the default in some major identity providers, like Microsoft's (https://learn.microsoft.com/en-us/entra/identity/conditional...). Okta is better, but then their default session length is forever, so moot point.


Interesting, how about in the Passkeys scenario where you use biometrics?

It helps but isn't perfect. The attacker can't just pick up the secret. With a passkey they would need to trick you into touching it to authenticate them. Probably by timing it when you think you are about to do a regular authentication.

Passkeys are the software version ('platform authenticator') of hardware keys ('roaming authenticator'), so no touching. Biometrics in this case are an attempt in software at something like the user presence test hardware has, but this is an OS-level guard, so the guarantees are much weaker. (Of course you can also have a biometric lock on a hardware key, which would be an improvement under the physical access threat model.)

Very often there's no particular intent, other than to be consistent. You haven't lived until you've traced discrepancies in an ML model deployed across a bunch of platforms back to which SIMD instructions each happens to support (usually squirreled away in some dependency).
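
A toy illustration: float addition isn't associative, so merely reordering a reduction (which is exactly what vectorized code does) changes the result.

    xs = [0.1] * 10

    sequential = 0.0
    for x in xs:
        sequential += x

    # Two-lane "SIMD-style" sum: partial accumulators combined at the end.
    vectorized = sum(xs[0::2]) + sum(xs[1::2])

    print(sequential == vectorized)  # False: 0.9999999999999999 vs 1.0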

That sounds about right. I routinely dig through cybersecurity data from around 100k companies, and Microsoft has no real competition. Occasionally I see Google at a little 500-person up-and-comer, but almost never in the enterprise, and they show up less and less as the companies get bigger. I don't think I've ever seen them in the Fortune 100, for example, other than at Google itself. The world runs on Exchange and Office.


It's CloudKitchens that was last tagged at $15 billion. Uber's valuation was much higher.


Most have voting control, subject to certain investor veto powers, after the A. Very few have it after the B.


Isn’t each round 10-20% to investors? Even in the worst case of Seed, A, and B at 20% each, founders still have 80% -> 64% -> 51% ?

And in the best case it’s just one series A taking ~15%, thus founders still have 85%


Each round carves out 10+% for employee options, on top of 10-30+% to investors (Seed can be anywhere from 10-30%, Series A is typically 20% to just the lead, Series B 10%+).

Equity ownership and voting control are also different things. After the B you commonly have 2 investors and an independent director on the board, alongside 1-2 founders.
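
Back-of-the-envelope with illustrative numbers, treating each round's investor stake and pool top-up as straight dilution:

    founders = 1.0
    for investors, pool in [(0.20, 0.10),   # seed
                            (0.20, 0.10),   # Series A
                            (0.15, 0.10)]:  # Series B
        founders *= 1 - investors - pool

    print(f"{founders:.0%}")  # ~37%, well under the naive 51%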


It can get much worse. Companies can have multiple “seed” rounds. It may not be doing well enough for a real “A” round. The naming of the rounds doesn’t matter. Valuation at each round does. If your valuation goes lower from one round to the next (“down round”) you’ll give up more equity, diluting everyone else faster.


Searle's arguments aren't that broad. If you're thinking of the Chinese Room, the point there is simply that computation isn't understanding. It's not a claim that AGI is nonsense in principle, and in fact most of these older arguments are narrowly scoped to GOFAI. Of course nobody serious thinks LLMs have minds, either, but unless you believe in ghosts it's hard to make a case that human-like AGI shouldn't ultimately be possible.


> unless you believe in ghosts it's hard to make a case that human-like AGI shouldn't ultimately be possible

The argument that any non-materialist position (panpsychism, substance dualism, monadism, etc.) must "believe in ghosts" is comically reductive. Metaphysics is a thing, you know. So is moral philosophy. I mean, heck, so is logic. In all these subfields, arguing that their respective first principles are purely materialist is an uphill battle. I get a feeling that post-2000s materialism is basically logical positivism 2.0 (and we know how that ended).

> It's not a claim that AGI is nonsense in principle

You're playing a semantic game here; but in any case, understanding is a function of intelligence, so (working backwards), you're actually shooting yourself in the foot, anyway.


> The argument that any non-materialist position (panpsychism, substance dualism, monadism, etc.) must "believe in ghosts" is comically reductive.

"Ghost in the machine" is from Ryle. It's quite apropos here.

> You're playing a semantic game here; but in any case, understanding is a function of intelligence, so (working backwards), you're actually shooting yourself in the foot, anyway.

I honestly don't know what point you're trying to make.

To recap, you said that:

1. You don't believe in this AGI nonsense.

2. Searle elaborated on why AGI is nonsense in the 80s.

And I am clarifying for you that:

1. Searle did not elaborate on why AGI is nonsense in the 80s. He doesn't claim that AGI is nonsense at all.

2. His arguments are specific to the methods that were called AI, and a bunch of technical and philosophical claims about those methods and about the human mind, at the time he was writing. Ditto for Dreyfus.

What you can get Searle to commit to is the idea that human-like AGI can't be a program executed on computers as we think of them today. He's an ultra-materialist, like Ryle, and thinks the physical reality of brains is essential to the minds they produce. To make a mind, you need to make a brain. (In other words, he thinks "Lena" is nonsense: https://qntm.org/mmacevedo.)

So he would say that ChatGPT is no closer to being human-like AGI than ELIZA was, since it's no closer to having something like a human brain, and if all you meant by "the whole AGI nonsense" is that ChatGPT isn't a step toward human-like AGI, then he'd agree. But he wouldn't say that human-like AGI itself is nonsense, precisely because he doesn't believe in ghosts.


Just to clarify, I disagree that materialism is the one true way (re: your "believe in ghosts" quip), even though I'm quite aware that Searle is not a cartesian dualist (which is why I made the comment about what our "brains do"). Philosophy of mind-wise, I'm much more in line with folks like Chalmers, though not fully bought in there, either.

And yes, by "AGI nonsense" I meant that "ChatGPT isn't a step toward human-like AGI" since the main chatter these days is about how ChatGPT/Claude/etc. is the harbinger of AGI. I think Searle's argument is particularly strong because you don't even have to believe in something "spooky" happening in our heads. In fact, I'd probably extend "AGI nonsense" to "AGI is flat out not possible" but that's more of a hunch and the full argument invokes things like the non-computability of various physical phenomena.


> unless you believe in ghosts it's hard to make a case that human-like AGI shouldn't ultimately be possible.

Thanks for chiming in as I'm completely unfamiliar with Searle.

I don't believe in immaterial human souls, therefore self-aware AGI appears to be a near inevitability of technological advancement from my perspective. To suggest that it's impossible is, in my opinion, absurd.


I don't think it should be flagged either, but it's wrong in a lot of ways. Some of those ways are boring, e.g. paying investor board members a salary isn't really a thing. Less boring observations:

Cybersecurity is somewhat unique in that historically most of the big VCs haven't paid much attention to it, relative to other things. Greylock, specifically Asheem Chandna and more recently Saam Motamedi, is the main exception that comes to mind. (This broad lack of interest is changing given the overall growth of the industry and the success of companies like Palo Alto Networks and CrowdStrike, though.)

Smaller VCs have spent a lot more time in the space, both because they haven't had to compete with the big firms and because cybersecurity has lower variance than other industries. The companies generate massive returns a lot less often, but also rarely fail, which makes them attractive to angels, second- and third-tier firms, and specialists.

It's an interesting dynamic, but I'd say the most problematic aspect of cybersecurity investing today is the existence of groups like SVCI (https://www.svci.io) and CyberStarts (https://cyberstarts.com). SVCI's investors are tech CISOs, i.e. potential buyers. It's pay-to-play with extra steps. CyberStarts and a few other Israeli VCs are even sketchier, flying CISOs around the world on lavish vacations billed as industry events where they socialize with the portfolio companies.

I know a handful of CISOs who've walked into new gigs only to discover a dozen CyberStarts portco products lying around, basically unused, that they then had to rip out when the contracts were up. It's all deeply unethical.

