

Do they also take this position about biological intelligence? Because humans most certainly have an alignment problem too.

The concerning AGI properties include recursive self-improvement to superhuman capability levels and the ability to mass-deploy copies. Those are not on the horizon when it comes to humans. If hypothetically some human acquired such properties that would be equally concerning.

The enterprise versions usually aren't affected by things like these.

Software Blu-ray player DRM used to use SGX. But Intel discontinued SGX on desktop chips, so they can no longer do that.

In the sense that any other government regulation is also ultimately backed by the state's monopoly on legal use of force when other measures have failed.

And contrary to what some people are implying, he also proposes that everyone be subject to the same limitations, big players just like individuals, because the big players haven't shown much sign of doing enough.


> In the sense that any other government regulation is also ultimately backed by the state's monopoly on legal use of force when other measures have failed.

Good point. He was only (“only”) really calling for international cooperation and literal air strikes against big datacenters that weren’t cooperating. This would presumably be more of a no-knock raid, breaching your door with a battering ram and throwing tear gas in the wee hours of the morning ;) or maybe a small extraterritorial drone through your window


... after regulation, court orders and fines have failed. Which under the premise that AGI is an existential threat would be far more reasonable than many other reasons for raids.

If the premise is wrong we won't need it. If society coordinates to not do the dangerous thing we won't need it. The argument is that only in the case where other measures have failed would such use of force be the fallback option.

I'm not seeing the odiousness of the proposal. If bio research gets commodified and easy enough that every kid can build a new airborne virus in their basement we'd need raids on that too.


To be honest, I see summoning the threat of AGI as an existential threat to be on the level of lizard people on the moon. Great for sci-fi, a bad distraction for policy-making and addressing real problems.

The real war, if there is one, is about owning data and collecting data. And surprisingly many people fall for distractions while their LLM fails at basic math. Because it is a language model of course...


Freely flying through the sky on wings was sci-fi before the Wright brothers. Something sounding like sci-fi is not a sound argument that it won't happen. And unlike lizard people, we do have exponential curves to point at. Something stronger than a vibes-based argument would be good.

I consider the burden of proof to fall on those proclaiming AGI to be an existential threat, and so far I have not seen any convincing arguments. Maybe at some point in the future we will have many anthropomorphic robots and an AGI could hack them all and orchestrate a robot uprising, but at that point the robots would be the actual problem. Similarly, if an AGI could blow up nuclear power plants, so could well-funded human attackers; we need to secure the plants, not the AGI.

It doesn't sound like you gave serious thought to the arguments. The AGI doesn't need to hack robots. It has superhuman persuasion, by definition; it can "hack" (enough of) the humans to achieve its goals.

AI mind control abilities are also on the level of an extraordinary claim, that requires extraordinary evidence.

It's on the level of "we'd better regulate wooden sticks so Voldemort doesn't use the Imperius Curse on us!".

That's how I treat such claims. I treat them the same as someone literally talking about magic from Harry Potter.

It's not that nothing would make me believe that. But it requires actual evidence, not thought experiments.


Voldemort is fictional and so are bumbling wizard apprentices. Toy-level, not-yet-harmful AIs on the other hand are real. And so are efforts to make them more powerful. So the proposition that more powerful AIs will exist in the future is far more likely than an evil super wizard coming into existence.

And I don't think literal 5-word-magic-incantation mind control is essential for an AI to be dangerous. More subtle or elaborate manipulation will be sufficient. Employees have already been duped into financial transactions by faked video calls with what they assumed to be their CEOs[0], and this didn't require superhuman general intelligence, only one single superhuman capability (realtime video manipulation).

[0] https://edition.cnn.com/2024/02/04/asia/deepfake-cfo-scam-ho...


> Toy-level, not-yet-harmful AIs on the other hand are real.

A computer that can cause harm is much different than the absurd claims that I am disagreeing with.

The extraordinary claims that are equivalent to saying that the Imperius Curse exists would be the magic computers that create diamond nanobots and mind-control humans.

> that more powerful AIs will exist in the future

Bad argument.

Unsafe boxes exist in real life. People are trying to make more and better boxes.

Therefore it is rational to be worried about Pandora's box being created and ending the world.

That is the equivalent argument to what you just made.

And it is absurd when talking about world-ending box technology, even though, yes, dangerous boxes exist, just as it is absurd to claim that world-ending AI could exist.


Instead of gesturing at flawed analogies, let's return to the actual issue at hand. Do you think that agents more intelligent than humans are impossible or at least extremely unlikely to come into existence in the future? Or that such super-human intelligent agents are unlikely to have goals that are dangerous to humans? Or that they would be incapable of pursuing such goals?

Also, it seems obvious that the standard of evidence for "AI could cause extinction" can't be observing an extinction-level event, because at that point it would be too late. Considering that preventive measures would take time and require a safety margin, what level of evidence would be sufficient to motivate serious countermeasures?


Less than a month ago: https://arxiv.org/abs/2403.14380 "We found that participants who debated GPT-4 with access to their personal information had 81.7% (p < 0.01; N=820 unique participants) higher odds of increased agreement with their opponents compared to participants who debated humans."

And it's only gonna get better.


Yes, and I am sure that when people do a google search for "Good arguments in favor of X", that they are also sometimes convinced to be more in favor of X.

Perhaps they would be even more convinced by the google search than if a person argued with them about it.

That is still much different from "The AI mind controls people, hacks the nukes, and ends the world".

It's that second part that is the fantasy-land situation that requires extraordinary evidence.

But, this is how conversations about doomsday AI always go. People say "Well isn't AI kinda good at this extremely vague thing Y, sometimes? Imagine if AI was infinitely good at Y! That means that by extrapolation, the world ends!".

And that covers basically every single AI doom argument that anyone ever makes.


If the only evidence for AI doom you will accept is actual AI doom, you are asking for evidence that by definition will be too late.

"Show me the AI mindcontrolling people!" AI mindcontrolling people is what we're trying to avoid seeing.

The trick is, in the world in which AI doom is in the future, what would you expect to see now that is different from the world in which AI doom is not in the future?


> If the only evidence for AI doom you will accept is actual AI doom

No actually. This is another mistake that the AI doomers make. They pretend like a demand for evidence means that the world has to end first.

Instead, what would be perfectly good evidence, would be evidence of significant incremental harm that requires regulation on its own, independent of any doom argument.

In between "the world literally ends by magic diamond nanobots and mind controlling AI" and "where we are today" would be many many many situations of incrementally escalating and measurable harm that we would see in real life, decades before the world ending magic happens.

We can just treat this like any other technology, and regulate it when it causes real world harm. Because before the world ends by magic, there would be significant real world harm that is similar to any other problem in the world that we handle perfectly well.

It's funny because you're committing the exact mistake that I was criticizing in my original post, where you made the absolutely massive jump and hand-waved it away.

> what would you expect to see now that is different from the world in which AI doom is not in the future?

What I would expect is for the people who claim to care about AI doom to actually be trying to measure real world harm.

Ironically, I think the people who are coming up with increasingly thin excuses for why they don't have to find evidence are increasing the likelihood of such AI doom much more than anyone else, because they are abandoning the most effective method of actually convincing the world of the real-world damage that AI could cause.


Well, at least if you see escalating measurable harm you'll come around, I'm happy about that. You won't necessarily get the escalating harm even if AI doom is real though, so you should try to discover if it is real even in worlds where hard takeoff is a thing.

> What I would expect is for the people who claim to care about AI doom to actually be trying to measure real world harm.

Why bother? If escalating harm is a thing, everyone will notice. We don't need to bolster that, because ordinary society has it handled.


> You won't necessarily get the escalating harm even if AI doom is real though

Yes we would. Unless you are one of those people who think that the magic doom nanobots are going to be invented overnight.

My comparison to someone who is worried about literal magic, from Harry Potter, is apt.

But at that point, if you are worried about magic showing up instantly, then your position is basically not falsifiable. You can always retreat to some untestable, unfalsifiable magic.

Like there is actually nothing I could say, no evidence I could show to ever convince someone out of that position.

On the other hand, my position is actually falsifiable. There is all sorts of non-world-ending evidence that could convince me that AI is dangerous.

But nobody on the doomer side seems to care about any of that. Instead they invent positions that seem almost tailor made to avoid being falsifiable or disprovable so that they can continue to believe them despite any evidence to the contrary.

As in, if I were to purposely invent an idea or philosophy that is impossible to disprove or be argued out of, the "I can't show you evidence because the world will end" position is what I would invent.

> you'll come around,

Do you admit that you won't though? Do you admit that no matter what evidence is shown to you, that you can just retreat and say that the magic could happen at any time?

Or even if this isn't you literally, that someone in your position could dismiss all counter evidence, no matter what, and nobody could convince someone out of that with evidence?

I am not sure how someone could ever possibly engage with you seriously on any of this, if that is your position.


> Like there is actually nothing I could say, no evidence I could show to ever convince someone out of that position.

There is, it is just very hard to obtain. Various formal proofs would do. On upper bounds. On controllability. On scalability of safety techniques.

The Manhattan Project scientists did check whether they'd ignite the atmosphere before detonating their first prototype. Yes, that was a much simpler task. But there's no rule in nature that says proving a system to be safe must be as easy as creating the system. Especially when the concern is that the system is adaptive and adversarial.

Recursive self-improvement is a positive feedback loop, like nuclear chain reactions, like virus replication. So if we have an AI that can program then we better make sure that it either cannot sustain such a positive feedback loop or that it remains controllable beyond criticality. Given the complexity of the task it appears unlikely that a simple ten-page paper proving this will show up on arxiv. But if one did that'd be great.

>> You won't necessarily get the escalating harm even if AI doom is real though

> Yes we would.

So what does guarantee a visible catastrophe that won't be attributed to human operators using a non-agentic AI incorrectly? We keep scaling and the systems will be treated as assistants/optimizers and it's always the operator's fault. Until we roughly reach human-level on some relevant metrics. And at that point there's a very narrow complexity range from idiot to genius (human brains don't vary by orders of magnitude!). So as far as hardware goes this could be a very narrow range, and we could shoot straight from "non-agentic sub-human AI" to "agentic superintelligence" in short timescales once the hardware has that latent capacity. And up until that point it will always have been human error, lax corporate policies, insufficient filtering of the training set or whatever.

And it's not that it must happen this way. Just that there doesn't seem to be anything ruling it and similar pathways out.


What do you think mind control is? Think President Trump but without the self-defeating flaws, with an ability to stick to plans, and most importantly the ability to pay personal attention to each follower to further increase the level of trust and commitment. Not Harry Potter.

People will do what the AI says because it is able to create personal trust relationships with them and they want to help it. (They may not even realize that they are helping an AI rather than a human who cares about them.)

The normal ways that trust is created, not magical ones.


> What do you think mind control is?

The magic technology that is equivalent to the Imperius Curse from Harry Potter.

> The normal ways that trust is created, not magical ones.

Buildings as a technology are normal. They are constantly getting taller and we have better technology to make them taller.

But, even though buildings are a normal technology, I am not going to worry about buildings getting so tall soon that they hit the sun.

This is the exact same mistake that every single AI doomer makes. What they do is take something normal, and then infinitely extrapolate it to an absurd degree, without admitting that this is an extraordinary claim that requires extraordinary evidence.

The central point of disagreement, that always gets glossed over, is that you can't make a vague claim about how AI is good at stuff, and then do your gigantic leap from here to over there which is "the world ends".

Yes, that is the same as comparing these worries to worrying about buildings hitting the sun or the Imperius Curse.


Then it's just a matter of evolution in action.

And while it doesn't take a God to start evolution, it would take a God to stop it.


You might be OK with suddenly dying along with all your friends and family, but I am not even if it is "evolution in action".

Historically governments haven't needed computers or AI to do that. They've always managed just fine.

Punched cards helped, though, I guess...


gestures at the human population graph wordlessly

Agent Smith smiles mirthlessly

You say you have not seen any arguments that convince you. Is that just not having seen many arguments or having seen a lot of arguments where each chain contained some fatal flaw? Or something else?

> I see summoning the threat of AGI as an existential threat to be on the level of lizard people on the moon.

I mean, to every other lifeform on the planet YOU are the AGI existential threat. You, and I mean Homo sapiens by that, have taken over the planet and are either enslaving and breeding other animals for food, or driving them to extinction. In this light bringing another potential apex predator onto the scene seems rash.

>fall for distractions while their LLM fails at basic math

Correct, if we already had AGI/ASI this discussion would be moot because we'd already be in a world of trouble. The entire point is to slow stuff down before we have a major "oopsie whoopsie we can't take that back" issue with advanced AI, and the best time to set the rules is now.


>If the premise is wrong we won't need it. If society coordinates to not do the dangerous thing we won't need it.

But the idea that this use of force is okay itself increases danger. It creates the situation that actors in the field might realize that at some point they're in danger of this and decide to do a first strike to protect themselves.

I think this is why anti-nuclear policy is not "we will airstrike you if you build nukes" but rather "we will infiltrate your network and try to stop you like that".


> anti-nuclear policy is not "we will airstrike you if you build nukes"

Was that not the official policy during the Bush administration regarding weapons of mass destruction (which cover nuclear weapons in addition to chemical and biological weapons)? That was pretty much the official premise of the second Gulf War.


If Israel couldn't infiltrate Iran's centrifuges, do you think they would just let them have nukes? Of course airstrikes are on the table.

> ... after regulation, court orders and fines have failed

One question for you. In this hypothetical where AGI is truly considered such a grave threat, do you believe the reaction to this threat will be similar to, or substantially gentler than, the reaction to threats we face today like “terrorism” and “drugs”? And, if similar: do you believe suspected drug labs get a court order before the state resorts to a police raid?

> I'm not seeing the odiousness of the proposal.

Well, as regards EliY and airstrikes, I’m more projecting my internal attitude that it is utterly unserious, rather than seriously engaging with whether or not it is odious. But in earnest: if you are proposing a policy that involves air strikes on data centers, you should understand what countries have data centers, and you should understand that this policy risks escalation into a much broader conflict. And if you’re proposing a policy in which conflict between nuclear superpowers is a very plausible outcome — potentially incurring the loss of billions of lives and degradation of the earth’s environment — you really should be able to reason about why people might reasonably think that your proposal is deranged, even if you happen to think it justified by an even greater threat. Failure to understand these concerns will not aid you in overcoming deep skepticism.


> In this hypothetical where AGI is truly considered such a grave threat, do you believe the reaction to this threat will be similar to, or substantially gentler than, the reaction to threats we face today like “terrorism” and “drugs”?

"truly considered" does bear a lot of weight here. If policy-makers adopt the viewpoint wholesale, then yes, it follows that policy should also treat this more seriously than "mere" drug trade. Whether that'll actually happen or the response will be inadequate compared to the threat (such as might be said about CO2 emissions) is a subtly different question.

> And, if similar: do you believe suspected drug labs get a court order before the state resorts to a police raid?

Without checking I do assume there'll have been mild cases where for example someone growing cannabis was reported and they got a court summons in the mail or two policemen actually knocking on the door and showing a warrant and giving the person time to call a lawyer rather than an armed, no-knock police raid, yes.

> And if you’re proposing a policy in which conflict between nuclear superpowers is a very plausible outcome — potentially incurring the loss of billions of lives and degradation of the earth’s environment — you really should be able to reason about why people might reasonably think that your proposal is deranged [...]

Said powers already engage in negotiations to limit the existential threats they themselves cause. They have some interest in their continued existence. If we get into a situation where there is another arms race between superpowers and it is treated as a conflict rather than something that can be solved by cooperating on disarmament, then yes, obviously international policy will have failed too.

If you start from the position that any serious, globally coordinated regulation - where a few outliers will be brought to heel with sanctions and force - is ultimately doomed then you will of course conclude that anyone proposing regulation is deranged.

But that sounds like hoping that all problems forever can always be solved by locally implemented, partially-enforced, unilateral policies that aren't seen as threats by other players? That defense scales as well as or better than offense? Technologies are force-multipliers; as they improve, so does the harm that small groups can inflict at scale. If it's not AGI it might be bio-tech or asteroid mining. So eventually we will run into a problem of this type, and we need to seriously discuss it without just going by gut reactions.


Just my (probably unpopular) opinion: True AI (what they are now calling AGI) may never exist. Even the AI models of today aren't far removed from the 'chatbots' of yesterday (more like an evolution rather than revolution)...

...for true AI to exist, it would need to be self aware. I don't see that happening in our lifetimes when we don't even know how our own brains work. (There is sooo much we don't know about the human brain.)

AI models today differ only in terms of technology compared to the 'chatbots' of yesterday. None are self aware, and none 'want' to learn because they have no 'wants' or 'needs' outside of their fixed programming. They are little more than glorified auto complete engines.

Don't get me wrong, I'm not insulting the tech. It will have its place just like any other, but when this bubble pops it's going to ruin lives, and lots of them.

Shoot, maybe I'm wrong and AGI is around the corner, but I will continue to be pessimistic. I am old enough to have gone through numerous bubbles, and they never panned out the way people thought. They also nearly always end in some type of recession.


Why is "Want" even part of your equation.

Bacteria don't "want" anything in the sense of active thinking like you do, and yet they will render you dead quickly and efficiently while spreading at a near-exponential rate. No self-awareness necessary.

You keep drawing little circles based on your understanding of the world and going "it's inside this circle, therefore I don't need to worry about it", while ignoring 'semi-smart' optimization systems that can lead to dangerous outcomes.

>I am old enough to have gone through numerous bubbles,

And evidently not old enough to pay attention to the things that did pan out. But hey, those cellphones and that internet thing were just fads, right? We'll go back to landlines any day now.


> I'm not seeing the odiousness of the proposal. If bio research gets commodified and easy enough that every kid can build a new airborne virus in their basement we'd need raids on that too.

Either you create even better bio research to neutralize said viruses... or you die trying...

Like if you go with the raid strategy and fail to raid just one terrorist that's it, game over.


Those arguments do not transfer well to the AGI topic. You can't create counter-AGI, since that's also an intelligent agent which would be just as dangerous. And chips are more bottlenecked than biologics (... though gene synthesizing machines could be a similar bottleneck and raiding vendors which illegally sell those might be viable in such a scenario).

Time to publish the next book in "Stealing the network" series.

That's like complaining about artificial, non-organic flight being a fantasy before the Wright brothers. Which would not have been entirely wrong, but hardly a knockout argument, as we now know with certainty.

There is a vast difference between "engineering challenges have not yet been proven moot by a working prototype" and "precluded by physics or mathematics".

The (in)animate distinction is more akin to "net-positive fusion in a man-made vessel vs. in a gravity well" than to "obeys the 2nd law of thermodynamics vs. perpetuum mobile".


That's like complaining about artificial, non-organic flight being a fantasy before the Wright brothers.

Nope.

Before the Wright brothers, we knew it was scientifically possible to suspend objects heavier than air in an air current. For example, kites and balloons.

The only examples we have of "intelligence" are organic in nature.

Since we have no examples to the contrary; for all we know, "life" and "intelligence" could be somehow inter-related. And we don't currently fully understand or know how to engineer either one.

Yet people have "faith" --- just like the alchemists. "Believe" what you want --- but it's not science.


> The only examples we have of "intelligence" are organic in nature.

High-abstraction category-words such as "intelligence" are not fundamental properties of nature. They're made by man. Which means we currently happen to define intelligence in a way that we primarily observe in organic objects. So there's some circular reasoning in your argument. If we look at the more fundamental building blocks such as information-processing, memory, adaptive algorithms, and manipulating environments, then we know all of them are already implementable in non-organic systems.

It is not a blind belief decoupled from reality. It is an argument based on the observation that the building-blocks we know about are there and they have not been put together yet due to complexity. There also is the observation that the "putting together" process has been following a trajectory that results in more and more capabilities (chess, go, partial information games, simulated environments, vision, language, art, programming, ...), i.e. there's extrapolation based on things that are already observable. Unless you can point at some lower-level piece that is only available in organic systems. Or why only organic systems should be capable of composing the pieces into a larger whole. I am not aware of any such limiting factor.

Just like "suspending" objects in air was possible and self-propelled machines were possible, even if not self-propelled flight had not yet been done at that time.


Evolution is an incredibly dumb, very parallel search over long timescales, and it managed to turn blobs of carbon soup into brains because that's a useful (but not necessary) tool for its primary optimization goal. But that doesn't make carbon soup magical. There's no fundamental physics that privileges information-processing on carbon atoms. We don't have the time-scales of evolution, but we can optimize so much harder on a single goal than it can.

So I don't see how it's a movie fantasy any more than bottling up stars is (well, ICBMs do deliver bottled sunshine... uh, the analogy is going too far here). Anyway, the point is that while brains and intelligence are complicated systems there isn't anything at all known that says it's fundamentally impossible to replicate their functionality or something in the general category. And scaling will be a necessary but perhaps not sufficient component of that, just because they're going to be complex systems.


But that doesn't make carbon soup magical.

The equivalent of a human brain just "emerging" from inert silicon switches without any real understanding of how or why --- that's PFM (Pure Friggin' Magic). There is no logical reason to believe it is even possible or practical --- yet people still believe.

It's the modern day version of alchemy --- trying to create a fantastical result from a chemical reaction before it was understood that nuclear physics is the mechanism required. And even with this understanding, we have yet to succeed at turning lead into gold in any practical way.


"It's the modern day version of alchemy --- trying to create a fantastical result from a chemical reaction before it was understood that nuclear physics is the mechanism required. And even with this understanding, we have yet to succeed at turning lead into gold in any practical way."

Well, sure. But then, we never observed nature turning lead to gold in any cheap way, whereas we do observe nature running complex intelligence on relatively cheap hardware (our brains).

There's a pretty big difference between trying to do something Nature never did (alchemy), versus trying to replicate in a kinda-parallel kind of way something we've observed Nature doing (intelligence).


It seems evolution developed intelligence as a way for organisms to react to and move through 3D space. An advantage was gained by organisms that can "understand" and predict, but biology just reused the same hardware for locating, moving through, and modeling the 3D world for more abstract processes like thinking and pattern recognition.

So evolution came up with solutions for surviving on planet Earth, which isn't necessarily the same as general problem solving, even though there are significant overlaps. Just the $.02 of a layperson.


The question is whether other animals even have the kind of intelligence that most people call AGI. It seems that this kind of adaptive intelligence may be unique to mammals (and perhaps a few other outliers, like the octopus), with the vast majority of life being limited to inborn reflexes, imitating their conspecifics, and trial and error.

This is actually a great perspective to have. The idea that there is no fundamental law of physics that should prevent us from replicating or exceeding the functionality of the human brain through artificial means makes this a question of when, not if.

The idea that there is no fundamental law of physics that should prevent us ...

Maybe ... someday.

But right now, we don't even fully understand the physics that makes a human brain work. Every time someone starts investigating, they uncover surprising new complexity.

https://theness.com/neurologicablog/is-the-brain-analog-or-d...


The understanding is not a prerequisite to build one though. Evolution didn't need it. And all the functionality that already emerges from ANNs doesn't need it either. You mostly need to understand the optimizer, not the result. Of course understanding the result is desirable to steer the optimization process better. But if all you care for is getting any result, even potentially undesirable ones, then building bigger optimizers appears to work.

Evolution didn't need it.

Yes, it only took a few billion years of trial and error. And it didn't manage to do it with inanimate objects either.

We know that the human brain is organic, mostly analog and much more complex than a digital computer.


That's an argument about complexity. And "organic" simply means based on molecules with a carbon backbone. They're versatile, but that does not give them any known information-processing advantage over silicon.

If analog processing were relevant (it most likely isn't) then Artifical Neural Networks could also be implemented on analog silicon circuits.

As for the complexity, well, the systems are getting more complex. Straight lines on log charts. That's what scaling is about. Getting there, to that complexity.

So I'm just not seeing any knockout argument from you. We can't predict the future with certainty. But the factors you present do not appear to be fundamental blockers on the possibility. You're pointing at the lack of an existence proof... which is always the case before a prototype.

It sounds like you're basically abandoning forward-thinking and will only acknowledge AGI when it hits you over the head. A sign of intelligence is also the ability to plan for an unseen future.


Can you clarify what you mean by "no information processing advantage?" Does increased speed or memory capacity provide such an advantage? Or would you also claim that 2024 silicon has no advantage over 1994 silicon despite several orders of magnitude more speed & memory?

Since we're looking towards the future and asking whether there is an advantage to the basic materials (organic vs. inorganic), I'm talking about the physical limits of those substrates, whether there's anything special about them that allows one to process information in a way that the other can't. I.e. how many bits can be stored in a cubic centimeter of nano-structured silicon compared to a cube of carbon, oxygen, hydrogen and other organic chemistry arranged into (for example) neurons. How many TFlop/s can be fit into such a cube in principle, etc. Or whether there are some other physical processes relevant to information processing that make carbon special. Those fundamental limits have not changed over time, the physics are the same. All that changes is how much use engineering makes of those possibilities.

If you want security, pay for independent code audits (not compliance bullshit). Repeatedly. Don't offload your desires onto one-man-shows that the world decided are useful tools.


Note that the xz CLI does not expose all available compression options of the library. E.g. Rust release tarballs are xz'd with custom compression settings. But yeah, zstd is good enough for many uses.
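For illustration, a minimal sketch of what "custom settings beyond the CLI presets" can look like through liblzma's bindings (here Python's stdlib lzma module; the filter values below are made-up examples, not the settings Rust actually uses):

    # Hypothetical example: tweak LZMA2 filter options the xz CLI presets don't expose directly.
    import lzma

    filters = [{
        "id": lzma.FILTER_LZMA2,
        "preset": 9 | lzma.PRESET_EXTREME,   # start from the -9e preset...
        "dict_size": 64 * 1024 * 1024,       # ...then override the dictionary size
    }]

    data = b"example payload " * 1024
    blob = lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters)
    assert lzma.decompress(blob) == data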


github already suspended the account


github already mandates MFA for members of important projects


Doesn't it mandate it for everyone? I don't use it anymore and haven't logged in since forever, but I think I got a series of e-mails that it was being made mandatory.


It will soon. I think I have to sort it out before April 4. My passwords are already >20 random characters, so I wasn't going to do it until they told me to.


If you are using pass to store those, check out pass-otp and browserpass, since GitHub still allows TOTP for MFA. pass-otp is based on oathtool, so you can do it more manually too if you don't use pass.


My solution is far shittier than any of those.


It mandates it for everyone. I'm locked out of Github because fuck that.


Why opposed to MFA? Source code is one of the most important assets in our realm.


Most people will have to sync their passwords (generally strong and unique, given that it's for github) to the same device where their MFA token is stored, rendering it (almost) completely moot, but at a significantly higher risk of permanent access loss (depending on what they do with the reset codes, which, if compromised, would also make MFA moot.) (a cookie theft makes it all moot as well.)

The worst part is that people think they're more protected, when they're really not.


Bringing everyone up to the level of "strong and unique password" sounds like a huge benefit. Even if your "generally" is true, which I doubt, that leaves a lot of gaps.


Doesn't help that a lot of companies still just allow anyone with access to the phone number to gain access to the account (via customer support or automated SMS-based account recovery).

It's inconvenient, SMS 2FA is arguably security theater, and redundant with a real password manager. Hopefully Passkeys kills 2FA for most services.


SMS 2FA is the devil. It’s the reason I haven’t swapped phone numbers even though I get 10-15 spam texts a day. The spam blocking apps don’t help and intrude on my privacy.


FYI github already supports using passkeys as a combined login/2FA source. I haven't used my 2FA codes for a while now.


Freedom is far more important.


But you can use any totp authenticator. The protocol is free and open.


It's more to make the point that "no means no." An act of protest.

(I have written a TOTP implementation myself. I do not have a GH account, and likely never will.)


It's ridiculous to say "no means no" about not wanting to use a password to get an account, right?

What makes TOTP different from a password in terms of use or refusal?


Browsers don't save the TOTP seed and auto fill it for you for one, making it much less user friendly than a password in practice.

The main problem I have with MFA is that it gets used too frequently for things that don't need that much protection, which from my perspective is basically anything other than making a transfer or trade in my bank/brokerage. Just user-hostile requiring of manual action, including finding my phone that I don't always keep on me.

It's also often used as a way to justify collecting a phone number, which I wouldn't even have if not for MFA.


We are talking of non-SMS MFA


Mine does. Yours doesn't?


Should you have the freedom to put in a blank password, too?


My password is a single 'a'. Nobody will ever guess that one.


They have the freedom to request whatever authentication method they want.


My source code is important to others but not to me. I have backups, but 2FA is annoying to me.

It's very easy to permanently lose accounts when 2FA is in use. If I lose my device my account is gone for good.

Tokens from github never expire, and can do everything via API without ever touching 2FA, so it's not that secure.


"If I lose my device my account is gone for good."

Incorrect, unless you choose not to record your seeds anywhere else, which is not a 2fa problem.

2fa is in the end nothing more than a 2nd password that just isn't sent over the wire when used.

You can store a totp seed exactly the same as a password, in any form you want, anywhere you want, and use on a brand new device at any time.


> Incorrect, unless you choose not to record your seeds anywhere else, which is not a 2fa problem.

You know google authenticator app introduced a backup feature less than 1 year ago right?

You know phones break all the time right?


There have been other apps doing the same as Google Authenticator for 10 years, and they didn't require you not to lose your phone.


And why would I invest the time to figure this all out when the only one being advantaged are others?


You know google authenticator doesn't matter right? You know you could always copy your totp seeds since day one, regardless of which auth app or its features or limits, right? You know that a broken device does not matter at all, because you have other copies of your seeds just like the passwords, right?

When I said they are just another password, I was neither lying nor in error. I presume you can think of all the infinite ways that you would keep copies of a password so that when your phone or laptop with keepassxc on it breaks, you still have other copies you can use. Well when I say just like a password, that's what I mean. It's just another secret you can keep anywhere, copy 50 times in different password managers or encrypted files, print on paper and stick in a safe, whatever.

Even if some particular auth app does not provide any sort of manual export function (I think google auth did have an export function even before the recent cloud backup, but let's assume it didn't), you can still just save the original number the first time you get it from a qr code or a link. You just had to know that that's what those qr codes are doing. They aren't single-use, they are nothing more than a random secret which you can keep and copy and re-use forever, exactly the same as a password. You can copy that number into any password manager or plain file or whatever you want just like a password, and then use it to set up the same totp on 20 different apps on 20 different devices, all working at the same time, all generating valid totp codes at the same time, destroy them all, buy a new phone, retrieve any one of your backup keepass files or printouts, and enter them into a fresh app on a fresh phone and get all your totp fully working again. You are no more locked out than by having to reinstall a password manager app and access some copy of your password db to regain the ordinary passwords.
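To make the "a seed is just another secret" point concrete, here's a minimal RFC 6238 sketch (Python stdlib only; the seed string is a made-up placeholder): any device holding a copy of the same base32 seed computes the same codes.

    # Minimal TOTP (RFC 6238): HMAC-SHA1 over a 30-second time counter, dynamic truncation.
    import base64, hashlib, hmac, struct, time

    def totp(base32_seed: str, period: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(base32_seed.replace(" ", "").upper())
        counter = int(time.time()) // period
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
        code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
        return str(code).zfill(digits)

    # Placeholder seed for illustration only -- not tied to any real account.
    print(totp("JBSWY3DPEHPK3PXP"))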

The only difference from a password is, the secret is not sent over the wire when you use it, something derived from it is.

Google Authenticator's particular built-in cloud copy, or lack thereof, doesn't matter at all, and frankly I would not actually use that particular feature or that particular app. There are lots of totp apps on all platforms and they all work the same way: you enter the secret, give it a name like your bank or whatever, select which algorithm (it's always the default, you never have to select anything) and instantly the app starts generating valid totp codes for that account, the same as your lost device.

Aside from saving the actual seed, let's say you don't have the original qr code any more (you didn't print it or screenshot it or right-click save image?). There is yet another emergency recovery, which is the 10 or 12 recovery passwords that every site gives you when you first set up totp. You were told to keep those. They are special single-use passwords that get you in without totp, but each one can only be used once. So, say you are a complete space case and somehow don't have any other copies of your seeds in any form, not even simple printouts or screenshots of the original qr code: STILL no problem. You just burn one of your 12 single-use emergency codes, log in, disable and re-enable totp on that site, and get a new qr code and a new set of emergency codes. Your old totp seed and old emergency codes no longer work, so throw those out. This time, not only keep the emergency codes, also keep the qr code, or more practically, just keep the seed value from the qr code. It's right there in the url in the qr code. Sometimes they even display the seed value itself in plain text so that you can cut & paste it somewhere, like into a field in keepass etc.

In fact keepass apps on all platforms will not only store the seed value but also display the current totp for it, just like a totp app does. But a totp app is more convenient.

And for proper security, you technically shouldn't store both the password and the totp seed for an account in the same place, so that if someone gains access to one, they don't gain access to both. That's inconvenient but has to be said just for full correctness.

I think most sites do a completely terrible job of conveying just what totp is when you're setting it up. They tell you to scan a qr code but they kind of hide what that actually is. They DO all explain about the emergency codes, but really those emergency codes are kind of stupid. If you can preserve a copy of the emergency codes, then you can just as easily preserve a copy of the seed value itself exactly the same way, and then, what's the point of a handful of single-use emergency passwords when you can just have your normal fully functional totp seed?

Maybe one use for the emergency passwords is you could give them to different loved ones instead of your actual seed value?

Anyway if they just explained how totp basically works, and told you to keep your seed value instead of some weird emergency passwords, you wouldn't be screwed when a device breaks, and you would know it and not be worried about it.

Now, if, because of that crappy way sites obscure the process, you currently don't have your seeds in any re-usable form, and also don't have your emergency codes, well then you will be F'd when your phone breaks.

But that is fixable. Right now while it works you can log in to each totp-enabled account, and disable & reenable totp to generate new seeds, and take copies of them this time. Set them up on some other device just to see that they work. Then you will no longer have to worry about that.


> since day one

But if you forgot to do it on day one, you can't do it on day two because there is no way of getting them out other than rooting the phone.

Given how your premise was wrong, I won't bother to read that novel you wrote. I'll just assume it's all derived from the wrong premise.


My original, correct, message was perfectly short.

You don't like long, fully detailed explanations, and you ignore short explanations. Pick a lane!

A friend of mine a long time ago used to have a humorous classification system, that people fell into 3 groups: The clued, The clue-able, The clue-proof.

Some people already understand a thing. Some people do not understand a thing, but CAN understand it. Some people exist in a force bubble of their own intention that actively repels understanding.


I see that in your classification system an important entry is missing. The ones who disagree.

In your quest to convince me you forgot to even stop to ponder if you're right at all. And in my view, you aren't.

Perhaps the problem isn't that I don't understand you. Perhaps I understand you perfectly well but I understand even more, to realise that you're wrong :)


You have not shown me being wrong.

No one is stopping you.

This is a silly thing to argue about but hey I'm silly so let's unpack your critique of the classification system

There is no 4th classification. It only speaks of understanding not agreeing.

Things that are matters of opinion may still be understood or not understood.

Whether a thing is a matter of opinion or a matter of fact, both sides of a disagreement still slot into one of those classes.

If a thing is a matter of opinion, then one of the possible states is simply that both sides of a disagreement understand the thing.

In this case, it is not a matter of opinion, and if you want to claim that I am the one who does not understand, that is certainly possible, so by all means, show how. What fact did I say that was not true?

Keep trying soldier. You never know. (I mean _I_ know, but you don't. As far as you know, until you go find out, I might be wrong.)

Whatever you do, don't actually go find out how how it works.

Instead, continue avoiding finding out how it works, because holy cow, after you've gone this far... it's one thing to just be wrong about something, everyone always has to start out not understanding something, that's no failing, but to have no idea what you're talking about yet try to argue about it, in error the whole time... I mean, they (me) were such an insufferable ass already, trying to lecture YOU, but for them (me) to turn out to have been simply correct in every single fact they spoke, without even some technicality or anything to save a little face on? Absolutely unthinkable.

Definitely better to save yourself from that by just never investigating.


> No one is stopping you.

You are too busy writing novels and raging at me to read what I wrote with an open mind.


My original statement was only that this is not a 2fa problem, which was and still is true.

The fact that you did not know this does not change this fact.

I acknowledged that web sites don't explain this well, even actively hide it. So it's understandable not to know this.

But I also reminded that this doesn't actually matter because you WERE also given emergency recovery passwords, and told to keep them, and told why, and how important they were.

You were never at risk of being locked out from a broken device EVEN THOUGH you didn't know about saving the seed values, UNLESS you also discarded the emergency codes, which is not a 2fa problem, it's an "I didn't follow directions" problem.

And even if all of that happened, you can still, right now, go retroactively fix it all, and get all new seed values and save them this time, as long as your one special device happens to be working right now. It doesn't matter what features google authenticator has today, or had a year ago. It's completely and utterly irrelevant.

My premise remains correct and applicable. Your statement that 2fa places you at risk was incorrect. You may possibly be at risk, but if so you did that to yourself; 2fa did not do that to you.


> But I also reminded that this doesn't actually matter because you WERE also given emergency recovery passwords, and told to keep them, and told why, and how important they were.

Ah yes those… the codes I must go to a public library to print, on a public computer, public network and public printer. I can't really see any problem with the security of this.

And then I must never forget where I put that very important piece of paper. Not in 10 years and after moving 3 times…


You can save a few bits of text any way you want. You can write them in pencil if you want just as a backup against google killing your google drive or something. Or just keep them in a few copies of a password manager db in a few different places. It's trivial.

What in the world is this library drama?

No one is this obtuse, so your arguments are most likely disingenuous.

But if they are sincere, then find a nephew or someone to teach you how your computer works.

Libraries? Remembering something for 10 years? Moving? Oh the humanity!


> You can save a few bits of text any way you want.

So I can keep them on the same device that I keep my passwords, right?

> Or just keep them in a few copies of a password manager db in a few different places.

Great now you no longer have 2FA but just a longer password.


Yes, you can keep them on the same device if you choose to.

Or not. You decide how much effort you want and where you want to place the convenience vs security slider.

Yes, if you keep both factors not only on the same device but in the same password manager, then both factors essentially combine into nothing but a longer password.

I did say from the very first, that the seeds are nothing other than another password.

Except there is still at least one difference which I will say for at least the 3rd time... the totp secret is not transmitted over the wire when it is used, the password is. That is actually a significant improvement all by itself even if you do everything else the easy less secure way.

And you do not have to store the seeds the convenient, less secure way. You can have them in a different password app with a different master password on the same device, or on separate devices, or in separate physical forms. You can store them any way you want, securely, or less securely.

The point is that even while opting to do things all the very secure way, you are still not locked out of anything when a single special device breaks, because you are not limited to only keeping a single copy of the seeds or the emergency passwords in a single place like on a single device or a single piece of paper.

You are free to address any "but what about" questions you decide you care about in any way you feel like.

The only way you were ever screwed is by the fact that the first time you set up 2fa for any site, most sites don't explain the actual mechanics but just walk you through a sequence of actions to perform without telling you what they actually did, and so at the end of following those directions you ARE left with the seeds only stored in a single place. And in the particular case of Google Authenticator, stored in a more or less inaccessible place in some android sqlite file you can't even manually get to without rooting your phone probably. And were never even told about the seed value at all. You were given those emergency passwords instead.

That does leave you with a single precious device that must not break or be lost. But the problem is only a combination of those bad directions given by websites, and the limitations of one particular totp app when that app didn't happen to display or export or cloud-backup the seeds until recently.

Even now Google's answer is a crap answer, because Google can see the codes unencrypted on their server, and Google can kill your entire Google account at any time and you lose everything, email, drive, everything, instantly, no human to argue with. That is why I said even today I still would not use Google Authenticator for totp.

Except even in that one worst case, you still had the emergency passwords, which you were always free to keep in whatever way works for you. There is no single thing you must or must not do, there is only what kinds of problems are the worst problems for you.

Example: if what you are most concerned about is someone else getting ahold of a copy of those emergency passwords, then you want to have very few copies of them and they should be off-line and inconvenient to access. IE a printed hard copy in a safe deposit box in switzerland.

If what you are most concerned about is accidentally destroying your life savings by losing the password and the investment site has no further way to let you prove your ownership, then keep 10 copies in 10 different physical forms and places so that no matter what happens, you will always be able to access at least one of them. One on google drive, one on someone else's google drive in case yours is killed, one on onedrive, one on paper at home, one on paper in your wallet, one on your previous phone that you don't use but still works, etc etc.

You pick whichever is your biggest priority, and address that need however you want, from pure convenience to pure security and all possible points in between. The convenient way has security downsides. The secure way has convenience downsides. But you are not forced to live with the downsides of either the convenient way or the secure way.


> Why opposed to MFA? Source code is one of the most important assets in our realm.

Because if you don't use weak passwords MFA doesn't add value. I do recommend MFA for most people because for most people their password is the name of their dog (which I can look up on social media) followed by "1!" to satisfy the silly number and special character rules. So yes please use MFA.

But if your passwords (like mine) are 128+ bits out of /dev/random, MFA isn't adding value.
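For concreteness, a tiny sketch of what such a password looks like in practice (Python's secrets module draws from the OS CSPRNG, the same kernel entropy source behind /dev/urandom; the exact byte count is an illustrative choice):

    # Generate a password carrying 128 bits of entropy from the OS randomness source.
    import secrets

    password = secrets.token_urlsafe(16)  # 16 random bytes == 128 bits, base64url-encoded
    print(password)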


Sure it is. If your system ever gets keylogged and somebody gets your password you are compromised

With MFA even if somebody has your password if they don't have your physical authenticator too then you're relatively safe.


If you have a keylogger, they can also just take your session cookie/auth tokens or run arbitrary commands while you're logged in. MFA does nothing if you're logging into a service on a compromised device.


Keyloggers can be physically attached to your keyboard. There could also be a vulnerability in the encryption of wireles keyboards. Certificate-based MFA is also phishing resistant, unlike long, random, unique passwords.

There are plenty of scenarios where MFA is more secure than just a strong password.


These scenarios are getting into some Mission Impossible level threats.

Most people use their phones most of the time now, meaning the MFA device is the same device they're using.

Of the people who aren't using a phone, how many are using a laptop with a built in keyboard? It's pretty obvious if you have a USB dongle hanging off your laptop.

If you're using a desktop, it's going to be in a relatively secure environment. Bluetooth probably doesn't even reach outside. No one's breaking into my house to plant a keylogger. And a wireless keyboard seems kind of niche for a desktop. It's not going to move, so you're just introducing latency, dropouts, and batteries into a place where they're not needed.

Long, random, unique passwords are phishing resistant. I don't know my passwords to most sites. My web browser generates and stores them, and only uses them if it's on the right site. This has been built in functionality for years, and ironically it's sites like banks that are most likely to disable auto fill and require weak, manual passwords.


I mean, both can be true at the same time. I have to admit that I only use MFA when I'm forced to, because I also believe my strong passwords are good enough. Yet I can still acknowledge that MFA improves security further and in particular I can see why certain services make it a requirement, because they don't control how their users choose and use their passwords and any user compromise is associated with a real cost, either for them like in the case of credit card companies or banks, or a cost for society, like PyPI, Github, etc.


But password managers typically don't send keyboard commands to fill in a password, so a physical keylogger would be useless.

> There are plenty of scenarios where MFA is more secure than just a strong password.

And how realistic are they? Or are they just highly specific scenarios where all the stars must align, ones that are almost never going to happen?


I don't think phishing is such an obscure scenario.

The point is also that you as an individual can make choices and assess risk. As a large service provider, you will always have people who reuse passwords, store them unencrypted, fall for phishing, etc. There is a percentage of users who will get their accounts compromised because of bad password handling, which will cost you. By enforcing MFA you can decrease that percentage, and if you mandate YubiKeys or something similar the percentage will go to zero.


> I don't think phishing is such an obscure scenario.

For a typical person, maybe, but for a tech-minded individual who understands security, data entropy and what /dev/random is?

And I don't see how MFA stops phishing - it can get you to enter a token like it can get you to enter a password.

I'm also looking at this from the perspective of an individual, not a service provider, so the activities of the greater percentage of users is of little interest to me.


> And I don't see how MFA stops phishing - it can get you to enter a token like it can get you to enter a password.

That's why I qualified it with "certificate-based". The private key never leaves the device, ideally a yubikey-type device.


> That's why I qualified it with "certificate-based". The private key never leaves the device

Except that phishing doesn't require the private key - it just needs to echo back the generated token. And even if that isn't possible, what stops it obtaining the session token that's sent back?


Doesn't work for FIDO-based tokens; they authenticate the site as well, so they won't send anything to a phishing site.


From my understanding, FIDO isn't MFA though (the authenticator may present its own local challenge, but I don't think the remote party can mandate it).

There's also the issue of how many sites actually use it, as well as how it handles the loss of or inability to access private keys etc. I generally see stuff like 'recovery keys' being a solution, but now you're just back to a password, just with extra steps.


The phisher will not receive a valid token, though, because you sign something that contains the domain you are authenticating to.


The phisher can just pass on whatever you sign, and capture the token the server sends back.

Sure, you can probably come up with some non-HTTPS scheme that can address this, but I don't see any site actually doing this, so you're back to the unrealistic scenario.


No, because the phisher will get a token that is designated for, say, mircos0ft.com which microsoft.com will not accept. It is signed with the user's private key and the attacker cannot forge a signature without it.
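
Roughly what the relying party checks, as a sketch (not real WebAuthn library code; verify_sig stands in for whatever signature check your crypto library provides):

    import json
    from hashlib import sha256

    def verify_assertion(client_data_json, authenticator_data, signature,
                         verify_sig, expected_origin="https://microsoft.com"):
        client_data = json.loads(client_data_json)

        # The origin is inside the blob the authenticator signed, so an
        # assertion produced on mircos0ft.com can never verify for microsoft.com.
        if client_data.get("origin") != expected_origin:
            return False

        signed_bytes = authenticator_data + sha256(client_data_json).digest()
        return verify_sig(signature, signed_bytes)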


A password manager is also not going to fill in the password on mircos0ft.com, so it's perfectly safe in this scenario. You need a MitM-style attack or a full-on client compromise in both cases, which are vulnerable to session cookie exfiltration or just remote control of your session no matter the authentication method.

I think we're talking past each other a bit here.

If I were trying to phish someone, I wouldn't attack the public key crypto part, so how domains come into play during authentication doesn't matter. I'd just grab the "unencrypted" session token at the end of the exchange.

Even if you somehow protected the session token (sounds dubious), there's still plenty a phisher could do, since they have full MITM capability.


These days I wonder about all the cameras in a modern environment and "keylogging" from another device filming the user typing.


Session keys expire and can be scoped to do anything except reset the password, export data, etc. That's why you'll sometimes be asked to log in again on some websites.


If you're on a service on a compromised device, you have effectively logged into a phishing site. They can pop-up that same re-login page on you to authorize whatever action they're doing behind the scenes whenever they need to. They can pretend to be acting wonky with a "your session expired log in again" page, etc.

This is part of why MFA just to log in is a bad idea. It's much more sensible if you use it only for sensitive actions (e.g. changing password, authorizing a large transaction, etc.) that the user almost never does. But you need everyone to treat it that way, or users will think it's just normal to be asked to approve all the time.
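
In code terms it's something like this toy sketch (the action names and the five-minute window are made up, just to show the shape):

    import time

    FRESH_MFA_WINDOW = 5 * 60  # seconds of MFA "freshness" required for sensitive actions
    SENSITIVE = {"change_password", "large_transfer", "export_data"}

    def authorize(session: dict, action: str) -> None:
        # Ordinary actions ride on the session alone; only sensitive ones
        # demand a recent MFA approval.
        if action in SENSITIVE:
            if time.time() - session.get("last_mfa_at", 0) > FRESH_MFA_WINDOW:
                raise PermissionError("step-up MFA required")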


Some USB keys have an LCD screen on them to prevent that. You can compromise the computer the key is inserted into, but you cannot compromise the key. If the message shown on your computer screen differs from the message on the key, you reject the auth request.


Haha yes they do. Everyone stores their 2FA in 1Password, so once that's stolen by a keylogger they're fucked.


The slogan is "something you know and something you have", right?

I don't have strong opinions about making it mandatory, but I turned on 2FA for all accounts of importance years ago. I use a password manager, which means everything I "know" could conceivably get popped with one exploit.

It's not that much friction to pull out (or find) my phone and authenticate. It only gets annoying when I switch phones, but I have a habit of only doing that every four years or so.

You sound like you know what you're doing, and that's fine, but I don't think it's true that MFA doesn't add security on average.


> It only gets annoying when I switch phones

Right. I don't ever want to tie login to a phone because phones are pretty disposable.

> I don't think it's true that MFA doesn't add security on average

You're right! On average it's better, because most people have bad passwords and/or reuse them in more than one place. So yes, MFA is better.

But if your password is already impossible to guess (as 128+ random bits are) then tacking on a few more bytes of entropy (the TOTP seed) doesn't do much.


Those few bits are the difference between a keylogged password holder waltzing in and an automated monitor noticing that someone is failing the token check and locking the account before any damage occurs.


I think you're missing the parent's point: both are just preshared keys. One has some additional fuzz around it so that the user, in theory, isn't typing the same second key in every time, but much of that security is in keeping the second secret in a little keychain device that cannot itself leak the secret. Once people put the seeds in their password managers/phones/etc., it's just more data to steal.

Plus, the server/provider side remains a huge weak point too. And the effort of enrolling/giving the user the initial seed is suspect.

This is why the FIDO/hardware passkeys/etc are so much better: it's basically hardware-enforced two-way public key auth. Done correctly, there isn't any way to leak the private keys and it's hard as hell to MITM. Which is why loss of the hardware is so catastrophic. Most every other MFA scheme is just a bit of extra theater.


> both are just preshared keys

Exactly, that's it. Two parties have a shared secret of, say, 16 bytes total, upon which authentication depends.

They could have a one-byte password but a 15-byte shared secret used to compute the MFA code. The password is useless, but the MFA seed is unguessable. Maybe have no password at all (zero length) and a 16-byte seed. Or go the other way, with a 16-byte password and zero seed. In terms of an attacker brute-forcing the keyspace, it's always the same: 16 bytes.

We're basically saying (and as a generalization, this is true) that the password part is useless since people will just keep using their pet's name, so let's put the strength on the seed side. Fair enough, that's true.

But if you're willing to use a strong unique password then there's no real need.
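
To make the "same keyspace" point concrete: a TOTP code is nothing more than an HMAC of the current 30-second window keyed with that seed (a sketch of RFC 6238; base32 decoding of the seed and clock-drift windows omitted):

    import hmac, struct, time
    from hashlib import sha1

    def totp(seed: bytes, step: int = 30, digits: int = 6) -> str:
        counter = int(time.time() // step)
        mac = hmac.new(seed, struct.pack(">Q", counter), sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

Whoever holds the seed can run that function, so it's exactly the same kind of secret as the password, just stored in a different column.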

(As to keyloggers, that's true, but not very interesting. If my machine is already compromised to the level that it has malicious code running that logs all my input, it can steal both the passwords and the TOTP seeds, and all the website content and filesystem content, and so on. The game's over already.)

> This is why the FIDO/hardware passkeys/etc are so much better

Technically that's true. But in practice, we now have a few megacorporations trying to own your authentication flow in a way that introduces denial of service possibilities. I must control my authentication access, not cede control of it to a faceless corporation with no reachable support. I'd rather go back to using password123 everywhere.


Your password is useless when it comes to hardware keyloggers. We run yearly tests to see if people check for "extra hardware". Needless to say, we have a very high failure rate.

It's hard to get a software keylogger installed on a corporate machine. It's easy to get physical access to the office, or even people's homes, and install keyloggers all over the place and download the data via BT.


> Your password is useless when it comes to hardware keyloggers.

You are of course correct.

This is where threat modeling comes in. To really say if something is more secure or less secure or a wash, threat modeling needs to be done, carefully considering which threats you want to cover and not cover.

In this thread I'm talking from the perspective of an average individual with a personal machine who is not interesting enough to be targeted by corporate espionage or worse.

Thus, the threat of operatives breaking into my house and installing hardware keyloggers on my machines is not part of my threat model. I don't care about that at all, for my personal use.

For sensitive company machines or known CxOs and such, yes, but that's a whole different discussion and threat model exercise.


> But if your passwords are, like mine, 128+ bits out of /dev/random, MFA isn't adding value.

No. A second factor of authentication is completely orthogonal to password complexity.


Which helps with some kinds of threats, but not all. It keeps someone from pretending to be the maintainer, but if an actual maintainer is compromised, coerced, or just bad from the start and biding their time, they can still do whatever they want with full access rights.


You probably should have replied that to the GP, not me. I only clarified that what they were suggesting already is the case.

