
I agree it's a fallacy when the probability is something like 10^-10, but in this case I believe the probability is more like 1%, in which case the argument is sound. I'm not trying to make a Pascal's wager argument.

That might be why Sam Altman signed it, but why do you think all the academics did so as well? Do you think he just bribed them all or something?


I wonder if this is a good opportunity to incorporate some form of built-in LLM? Could be a killer feature.

Not well. Very few updates since then. https://github.com/keybase/client/graphs/code-frequency tells the story well.

This longtermist and Effective Altruism way of thinking is very dangerous, because with this chain of argumentation it's "trivial" to say what you're just saying: "So what if there's racism today, it doesn't matter if everybody dies tomorrow."

We can't just say that we weigh humanity's extinction with a big number, and then multiply it by all humans that might be born in the future, and use that to say today's REAL issues, affecting REAL PEOPLE WHO ARE ALIVE are not that important.

Unfortunately, this chain of argumentation is used by today's billionaires and elite to justify and strengthen their positions.

Just to be clear, I'm not saying we should not care about AI risk, I'm saying that the organization that is linked (and many similar ones) exploit AI risk to further their own agenda.


While I'm not on this "who's-who" panel of experts, I call bullshit.

AI does present a range of theoretical possibilities for existential doom, from the "gray goo" and "paperclip optimizer" scenarios to Bostrom's post-singularity runaway self-improving superintelligence. I do see this as a genuine theoretical concern that could potentially even be the Great Filter.

However, the actual technology extant or even on the drawing boards today is nothing even on the same continent as those threats. We have very vast (and expensive) sets of probability-of-occurrence vectors that amount to a fancy parlor trick that produces surprising and sometimes useful results. While some tout the clustering of vectors around certain sets of words as implementing artificial creation of concepts, it's really nothing more than an advanced thesaurus; there is no evidence of concepts being wielded in relation to reality, tested for truth/falsehood value, etc. In fact, the machines are notorious and hilarious for hallucinating with a highly confident tone.

We've created nothing more than a mirror of human works, and it displays itself as an industrial-scale bullshit artist (where bullshit is defined as expressions made to impress without care one way or the other for truth value).

Meanwhile, this panel of experts makes this proclamation with not the slightest hint of what type of threat is present that would require any urgent attention, only that some threat exists that is on the scale of climate change. They mention no technological existential threat (e.g., runaway superintelligence), nor any societal threat (deepfakes, inherent bias, etc.). This is left as an exercise for the reader.

What is the actual threat? It is most likely described in the Google "We Have No Moat" memo[0]. Basically, once AI is out there, these billionaires have no natural way to protect their income and create a scalable way to extract money from the masses, UNLESS they get cooperation from politicians to prevent any competition from arising.

As one of those billionaires, Peter Thiel, said: "Competition is for losers" [1]. Since they have not yet figured out a way to cut out the competition using their advantages in leading the technology or their advantages in having trillions of dollars in deployable capital, they are seeking a legislated advantage.

Bullshit. It must be ignored.

[0] https://www.semianalysis.com/p/google-we-have-no-moat-and-ne...

[1] https://www.wsj.com/articles/peter-thiel-competition-is-for-...


> the first 90% takes 10% of the time, the last 10% takes 90%.

Is the second part of that statement necessary?


I agree that a postal carrier would be criminally liable for knowingly delivering a live bomb, but I think you’re wrong on the end part. There is no legal obligation to try to stop crimes you know about. In most cases there isn’t even an obligation to report them.

Are you saying that the internet at large could pose a 'legitimate risk to human life such that we would go extinct in the near future,' or do you disagree that AI, when used as a tool by humans, could pose such a risk?

My solution is to eat a heaping spoonful of peanut butter.

No idea why it works but it does.


The really disgusting bit is not the fact of being carried up a mountain, it is the neglect of giving the local guides the credit they deserve. Last time I bothered to look, Sherpas held all Everest records: number of successful ascents, speed, you name it.

That Sherpas, just to pick a prominent example since locals in other regions share the same fate, don't get the recognition they deserve is not only down to the people going on paid expeditions. It is also the press and media not giving them credit. Basically all successful Himalayan expeditions, before Messner started to go full solo, relied on Sherpas or other local groups. Not properly mentioning those incredible mountaineers has a long tradition, one that extended, to a much lesser degree, to guides from the former Soviet Union (looking at you, Krakauer). The Chinese alpinists, and to a lesser extent the Indians as well, defy that trend somewhat by virtue of simply being rich.


The irony of critiquing Bulverism as a concept, not by attacking the idea itself, but instead by assuming it is wrong and attacking the character of the author, is staggeringly hilarious.

Warnings-as-errors is a great policy in general, but these are non-deterministic warnings based on compile-time wall timing. A coworker working on a slower computer than you? They can run into hundreds of errors after a pull, errors you introduced but couldn't see. What now? Running something intensive in the background? You can't compile successfully any more. Not ideal.

Out of curiosity, were those CTE issues on the current PG version (13.4)? I know they changed some materialization stuff with them in 12 or so but was wondering if that issue was from back then or if you're running into some stuff that currently still happens.

Same here.

I can't forget reading about a declassified case of testing drugs (probably LSD) on unsuspecting people in a motel and surveilling them. One particular detail that stuck in my head was the pleasure some agents were getting out of this "test" (documented by another, more adequate agent).


Clearly a meter of beer is better than a yard of beer.

The most obvious paths to severe catastrophe begin with "AI gets to the level of a reasonably competent security engineer in general, and gets good enough to find a security exploit in OpenSSL or some similarly widely used library". Then the AI, or someone using it, takes over hundreds of millions of computers attached to the internet. Then it can run millions of instances of itself to brute-force look for exploits in all codebases it gets its hands on, and it seems likely that it'll find a decent number of them—and probably can take over more or less anything it wants to.

At that point, it has various options. Probably the fastest way to kill millions of people would involve taking over all internet-attached self-driving-capable cars (of which I think there are millions). A simple approach would be to have them all plot a course to a random destination, wait a bit for them to get onto main roads and highways, then have them all accelerate to maximum speed until they crash. (More advanced methods might involve crashing into power plants and other targets.) If a sizeable percentage of the crashes also start fires, fire departments are not designed to handle hundreds of separate fires in a city simultaneously, especially if the AI is doing other cyber-sabotage at the same time. Perhaps the majority of cities would burn.

The above scenario wouldn't be human extinction, but it is bad enough for most purposes.


The spooky part is we think encrypted/secure communications protect us, but with quantum computers just around the corner and a near unlimited appetite for storing records, very soon all that historical data will be broken into.

Quantum resistant encryption doesn't protect the past.


Yup.



Looks like swiftc needs to embed a performant SAT solver ;)

(I could not find the link anymore, but I remember reading a blog post about how Rust `match` statements are in fact solving an instance of SAT.)
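For a rough illustration of the kind of reasoning involved (my own toy example, not from that post): exhaustiveness checking has to prove that a set of patterns covers every possible input, which is essentially asking whether the "cases not covered" formula is satisfiable.

    // Toy example: rustc must prove these arms cover every
    // (bool, bool, bool) combination. If the last arm only matched
    // (false, false, false), the compiler would report the uncovered
    // case (false, false, true), i.e. a satisfying assignment for
    // the "not matched" formula.
    fn classify(flags: (bool, bool, bool)) -> &'static str {
        match flags {
            (true, _, _) => "a is set",
            (false, true, _) => "b is set, a is not",
            (false, false, _) => "neither a nor b",
        }
    }

    fn main() {
        println!("{}", classify((false, true, false)));
    }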


VR MMO social software is bound to be soulless if it is designed and operated by a company whose customers are advertisers who want to minimise risk and reach the broadest possible audience.

Advertising is cancer, marketing is a blight upon every facet of humanity.

It should be no surprise that a medium designed to facilitate marketing first, and to "be barely engaging enough to ensure most people can visit and be exposed to marketing without having a strikingly negative impression", is going to be soulless.

Even in real life, businesses and stores are built where there is already demand. Stores do not create demand. The commercial-first approach to a metaverse is akin to building a shopping mall deep in the Sahara desert and expecting people to move next to it and found a town. At most you'll get a town of the mall's employees.

The Internet was successful because it didn't start out as what it is today. It was about people and their creations first. I hypothesise that if e-commerce had indeed been the first feature of the Internet, vastly fewer people would be engaging with it nowadays. Why go to a store with the most unrelenting salesman ever by your side, on a medium you have absolutely zero idea how to use and interpret?


Thanks for pointing it out. I totally missed it. Sorry.

At least now, if it turns out they are right, they can't claim anymore that they didn't know.

35k (bear in mind we have a public health system, the universities are like 3k a year and the rent is less than 1k, if you are fired you have up to 2 years of your salary, etc)

cannot tell if </s> or .....

It's not clear at all that we have an avenue to superintelligence. I think the most likely outcome is that we hit a local maximum with our current architectures and end up with helpful assistants similar in capability to George Lucas's C-3PO.

The scary doomsday scenarios aren't possible without an AI that's capable of both strategic thinking and long term planning. Those two things also happen to be the biggest limitations of our most powerful language models. We simply don't know how to build a system like that.


You are almost correct. The claims are called SCTs (precertificates are something else). The SCT is a promise by the log that they have (or will soon) publish the certificate. The promise could be a lie. To detect that, the client is supposed to audit the SCT. If the audit fails, the client reports it so the log can be distrusted.

So yes, it's true that if an adversary can permanently silo off a client, it can prevent log misbehavior from being detected, either by blocking the reporting of audit failures, or by presenting a completely different view of the log to the client. However, in many cases it would be impractical for an adversary to keep up such an attack forever, so CT still has value and I'm a huge fan of it. But it's true it can't stop literally all attacks.

Source: I run the CT monitor which has detected misbehavior in multiple CT logs.
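For anyone unfamiliar with the moving parts, here is a rough sketch of what an SCT carries and what "auditing" it means. The field names follow RFC 6962, but the types and the audit function are purely illustrative, not any real library's API.

    // Illustrative only: structure per RFC 6962, logic stubbed out.
    struct SignedCertificateTimestamp {
        log_id: [u8; 32],   // SHA-256 hash of the log's public key
        timestamp_ms: u64,  // when the log promised to include the entry
        signature: Vec<u8>, // log's signature over the (pre)certificate entry
    }

    // A client "audits" an SCT by checking that the log kept its promise.
    fn audit(sct: &SignedCertificateTimestamp, cert_der: &[u8]) -> bool {
        // 1. Verify the SCT signature against the log's known public key.
        // 2. After the log's maximum merge delay, request an inclusion
        //    proof for the certificate from the log.
        // 3. Verify the Merkle audit path against a signed tree head.
        // 4. On failure, report the SCT so the log can be distrusted.
        let _ = (sct, cert_der); // real verification omitted in this sketch
        true
    }

    fn main() {
        let sct = SignedCertificateTimestamp {
            log_id: [0u8; 32],
            timestamp_ms: 0,
            signature: vec![],
        };
        println!("audit ok: {}", audit(&sct, b"example certificate"));
    }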


I’m making an absurdly large generalization here, but I feel like kids have less curiosity about specifically the devices they own. As things get more integrated into their lives, the device has become a lot more of a utilitarian tool, rather than something worth digging into to see how it does those things.

Not to say that setting a single screen lock time setting is the pinnacle of curiosity, but it’s a reflection of the banal utilitarian way they look at devices. “It does what it does.”


Protobuf has caused me nothing but trouble once I have even moderately large data sets. If what you are storing is large matrices or rasters, honestly nothing beats HDF5.
