MarkusQ's comments

Source?

I get Starship 3,400,000 kg + 1,200,000 kg vs 747 ~200,000 liters ≅ 150,000 kg, or about 1/30th of what Starship holds.
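Back-of-the-envelope check in Python (the figures are approximate public numbers, not exact specs):

  starship_prop_kg = 3_400_000 + 1_200_000  # Super Heavy booster + Ship propellant, approx.
  b747_fuel_kg = 150_000                     # ~200,000 L of Jet A at ~0.75-0.8 kg/L

  print(starship_prop_kg / b747_fuel_kg)     # ~30, i.e. the 747 holds roughly 1/30th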


Oh shit, I'm sorry, I am so wrong. I had calculated Falcon 9. Thank you for the correction.

> it shouldn’t be possible to derive the formulas for chemical or nuclear weapons.

This information is widely available in textbooks, so...what exactly is the problem? If it's just a lossy-compression parrot "AI" like ChatGPT, it won't be able to spit out anything that wasn't somewhere (implicitly or explicitly) in the training data anyway.


That is current generation (very early) tech. The hypothetical model the bill is worried about is significantly more advanced.

The key word here being hypothetical.

Getting to a significantly more advanced model will require significantly more work, and yet again we don't really know which direction to go in. Every time we crest a hill and see the ocean in the distance, we all start screaming "I see the beach! We're almost there!" but without a map we have no idea how much further we have left to travel.


Ships _spread out_ over 100,000 km will literally cover interstellar distances as a swarm by _traveling to another star_. That's the whole point. A swarm of insects can travel a thousand km between the time it forms and the time it breaks up, even if the swarm itself is never more than a couple of km in diameter.


It doesn't appear to be heavily dependent on the distribution. It mostly depends on how large your buffer is compared to the number of unique items in the list. If the buffer is the same size or larger, it would be exact. If it's half the size needed to hold all the unique items, you drop O(1 bit) of precision; at one quarter, O(2 bits), etc.
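For concreteness, here is a minimal Python sketch of the kind of buffer-limited estimator being described (a CVM-style distinct-count sampler; the name and details here are illustrative, not taken from the article):

  import random

  def estimate_distinct(stream, buffer_size):
      # Keep a bounded random sample of the distinct items seen so far.
      # p is the probability that any given distinct item currently
      # survives in the buffer, so len(buf) / p estimates the true count.
      buf, p = set(), 1.0
      for x in stream:
          buf.discard(x)                 # forget any earlier coin flip for x
          if random.random() < p:
              buf.add(x)
          while len(buf) >= buffer_size:
              # Buffer full: halve the sampling rate by coin-flipping each item.
              buf = {y for y in buf if random.random() < 0.5}
              p /= 2
      return len(buf) / p

If the buffer never fills, p stays at 1 and the answer is exact; each overflow halves p, which is where the roughly one-bit-per-halving loss of precision comes from.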


You're the one who brought up the "crack wars" narrative; the article doesn't mention crack at all.

It does, however, state (contrary to your claim) that murders are _down_ (by 34%). If you are taking a narrower view of "still" and only counting the Covid era spike (generally agreed to be due to people being forced to spend more time in small, socially isolated groups) I would note that 1) this still has nothing to do with crack and 2) it would be an example of exactly the sort of cherry picking you claim to object to.


> > I just got into the habit of saying “yes” to pretty much anything.

> Isn't this also just a form of survivorship bias? You wouldn't be posting this from prison for example.

Note that "pretty much everything" is a proper subset of "everything" and could well be defined as "everything except the obviously stupid stuff that's highly likely to lead to bad outcomes", and this from a sample that's already biased in your favour -- e.g. the vast majority of places you can go in the universe would kill you instantly, but the vast majority of places someone might suggest going to, or offer to go to with you, won't.

If you filter out the "let's go see the Titanic in my home-made submarine" ideas, your odds are actually quite good.


Yeah, I have my limits - and while I have agreed to stuff that was in hindsight Quite A Bad Idea, the majority of my “sure, why not” moments have led to the fantastic.


Not really relevant though. The airship to orbit concept aims to get both high and fast, whereas the XKCD/What-if you linked hinges on the fact that just getting high is not sufficient.

(This still leaves open the question of whether the proposed airship could actually overcome drag to get as fast as they hope.)
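Ballpark numbers for the "high vs. fast" distinction (standard orbital mechanics, not from the article):

  mu = 3.986e14                  # Earth's gravitational parameter, m^3/s^2
  r_earth = 6.371e6              # Earth radius, m
  alt = 200e3                    # a low orbit, m

  v_orbit = (mu / (r_earth + alt)) ** 0.5   # ~7,800 m/s circular orbital speed
  e_kinetic = 0.5 * v_orbit**2              # ~30 MJ per kg of payload
  e_potential = 9.81 * alt                  # ~2 MJ per kg just to climb 200 km

  print(v_orbit, e_kinetic / e_potential)   # getting fast takes ~15x the energy of getting high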


Seems lots of people consider just getting high to be sufficient for them. /j


Astronomers in general consider being at the bottom of an ocean of air a major nuisance.


> These models are obviously capable of acts of intelligence.

Except they aren't. They are capable of language and pattern manipulation, and really good at it. But if you concoct a novel problem that isn't found anywhere on the internet and confront them with it, they fail absurdly. Even if it's something that a kid could figure out.

The Eliza effect strikes again!


> But if you concoct a novel problem that isn't found anywhere on the internet and confront them with it, they fail absurdly.

Often they do, but sometimes they don't.

> Even if it's something that a kid could figure out.

Intelligent and smart aren't synonyms. Modern LLMs are obviously pretty stupid at times, but even a human with an IQ of 65 has some intelligence.


But when an LLM answers a logic or math question that no seven-year-old could figure out, would you flip the table and say that's evidence that the seven-year-old is not intelligent?

Otherwise, it sounds like circular reasoning, where we simply say "Of course a human being is intelligent because they are intelligent, and an LLM is not because it isn't."


And it also fails at math about 7% of the time, which is abysmal compared to a $7 calculator that fails about 0.00000000001% of the time. That discounts its intelligence far more than the times it gets the math right affirm it.


> But if you concoct a novel problem that isn't found anywhere on the internet and confront them with it, they fail absurdly.

Can you give an example?


Of course not, because then the AI would get trained on the proper answer! /s



I disagree. It clearly tells us _what_ is different, it just doesn't tell us _how_ it differs from case to case. So it's not a full explanation (which would, as you note, require a model of how it differs and, optimally, _why_), but it is a step towards an explanation, and not to be sneered at.

