Hacker News | crooked-v's comments

It's actually an explicit plot point in the original book that the containment is insufficient because Hammond thinks he's a big brain brilliant genius who can do all this stuff from scratch better than any boring old normal zookeepers. The movie lost that in translation as part of the attempt to make him a kindly grandfather making bad decisions instead of a two-faced showman who's completely full of himself.

Yes, the book got this and did a much better job with it. I'm not even necessarily upset with the first movie dropping that as part of the adaptation per se. Crap like that happens in the real world all the time, and even if the movie didn't call it out very well, it still at least fits the characters. HN knows all about SV startups trying to move into this or that field, thinking they're the smart young hotshots who will revolutionize it with technology, only to get ROFLstomped by the reality on the ground and by the people who have been doing it for decades and could have told them for free why it wasn't going to work, if they'd bothered to do the slightest research first.

However, the repeated errors are just silly.

Most particularly the repeated error of not bringing big enough guns [1]. Guns big enough to bother a T-Rex are certainly inconvenient, but they're readily available to anyone who's already breaking international law by visiting these islands in the first place. Of course, simply bringing big enough guns doesn't guarantee a solution to every problem, and it wouldn't be hard to still tell stories about people getting eaten, but without that as a foundation the characters just read as suicidally stupid bozos to me from the get-go. (Where's that alleged infatuation Hollywood has with guns?)

But the second park really has no reason in my eyes to have collapsed the way it did either. It wasn't really that well designed, and they still had to contrive some really, really stupid stuff to get it to fail, like crashing a helicopter into the pterodactyl pen.

[1]: https://www.youtube.com/watch?v=7Pf6E8yjMAI


It's funny, but I actually kind of like the helicopter crash, because it's caused entirely by the CEO being too smug about playing the cool hero, without any obvious moment where an average person doing their job might have prevented it. It really gets at the whole 'greed and arrogance' theme in a very punchy way without requiring any of the normal people on the ground to be really dumb.

It's a result of greed and arrogance in the book. It's even called out with the framing that has Hammond claiming he's 'spared no expense' to the investors, even as Nedry's whole subplot kicks off because Nedry's already the low bidder and Hammond's threatened to sue him into bankruptcy if he doesn't do extra work for free.

You're assuming the answer is yes, but the anecdotes about people going off the deep end from LLM-enabled delusions suggest that "first, do no harm" isn't in the programming.

The ability to (in theory) easily get second-order behavior out of simple definitions in Inform 7 is something I would find really fascinating if only it didn't require knowing all the specific magic invocations to do so.

> the same kind of stagnation people faced under the gold standard

The entire reason basically the whole world switched away from the gold standard, just about the second the US proved doing so was logistically practical, was that specie-backed currency had no long-term stability at all.


For me, "AGI" would come in with being able to reliably perform simple open-ended tasks successfully without needing any specialized aid or tooling. Not necessarily very well, just being capable of it in the first place.

For a specific example of what I mean, there's Vending-Bench - even very 'dumb' humans could reliably succeed on that test indefinitely, at least until they got terminally bored of it. Current LLMs, by contrast, are just fundamentally incapable of that, despite seeming very 'smart' if all you pay attention to is their eloquence.


If someone handed you an envelope containing a hidden question, and your life depended on a correct answer, would you rather pick a random person out of the phone book or an LLM to answer it?

On one hand, LLMs are often idiots. On the other hand, so are people.


That's not at all analogous to what I'm talking about. The comparison would be picking an LLM or a random person out of the phone book to, say, operate a vending machine... and we already know LLMs are unable to do that, given the results of Vending-Bench.

More than 10% of the global population is illiterate. Even in first world countries, numeracy rates are 75-80%. I think you overestimate how many people could pass the benchmark.

Edit - rereading, my comment sounds far too combative. I mean it only as an observation that AI is catching up quickly vs what we manage to teach humans generally. Soon, if not already, LLMs will be “better educated” than the average global citizen.


And yet, I would be completely confident that an average illiterate person could pass the Vending-Bench test indefinitely if you gave them interfaces that don't depend on the written word (phone calls, abacuses, piles of blocks, whatever), and that the "smartest" LLM in the world couldn't. It's not about level of education, beyond the bare minimum needed to have any kind of mental model of the world.

I'd learn as much as I could about the nature of the question beforehand and pay a human with a great track record of handling such questions.

Trying to just "get over it" in the neurodivergence example mentioned is the kind of thing likely to result in a panic attack or some other uncontrollable expression of emotion. It's not something you can change just by wanting it hard enough.

It also makes it impossible to find via searching.

Check out the raising of Chicago (https://en.wikipedia.org/wiki/Raising_of_Chicago). Everything from individual buildings to entire city blocks was raised, moved on rollers, or both, usually while businesses and residents stayed inside carrying on normal day-to-day life.

Chicago also reversed the flow of the Chicago River.

https://en.wikipedia.org/wiki/Chicago_River#Reversing_the_fl...

They also rebuilt much of the city because it was wiped out during the Great Chicago Fire of 1871, and now the grid system is one of the most commonsensical ones in any major American city.

Chicago is an example of a (more or less) clean-slate engineered large city -- one that arose as a result of tragedy (fire) and failure (cholera).


> In five days the entire assembly was elevated 4 feet 8 inches

At a constant rate that's roughly 0.00013 inches (3.3 µm) per second, definitely far below the threshold for anyone to notice.
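A quick back-of-the-envelope check of that rate (assuming, for the sake of the estimate, that the lift proceeded continuously over the full five days):

```python
# Sanity check: 4 ft 8 in total rise spread evenly over 5 days.
rise_inches = 4 * 12 + 8           # 56 inches total
seconds = 5 * 24 * 60 * 60         # 432,000 seconds in 5 days

rate_in_per_s = rise_inches / seconds
rate_um_per_s = rate_in_per_s * 25400   # 1 inch = 25,400 micrometers

print(f"{rate_in_per_s:.6f} in/s ~= {rate_um_per_s:.1f} um/s")
# -> 0.000130 in/s ~= 3.3 um/s
```

In practice the jacking would have happened in shifts rather than around the clock, so the instantaneous rate was somewhat higher, but still imperceptibly slow.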


As someone who's on the autism spectrum, I think there's an immense qualitative and quantitative difference between someone's brain working differently and the straightforward presence or lack of a specific physical capability.

I'd still be cautious because there's the long-running tendency for any kind of 'cure' for anything inheritable to be used as a eugenics bludgeon, but that's about society rather than the direct effects.


> I think there's an immense qualitative and quantitative difference between someone's brain working differently and the straightforward presence or lack of a specific physical capability.

In this case, the lack of a specific physical ability results in that person's brain working differently.


Not really. The brain compensates by strengthening other communication skills, since it has no auditory processing to deal with.

But it's otherwise normal. Deaf people don't magically become extremely technical or gain other specific positive traits from being deaf.

