jackothy's comments | Hacker News

Because it's hard to fully comprehend, or feel complete certainty about, what is going to happen.


"how someone so smart can be so naive"

Do you really think Ilya has not thought deeply about each and every one of your points here? There are plenty of answers to your criticisms if you look around instead of attacking.


I actually do think they have not thought deeply about it or are willfully ignoring the very obvious conclusions to their line of thinking.

Ilya has an exceptional ability to extrapolate into the future from current technology. Their assessment of the eventual significance of AI is likely very correct. They should then understand that there will not be universal governance of AI. It's not a nuclear bomb. It doesn't rely on controlled access to difficult-to-acquire materials. It is information. It cannot be controlled forever. It will not be limited to nation states, but deployed - easily - by corporations, political action groups, governments, and terrorist groups alike.

If Ilya wants to make something that is guaranteed to avoid, say, curse words and be incapable of generating porn, then sure. They can probably achieve that. But there is this naive, and in all honesty, deceptive, framing that any amount of research, effort, or regulation will establish an airtight seal to prevent AI from being used in incredibly malicious ways.

Most of all because the most likely and fundamentally disruptive near term weaponization of AI is going to be amplification of disinformation campaigns - and it will be incredibly effective. You don’t need to build a bomb to dismantle democracy. You can simply convince its populace to install an autocrat favorable to your cause.

It is as naive as it gets. Ilya is an academic and sees a very real and very challenging academic problem, but all conversations in this space ignore the reality that knowledge of how to build AI safely will be very intentionally disregarded by those with an incentive to build AI unsafely.


It seems like you're saying that if we can't guarantee success then there is no point even trying.

If their assessment of the eventual significance of AI is correct like you say, then what would be your suggested course of action to minimize risk of harm?


No, I’m saying that even if successful the global outcomes Ilya dreams of are entirely off the table. It’s like saying you figured out how to build a gun that is guaranteed to never fire when pointed at a human. Incredibly impressive technology, but what does it matter when anyone with violent intent will choose to use one without the same safeguards? You have solved the problem of making a safer gun, but you have gotten no closer to solving gun violence.

And then what would true success look like? Do we dream of a global governance, where Ilya's recommendations are adopted by utopian global convention? Where Vladimir Putin and Xi Jinping agree this is in the best interest of humanity, and follow through without surreptitious intent? Where, in countries that do agree, certain aspects of AI research become illegal?

In my honest opinion, the only answer I see here is to assume that malicious AI will be ubiquitous in the very near future, to society-dismantling levels. The cat is already out of the bag, and the way forward is not figuring out how to make all the other AIs safe, but figuring out how to combat the dangerous ones. That is truly the hard, important problem we could use top minds like Ilya’s to tackle.


If someone ever invented a gun that is guaranteed to never fire when pointed at a human, assuming the safeguards were non-trivial to bypass, that would certainly reduce gun violence, in the same way that a fingerprint lock reduces gun violence - you don't need to wait for 100% safety to make things safer. The government would then put restrictions on unsafe guns, and you'd see fewer of them around.

It wouldn't prevent war between nation-states, but that's a separate problem to solve - the solutions to war are orthogonal to the solutions to individual gun violence, and both are worthy of being addressed.


> how to make all the other AIs safe, but figuring out how to combat the dangerous ones.

This is clearly the end state of this race, observable in nature, and very likely understood by Ilya. Just like OpenAI's origins, they will aim to create good-to-extinguish-bad ASI, but whatever unipolar outcome is achieved, the creators will fail to harness and enslave something that is far beyond our cognition. We will be ants in the dirt in the way of Google's next data center.


I mean if you just take the words on that website at face value, it certainly feels naive to talk about it as "the most important technical problem of our time" (compared to applying technology to solving climate change, world hunger, or energy scarcity, to name a few that I personally think are more important).

But it's also a worst-case interpretation of motives and intent.

If you take that webpage for what it is - a marketing pitch - then it's fine.

Companies use superlatives all the time when they're looking to generate buzz and attract talent.


A lot of people think superintelligence can "solve" politics which is the blocker for climate change, hunger, and energy.


Do you have some more recent examples?


Cuba and Iran are ongoing so they're not exactly ancient history. Other recent examples would definitely include Venezuela and arguably Syria and Libya. Iraq and Afghanistan were more direct attempts at regime change (rather than using proxies) but still fit in most ways.


I grew up in Norway. The way I see it is that weekends are for spoiling yourself with the most delicious food. Another common tradition would be making pizza on weekends. Taco Friday is seen as a special weekend treat.

I know tacos and pizza aren't really special or fancy meals, but I guess they turned out that way in Norway since they came from abroad. They're still not seen as "fancy", but they are many people's favorite foods.


It's not so impossible to spot flaws if you're using worst-case testing scenarios. Those aren't worthless, because such patterns do actually pop up in real-world usage, albeit rarely.


Examples?


Had one happen to me recently where I was scrolling in Spotify, which does the thing where if you try to scroll past the end, it stretches the content.

One of the album covers being stretched had some kind of fine pattern on it that caused a clearly visible shifting/flashing Moiré pattern as it was being stretched.

Wish I could remember what album cover it was now.

Though really it's simple enough: As long as you can still spot a single dark pixel in the middle of an illuminated white screen, the pixels could benefit from being smaller. (Edit: swapped black and white)


The single pixel example is wrong I think, because that’s not how light and eyes work - you can always spot a single point of light, regardless of how small - if it’s bright enough.


Yes, exactly, that's why I went back right away to edit: if the whole screen is white but one pixel is off and you can spot it, then my logic holds.


I know of this one single intersection in northern Stockholm that still enforces left-hand traffic: https://maps.app.goo.gl/uGWr28JUQnEh9cXc6


It doesn't need to be related to a left-hand-traffic past. Just imagine those lanes were the other way around; then if you and I were driving along Valhallavägen in opposite directions and both wanted to U-turn there at the same time, we could easily get in each other's way.

When you have two multi-lane one-way roads going opposite ways and want to add a two-way connection between them, you often want to lay the lanes out the opposite of the normal way. Another example is all the roads connecting Chiang Mai's old city's outer and inner ring roads. The country drives on the left, but those connecting roads are right-handed, the opposite of the Stockholm intersection.


Makes perfect sense, thank you!

I haven't seen any historical maps of the road so I no longer have any reason to believe it stems from that. I was just going based on that (flawed) logic.


> all roads connecting Chiang Mai's old city outer and inner circle road.

Such as https://maps.app.goo.gl/uRzHBaZAY38rnjVS9 .



As a Norwegian, I have bonded with Swedes on several occasions over having the same jokes about each other. Not that the jokes are always comedic genius or anything, but that would be both an unrealistic expectation and completely unnecessary. Some unserious rivalry is fantastic to have among friends.


I've been doing something very similar in Rust recently using Askama or Maud for templates (components), optionally with Axum to include a server in the binary.

The author mentions wanting to colocate templates with logic, Maud allows much tighter/automatic integration in this regard.

This approach also synergizes almost perfectly with HTMX.
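
A minimal sketch of what I mean (written from memory, assuming a recent axum and maud with its "axum" feature enabled so Markup implements IntoResponse; the route and handler names are made up):

    use axum::{routing::get, Router};
    use maud::{html, Markup};

    // Returns just an HTML fragment, which an hx-get on the
    // client side can swap straight into the page.
    async fn hello() -> Markup {
        html! {
            button hx-get="/hello" hx-swap="outerHTML" { "Hello again!" }
        }
    }

    #[tokio::main]
    async fn main() {
        let app = Router::new().route("/hello", get(hello));
        let listener = tokio::net::TcpListener::bind("127.0.0.1:3000").await.unwrap();
        axum::serve(listener, app).await.unwrap();
    }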


Recently rediscovered htmx and applied it to a Django project. It really does give that SPA feeling without much hassle, and it's also easier in a team setting: there's less stepping on other people's toes, which often happens with a full-blown Vue or React app.


I do this but using hypertext, since the syntax is closer to JSX and less confusing for others to touch.


Just checked out hypertext! Have you noticed any other advantages over Maud, except for syntax?


> Maud allows much tighter/automatic integration in this regard

Interesting, can you share some examples?


In the linked Go approach, and also with Askama, you have to define each template (component) twice: once as a .html file, and then again as a struct inside a source file.

With Maud, templates/components are only defined once, so you no longer have to worry about keeping the two in sync or colocating them.

This is because Maud provides a macro for writing HTML with Rust-like syntax, so all your HTML (components) ends up inside regular Rust files, and you can refer to variables as usual.

This really makes for almost seamless integration between Rust code and the HTML you serve.
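
To make that concrete, here's a toy sketch (not from any real project) of two components; each is just a Rust function returning Markup:

    use maud::{html, Markup};

    // Defined once, next to the logic; no separate .html
    // file to keep in sync.
    fn task_item(title: &str, done: bool) -> Markup {
        html! {
            // .done[done] toggles the CSS class on the bool
            li .done[done] { (title) }
        }
    }

    fn task_list(tasks: &[(&str, bool)]) -> Markup {
        html! {
            ul {
                @for (title, done) in tasks {
                    (task_item(title, *done))
                }
            }
        }
    }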


"Her" is one of my favorite movies of all time, and not once while watching the demo did I think that it sounded specifically like ScarJo. The whole concept, of course, made me think of "Her", but not the voice itself.


He's not necessarily saying that was the goal from the start. All he is admitting with that tweet is that he finds it reminiscent of "Her".


Well, from the start, Altman wanted SJ to do the voice. Perhaps he'd never seen or heard of the movie "her", and the association is just coincidental?

After "she" said no, then Altman auditions a bunch of voice talent and picks someone who sounds just like SJ. I guess that's just the kind of voice he likes.

So, Altman has forgotten about SJ, has 5 voice talents in the bag, and is good to go, right? But then, 2 days(!) before the release he calls SJ again, asking her to reconsider (getting nervous about what he's about to release, perhaps?).

But still, maybe we should give Altman the benefit of the doubt, and assume he wanted SJ so badly because he had a crush on her or something?

Then on release day, Altman tweets "her", and reveals a demo not of a sober AI assistant with a voice interface, but of a cringe-inducing AI girlfriend trying to be flirty and emotional. He could have picked any of the five voices for the demo, but you know ...

But as you say, he's not admitting anything. When he tweeted "her" maybe it was because he saw the movie for the first time the night before?


Sky has been in the app for many months; the release last week didn't add the voice. It's merely for a mode that allows much more natural back-and-forth interaction via voice, which is, indeed, fairly reminiscent of the voice interface in "her".

He probably just really wants to actually have SJ's voice from the film. But SJ doesn't really have a right to arbitrary west coast vocal fry female voices. Without the "disembodied voice of an AI" element, I don't think most people would note "Oh she sounds like SJ". In fact, that's what the voice actress herself has said -- she hadn't gotten compared to SJ before this.


If down was up and blue was red then blueberries on the floor would be strawberries on the ceiling.


> After "she" said no, then Altman auditions a bunch of voice talent and picks someone who sounds just like SJ.

According to the article, the word "After" here is incorrect. It states the voice actor was hired months before the first contact with SJ. They might be lying, but when they hired the voice actor seems like it would be a verifiable fact, with contracts and other documentation.

And like others, not defending OpenAI, but that timeline does tend to break the narrative you put forth in this post.


The 2-days thing won't fly. The omni model can probably produce any voice when led with a few seconds of audio. Meta developed a similar model a while back but didn't open-source it out of fear of deepfakes, and there are commercial offerings like ElevenLabs and open-source ones like Bark et al. So the last-minute ask wasn't nerves but a last-ditch attempt to score a rebound for the launch.


The voice, or the plot/concept of the movie? "Her" was about an AI having enough realism that someone could become emotionally attached to it. It was not a movie about Scarlett Johansson's voice. Any flirty female voice would be appropriate for a "her" tweet.


Maybe, but that isn't what Altman did. He specifically tried to hire SJ. Twice.


And he hired a voice actress who sounds exactly like Rashida Jones [1] rather than SJ, six months before.

These don't have to be related. Maybe they are, but the confidence that they are is silly, since having a big celebrity name as the voice is a desirable thing, great marketing, especially one that did the voice acting for a movie about AI. My mind was completely changed when I actually listened to the voice comparison for myself [1].

[1] https://news.ycombinator.com/item?id=40435695


Exactly, just reminiscent of the movie. So the tweet really proves nothing in this case, unlike what some people seem to believe.


And if you commission an artist to draw a red-haired superspy in tight-fitting black leather for an ad for your product, it need not look like Black Widow from the MCU.

But if it does look very much like her, it doesn't really matter whether you never intended to.


You can see plenty of discussion elsewhere in this thread regarding how similar the result actually ended up being.

All I'm saying in the comment you are replying to is that it's incorrect to claim Altman is saying "the goal was specifically to copy 'Her'".

