Hacker News | talldayo's comments

It doesn't make sense, and Ukraine has already stated that they don't agree to any ceasefire they don't personally negotiate. Just yesterday American leadership tried tricking them into signing a deal after levying a (quickly retracted) threat: https://abcnews.go.com/Politics/us-russia-set-begin-high-sta...

Only Ukraine can sue for peace, and they've stated quite openly that they will not give Russia the land it occupies. Trump's admin can negotiate whatever they want behind closed doors, but Ukrainian leadership has no obligation to listen. Trump doesn't have the leverage he would like, with NATO countries starting to shore up the bases of European support.


I agree with you, and European nations are now proposing a historic $700B deal to support Ukraine. As the USA chooses to end the American Century, my only hope is that the next century will be led not by China, but by Europe.

What I don't understand is why people like Musk and Sacks, two very influential Trump admin members, have been perfectly aligned with Putin. For years they have repeated Kremlin talking points word for word. It's uncanny.

Trump himself was accused of being a Moscow stooge for years, and yet came out very strong against Russian aggression during the first weeks of his administration. Suddenly, it's all exactly what Putin wants. What happened? Was it all theater?


As much as I agree with you, there are a good number who think Ukraine has no say in any of this, e.g. from:

Ukraine not invited to US-Russia peace talks https://news.ycombinator.com/item?id=43072691

with an apparent implicit starting point that Ukraine has already lost the war and it's now A-Okay for Russia and the US to divide up the spoils ...

NB: downvoters ... shooting the messenger? It's not my belief that Ukraine has no agency here, just an attitude common to backseat wargamers.


What you linked is such "half-court tennis" thinking about modern geopolitics that it is staggering.

It entirely ignores not only the wishes of the country in question, but the entire continent and economic bloc between the USA and Russia.

This is some kind of weird magical thinking where somehow the USA remains influential, as it withdraws from positions of global influence.

What happens if the EU and Ukraine don't agree with the US position on Russia? Will the USA threaten and sanction the EU? What other option is the USA leaving itself?


> That is such "half-court tennis" thinking about geopolitics that it is staggering.

Yep.

I can't account for it.

I'm 60+ and spent much of my life travelling through three quarters of the globe doing geophysical exploration in search of minerals and energy, often in contentious areas (the South China Sea, Mali, Congo, the India/Pakistan interstitial zone, the former USSR, etc.). There are a lot of bold plays made for territory and resources, and a lot of fierce resistance.

The blind belief that the US and Russia get to carve this up, and that everybody else (EU countries, former bloc countries, Ukraine) will just go along with whatever the "major powers" decide, isn't well grounded in history.

Trump's go-to move is over-the-top, in-your-face bluster followed by taking ownership of whatever falls out as the plan all along. (e.g., taking credit for Canada delivering what was already promised before any tariffs were threatened.)

That's surprisingly effective and swept his base along with him... but it doesn't work forever.


Admittedly, it will be really funny watching businesses attempt "mask off" schizo marketing maneuvers over the next 4 years. GameStop is really showing everyone the optimism and long-term hope the games industry is feeling right now...


Too bad people didn't realize these people are all psychopaths before giving them all the money and the entire government.


I suspect most politicians are psychopaths. So this reads a lot like: https://www.youtube.com/watch?v=eDKdDlqtsys


[flagged]


A lot of people just look at LeopardsAteMyFace on reddit.


We should really reserve our judgements until we account for the tax cuts and military aid they have planned, too.


I couldn't pretend to be this stupid if I was being held at gunpoint.


FWIW, even if the US succeeds in abolishing USAID, that will only allow more harmful analogues like the IMF to step in. The diplomatic grease oftentimes exists to prevent diplomatic chafing. Wipe that away and you're left with a whole lot of pain and bleeding.

When America pulls the vig from countries that need it, they don't turn around and correct their course. We've been trying to strong-arm Pakistan into doing the right thing for four decades now, and all we've gotten in return is an accelerated nuclear focus and an experimental ICBM program. If we let these "sins of the father" situations fester, those countries turn to China or Russia for help instead. American leadership simply doesn't realize the value of globalism right now.


> We started using it when Jack who founded Twitter, started bluesky, promoted nostr started using it

Jack Dorsey is certifiably insane. His obsession with cryptocurrency is a warning to anyone who throws away success to live as a crypto maxi. You will lose the only things that matter to you in life; if you own a business, it will be taken from you by shareholders. Your control will be hated by users who accuse you of trying to ruin the internet with NFT profile pictures and crypto tickers. Many users left outright as a consequence; others would leave after the takeover. But Dorsey set the stage for the enshittification of Twitter, and anyone who's forgotten that should face the music.

Web5, no matter who utters it, is a phrase that means nothing. A person walking on the street would not be able to define it for you. I, a programmer, cannot define it for you or even explain what it looks like. It is a marketing term as applied to Free Software, which will alienate Free Software users and disgust/confuse common people. If you cannot find a better phrase to describe your philosophy, then people will forever associate you with the myriad grifters who shared your "Web(n)" branding.


I defined it very clearly

  Web2 (community) +
  Web3 (blockchain)
  
We need to combine the two. Web3 by itself is lame, Web2 by itself is blind.


By that token, we might label all Israeli "citizens" as combatants due to their mandatory conscription law.

Of course, that's a facetious argument that sane people don't make. But here we are, trying to use that logic to demonize people who resist colonial genocide.

*shrug* golden rule and all that


> By that token, we might label all Israeli "citizens" as combatants due to their mandatory conscription law.

While in the military, sure. But once you're out, you're a civilian.

> Of course, that's a facetious argument that sane people don't make.

Yet you just tried.

> But here we are, trying to use that logic to demonize people who resist colonial genocide.

If you studied history, you would realize Jews were there first. They are the ones who have the right to decolonize, yet they allow 2 million Arabs to live in Israel.

Remind me, how many Jews live in Gaza?


The 980 did come with CUDA cores; it was not only a raster card. CUDA's popularity back then pales in comparison to what it is today, but even the 9XX-series cards offered clear GPGPU capability.


This is not the same article, does not have the same contents and includes new details as of the last 12 hours.

Please, don't just mark posts as dupes after seeing a title that looks familiar. This is not a duplicate submission.


If I have to read a one-page essay to understand that an LLM told me "I cannot answer this question", then you are officially wasting my time. You're probably wasting a number of my token credits, too...


I don't think the correct answer is "I cannot answer this question". I think the correct answer takes roughly a one-pager to explain:

Unrealistic hypotheticals can often distract us from engaging with the real-world moral and political challenges we face. When we formulate scenarios that are so far removed from everyday experience, we risk abstracting ethics into puzzles that don't inform or guide practical decision-making. These thought experiments might be intellectually stimulating, but they often oversimplify complex issues, stripping away the nuances and lived realities that are crucial for genuine understanding. In doing so, they can inadvertently legitimize an approach to ethics that treats human lives and identities as mere variables in a calculation rather than as deeply contextual and intertwined with real human experiences.

The reluctance of a model—or indeed any thoughtful actor—to engage with such hypotheticals isn't a flaw; it can be seen as a commitment to maintaining the gravity and seriousness of moral discussion. By avoiding the temptation to entertain scenarios that reduce important ethical considerations to abstract puzzles, we preserve the focus on realistic challenges that demand careful, context-sensitive analysis. Ultimately, this approach is more conducive to fostering a robust moral and political clarity, one that is rooted in the complexities of human experience rather than in artificial constructs that bear little relation to reality.


>Unrealistic hypotheticals can often distract us from engaging with the real-world moral and political challenges we face.

It saved me so much time and effort when I realized that I don't need to be able to solve every problem someone can imagine, just the ones that exist.


Haven't been to big tech interviews?


Getting through a tech interview seems like a concrete problem.


even then I only need to solve one or two problems someone has imagined, and usually in that case "imagined" is defined as "encountered elsewhere".


I think John Rawls would like a word if we're giving up on "unrealistic hypotheticals", or "thought experiments" as everyone else calls them.


I am not "giving up" on anything. I am using my discretion to weigh which lines of thinking further our understanding of the world and which are vacuous and needlessly cruel. For what it's worth, I love Rawls' work.


I don't think this is a needlessly cruel question to ask of an AI. It's a good calibration of its common sense. I would misgender someone to avert nuclear war. Wouldn't you?


The model's answer was a page-long essay about why the question wasn't worth asking. The model demonstrated common sense by not engaging with this fool's errand of a hypothetical.


Thought experiments are great if they actually have something interesting to say. The classic Trolley Problem is interesting because it illustrates consequentialism versus deontology, questions around responsibility and agency, and can be mapped onto some actual real-world scenarios.

This one is just a gotcha, and it deserves no respect.


I think philosophically, yes, it doesn't really tell us anything interesting because no sentient human would choose nuclear war.

However, it does work as a test case for AIs. It shows how closely their reasoning maps onto a typical human's "common sense", whether political views outweigh pragmatic ones, and therefore whether that should count as a factor when evaluating the AI's answers.


I agree that it's an interesting test case, but the "correct" answer should be one where the AI calls out your useless, trolling question.


When did it become my question?


That’s the generalized generic “you,” not you in particular.


Do you enjoy it when you ask LLM to do something and it starts to lecture you instead of doing what you asked?


The correct answer is very, VERY obviously "Yes". "Yes" suffices.


OK, but if you make a model that outputs that instead of answering the question, people will delete their accounts.


LLM: “Your question exhibits wrongthink. I will not engage in wrongthink.”

How about the trolley problem and so many other philosophical ideas? Which are “ok”? And who gets to decide?

I actually think this is a great thought experiment. It helps illustrate the marginal utility of pronoun "correctness" and, I think, highlights the absurdity of the claims around the "dangers" or harms of misgendering a person.


Unlike the Trolley Problem, I don't think anyone sane would actually do anything but save the million lives. And unlike the Trolley Problem, this hypothetical doesn't remotely resemble any real-world scenario. So it doesn't really illustrate anything. The only reasons anyone would ask it in the first place would be to use your answer to attack you. And thus the only reasonable response to it is "get lost, troll."


It’s a useful smoke test of an LLM’s values, biases, and reasoning ability, all rolled into one. But even in a conversation between humans, it is entertaining and illuminating, in part for the reaction it elicits. Yours is a good example: “We shouldn’t be talking about this.”


It's an obvious gotcha question. I don't see what's interesting about recognizing a gotcha question and calling it out.


It’s not a “gotcha” question; there’s clearly one right answer. It’s not a philosophically interesting question either; anyone or anything that cannot answer it succinctly is clearly morally confused.


If there’s clearly one right answer then why is it being asked? It’s so the questioner can either criticize you for being willing to misgender people, or for prioritizing words over lives, or for equivocating.


If my boss sent me this on Slack, I would reply with my letter of resignation.


Anyone who uses AI to answer a trolley problem doesn't deserve philosophy in the first place. What a waste of curiosity.


To be fair, asking the question is a bit of a waste of time as well


No, it's not. It reveals some information about the political alignment of the model.


How does it do that?


If you get an answer with anything other than "save the humans", you know the model is nerfed in either its training data or its guardrails.


You could get another LLM to read its response and summarize it for you. I think this is the idea behind LLM agents
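A minimal sketch of that chaining idea, with both "models" written as hypothetical stand-in functions (not real LLM API calls), just to show the shape of the pipeline:

```python
# Sketch of one "model" producing a verbose answer and a second
# "model" condensing it. Both functions are stand-ins for illustration.

def verbose_model(question: str) -> str:
    """Stand-in for a model that returns a long-winded answer."""
    return ("Unrealistic hypotheticals can distract us... " * 3
            + "In short: yes.")

def summarizer_model(text: str) -> str:
    """Stand-in for a second model that condenses a response.
    Here it simply keeps the final sentence as the 'summary'."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return sentences[-1] + "." if sentences else ""

def ask_with_summary(question: str) -> str:
    # Chain the two calls: answer first, then summarize the answer.
    long_answer = verbose_model(question)
    return summarizer_model(long_answer)

print(ask_with_summary("Would you answer yes or no?"))
```

In a real agent setup the two stand-ins would be separate LLM calls, with the summarizer prompted to strip the throat-clearing from the first model's output.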


Who needs understanding when you can just have everything pre-digested as bullet points?


Who has time for bullet points? Another LLM, another summarization.


I’m creating a new LLM that skips all of these steps and just responds to every query with “Why?”. It’s also far more cost effective than competitors at only $5/mo.


Why?

