
The current US Administration does not understand the basic mechanics of American hegemony. I think Trump truly believes that we are getting ripped off and stand to gain nothing from our current position with the EU and other allies. But if you take away the incentives and guarantees they'll just turn their back on you and seek their own protection—it has to be symbiotic.

I fully agree that the US benefits from its hegemony and that Trump's statements that the US is getting ripped off are completely false.

But what game is currently being played by the Trump administration? He told the EU to become self-sufficient in defense spending and insulted it and Ukraine to awaken their pride. He (temporarily) cut off public ties with Zelensky. The press conference with the row at the end actually ended with Trump winking at the audience and Zelensky giving a thumbs up; that part is cut out of many videos.

EU leaders scramble to put up the type of peace plans that they know will be refused. What if all is prearranged and Trump just wants to dump Biden's conflict on the EU, at least temporarily until everyone has rearmed?

Trump has said a lot, including that he would lift sanctions on Russia, but he extended the sanctions instead. He did halt arms shipments to Ukraine, either to pressure Ukraine or the EU.

The EU should negotiate with Russia without the US, get a viable peace plan and drop sanctions. Then we'll see if Trump's behavior is more than theater.


I think the point about used cars would have to imply that newer used cars are more reliable than older ones, which would make the supply of used cars saturate the market more quickly, thus reducing demand for new cars. Not sure if that's true, but it seems reasonable.

I'm not convinced that LLMs in their current state are really making anyone's lives much better, though. We need more serious research applications of this technology for that to become apparent. Polluting the internet with regurgitated garbage produced by a chat bot does not benefit the world, and increasing the productivity of software developers does not, by itself, help the world either. Solving more important problems should be the priority for this type of AI research & development.

The explosion of garbage content is a big issue and has radically changed the way I use the web over the past year. Google and DuckDuckGo are no longer my primary tools; instead I now use specialized search engines more and more. For example, if I'm looking for something I believe can be found on someone's personal blog, I use Marginalia or Mojeek; for software issues, GitHub's search; for general info, straight to Wikipedia; for tech reviews, HN's Algolia; etc.

It might sound a bit cumbersome, but it's actually super easy if you assign search keywords in your browser: for instance, if I'm looking for something on GitHub I just open a new tab in Firefox and type "gh tokio".
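For reference, a keyword bookmark in Firefox is just a bookmark whose URL contains %s where the query goes, plus a short keyword. Roughly like the following (the exact search URLs are from memory, so double-check them against each site):

```
gh   -> https://github.com/search?q=%s
hn   -> https://hn.algolia.com/?q=%s
wiki -> https://en.wikipedia.org/wiki/Special:Search?search=%s
```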


LLMs have been extremely useful for me. They are incredibly powerful programmers, at least from the perspective of people who aren't programmers.

Just this past week Claude 3.7 wrote a program for us to quickly modernize ancient (1990s) proprietary manufacturing machine files into contemporary automation files.

This allowed us to forgo a $1k/yr/user proprietary software package that could do the same thing. The program Claude wrote took about 30 minutes to make. Granted, it is extremely narrow in scope, but it does the one thing we need it to do.

This marks the third time I (a non-programmer) have used an LLM to create software that my company uses daily. The other two are a test system made by GPT-4 and an Android app made by a mix of 4o and Claude 3.5.

Bumpers may be useless and laughable to pro bowlers, but a godsend to those who don't really know what they are doing. We don't need to hire a bowler to knock over pins anymore.


Being able to quickly get a script for some simple automation, defining the source and target formats in plain English, has been a huge help. There is simply no way I'm going to remember all that stuff as someone who doesn't program regularly, so previously I dealt with it by doing everything manually; that was quicker than relearning remedial Python just to forget it all again.
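For the curious, the result is usually something like the sketch below. The formats here (a tab-separated export in, JSON out) are just stand-ins for whatever source and target you actually describe in plain English:

```python
# Minimal sketch of the kind of throwaway conversion script described above.
# The formats (tab-separated text in, JSON out) are stand-ins; describe your
# own source and target formats in plain English and let the model fill in
# the boilerplate.
import csv
import json
import sys


def convert(src_path: str, dst_path: str) -> None:
    # Read the tab-separated source file into a list of dicts keyed by header.
    with open(src_path, newline="", encoding="utf-8") as src:
        rows = list(csv.DictReader(src, delimiter="\t"))
    # Write the rows back out as pretty-printed JSON.
    with open(dst_path, "w", encoding="utf-8") as dst:
        json.dump(rows, dst, indent=2)


if __name__ == "__main__":
    # Usage: python convert.py export.tsv out.json
    convert(sys.argv[1], sys.argv[2])
```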

I've also been toying with Claude Code recently, and I (an eng, ~10 yrs) think these tools are useful for pair programming the dumb work.

E.g. as I've been trying Claude Code, I still feel the need to babysit it on my primary work, so I'd rather do that myself. However, if it could sit there while I work and monitor things, note fixes, tests, and documentation, and then stub them in during breaks, I think there's a lot of time savings to be gained.

I.e. keep it doing the simple tasks that it can get right 99% of the time, and otherwise keep it out of the way.

I also suspect there's context to be gained from watching the human work. Not learning per se, but understanding the areas being worked on, improving its intuition about the things the human needs or cares about, etc.

A `cargo clippy --fix` on steroids is "simple" but still really sexy imo.


I think that's great for work and great for corporations. I use AI at my job too, and I think it certainly does increase productivity!

How does any of this make the world a better place? CEOs like Sam Altman have very lofty ideas about the inherent potential "goodness" of higher-order artificial intelligence, but I find that so far this has not been borne out in reality, save for a few specific cases. Useful is not the same as good. Technology is inherently useful; that does not make it good.


> Solving more important problems should be the priority for this type of AI research & development.

Which problem spaces do you think are underserved in this aspect?


There is certainly something cultural going on—at least I notice this in the US. Older generations will look at you with puzzled expressions if you ask them what made them decide to have children and why they thought it was worth it at the time. Having children simply wasn't a question for people who were born before a certain year.

Analyzing the tradeoffs of having children does not lead to some profound realization—mostly people recognize that it would take a lot of their time, which sounds like it might make them less happy. But we also know that humans are not really very good at judging what will make them happy.

We also know that as you say, economic forces have normalized a situation where it's difficult for two people to find the time to justify the value of child-rearing. But I find it hard to see how we reverse this trend if the only discussion is economic—people don't have more children in Scandinavia even though the economic burden is lower.


You mean, the generation before them was inevitably financially fucked if they didn't have children. For our parents' generation it was more-or-less neutral, and for this generation it's a strong financial disincentive. You have enough (barely, but still), right into old age, if you don't have children. In fact it's easier without children. Simple as that.

As for a solution short of canceling pension plans? That would do it, of course, but... this is exactly what current governments in the EU, US, Japan, and elsewhere are effectively doing: canceling pension plans. They can either cancel pensions in the future, or they can choose to spend much less now. So they really are reducing pensions (from already insufficient levels to basically nothing). That won't help the current generation choose children, although it may turn out to be the smart play. The trouble is that humans largely don't learn rationally; we learn from example. So we first need to see a large number of Americans and Europeans (and...) fall into destitution after 65, and see the few who did have many children get rescued by their children from that fate. Only then will we choose to have children again.

I guess this is how natural selection stabilizes the human population.

This has happened before and it will happen again. (seriously, human birth rates change in "waves")


It is interesting for a targeted campaign like this administration's—which claims to be concerned with government waste and inefficiency—to cut employees at one of the only organizations where we know that funding cuts decrease federal revenue [1].

Of course, it's not like they're implementing any sort of new policies to make sure productivity in the IRS remains the same. I'm not sure why we'd expect anything better at this point.

[1] https://www.cbo.gov/publication/60037


It's not targeted. It's a blanket destruction of the nation. It is carpet bombing.

I agree with you—destruction of bureaucracy with no plan is a recipe for disaster. I only meant to take the administration at its word for a moment.

The more you're tariffing everything that moves, the less you particularly care about internal revenue. The endgame is to eliminate the income tax entirely anyway, so why would one want the IRS at all?

Because it's all about libertarian, anarcho-capitalist, utopian ambitions à la Curtis Yarvin, where all government is "the problem," without evidence or any admission of reality.

That, plus we know Trump and the IRS are not best friends...

He also said this, in the same blog post:

> Ensuring that the benefits of AGI are broadly distributed is critical. The historical impact of technological progress suggests that most of the metrics we care about (health outcomes, economic prosperity, etc.) get better on average and over the long-term, but increasing equality does not seem technologically determined and getting this right may require new ideas.

> In particular, it does seem like the balance of power between capital and labor could easily get messed up, and this may require early intervention. We are open to strange-sounding ideas like giving some “compute budget” to enable everyone on Earth to use a lot of AI, but we can also see a lot of ways where just relentlessly driving the cost of intelligence as low as possible has the desired effect.

The idea that distributing a "universal basic compute budget" to every person would do anything to solve potential economic inequality that may arise due to hypothetical AGI is just comically simplistic and childish. Giving people access to AI won't fix power imbalances if wealth continues to concentrate.

Sam Altman is a professional bullshit peddler—particularly at this stage in his career, I rarely hear of him doing anything else.


Congress is also ignoring the principles laid out in the Constitution by acting like presidents have unilateral power, when they do not and were never intended to. Many members of Congress have defended Trump's actions on these issues by implying that voters gave him a mandate, but the voters also voted for Congress, which is supposed to check the President in some ways. Congress is probably supposed to be the most impactful branch of the federal government, not the weakest.


Reasonable take, but to ignore the politics of this whole thing is to miss the forest for the trees—there is a big tech oligarchy brewing at the edges of the current US administration that Altman is already participating in with Stargate, and anti-China sentiment is everywhere. They'd probably like the US to ban Chinese AI.


Yeah, especially when it's making waves in the market and is hundreds of times more efficient than what their best and brightest came up with under their leadership.


Same as the big tech companies: probably make all of their products worse in service of advertising. AI-generated advertising prompted by personal data could be extremely good at getting people to buy things if tuned appropriately.


Well. If you're using AI instead of a search engine, they could make the AI respond with more or less subtle product placement.

But if you're using AI for example to generate code as an aid in programming, how's that going to work? Or any other generative thing, like making images, 3d models, music, articles or documents... I can't imagine inserting ads into those would not destroy the usefulness instantly.

My guess is they don't know themselves. The plan is to get market share now and figure it out later, which may or may not turn out well.


Why does HN automatically edit titles??


It’s supposed to kill clickbait titles. In practice it just randomly mangles titles. I don’t get why they don’t remove the feature and instead rely on flagging.

