Hacker News | jorvi's comments

High-end chips should be more of an EU concerted effort rather than every country for itself.

The problem is that unlike Airbus, which can (highly inefficiently) be assembled across multiple countries, you can't really spread a fab out that way. The most you can split is fab machines + chips + chip packaging. The Netherlands already has the fab machines, and packaging is low-margin.

That leaves chips, and you can be sure that whoever gets the fabs, the other EU countries will throw a shit fit and demand counter investments to compensate. And on top of that there is also regional animosity. So even if it makes logical sense to pop the fab down in the middle of the blue banana, it won't make political sense because France and all of South and East EU will be angry about "the rich getting richer".


I remember those Trilogy remasters being really disliked because they were (are?) based on a botched version of the game.

Que? 4,722,366,482,869,645,213,696 addresses isn't enough for you?

Steve Jobs was all about the customer experience, hence so many of his famous quotes. The two I like the most are:

- Him saying "Microsoft has no style", not because I care about ribbing on Microsoft but because it indicated that Apple was a company that really cared about the aesthetics of both their hardware and software products

- His response to the question why there was no $600 MacBook to compete with Windows plastic craptops. He specifically said that to deliver a good UX to the users, he needed Macs at a certain price point to invest in the hardware and the OS. Shareholder value didn't even enter the equation.

He also hated market segmentation and was adamant that all iPhones within a generation had the same features, aside from the storage size. When the 6 Plus models got image stabilization he felt awkward about it.

As soon as Tim Cook took over, it became beancounter city. Market segmentation became massive. Year over year price hikes with minimal improvements. Services became the core strategy. And for the last five years you've been under a constant barrage of ads for iCloud, Apple Music, Apple News, Apple TV, and even ads in your Wallet.

Oh, and I just remembered how Jobs said that form should follow function. You can see a clear decline there too from when Jobs became less involved, with iOS 7 being a disaster. Ever since then Apple has been violating its own Human Interface Guidelines. If you download the 1997 version, it's absurd how many of their own former guidelines they violate these days.

To be honest, I'm not sure if you can entirely blame Cook. Ever since the 2010s, it's felt like capitalism has reached an end-stage culture, where it is no longer about an equilibrium between best product for the lowest price vs. minimum product for the highest price, but instead just about maximizing shareholder value at the cost of the customer, the workers, the business itself, the environment, and what have you.


> Year over year price hikes with minimal improvements

did you have a specific example in mind? It seems that the price of the hardware generally stays the same from year to year.

For example, the base price from the iPhone 3G through the iPhone 6S was $199 (on contract), and from the iPhone 12 through today's iPhone 17 it has been $799. I think the change in the middle was due to dropping carrier subsidies and going full-screen with Face ID.


2012-2018 was an insane run for MacBook Pro prices. Doubly so in Europe. Apple loves to adjust (read: gouge) prices when the Euro weakens against the dollar, but they never adjust down when the dollar weakens against the Euro.

This is a common trajectory for companies. The first CEO (founder) paves a vision, the second CEO grows the firm profitably, the third CEO is usually a wall street hire on a mandate to massage the stock price.

Doing it on restart makes the mitigation de facto useless. How often do you have 10, 20, 30 days (or even longer) of desktop uptime these days? And no one regularly restarts their core applications while their desktop is still up.

Enjoy the fingerprinting.


There isn't enough energy in the solar system to count to 2^128. Now, a UUIDv4 "only" has 122 bits of entropy (2^122 possible values). Regardless, you cannot realistically scan the UUID space. It's not even a matter of Moore's law; it is a limitation of physics that will stand until computers are no longer made of matter.
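A quick back-of-the-envelope sketch (my own numbers; the 10^18 guesses/second rate is a deliberately absurd hypothetical, far beyond any real cluster):

```python
# Time to exhaustively scan the UUIDv4 space at a wildly optimistic guess rate.
GUESSES_PER_SECOND = 10**18           # hypothetical; no real hardware comes close
SECONDS_PER_YEAR = 365.25 * 24 * 3600

space = 2**122                        # 128 bits minus 6 fixed version/variant bits
years = space / GUESSES_PER_SECOND / SECONDS_PER_YEAR
print(f"{years:.2e} years")           # on the order of 10^11 years, ~12x the age of the universe
```

Even before you get to energy limits, the raw arithmetic already puts exhaustive scanning out of reach.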

I restart my browser basically every day.

yeah I close out everything as a mental block against anything I'm working on.

I think there's a subset of people that offload memory to their browsers and that's kinda scary given how these fingerprint things work.


You just need to open so many instances, and so many tabs in each instance, that it crashes every couple of days.

Umm, I restart my PC about once a week for security and driver updates.

If you don't, you have a lot more to worry about beyond fingerprinting...

Oh and I'm on LINUX (CachyOS) mind you.


The AI skeptics instead stick to hard data, which so far shows a 19% reduction in productivity when using AI.

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

> 1) We do NOT provide evidence that AI systems do not currently speed up many or most software developers. Clarification: We do not claim that our developers or repositories represent a majority or plurality of software development work.

> 2) We do NOT provide evidence that AI systems do not speed up individuals or groups in domains other than software development. Clarification: We only study software development.

> 3) We do NOT provide evidence that AI systems in the near future will not speed up developers in our exact setting. Clarification: Progress is difficult to predict, and there has been substantial AI progress over the past five years [3].

> 4) We do NOT provide evidence that there are not ways of using existing AI systems more effectively to achieve positive speedup in our exact setting. Clarification: Cursor does not sample many tokens from LLMs, it may not use optimal prompting/scaffolding, and domain/repository-specific training/finetuning/few-shot learning could yield positive speedup.


Points 2 and 3 are irrelevant.

Point 1 is saying results may not generalise, which is not a counter claim. It’s just saying “we cannot speak for everyone”.

Point 4 is saying there may be other techniques that work better, which again is not a counter claim. It's just saying "you may find better methods."

Those are standard scientific statements giving scope to the research. They are in no way contradicting their findings. To contradict their findings, you would need similarly rigorous work that perhaps fell into those scenarios.

Not pushing an opinion here, but if we're talking about research then we should be rigorous and rational by posting counter-evidence. Anyone who has done serious research in software engineering knows the difficulties involved, and that this study represents one set of data. But it is at least a rigorous set and not anecdata or marketing.

I for one would love a rigorous study that showed a reliable methodology for gaining generalised productivity gains with the same or better code quality.


There is no such hard data. It's just research done on 16 developers using Cursor and Sonnet 3.5.

> Just use LibreOffice at this point, at least it has native performances

I don't think you've ever used LibreOffice if you think it in any way fits the description "performant". It's a great project but I wouldn't exactly call it snappy.


I regularly use both LibreOffice and Collabora Online, and I can say the former is snappy compared to the latter. It can take a long time to open, though, mostly on Windows.

Current LLMs often produce much, much worse results than manually searching.

If you need to search the internet on a topic that is full of unknown unknowns for you, they're a pretty decent way to get a lay of the land, but beyond that, off to Kagi (or Google) you go.

Even worse is that the results are inconsistent. I can ask Gemini five times at what temperature I should take a waterfowl out of the oven, and get five different answers, 10°C apart.

You cannot trust answers from an LLM.


> I can ask Gemini five times at what temperature I should take a waterfowl out of the oven, and get five different answers, 10°C apart.

Are you sure? Both Gemini and ChatGPT gave me consistent answers 3 times in a row, even if the two versions are slightly different.

Their answers are in line with this version:

https://blog.thermoworks.com/duck_roast/


What do you mean, "are you sure"? I literally saw, and still see, it happen in front of my eyes. I just now tested it with slight variations of "ideal temperature waterfowl cooking", "best temperature waterfowl roasting", etc., and all of these questions yield different answers, with temperatures ranging from 47°C to 57°C (ignoring the 74°C food-safety ones).

That's my entire point. Even adding an "is" or "the" can get you way different advice. No human would give you different info when you ask "what's the waterfowl's best cooking temperature" vs "what is waterfowl's best roasting temperature".


Did you point that out to one of them… like “hey bro, I’ve asked y’all this question in multiple threads and get wildly different answers. Why?”

And the answer is probably because there is no such thing as an ideal temperature for waterfowl because the answer is “it depends” and you didn’t give it enough context to better answer your question.

Context is everything. Give it poor prompts, you’ll get poor answers. LLMs are no different than programming a computer or anything else in this domain.

And learning how to give good context is a skill. One we all need to learn.


But that isn't how normal people interact with search engines. Which is the whole argument everyone is making here: that LLMs are now better "correct answer generators" than search engines. They're not. My mother directly experienced that. Her food would have come out much better if she had completely ignored Gemini and checked a site.

One of the best things LLMs could do (and that no one seems to be doing) is allow them to admit uncertainty. If the average probability of the tokens in a response drops below some threshold X, it should just say "I don't know, you should check a different source."
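As a toy sketch of what I mean (the threshold and the example numbers are made up; it assumes the serving stack exposes per-token log-probabilities, which many APIs do):

```python
import math

# Hypothetical cutoff on the geometric mean of per-token probabilities;
# a real system would need to tune this empirically.
CONFIDENCE_THRESHOLD = 0.6

def answer_or_abstain(tokens, logprobs):
    """Return the completion only if the mean token probability clears the bar."""
    avg_prob = math.exp(sum(logprobs) / len(logprobs))  # geometric mean of probabilities
    if avg_prob < CONFIDENCE_THRESHOLD:
        return "I don't know, you should check a different source."
    return "".join(tokens)

# Confident completion: high per-token probabilities, answer passes through.
print(answer_or_abstain(["165", "°F"], [-0.1, -0.05]))
# Shaky completion: low probabilities, so it abstains instead of guessing.
print(answer_or_abstain(["47", "°C"], [-1.6, -2.0]))
```

Mean log-probability is a crude confidence proxy, but even something this simple would beat confidently emitting a different temperature on every run.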

At any rate, if my mother has to figure out some ten-sentence, stunted multi-part question for the LLM to finally give a good, consistent answer, or can just type "best Indian restaurant in Brooklyn" (maybe even with site:restaurantreviews.com), which experience is superior?

> LLMs are no different than programming a computer or anything else in this domain.

Just feel like reiterating against this: virtually no one programs their search queries or query-engineers a ten-sentence search query.


If I made a new, not-AI tool called 'correct answer provider' which provided definitive, incorrect answers to things you'd call it bad software. But because it is AI we're going to blame the user for not second guessing the answers or holding it wrong ie. bad prompting.

I created an account just to point out that this is simply not true. I just tried it! The answers were consistent across all 5 samples with both "Fast" mode and Pro (which I think is really important to mention if you're going to post comments like this - I was thinking maybe it would be inconsistent with the Flash model)

Unfortunately, despite your account creation it remains true that this happened. Just tested it again and got different answers.

It obviously takes discipline, but using something like Perplexity as an aggregator typically gets me better results, because I can click through to the sources.

It's not a perfect solution because you need the discipline/intuition to do that, and not blindly trust the summary.


Did you actually ask the model this question or are you fully strawmanning?

My mother did, for Christmas. It was a goose that ended up being raw in a lot of places.

I then pointed out this same inconsistency to her, and that she shouldn't put stock in what Gemini says. Testing it myself, it would give results between 47°C and 57°C. And sometimes it would just trip out and give the health-approved temperature, which is 74°C (!).

Edit: just tested it again and it still happens. But inconsistency isn't a surprise for anyone who actually knows how LLMs work.


> But inconsistency isn't a surprise for anyone who actually knows how LLMs work

Exactly. These people saying they've gotten good results for the same question aren't countering your argument. All they're doing is proving that sometimes it can output good results. But a tool that's randomly right or wrong is not a very useful one. You can't trust any of its output unless you can validate it. And for a lot of the questions people ask of it, if you have to validate it, there was no reason to use the LLM in the first place.


> Implements an init system; does not replace DNS, syslog, inetd, or anything else

Neither does systemd's init.

Unknowledgeable people keep confusing systemd-the-init with systemd-the-daemon-and-utility-suite. You can use just the init system without pulling in resolved or networkd or whatever.

Systemd embodies the Unix philosophy of lots of modular pieces. But because all the systemd daemons come from the same shop, you get a lot of creature comforts if you use them together. Nothing bad about that.
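For illustration, a minimal unit file (the service name, path, and binary are made up) shows what using only systemd-the-init looks like; nothing in it touches resolved, networkd, or any other optional systemd daemon:

```ini
# /etc/systemd/system/myapp.service  (hypothetical example service)
[Unit]
Description=Example daemon supervised by the init system alone
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now myapp` and you're using systemd purely as a service supervisor, with your own choice of DNS resolver, logger, and network manager alongside it.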


> because all the systemd daemons come from the same shop, you get a lot of creature comforts if you use them together. Nothing bad about that.

That's how vendor lock-in works, in which a myth is propagated that having it all come from under one roof is best. In fact, it is a guarantee that best-of-breed alternative solutions cannot be used. Interoperability is thwarted. This is why sensible Unix admins historically knew to keep options open for mixed-vendor sourcing as long as the bosses didn't get roped in to a single vendor or source.


Okay, so you code the features that dnsmasq is missing that resolved has. Or pay someone to do it. I promise you systemd does not have special verification protocols that stop you from interfacing with certain features. This isn't Apple.

Think about it: you can't obligate the systemd folks to maintain codebases that aren't theirs. That would be madness.


BAT literally was built for (accumulation of) micropayments.
