daemontus's comments | Hacker News

The metaphor sure seems plausible, but why does the whole thing read like a LinkedIn post that was fed to an LLM to farm attention? :(


Because it most certainly is.


I may be completely out of line here, but isn't the story on ARM very very different? I vaguely recall the whole point of having stuff like weak atomics being that on x86, those don't do anything, but on ARM they are essential for cache coherency and memory ordering? But then again, I may just be conflating memory ordering and coherency.


Well, since this is a thread about how programmers use the wrong words to model how they think a CPU cache works, I think it bears mentioning that you've used "atomics" here to mean something irrelevant. It is not true that x86 atomics do nothing. Atomic instructions (or, on x86, the LOCK prefix) make a naturally non-atomic operation, such as a read-modify-write, atomic. The ARM ISA actually lacked such a facility until ARMv8.1.

The instructions to which you refer are not atomics, but rather instructions that influence the ordering of loads and stores. x86 has total store ordering by design. On ARM, the program has to use LDAR/STLR to establish ordering.
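To make the distinction concrete, here is a minimal C++ sketch (just an illustration of the two concepts, nothing compiler- or vendor-specific): fetch_add is a genuine atomic read-modify-write, while the release/acquire pair is pure ordering.

    #include <atomic>

    std::atomic<int> counter{0};

    void bump() {
        // A genuine atomic: the read-modify-write happens as one indivisible step.
        // On x86 this compiles to a LOCK-prefixed instruction; before ARMv8.1 it
        // had to be an LDXR/STXR retry loop.
        counter.fetch_add(1, std::memory_order_relaxed);
    }

    std::atomic<bool> ready{false};
    int payload = 0;

    void producer() {
        payload = 42;
        // Pure ordering: publish payload before the flag. On x86 this is a plain
        // store (TSO already orders it); on ARM it compiles to STLR.
        ready.store(true, std::memory_order_release);
    }

    void consumer() {
        // Pairs with the release store above: LDAR on ARM, plain load on x86.
        while (!ready.load(std::memory_order_acquire)) { }
        // payload is guaranteed to be 42 here.
    }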


Everything it says about cache coherency is exactly the same on ARM.

Memory ordering has nothing to do with cache coherency; it's all about what happens within the CPU pipeline itself. On ARM, reads and writes can be reordered there, before they ever hit the caches (which are still fully coherent).

ARM still has strict memory ordering for code within a single core (some older processors do not), but the writes from one core might become visible to other cores in the wrong order.
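The classic message-passing litmus test shows exactly that. A rough C++ sketch (note that with relaxed atomics the compiler may also reorder, so treat this purely as an illustration of the hardware behavior):

    #include <atomic>

    std::atomic<int> data{0};
    std::atomic<int> flag{0};

    void core0() {
        data.store(1, std::memory_order_relaxed);
        flag.store(1, std::memory_order_relaxed);
    }

    void core1() {
        int f = flag.load(std::memory_order_relaxed);
        int d = data.load(std::memory_order_relaxed);
        // On x86 (TSO), seeing f == 1 implies d == 1: stores become visible to
        // other cores in program order. On ARM, f == 1 && d == 0 is a permitted
        // outcome, because either the stores or the loads may be reordered before
        // they reach the (still fully coherent) caches. Upgrading flag to
        // release/acquire forbids that outcome on both architectures.
        (void)f; (void)d;
    }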


you are getting downvoted, but you are of course correct.


Normally when talking about relaxed memory models, full cache coherency is still assumed. For example the C++11 memory model cannot be implemented on a non-cache-coherent system, at least not without massive performance penalties.
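One concrete piece of that: even memory_order_relaxed still requires per-variable coherence, which is exactly what the hardware coherence protocol gives you for free. A small sketch (not a formal statement of the model):

    #include <atomic>

    std::atomic<int> x{0};

    void writer() {
        x.store(1, std::memory_order_relaxed);
        x.store(2, std::memory_order_relaxed);
    }

    void reader() {
        int a = x.load(std::memory_order_relaxed);
        int b = x.load(std::memory_order_relaxed);
        // All threads must agree on a single modification order of x, and a later
        // read can never see an older value: if a == 2, then b == 2 as well.
        // MESI-style cache coherence makes this essentially free; on a
        // non-coherent machine you would have to flush/invalidate around every
        // atomic access to emulate it.
        (void)a; (void)b;
    }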


Maybe this is a naive question, but how are "skills" different from just adding a bunch of examples of good/bad behavior into the prompt? As far as I can tell, each skill file is a bunch of good/bad examples of something. Is the difference that the model chooses when to load a certain skill into context?


I think that's one of the key things: skills don't take up any of the model context until the model actively seeks out and uses them.

Jesse on Bluesky: https://bsky.app/profile/s.ly/post/3m2srmkergc2p

> The core of it is VERY token light. It pulls in one doc of fewer than 2k tokens. As it needs bits of the process, it runs a shell script to search for them. The long end to end chat for the planning and implementation process for that todo list app was 100k tokens.

> It uses subagents to manage token-heavy stuff, including all the actual implementation.


I think it just gives you the ability to easily do that with a slash command, like using "/brainstorm database schema" or something, instead of needing to define what "brainstorm" means each time you want to do it.


What you are suggesting is 1-shot, 2-shot, 5-shot, etc. prompting, which is so effective that it's how benchmarks were presented for a while.


One detail most comments seem to be missing is that the O(1) complexity of get/set in hash tables depends on memory access being O(1). However, if you have a memory system operating in physical space, that's just not possible (you'd have to break the speed of light). Ultimately, the larger your dataset, the more time it is going to take (on average) to perform random access on it. The only reason why we "haven't noticed" this much in practice yet is that we mostly grow memory capacity by making it more compact (the same as CPU logic), not by adding more physical chips/RAM slots/etc. Still, memory latency has been slowly rising since the 2000s, so even shrinking can't save us indefinitely.

One more fun fact: this is also the reason why Turing machines are a popular complexity model. The tape on a Turing machine does not allow random access, so it simulates the act of "going somewhere to get your data". And as you might expect, hash table operations are not O(1) on a Turing machine.
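You can see the effect directly on real hardware with a pointer-chasing microbenchmark. A rough C++ sketch (exact numbers obviously depend on the machine and its cache sizes):

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <vector>

    int main() {
        std::mt19937_64 rng(42);
        for (std::size_t n : {1u << 14, 1u << 18, 1u << 21, 1u << 24}) {
            // Build one big random cycle so every step is a dependent load that
            // misses the caches once n outgrows them.
            std::vector<std::size_t> order(n);
            std::iota(order.begin(), order.end(), std::size_t{0});
            std::shuffle(order.begin(), order.end(), rng);
            std::vector<std::size_t> next(n);
            for (std::size_t i = 0; i + 1 < n; ++i) next[order[i]] = order[i + 1];
            next[order[n - 1]] = order[0];

            const int steps = 10'000'000;
            std::size_t idx = 0;
            auto t0 = std::chrono::steady_clock::now();
            for (int i = 0; i < steps; ++i) idx = next[idx];
            auto t1 = std::chrono::steady_clock::now();

            double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / steps;
            // The "O(1)" access gets slower as the working set moves from L1 to
            // L2 to L3 to DRAM.
            std::printf("n = %9zu: ~%5.1f ns per access (idx = %zu)\n", n, ns, idx);
        }
    }

On a typical desktop this climbs from a nanosecond or two per access while the table fits in L1 to on the order of 100 ns once you are chasing pointers through DRAM.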


Ah, the age old question of "1 horse-sized duck vs. 100 duck-sized horses"...


This is a Zerg vs Protoss debate.


Best strategy is Protoss + Zerg. What if toss could field some zerglings along with the expensive OP weapons?


As others have mentioned, the main problem is that open systems are more vulnerable to low-cost, coordinated external attacks.

This is less of an issue with systems where there is little monetary value attached (I don't know anyone whose mortgage is paid for by their Stack Overflow reputation). Now imagine that the future prospects of a national lab with a multi-million-dollar yearly budget are tied to a system that can be (relatively easily) gamed with a Chinese or Russian bot farm for a few thousand dollars.

There are already players that are trying hard to game the current system, and it sometimes sort of works, but not quite, exactly because of how hard it is to get into the "high reputation" club (on the other hand, once you're in, you can often publish a lot of lower quality stuff just because of your reputation, so I'm not saying this is a perfect system either).

In other words, I don't think anyone reasonable is seriously against making peer review more transparent, but for better or worse, the current system (with all of its other downsides) is relatively robust to outside interference.

So, unless we (a) make "being a scientist" much more financially accessible, or (b), untangle funding from this new "open" measure of "scientific achievement", the open system would probably not be very impactful. Of course, (a) is unlikely, at least in most high-impact fields; CS was an outlier for a long time, not so much today. And (b) would mean that funding agencies would still need something else to judge your research, which would most likely still be some closed, reputation-based system.

Edit TL;DR: Describe how the open science peer-review system should be used to distribute funding among researchers while being reasonably robust to coordinated attacks. Then we can talk :)


Two things that I don't see mentioned are:

(a) [Name 2005] is much easier to mentally track if it appears repeatedly in longer text than [5] (at least for me). [5] is just [5]. [Name 2005] is "that paper by Name from twenty years ago".

(b) By using [Name 2005], I might not know which exact paper this is, but I get how recent it is w.r.t. what I am reading. In many cases, this is useful context. Saying "[5] proves X" could mean that this is a new result, or a well known fact. Saying "[Name 1967] proves X" clearly indicates that this is something that has been known for some time.


I see a lot of "This must be real, why would labs publish this if they don't think it's real, they have nothing to gain." sentiment on HN lately. Or "Researcher's career would be ruined if they falsely claim to replicate.", and so on. I also want to believe! But I should add a bit of skepticism to the hype :)

- "this could ruin their career": Depends. If they posted completely fake numbers or intentionally fake videos, sure, that would be bad. But none of this is peer reviewed, and all of this can be retracted. A contaminated sample? Oops, retract. Bad measurement methodology? Oops, retract. Sure, somebody will remember that you made the controversial paper in the first place, but as long as you are not provably fabricating, a lot can be attributed to "an honest error". There are tons of peer reviewed papers out there with errors that completely change the outcome. That does not mean the authors are "finished".

- "they have nothing to gain": Oh, they absolutely do. While "science should be fully objective", funding agencies very much aren't. Obviously, just like VC funding, science funding is not a complete coin toss. But having "the right" team and background is often as important as the idea itself. One way to get the right background is to "touch shoulders with the giants" and one way to get the right team is to be highly visible and attract talent.

So overall, if LK99 is eventually shown to be a superconductor by someone else, you have a lot to gain, even if your own initial study is not perfect.

Let's say your team synthesised something. It looks like LK99 and it has some properties that are not really superconducting but at least a bit unusual. This clearly isn't what you hoped for. Now, do you run a bunch of other controls to see if it is some form of contamination, process error, combination of both... or do you publish a vague click-bait paper on ArXiv and hope that other results will somewhat align with yours?

Finally, I'm not claiming this paper or any other paper intentionally published untrue or misleading results. Just that scientists are also people. They have FOMO, they follow trends, they see what they want to see. As always, big claims require big evidence, and so far we don't really have that. But that does not mean there isn't some truth to the big claims :)


A contaminated sample that materially changes the composition but still yields a superconductor would be a novel finding.

An error in manipulation leading to an external communication on something this high-profile is sure to affect your career. It's like a biologist claiming to have found evidence of extraterrestrial life and having to retract. I think I would consider hara-kiri...


But the thing is... except for the original authors, none of these papers so far really claim to have a room-temperature superconductor, right? They claim "simulated band structure with low Fermi level", or "unusual levels of diamagnetism", or "almost zero resistance up to -100°C (but lack of phase transition)", etc.

Yes, retracting these is still shameful, but it's not a "we found extraterrestrial life" claim. It's a "we received weird signals from a nebula that we don't understand so far" claim.

And yes, a lot of supporting but inconclusive evidence is still supporting evidence. My point is not that (most) scientists would risk lying about replicating a superconductor, but rather that uncertain or inconclusive results with a solid chunk of plausible deniability in a rapidly evolving environment go a long way towards being "in the room where it happened".


I wouldn't bet on LK99 being an RTAPS, but "Replicating a bunch of weird shit that we don't really understand that at least somewhat aligns with the possibility" really isn't a damning position to be in when the starting point is "The team says they only get a working sample about 10% of the time and everyone else is working off of pretty meh instructions on how to replicate".


This is not my area of expertise, but as a former scientist (at least at the PhD and postdoc level) I would not stake my credibility on something without being 1000% sure on a normal day, let alone when the topic is extraterrestrial life or room temperature superconductors.

Also, it's not true at all that retractions have no consequences. It is an indelible mark of shame.


No disrespect, I use offline Google maps almost daily, but there are far far better offline hiking apps out there.

Google will probably work ok for the most popular trails, and I guess you can use it as a supercharged compass. But at least in Europe, if you actually plan a route in any mountains based on Google, you're in for an adventure :)


What apps would you recommend?


https://play.google.com/store/apps/details?id=cz.seznam.mapy

A bit focused on Eastern Europe, but it has a hiking mode with very good coverage of the official routes (not only in Eastern Europe ;)). And everything is free, including offline maps. Terrific value for casual trips.

https://play.google.com/store/apps/details?id=com.bergfex.to...

This one is not free (there is a free tier though) but seems to have more details in some areas, so the pro tier may be worth it if you hike a lot.


I am really fed up with all the "nobody keeps a phone long enough to make battery replacements useful" arguments around here.

Out of the 10+ phones in our family over the last 5-10 years, one died from water damage and one from a failure of the internal flash memory. Every other phone was replaced because the battery died. Every single one.

Official replacement was no longer available and DIY was either impossible (lack of parts) or eventually ended up damaging the device beyond economical repairability.

Regular people that don't have thousands of dollars in disposable income (and nothing useful to spend it on) haven't cared about phone specs for years. Hell, I love tech and could buy a new phone every year and even I haven't cared about phone specs since the original Google Pixel.

If you brick your phone every year because that's just who you are, no judging.

If you want a new phone every year and can afford one, it's your money. Just remember that you are fortunate enough to be able to do so. And someone will surely buy your used phone and likely (try to) replace the battery in it.

Overall, it's like claiming that nobody drives cars that are 10+ years old because they needed a new clutch. Or that a 50 year old house needs to be torn down because fixing the roof economically is clearly beyond our engineering prowess. Are there people that swap cars every 5 years? Absolutely. But that does not mean those cars go to a scrapyard.

I will not comment on the technical aspects of this proposal, since the actual outcome might very well still need to be settled in court. But dismissing the general point of legislation that demands better longevity for devices that basically everybody needs to partake in modern society is rather shortsighted.


My phones have been destroyed not by ageing batteries but by bloating software. You need to update software for security reasons, but those updates also take up more space and run slower (because developers work only as hard as is necessary to make apps run acceptably on their phones, which are typically new models). Space bloat, more than time bloat, has been the biggest issue.

Open bootloaders and drivers (after a certain number of years at least?) would help with that. Make available enough to let open source developers help themselves. Even if Google stops supporting Android on my hardware, or Apple stops supporting iOS, there should at least be a stripped down Linux?

I'm not sure how useful even that would be though, because that surely won't be able to run the latest apps used by society.

Sigh. Maybe everything is just an arms race. Phones won't stop going obsolete until it is physically impossible to make faster phones.

Since space bloat has been a bigger problem than time bloat, I could maybe have gotten more life out of my phones if the OS had supported the installation of apps to the SD card. Maybe that could be a cheap partial fix.


I find that to be much more of a problem with Android phones than iOS. I had to scrap two Samsung Galaxy phones after less than three years of use because they became so slow as to be unusable.

I am currently using a five-year-old iPhone Xs, and it seems to be just as fast as ever. The only issue I have with the device is decreasing battery life. If the battery was replaceable, I could easily use it for another 2–3 years at least.


If you look at the charge cycles on a typical iPhone battery, it is somewhere around 500-600. If you look at common usage, most people use the phone enough to have to charge it once a day.

That only gives you maybe 1.5 to 2 years of time before the battery is gone.

While we are on the topic of replacing things, it would be nice if we could change out the internal flash memory. I would keep my iPhone for 5 to 7 years if I could change out both the battery and flash memory.


> most people use the phone enough to have to charge it once a day. That only gives you maybe 1.5 to 2 years of time before the battery is gone.

A cycle is equivalent to a full discharge/charge. Using the phone to x% battery is roughly equivalent to x% of a cycle (it’s not a perfectly 1:1 relationship, but close enough).

Most people do not use a phone to 0% battery every single day. That’s equivalent to ~8 hours of screentime on a modern phone.

The average person uses their phone for about 3 hours a day [0]. Assuming that the vast majority of people's usage is within an hour of the average, 2-4 hours of daily phone usage would translate to 25-50% of a cycle, or 1000-2000 days of a usable battery, assuming a 500 cycle battery lifespan. (In reality it would be somewhat less, since as the battery degrades over time those 2-4 hours of usage would constitute more than 25-50% of a cycle.)

[0] https://explodingtopics.com/blog/smartphone-usage-stats
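For what it's worth, here is that back-of-envelope arithmetic spelled out (a rough sketch assuming ~8 hours of screen time per full cycle and a 500-cycle rating, as above):

    #include <cstdio>

    int main() {
        const double hours_per_full_cycle = 8.0;  // ~one full discharge, per the estimate above
        const double rated_cycles = 500.0;        // assumed battery cycle rating
        for (double daily_hours : {2.0, 3.0, 4.0}) {
            double cycles_per_day = daily_hours / hours_per_full_cycle;
            double days = rated_cycles / cycles_per_day;
            std::printf("%.0f h/day -> %.3f cycles/day -> ~%.0f days (~%.1f years)\n",
                        daily_hours, cycles_per_day, days, days / 365.0);
        }
        // 2 h/day -> ~2000 days, 4 h/day -> ~1000 days, matching the 1000-2000 day
        // range above (and ignoring the gradual capacity loss along the way).
    }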


This is... sort of correct. Yes, most people don't use their phone 8 hours a day.

But this is a pretty cursory reading of those stats. If you actually dig into them, in the majority of countries surveyed people are using their phones for more than 4 hours a day. The average person in the US uses their phone for 3 hours and 30 minutes.

A couple of takeaways:

- Heavy smartphone usage inversely correlates with the wealth of the nation being talked about (this kind of intuitively makes sense, because countries like the Philippines are probably more likely to have people using their phone as their primary computer). Being able to use your phone only a small amount of time each day has a small component of privilege to it; it probably means you have access to other computers.

- Even in the US, these are averages. There are people in the US who use their phones as their primary computer. There are people who travel a lot or who, for whatever reason, end up using their phone more, and their batteries are very much going to be the first part of their phone that fails. The average usage in the US being 3.5 hours does not mean that the vast majority of people's usage is within an hour of the average.

- Like you yourself said: "in reality it would be somewhat less, since as the battery degrades over time those 2-4 hours of usage would constitute more than 25-50% of a cycle." If we assume that heavy smartphone users in the US are using their phone for at least 4-4.5 hours a day (a very easy assumption if not conservative, since the average is already three and a half hours), you're still going to be in a position where after about 2 years you're no longer going to get a full 8 hours out of a charge. Once you get to a point where a phone can't last a full 8-hour day on a single charge, you might start thinking about buying a new phone even if that's not your typical usage, because the first couple of times you forget to plug in your phone you'll stop trusting it to hold a charge.

GP is definitely wrong about how heavily people use their smartphones, but I suspect you're underestimating how heavily smartphones do get used and how big of an issue battery degrading is. I'd love to find more solid stats basically just asking people why they upgrade, but my experience matches GP's (minus the exact numbers). Battery lifespan and the cost of battery replacement is a huge component in smartphone churn. People buy new smartphones because their batteries die.


I would agree with your numbers, but I would like to point out that some uses of the phone draw more energy than others.

This would tend to reduce the number of overall days of usable battery.

Think video recording, video editing, or heavy video playback apps. These are huge drains on the battery.


They absolutely should include replaceable flash memory. Flash is cheap, plentiful, and a consumable; it wears out. It has the same qualities as battery replacement. Hell, why not combine them.

Most people would buy a new phone anyway, but these devices get obsoleted and turned into garbage, forcing people to both generate ewaste and buy things they don't really need.

Between eMMC and SD Express, there are at least two existing great options. I am not asking for doors, all this stuff can be internal by removing the back.

The issue is that when people discuss this, they get into the weeds talking about how this or that mechanism isn't feasible, which is entirely orthogonal. You dictate the outcome, let engineers solve the problem.


I don't have any scientific data for this, but my anecdotal experience is that the actual damage comes from the way the phones are used, not necessarily the absolute charge count. Hear me out...

- A phone that is used a lot in the car as GPS is often charged/discharged continuously, often for hours.

- Furthermore, this often happens in very hot or cold conditions which are bad for battery charging.

- A lot of people seem to live with the perpetual 5% of battery, or generally don't care about properly charging the device. This is also terrible for longevity.

- There are other reasons why you may want to constantly charge/discharge your phone (e.g. you are making Android apps, or it's the phone where people call your place of business, etc.).

So, just to make myself clear: I completely agree that on average, batteries should last for a long time. But in practice, people often have irregular activities which appear negligible on average ("it's just a few charge cycles"), but end up damaging the battery more than regular prolonged use. But again: I'd very much like more hard data on this :)


My last phone was replaced just because VoLTE wouldn't get enabled on it, even though the hardware supported it.


> Just remember that you are fortunate enough

To a first approximation, anyone can buy a new phone every year. You can get an unsubsidized Android phone for less than $60.


In what scenario? One where you don't have enough money for a nice phone, so you buy a shit one, but you have enough money to replace it all the time?

Technically correct sometimes is not the best kind of correct.


What do you think most of the world does?


Something like "buy a relatively expensive, not-shitty phone, and make the most of it"?


You really think developing countries are buying “expensive phones”?

Who do you think is buying the $70 phones in India, China, etc?


You really think it's not?

Do you think no one in India or China ever buys expensive phones?

Please educate yourself: https://www.bajajfinserv.in/insights/best-selling-phones-in-...


Is this being proposed as a solution?

Churn on budget smartphones is even worse than on premium phones and battery life is even more likely to be an issue on those budget phones. It's good for the budget market as well if batteries are replaceable.

It would be great if people could buy an old secondhand iPhone and replace the batteries themselves for $30-40 instead of buying a $60-70 garbage phone with who-knows-what spyware and out-of-date software every 1-2 years. That companies are able to put out this kind of garbage and people are buying them is (if anything) evidence that the secondhand/repair market on smartphones isn't nearly as strong as it should be.

