Hacker News | londons_explore's comments

If you want to know if you've fallen victim to such an attack, this might help:

https://serverthiefbait.com

It's a small crypto wallet you can hide on your computer, and it notifies you when someone steals from it.


This is dumb. The abstraction is at the wrong level.

Applications should assume the page size is 1 byte. One should be able to map, protect, etc. memory ranges down to byte granularity - which is the granularity of everything else in computers. One fewer thing for programmers to worry about. History has shown that performance hacks with ongoing complexity tend not to survive (e.g. interlaced video).

At the hardware level, rather than picking a certain number of bits of the address as the page size, you have multiple page tables and multiple TLB caches - e.g. one for 1-megabyte pages, one for 4-kilobyte pages, and one for single-byte pages. The hardware simultaneously checks all the tables (parallelism is cheap in hardware!).
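
A rough toy model in C of what I mean (purely illustrative - the sizes, entry counts, and names are all made up, and real hardware would probe every table in the same cycle rather than in a loop):

    #include <stdbool.h>
    #include <stdint.h>

    /* One direct-mapped TLB per supported page size. */
    typedef struct {
        uint64_t vpage;   /* virtual address >> log2(page size) */
        uint64_t ppage;   /* physical page it maps to */
        bool     valid;
    } tlb_entry;

    #define SIZES 3
    static const unsigned shift[SIZES] = { 20, 12, 0 };  /* 1 MB, 4 KB, 1 B */
    static tlb_entry tlb[SIZES][64];                     /* 64 entries each */

    /* Returns true and fills *phys if any granularity hits. */
    bool tlb_lookup(uint64_t vaddr, uint64_t *phys) {
        for (int i = 0; i < SIZES; i++) {          /* parallel in hardware */
            uint64_t vpage = vaddr >> shift[i];
            tlb_entry *e = &tlb[i][vpage % 64];
            if (e->valid && e->vpage == vpage) {
                uint64_t off = vaddr & ((1ULL << shift[i]) - 1);
                *phys = (e->ppage << shift[i]) | off;
                return true;
            }
        }
        return false;  /* miss: walk the per-size page tables */
    }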

The benefit of this is that, assuming the vast majority of bytes in a process address space are made of large mappings, you can fit far more mappings in the (divided up) TLB - which results in better performance too, whilst still being able to do precise byte-level protections.

The only place with added complexity is the OS, which has to find a way to fit the mappings the application wants into what the hardware can do (i.e. 123456 bytes might become 30 four-kilobyte pages plus 576 one-byte pages).
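
A minimal sketch of that greedy split (hypothetical helper; the sizes match the example above):

    #include <stdio.h>
    #include <stddef.h>

    /* Cover a requested length with the largest available page sizes first. */
    static const size_t page_sizes[] = { 1 << 20, 4096, 1 };

    void split_mapping(size_t len) {
        for (size_t i = 0; i < sizeof page_sizes / sizeof *page_sizes; i++) {
            size_t n = len / page_sizes[i];
            if (n) printf("%zu page(s) of %zu bytes\n", n, page_sizes[i]);
            len %= page_sizes[i];
        }
    }

    int main(void) {
        split_mapping(123456);  /* -> 30 pages of 4096 B + 576 pages of 1 B */
        return 0;
    }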


Your response to a change that's motivated by performance improvements is to suggest switching to a scheme that'll have catastrophically worse performance?

It would likely have better performance for similar power and silicon area, because a hierarchical TLB will have a higher hit rate for the same number of transistors.

If you're going to go that far, you might as well move malloc() into hardware and start using ARM-style secure tagged pointers. Then finally C users can be free of memory allocation bugs.

Transistors aren't free (as in power consumption, thermals, etc.), and wasting them on implementing 1-byte-granularity TLBs would probably be a hard sell, even assuming everything can indeed be done in parallel.

Decades of kernel building, dozens of OSes, dozens of physical architectures, all having settled on a minimum 4KB page as the right balance between performance and memory usage, wiped away by a single offhand comment with no knowledge of the situation. Now that's HN.

Just the sheer TLB memory usage and performance implication of doing single byte pages would send CPU performance back to the stone age.


Completely false. The 4 KiB page size came from a machine with a total of 512 KiB of memory (the 1962 Atlas: 3072-byte pages, 96K 48-bit words). It hasn't scaled at all since, for inertia reasons, and that has real and measurable costs. 64 KiB would have been the better choice IMO, but 16 is better than 4.

Hence the "minimum" part. The thread is literally about Android being compiled for 16KB pages, and CPU support for larger pages has grown - easily up to 4MB on most consumer CPUs.

Going down _lower_ than 4KB is purely a waste of memory and performance.


My proposed design has many page sizes - nothing stops a software developer from making all mappings multiples of 4KB and never touching the byte-sized pages.

My example used 1MB, 4KB, and 1-byte pages - but a real design would probably use every power of two, or every even power of two, to make the best use of TLB space.

It hasn't been done before because of a chicken and egg problem. CPU designers don't build it because no OS has the ability to use it, and no OS uses it because no CPU supports it. It would be a substantial amount of work for both parties.


> One should be able to map, protect, etc memory ranges down to byte granularity - which is the granularity of everything else in computers.

But you can do this - you simply have to pay the cost of PAGE_SIZE bytes of memory per byte you want to protect?
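
A minimal sketch of that workaround with the standard POSIX calls - a whole page burned per guarded byte (error handling trimmed):

    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void) {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);

        /* One whole page just to hold the single byte we care about. */
        unsigned char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) return 1;

        p[0] = 42;                     /* the one byte we want guarded */
        mprotect(p, page, PROT_READ);  /* now read-only - at page cost */

        printf("byte = %u\n", p[0]);   /* reading is fine */
        /* p[0] = 7;  writing would now SIGSEGV */
        return 0;
    }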


If you're making the migration at all, you really ought to be going for fully variable page sizes, otherwise 5 years from now there'll be a 64K page size CPU and suddenly everyone has to recompile everything again and there is another compatibility wall...

Is there such a thing? Page size gets baked into things like executable layouts, plus any place that uses the PAGE_SIZE constant (instead of sysconf(_SC_PAGESIZE)).

Indeed, it would take redesigning a bunch of things to make a runtime-variable page size an option.
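
For reference, this is the portable runtime query (standard POSIX; the printed value is simply whatever the platform uses):

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Ask at runtime instead of baking in a PAGE_SIZE constant:
           4096 on most x86 Linux, 16384 on Apple Silicon or 16K Android. */
        long page = sysconf(_SC_PAGESIZE);
        printf("page size: %ld bytes\n", page);
        return 0;
    }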

4 KiB pages have been used since the 1960s. More memory doesn't necessarily mean that larger pages are beneficial. Maybe 16 KiB is better for Android? Maybe. There really is no clear consensus on what the optimal page size for modern architectures should be.

Regular 3.5-inch hard drives, hundreds of them, with some software RAID to deal with dead drives. All connected by USB, because spinning rust can barely fill USB3 bandwidth anyway.

This setup is cheap to begin with (just $50 gets you your first few TBs) and scales well (up to 127 USB devices per host).


What USB hubs would you suggest? How do you organize the fan-out from PC to drives?

I suspect the main issue is the North American split-phase design: two hot legs plus a neutral.

Specifically, the car's onboard charger can already generate the two hot legs without the neutral. A bidirectional charger costs no more than a unidirectional one if you design for it.

But generating that neutral is expensive. You either need a hundred lbs of transformer, or some expensive power electronics.


One big benefit:

Electrical engineers in 2025 have added so many little power drains that any car left undriven for a few months has a dead battery.

A small book sized solar panel is enough to counteract that.


> Electrical engineers in 2025 have added so many little power drains that any car left undriven for a few months has a dead battery.

Interestingly enough, when I measured it, the quiescent current drain of my 2020s-era vehicle was lower than that of either of my past 2000s-era vehicles.

The phenomenon of batteries being drained after a few months of being left unattended is not new.


The big issue tends to be complex sleep logic getting stuck. I.e. "oh, I was trying to use the LTE connection to poll for updates, but the connection got reset, so I kept the CPU awake forever retrying every 5 minutes rather than going into sleep mode".

Older cars had this too - I had a bunch of cars that would kill their own batteries if not locked. The engineers assumed all owners lock the car when walking away, which often isn't the case in your own garage.


It's not, but older cars tried to keep their batteries fully charged. Newer cars with so-called "smart" alternators never keep the battery full; they always leave some empty capacity to recover energy while moving.

I had this same problem in my 2005-ish Lexus! I got a cheap switch[1] on Amazon and put it in-line with my battery. If I’m going to leave the car undriven for more than a week, I just disconnect the battery with the switch. It’s been great, no complaints so far.

[1] this is the switch I got https://a.co/d/90K0QiH


Don't anti-theft precautions kick in when you do this? On my Honda, if the battery goes completely dead or when I replace it, I have to enter a code afterwards, and IIRC all my radio stations reset, so it would be really inconvenient to do this often.

My way around this, which is also somewhat inconvenient, is to pop the hood and connect a trickle charger if I have a feeling I won't be driving for a few weeks. I have a garage, so this is the lesser evil.


Hmm, I do have to put the key in the door when I'm reconnecting the battery, or else the car alarm will go off. That's the only thing I've noticed though - maybe it's just too old to have more complex features? I never listen to the radio, so the stations may well reset and I just don't notice.

My Accord is a 2007, so not much newer than yours. We can Uber around for most trips but find it very convenient to have a car at times. If it works for you, great - I thought it was a fairly common thing, particularly in cars of that era, because radio theft used to be so common.

I use a PV trickle charger, the panel is barely 1 square foot or so. Would be nice if it was integrated instead of having to connect/disconnect it constantly. Although, and I'm just guessing, many vehicles that are so seldomly driven are being kept indoors/garaged? (Mine is)

I haven't found any appreciable drain on my EV's primary battery over the longest period I've left it sitting so far (a little over a week, so not that long, admittedly). But the car _does_ do a very bad job of keeping the 12V battery charged, and I've already had to replace it once in under two years of ownership. I also bought one of those small jump-start packs in case it ever dies away from home - luckily, an EV requires barely any power to turn everything on and get started, so the smallest, cheapest jump packs are more than sufficient. A built-in trickle charger to combat that would indeed be nice, if the car companies are incapable of figuring out the logic to do it off the massive primary battery.

EVs are the worst at keeping their 12V batteries charged. Many EVs don't even charge the 12V battery while they're plugged into an AC charger!

You can literally leave one plugged in and charging for a month and come home to find the 12V battery dead.


Don't pay money to give your political opponents facts to help oust you.

Politics 101.


They're already going to be ousted. This is about temporarily propping up the price of fossil fuel assets that will never practically be monetized. Even an extra 5-10 years could allow vast sums of fake-wealth to be dumped onto other bagholders.

The asset bust coming to Permian basin via fracking is going to be a doozy.

> Powered by local AI models

I worry that this will make my writing more likely to fail an AI coursework detector, which could really impact my life. The risk just isn't worth it until someone has tested the output against all the big players (Turnitin etc.).


If I use correct punctuation marks my work will be more likely to be \detected\ as AI written; The risk just isn"t worth it so I never do that^

We'll soon need a writing tool that introduces spelling and grammar errors into our text and messes with punctuation so that we aren't accused of using LLMs.

It's funny how many people still think sloppy, mistake-filled writing is a sign of AI, as if their writing is at the same level as the image generators giving people six fingers, when the truth is the current LLMs use better English grammar than 99% of humans. Their writing may be kind of boring and standard, but they don't confuse "their" and "there."

It's hilarious - on some other sites one is immediately accused of using ChatGPT when using the en dash (–) or em dash (—) instead of the hyphen (-). Not an issue with the monospaced font here. ETA: I stand corrected.

ChatGPT tends to write in American English, which makes it obvious to readers in the rest of the world, because local phrases aren't used.

I don't think the risks are high with this. It's not writing for you; it's just correcting your grammar. If you're writing 99% of it yourself and just having it highlight grammar mistakes, I wouldn't expect it to trigger an alarm... but I haven't been in school since waaay before LLMs were viable/common. So maybe it's worse than I think.

It's a classic social problem.

Group A want to have a huge fireworks display.

Group B want a quiet undisturbed evening and no risk of their house accidentally being set on fire by an untraceable firework.

The two are incompatible. Society needs to decide which group's needs are more important.


In my experience, this sort of thing nearly works... but never quite well enough: errors and misunderstandings build at every stage, and the output is garbage.

Maybe with bigger models it'll work well.


I had hoped that this recursive-breakdown approach could remove the need for ever-bigger monolithic LLMs for ever-bigger tasks, by allowing every task to sit at the same granularity, but... I guess I should just try building one myself.
