Hacker News

If I interpret the numbers correctly, it is on the order of 1000 times slower than modern algorithms such as Dragonbox.



Something like that.

The Dragonbox author reports[1] about 25 ns/conversion (roughly 4e7 conversions/s), Cox reports 1e5 conversions/s, so that’s a factor of 400. We can probably knock off half an order of magnitude for CPU differences if we’re generous (midrange performance-oriented Kaby Lake laptop CPU from 2017 vs Cox’s unspecified laptop CPU ca. 2010), but that’s still a factor of 100. Still a performance chasm.
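For the record, the arithmetic behind that factor-of-400 estimate (using the two reported figures above, nothing else):

```go
package main

import "fmt"

func main() {
	// Dragonbox: ~25 ns per conversion, inverted into conversions/s.
	dragonbox := 1e9 / 25.0 // ≈ 4e7 conversions/s
	cox := 1e5              // conversions/s reported by Cox
	fmt.Println(dragonbox / cox) // factor of 400
}
```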

You can likely get some of the performance back by picking the low-hanging fruit, e.g. switching from dumb one-byte bigint limbs in [0,10) to somewhat less dumb 32-bit limbs in [0,1e9). But generally, yes, this looks like a teaching- and microcontroller-class algorithm more than anything I’d want to use on a modern machine.
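To illustrate the limb change, here is a minimal sketch (my own, not Cox’s code) of a little-endian decimal bignum with 32-bit limbs in base 1e9. Each limb carries nine decimal digits instead of one, so a multiply pass touches roughly a ninth as many limbs and does word-sized arithmetic; the `mulSmall` helper name is hypothetical:

```go
package main

import "fmt"

// base-1e9 limbs: each uint32 holds nine decimal digits.
const base = 1_000_000_000

// mulSmall multiplies the bignum in place by a small factor m,
// propagating carries limb by limb.
func mulSmall(limbs []uint32, m uint64) []uint32 {
	var carry uint64
	for i, d := range limbs {
		v := uint64(d)*m + carry
		limbs[i] = uint32(v % base)
		carry = v / base
	}
	for carry > 0 {
		limbs = append(limbs, uint32(carry%base))
		carry /= base
	}
	return limbs
}

func main() {
	n := []uint32{1}
	for i := 0; i < 64; i++ {
		n = mulSmall(n, 2) // compute 2^64 by repeated doubling
	}
	// Print most-significant limb first; inner limbs zero-padded to 9 digits.
	fmt.Printf("%d", n[len(n)-1])
	for i := len(n) - 2; i >= 0; i-- {
		fmt.Printf("%09d", n[i])
	}
	fmt.Println() // 18446744073709551616
}
```

Extracting decimal digits for output stays cheap: each limb yields its nine digits with ordinary 32-bit divisions by powers of ten.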

[1] https://github.com/jk-jeon/dragonbox/blob/master/README.md#p...




