The Dragonbox author reports[1] about 25 ns/conversion, i.e. roughly 4e7 conversions/s; Cox reports about 1e5 conversions/s, so that's a factor of roughly 400. We can probably knock off half an order of magnitude for CPU differences if we're generous (a midrange performance-oriented Kaby Lake laptop CPU from 2017 vs Cox's unspecified laptop CPU ca. 2010), but that still leaves a factor of about 100. A performance chasm either way.
You can likely get some of the performance back by picking the low-hanging fruit, e.g. switching from dumb one-byte bigint limbs in [0,10) to somewhat less dumb 32-bit limbs in [0,1e9). But generally, yes, this looks like a teaching- and microcontroller-class algorithm more than anything I’d want to use on a modern machine.
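To make the limb-width point concrete, here's a minimal sketch (in Go, not taken from either implementation; the function names and limb layout are made up for illustration) of the doubling step that a Cox-style conversion runs in a loop, written once with one-byte digit limbs in [0,10) and once with 32-bit limbs in [0,1e9). Each word-sized limb carries nine digits, so the inner loop touches roughly a ninth as many elements per shift.

```go
// Hypothetical sketch, least-significant limb first in both layouts.
package main

import "fmt"

// mulBy2Bytes doubles a decimal number stored one digit per byte.
// Every digit costs a load, a multiply-add, a div/mod by 10, and a store.
func mulBy2Bytes(digits []uint8) []uint8 {
	carry := uint8(0)
	for i := range digits {
		d := digits[i]*2 + carry
		digits[i] = d % 10
		carry = d / 10
	}
	if carry != 0 {
		digits = append(digits, carry)
	}
	return digits
}

// mulBy2Limbs doubles the same number stored nine digits per uint32 limb
// (base 1e9), so each word-sized operation advances nine digits at once.
func mulBy2Limbs(limbs []uint32) []uint32 {
	carry := uint32(0)
	for i := range limbs {
		v := limbs[i]*2 + carry
		limbs[i] = v % 1_000_000_000
		carry = v / 1_000_000_000
	}
	if carry != 0 {
		limbs = append(limbs, carry)
	}
	return limbs
}

func main() {
	// 123456789123456789 in both representations.
	digits := []uint8{9, 8, 7, 6, 5, 4, 3, 2, 1, 9, 8, 7, 6, 5, 4, 3, 2, 1}
	limbs := []uint32{123456789, 123456789}
	fmt.Println(mulBy2Bytes(digits)) // 18 per-digit steps
	fmt.Println(mulBy2Limbs(limbs))  // 2 per-limb steps
}
```

On the same 18-digit value the byte version does 18 inner-loop steps where the limb version does 2, and each step is full-word arithmetic instead of byte-at-a-time work; that's where most of the easy speedup would come from.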