
Actually, it is just serial under the hood, although electrically slightly nonstandard.


This reminds me of how the student meal subsidies are implemented in Slovenia, and in my opinion it was quite unwieldy. You call a phone number and place your phone's earpiece on another device with a microphone. Then, some personal data is transmitted using (ultra?)sound. I remember it being quite unreliable, but that might be down to using the telephone network as the data carrier.


I remember back in the day, during the eighties, one of the radio stations broadcast ZX Spectrum games over the air. You would record that noise to a tape, and later you could load it into your ZX and play. It worked remarkably well.


Generally, datasette formats were meant to cope with the extremely lossy medium of tape. Also, I'm not sure, but telephone audio tends to be much more compressed than radio. On top of that, you're going from earpiece to mic with an air gap and background noise, so it's probably much, much worse.


Regular telephone audio is usually filtered to 0.3-3.4 kHz (this is on analog phone lines). Digital phone lines use PCM with A-law (or µ-law in the US and some other places) logarithmic sample encoding at an 8 kHz sampling rate, giving more or less the same result as an analog phone line, possibly with less distortion. FM radio has much higher bandwidth, usually up to around 15 kHz, and with better bass too.
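To make the µ-law point concrete, here is a small sketch of the continuous µ-law companding curve (the actual G.711 codec additionally quantizes the result to 8 bits; that step is omitted here):

```python
import math

MU = 255.0  # the mu parameter used by North American / Japanese telephony

def mulaw_compress(x):
    """Map a sample in [-1, 1] through the logarithmic mu-law curve."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mulaw_expand(y):
    """Inverse of mulaw_compress: recover the linear sample."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
```

The logarithmic curve spends most of the 8-bit code space on quiet samples, which is why speech survives the coarse quantization reasonably well.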


> (ultra?)sound

Well, it's not ultrasound I'd say; that screeching is quite audible. Very similar in sound to an old phone modem.

I think the system it uses is the same as for Moneta, which can be used in much the same way but gets billed to the SIM account instead. I'm sure other countries also use the same principle for some services.


> using (ultra?)sound

Phone lines have (or at least had) narrow frequency ranges. I'm not an expert but I'd assume this is just normal sound like any old modem.


I guess it will be easier to get it working with Zotero, as it's open source, and I think it even supports custom plug-ins.


Yes, Zotero does support custom plug-ins ("add-ons").


This looks neat, but I don't really understand how it works. I imagine that the DNS record is pointed towards the VPS, and the VPS just forwards all traffic to the actual server via wireguard?


Pretty much, yes.


The implementation of the __cos kernel in Musl is actually quite elegant. After reducing the input to the range [-pi/4, pi/4], it just applies the best degree-14 polynomial for approximating the cosine on this interval. It turns out that this suffices for having an error that is less than the machine precision. The coefficients of this polynomial can be computed with the Remez algorithm, but even truncating the Chebyshev expansion is going to yield much better results than any of the methods proposed by the author.



For anyone else confused by this, the control logic described in the comment happens in https://github.com/ifduyue/musl/blob/master/src/math/cos.c


> Input y is the tail of x.

What does "tail" mean in this context?


It is a double-double representation [1], where a logical fp number is represented with a sum of two machine fp numbers x (head, larger) and y (tail, smaller). This effectively doubles the mantissa. In the context of musl this representation is produced from the range reduction process [2].
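A quick numerical illustration of the head/tail idea, using pi (a sketch; the head is pi rounded to double, the tail is the rounded remainder, values you can find in standard libm sources):

```python
from fractions import Fraction

pi_hi = 3.141592653589793          # head: pi rounded to the nearest double
pi_lo = 1.2246467991473532e-16     # tail: (pi - pi_hi) rounded to double

# ~35-digit reference value of pi, held exactly as a rational.
PI = Fraction("3.14159265358979323846264338327950288")

err_head = abs(PI - Fraction(pi_hi))                       # ~1.2e-16
err_pair = abs(PI - (Fraction(pi_hi) + Fraction(pi_lo)))   # ~1e-32
# The head alone is off by about one double ulp; head + tail is off by
# roughly the square of that, i.e. the pair effectively doubles the mantissa.
```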

[1] https://en.wikipedia.org/wiki/Quadruple-precision_floating-p...

[2] https://github.com/ifduyue/musl/blob/6d8a515796270eb6cec8a27...


Does it make sense to use a double-double input when you only have double output? Sine is Lipschitz-continuous with constant 1, so I don't see how this makes a meaningful difference.


The input might be double but the constant pi is not. Let f64(x) be a function from any real number to the nearest double, so that an ordinary expression `a + b` actually computes f64(a + b) and so on. Then in general f64(sin(x)) may differ from f64(sin(f64(x mod 2pi))); since you can't directly compute x mod 2pi exactly, you necessarily need more precision during argument reduction so that f64(sin(x)) = f64(sin(r)), where r is x mod 2pi carried out in extended precision.


But am I correct in thinking that is at worst a 0.5 ulp error in this case? The lesser term in double-double can't be more than 0.5 ulp of the greater term and sensitivity of both sine and cosine to an error in the input will not be more than 1.

Also, in case of confusion, I was specifically commenting on the function over the [-pi/4, pi/4] domain in https://github.com/ifduyue/musl/blob/master/src/math/__cos.c , which the comment in https://news.ycombinator.com/item?id=30846546 was presumably about.


Yeah, sine and cosine are not as sensitive (but note that many libms target 1 or 1.5 ulp error for them, so a 0.5 ulp error might still be significant). For tangent however you definitely need more accurate range reduction.


Double rounding can still bite you. You are forced to incur up to half an ulp of error from your polynomial, so taking another half ulp in your reduction can lead to a total error of about 1 ulp.


I think the polynomial calculation in the end looks interesting. It doesn't use Horner's rule.


It does use Horner's rule, but splits the expression into two halves in order to exploit instruction-level parallelism.
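The even/odd split can be sketched generically like this (a Python stand-in for the idea, not musl's exact grouping): the two shorter Horner chains depend only on z², so a superscalar CPU can run their FMA sequences concurrently.

```python
def horner(z, c):
    """Plain Horner evaluation of c[0] + c[1]*z + c[2]*z^2 + ..."""
    acc = 0.0
    for coef in reversed(c):
        acc = acc * z + coef
    return acc

def split_horner(z, c):
    """Same polynomial, split into even- and odd-degree halves.

    Each half is a Horner chain in w = z*z; the two chains are
    independent, which is what exposes instruction-level parallelism.
    """
    w = z * z
    even = horner(w, c[0::2])   # c0 + c2*w + c4*w^2 + ...
    odd = horner(w, c[1::2])    # c1 + c3*w + c5*w^2 + ...
    return even + z * odd
```

Both routines compute the same polynomial; results may differ by a few ulps since the operations are associated differently.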


Considering the form of both halves is the same, are compilers smart enough to vectorize this code?


I might be wrong, but I would think that for something like this, vectorizing wouldn't save time (since you would have to move data around before and afterwards). The real benefit of the split is that it lets you run two FMA chains in parallel.


This has been the standard algorithm used by every libm for decades. It's not special to Musl.


But isn't this code rarely called in practice? I'd guess that on Intel architectures the compiler just emits the fsin instruction of the CPU.


No. FSIN has accuracy issues as sibling mentions, but is also much slower than a good software implementation (it varies with uArch, but 1 result every ~100 cycles is common; even mediocre scalar software implementations can produce a result every twenty cycles).


No. The fsin instruction is inaccurate enough to be useless. It gives 0 correct digits when the output is close to 0.


> 0 correct digits when the output is close to 0

this is an amusing way to describe the precision of sub-normal floating point numbers


It's not just sub-normal numbers. As https://randomascii.wordpress.com/2014/10/09/intel-underesti... shows, fsin only uses 66 bits of pi, which means you have roughly no precision whenever abs(sin(x))<10^-16 which is way bigger than the biggest subnormal (10^-307 or so)


In that range, just returning x would be way better. Maybe even perfect, actually: if x is less than 10^-16, then the x^3/6 term is below machine precision relative to x.


The error isn't when x is small; it's when sin(x) is small. The problem happens for x near multiples of pi.


It is much more amusing if you describe it in ulps; for some inputs the error can reach > 2^90 ulps, more than the mantissa size itself.


FSIN only works on x87 registers which you will rarely use on AMD64 systems -- you really want to use at least scalar SSE2 today (since that is whence you receive your inputs as per typical AMD64 calling conventions anyway). Moving data from SSE registers to the FP stack just to calculate FSIN and then moving it back to SSE will probably kill your performance even if your FSIN implementation is good. If you're vectorizing your computation over 4 double floats or 8 single floats in an AVX register, it gets even worse for FSIN.


Moving between x87 and xmm registers is actually fairly cheap (it's through memory, so it's not free, but it's also not _that_ bad). FSIN itself is catastrophically slow.


Fair enough, and I imagine there may even be some forwarding going on? There often is when a load follows a store, if I remember correctly. (Of course this will be microarchitecture-dependent.)


> I guess on intel architectures the compiler just calls the fsin instruction of the cpu.

Do people do that in practice? It's on the FPU, which is basically legacy emulated these days, and it's inaccurate.


> the FPU, which is basically legacy

you'll pry my long doubles from my cold, dead hands!


The question is if you wouldn't be better served with double-doubles today. You get ~100 bits of mantissa AND you can still vectorize your computations.


Sure. There should be a gcc flag to make "long double" become quadruple precision.

The thing is, my first programming language was x86 assembler, and the FPU was the most fun part. I spent weeks as a teenager writing almost pure 8087 code. I have a lot of emotional investment in that tiny rolling stack of extended-precision floats.


How is "quad floating point" implemented on x86? Is it software emulation?


You're selectively quoting me - I said it's 'legacy emulated'. It's emulated using very long-form microcode, so basically emulated in software. I didn't say it was simply 'legacy'.


I’m completely out of my depth reading through these comments so I don’t have much of value to contribute, but I do think I can gently say I think the selective quoting was harmless, but deliberate to fit the shape of a harmless joke’s setup. I don’t think there was any intent to misrepresent your more salient information.


x87 trig functions can be very inaccurate due to Intel's original sloppy implementation and subsequent compatibility requirements [1].

[1] http://nighthacks.com/jag/blog/134/index.html


Wasn't there a blog article a few years ago which showed how glibc's implementation was faster than fsin?


That's got nothing to do with Musl per se, that's just the SunPro code that basically every C library uses. I'm sure the polynomials themselves (there's another one for the "crest" of the sin curve, and still more for log and exp and the rest of the family) date from somewhere in computing pre-history. Optimizing polynomial fits to analytic functions was something you could do on extremely early computers.


Thanks for explaining this. I actually wrote a SIMD implementation of trig functions years ago, using the techniques you describe.

You can check it out: https://github.com/jeremysalwen/vectrig

I compared several different methods of generating polynomials of different sizes for speed and precision (spoilers: taylor series were the worst and minimax polynomials (Remez algorithm) were the best).

Another (surprising) thing which I learned during the project was that the range reduction was just as important (if not more so) to the accuracy of the implementation as the polynomial. If you think about it, you will realize that it's actually pretty difficult to quickly and accurately compute the sin of large numbers like 2^50.

I also tried to directly optimize the coefficients for the accuracy of the polynomial on the required range, but that experiment was unsuccessful.

It's all there in the repository, the implementations, notes about the different polynomials used, and the accuracy/speed statistics for the different methods.
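The range-reduction point can be demonstrated numerically (a sketch, not the Payne-Hanek-style algorithm real libms use): reducing 2^50 modulo 2*pi with only the double-precision value of pi loses roughly 0.04 radians of phase, because the ~2.4e-16 error in the modulus gets multiplied by the ~1.8e14 periods being subtracted.

```python
import math
from fractions import Fraction

x = 2.0 ** 50

# Naive reduction: x mod (2*pi rounded to double). fmod itself is exact,
# but the modulus is wrong by ~2.4e-16, amplified by ~1.8e14 periods.
r_naive = math.fmod(x, 2 * math.pi)

# Careful reduction: x mod 2*pi using a ~35-digit pi, done exactly in
# rational arithmetic and only rounded at the very end.
PI = Fraction("3.14159265358979323846264338327950288")
r_exact = float(Fraction(x) % (2 * PI))

# The two reduced arguments land ~0.04 rad apart; sin of the carefully
# reduced argument agrees with math.sin(x), whose libm backend reduces
# the argument properly on its own.
sin_naive = math.sin(r_naive)
sin_exact = math.sin(r_exact)
```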


> I compared several different methods of generating polynomials of different sizes for speed and precision (spoilers: taylor series were the worst and minimax polynomials (Remez algorithm) were the best).

I would have expected at least an LSQ approximation with a basis of Legendre polynomials thrown into the mix. I got that as a basic homework assignment in my numerics class once, after we'd shown ourselves in class that [1, x, x², x³...] is not a really good basis to project things onto.


Are there libraries/tools that people use to do Remez/Chebyshev/etc. function expansions? I can do a basic Taylor series expansion by hand but I’m out of my depth with more sophisticated techniques.


Sollya is the king here.


Thanks!


Yup. Chebyshev is the way to go (after 30s of consideration)


Now they have added it for host keys.


Ahah! Thanks for pointing that out. Need more coffee.


Brought to you by the legend who managed to run Linux on an 8-bit microcontroller: http://dmitry.gr/?r=05.Projects&proj=07.%20Linux%20on%208bit


Have you tried Kodi (kodi.tv)?


Kodi is good, but I got a lot happier when I switched to Plex, and put the backend on my home server, and just used a Roku device for playback.


No! Looks very interesting, thanks! (^_^)

Will give it a shot.


Bung Kodi on a Raspberry Pi 3B+ - You'll be pleasantly surprised I think.

Downside is... 99% of raspberry pi cases are really ugly :(


It may be better with the III, but with a Raspberry Pi II, XBMC/Kodi is quite slow and a bit laggy.

Personally, I switched to an Intel NUC with a fanless Akasa case, and aside from the occasional Kodi crash it works relatively well.


It's not particularly great with the Pi III either. I have one, and I find it's underpowered for the "media server" role, especially if more than one person connects. I like it for lots of other things, but this isn't one of them.


Ah, I use mine as a playback front end only, pulling media from a NAS. I guess that's a different use case really.


Just stick the Pi to the back of your TV with some self-adhesive PCB risers. No case required.


That's what I do.

Am I the only one, though, who uses the Pi with a (Synology) NAS running MySQL (MariaDB) to store the Kodi data, and NFS for the content? You can pick up where you left off from your Pi, your laptop, or your phone.

I know...it's like Plex in architecture.

As for the remote, CEC will let you control Kodi from your TV's remote. Or if you have separate audio, like a soundbar or AV receiver, a Harmony remote will do.


What do you use as a remote? Also does RP+Kodi do Netflix/Amazon/HBO etc. streaming in a nice way?


Some options besides CEC over HDMI:

- Amazon FireTV remote (connects via Bluetooth)

- Logitech F710 (if you also run RetroPie, connects via USB)

https://retropie.org.uk/ is a great emulation system for several vintage consoles (80's-early 2000's).


With an RPi over HDMI you can use your TV's own remote on most sets, if they support the CEC standard.


No and that’s the issue. Works great for your local / NAS media. Lots of plugins for all sorts of things, except, Netflix, Amazon, XM. I still use the TV apps for that.


There’s a Netflix plugin that uses the directory structure on Kodi for v18.


Thanks, I have seen and tried it. It’s still not GA and quite buggy. Not at a level I can hand over to my family for use.


Wife and I use the Kore app. I'm less likely to forget where my phone is than a dedicated remote. It's also handy to have the YouTube plugin and share a video to Kodi from that app.


Kodi supports network remotes among other types. You can just install a kodi remote app on your phone.


It seems to me that you can select some of the largest EU countries by clicking the US flag in the top right corner.


The shipping is still a bit expensive if you are not in one of those... €11 to Belgium if you select the German option, for example.


There seems to be a 25% discount today, which takes about 10 USD off the price.

Hopefully this book gets picked up by a publishing company so it becomes easier to find in book stores.


Yes, this makes shipping cost a couple € at most.


Does somebody know which color scheme they are using in these screenshots?


The dark color scheme is Mariana. The light one is Celeste. Both are included with the editor.

