murkle's Hacker News comments

Where do you see that? I see "No significant associations were found for milk chocolate intake" and "Intake of milk, but not dark, chocolate was positively associated with weight gain."


Ah, I stand corrected: the study starts by stating "After adjusting for personal, lifestyle, and dietary risk factors, participants consuming ≥5 servings/week of any chocolate showed a significant 10% (95% CI 2% to 17%; P trend=0.07) lower rate of T2D compared with those who never or rarely consumed chocolate", but goes on to split by chocolate types and then the positive effect disappears for milk chocolate.


Sorry, this one is really terrible (also I asked for no audio) https://math-gpt.org/?video_id=6f622c5b-ccf3-408f-9db1-56d63...


Thank you for the feedback! Working on it!


When will Safari have WasmGC?


AFAIK, Igalia is/was working on WasmGC support for WebKit/JavaScriptCore [0]. I'm not sure about the status, but it seems to be tracked here [1]. It says it was updated earlier this month, though I don't know how much progress has been made.

[0]: https://docs.webkit.org/Other/Contributor%20Meetings/Slides2...

[1]: https://bugs.webkit.org/show_bug.cgi?id=247394


Oh, https://bugs.webkit.org/show_bug.cgi?id=272004 looks promising

"All Wasm GC features are implemented, and the feature can be enabled once typed funcrefs (https://bugs.webkit.org/show_bug.cgi?id=272003) are enabled"


All messages sent to/received from the cube are encrypted using AES128 in ECB mode with the fixed key 57b1f9abcd5ae8a79cb98ce7578c5108 ([87, 177, 249, 171, 205, 90, 232, 167, 156, 185, 140, 231, 87, 140, 81, 8])
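For reference, the bracketed list is just the hex key spelled out as byte values, which is easy to confirm with a stdlib-only Python check (the AES/ECB part itself would need a third-party crypto library, so it's omitted here):

```python
key_hex = "57b1f9abcd5ae8a79cb98ce7578c5108"
key = bytes.fromhex(key_hex)

# The byte list from the comment is the same 16 bytes, i.e. an AES-128 key.
print(list(key) == [87, 177, 249, 171, 205, 90, 232, 167, 156, 185, 140, 231, 87, 140, 81, 8])  # True
print(len(key))  # 16
```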


Cool! Could it work with WebHID instead?


Wouldn’t WebHID only help for getting input from the device and for connect/disconnect events?

Whereas for controlling the mouse's lights from a browser, I would think WebUSB is what you'd use?


No, Logitech mice and keyboards generally use Logitech's proprietary HID++ protocol which is built on top of standard HID. WebUSB considers HID a "protected interface class" and blocks access to USB HID interfaces. USB HID devices are protected because they include devices like keyboards which handle sensitive user data, such as passwords.

https://wicg.github.io/webusb/#has-a-protected-interface-cla...

WebHID also restricts access to mice and keyboards, but it uses information available at the HID protocol level to selectively block access to sensitive capabilities while leaving other capabilities unblocked. So, you can't implement an input logger but you can configure LEDs and other device behavior.

In general, WebUSB isn't useful for devices that already have some system-level support. If there's a driver, it will have already claimed the USB interface and needs to release it before the browser can access it. Even if the HID interface class were not protected, you still wouldn't be able to claim it because the USB HID interface is already claimed by the system's generic HID driver. The generic HID driver exposes a non-exclusive "raw" HID interface to applications. WebHID uses this non-exclusive interface which is why it doesn't have the same limitation as WebUSB.


WebHID allows sending/receiving feature reports via HIDDevice, which is what almost all of these devices use for configuration


Did that come to anything?


Yes, it was a fully vetted, open design that found strong acceptance among the visually impaired. The following design criteria were considered: 1. durability, 2. reuse, repair, and maintenance, 3. manufacturability, 4. adoption and adherence, 5. the skill level of the target population, 6. price.



Hmm, I think that polyfill might only be for WebGL1 (we use WebGL2). When I go to the WebGL1 report I see that my card supports both:

OES_texture_float OES_texture_float_linear

But the WebGL2 report only shows OES_texture_float_linear. I believe OES_texture_float doesn't exist on WebGL2; floating-point texture support is part of the WebGL2 core spec (though linear filtering of float textures still requires the extension).


... and if you die when your domain has one year left?


10 years is just the maximum time a domain can be registered at a given time. You can keep adding a year for every year that passes if you'd like.


Here's a good algorithm: https://web.archive.org/web/20111027100847/http://homepage.s...

Algorithm To Convert A Decimal To A Fraction by John Kennedy Mathematics Department Santa Monica College 1900 Pico Blvd. Santa Monica, CA 90405 http://homepage.smc.edu/kennedy_john/DEC2FRAC.PDF


Why would you do this instead of just converting the digits to a fraction over a power of 10 (or a power of 2, if it's a floating-point datatype) and then reducing the fraction? I was thinking it was faster, but the recursive procedure involves a division step, so I would assume that calculating the GCD using the binary algorithm (which uses only shifts and subtraction) would be faster. I guess this is for when your numeric datatype can't fit the size of the denominator?


If you assume/know the decimal you got is the result of rounding some computation, and are more interested in a short description than in the most accurate one.

For example 0.142857 would be 142,857/1,000,000 in your algorithm, but ⅐ in one using best rational approximations (which I assume that paper does, as they’re easy to calculate and, according to a fairly intuitive metric, best)
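Python's stdlib makes the contrast easy to see; Fraction.limit_denominator computes exactly this kind of best rational approximation:

```python
from fractions import Fraction

exact = Fraction("0.142857")           # takes the digits literally: 142857/1000000
short = exact.limit_denominator(100)   # best approximation with denominator <= 100
print(exact, short)                    # 142857/1000000 1/7
```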


Common Lisp gives you this choice (this being HN, a Lisp mention is obligatory :-)

The CL function #'rational converts a float to a rational assuming the float is completely accurate (i.e. every decimal digit after the last internally-represented one is assumed to be 0), while #'rationalize assumes the float is only accurate to the limit of float precision (i.e. the decimal digits after the last internally-represented one can be anything that would round to the last one).

Both functions preserve the invariant that

  (float (rational x) x) ==  x
and

  (float (rationalize x) x) ==  x
...which is another way of saying they don't lose information.

In practice these tend not to be very useful to me because they don't provide a way to specify which decimal digit should be considered the last one. They take this parameter from the underlying float representation, which sometimes causes unexpected results (the denominator can end up being larger than you expected), because the input number can effectively have zeros added at the end if the underlying float representation is large enough to capture more bits than you specified when you typed in the number.

In addition, what I really need much of the time is the ability to limit the size of the resulting denominator, like "give me the best rational equivalent of 0.142857 with at most a 3-digit denominator". That function is not built in to Common Lisp but one can write it using the methods in TFA. It loses information of course so a round trip from float->rational->float won't necessarily produce the same result.
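For what it's worth, such a function is straightforward to write from TFA's mediant idea. A naive Python sketch (best_with_denominator is a hypothetical helper name; stdlib Fraction stands in for CL rationals):

```python
from fractions import Fraction

def best_with_denominator(x, max_den):
    """Best rational approximation to a non-negative Fraction x with
    denominator <= max_den, via a naive Stern-Brocot mediant walk.
    A sketch of TFA's method, not tuned for speed."""
    a, b = int(x), 1          # lower bound: floor(x)/1
    c, d = int(x) + 1, 1      # upper bound
    while b + d <= max_den:
        if Fraction(a + c, b + d) <= x:
            a, b = a + c, b + d    # mediant at or below x: raise the lower bound
        else:
            c, d = a + c, b + d    # mediant above x: lower the upper bound
    # No fraction with denominator <= max_den lies strictly between the two
    # bounds, so the answer is whichever bound is closer to x.
    lo, hi = Fraction(a, b), Fraction(c, d)
    return lo if abs(x - lo) <= abs(x - hi) else hi
```

For the 3-digit-denominator example above, best_with_denominator(Fraction("0.142857"), 999) gives 1/7.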


> assuming the float is completely accurate (i.e. every decimal digit after the last internally-represented one is assumed to be 0)

Careful though - the float is probably not actually a set of decimal digits but most likely binary ones, so it would be assuming that every binary digit after the last one is zero.

Just because you wrote ‘0.1’ in your source code that doesn’t mean you only have a single significant figure in the float in memory. It’s going to be 0.0001100110011… (repeating 0011 to the extent of your float representation’s precision).

Although the Common Lisp standard doesn't actually appear to require that the internal float radix be 2; a decimal floating-point type would be a valid implementation.
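The binary-digits point is easy to check in Python, whose floats are IEEE binary64 and whose Fraction(float) conversion recovers the exact stored value (the analogue of CL's #'rational):

```python
from fractions import Fraction

f = Fraction(0.1)
print(f)                       # 3602879701896397/36028797018963968
print(f.denominator == 2**55)  # True: the denominator is a power of 2, not of 10
```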


> if the underlying float representation is large enough to capture more bits than you specified when you typed in the number.

If you typed in `0.142857` (or however IEEE floats are syntaxed), the correct rational is 142857/1000000. If you want to rationalize relative to number of decimal significant digits, that should be something like `(rationalize "0.142857")` or `(rationalize (radix-mp-float 10 '(1 4 2 8 5 7) -1))`.


> correct rational is 142857/1000000

That would be correct if the underlying representation were decimal. If it's binary, as it is in most Common Lisps,

  (rational 0.142857) --> 9586971/67108864
because the underlying representation is binary, and that denominator is 0x4000000.

My original comment about the representation being large enough to capture more bits than you specified was wrong; the unexpected behavior comes from the internal binary representation and the fact that 9586971/67108864 cannot be reduced.

If you start with integers you get

  (/ 142857 1000000) --> 142857/1000000
so you could write a version of #'rational that gives the expected base-10 result, but you'd first have to convert the float to an integer by multiplying by a power of 10.
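A sketch of that in Python (rational_base10 is a hypothetical helper: scale by a power of 10, round to an integer, and put it over that power):

```python
from fractions import Fraction

def rational_base10(x, digits):
    """Treat float x as exact to `digits` decimal places: multiply by a
    power of 10, round to an integer, and put it over that power."""
    scale = 10 ** digits
    return Fraction(round(x * scale), scale)

print(rational_base10(0.142857, 6))  # 142857/1000000
```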



> Common Lisp gives you this choice (this being HN, a Lisp mention is obligatory :-)

Yes, we should probably implement a rule: if a post has 100+ comments, it cannot remain on the front page unless at least one comment mentions Lisp.

dang?


Leave this meta shit on reddit please.

The SNR of this place isn't what it used to be to start with; let's not go out of our way to ADD EVEN MORE noise.


nice language


You might want a simpler representation (smaller denominator) than that approach will give you.

An example is converting musical pitch ratios to simple fractions (e.g. as used in just intonation). Literally yesterday I was writing code to do this. The mediant method works beautifully for this, and it can be optimized further than what's presented in the article.
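For instance, rounding an equal-tempered interval to a nearby just-intonation ratio is a one-liner with stdlib best-rational approximation (a sketch, not the parent's actual code):

```python
from fractions import Fraction

# An equal-tempered perfect fifth (7 semitones) is 2**(7/12) ~= 1.4983;
# the best small-denominator approximation is the just fifth, 3/2.
fifth = 2 ** (7 / 12)
print(Fraction(fifth).limit_denominator(10))  # 3/2
```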


> can be optimized further than what's presented in the article

Fascinating, do you have any links about this? Or more generally about the intersection of music, fractions and programming?

I found some code in Audacity that detects the notes in audio, I'll see if I can edit this comment later.


The basic idea to optimize the algorithm (which I got from this [1] SO answer) is to recognize that you only ever update the numerator and denominator of either bound in a linear fashion. So rather than potentially repeatedly update a given bound, you alternate between the two, and use algebra to determine the multiple by which the bound should be updated.

Regarding the intersection of fractions and music, "just intonation" is the term you want to research.

[1] https://stackoverflow.com/a/45314258
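A Python sketch of that optimized form, closely modeled on CPython's own Fraction.limit_denominator: alternate between the two bounds, jumping by a computed multiple (one continued-fraction term) per iteration instead of one mediant at a time:

```python
from fractions import Fraction

def limit_den(x, max_den):
    """Best rational approximation to Fraction x with denominator <= max_den.
    Sketch modeled on CPython's Fraction.limit_denominator."""
    if x.denominator <= max_den:
        return x
    p0, q0, p1, q1 = 0, 1, 1, 0        # the two current bounds p0/q0 and p1/q1
    n, d = x.numerator, x.denominator
    while True:
        a = n // d                      # next continued-fraction term
        q2 = q0 + a * q1
        if q2 > max_den:
            break
        p0, q0, p1, q1 = p1, q1, p0 + a * p1, q2
        n, d = d, n - a * d
    # One candidate is the last convergent; the other is the best
    # semiconvergent whose denominator still fits under max_den.
    k = (max_den - q0) // q1
    bound1 = Fraction(p0 + k * p1, q0 + k * q1)
    bound2 = Fraction(p1, q1)
    return bound2 if abs(bound2 - x) <= abs(bound1 - x) else bound1
```

For example, limit_den(Fraction("3.14159265359"), 1000000) agrees with the limit_denominator results quoted elsewhere in this thread.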


How would you get to 355/113 from pi using your method?


I don't think it's that good. At the very least I implemented it and got different answers for the test case of 0.263157894737 = 5/19 depending on the choice of numerical representation.

float64 ended up at 87714233939/333314088968 while decimal and fraction ended up at 87719298244/333333333327.

Here is the code.

    import argparse
    parser = argparse.ArgumentParser()
    parser.add_argument("Q", nargs="?", default="0.263157894737")
    parser.add_argument("--method", choices = ("float", "decimal", "fraction"), default="float")
    args = parser.parse_args()

    if args.method == "float":
        print(f"Using float64 for {args.Q!r}")
        one = 1.0
        Q = float(args.Q)
        int2float = float
    elif args.method == "decimal":
        print(f"Using decimal for {args.Q!r}")
        import decimal
        one = decimal.Decimal(1)
        Q = decimal.Decimal(args.Q)
        int2float = decimal.Decimal
    elif args.method == "fraction":
        print(f"Using fractions for {args.Q!r}")
        import fractions
        one = fractions.Fraction(1)
        Q = fractions.Fraction(args.Q)
        int2float = fractions.Fraction

    def to_frac(X):
        Da = 0
        Db = 1
        Za = X
        while 1:
            Zb = one / (Za - int(Za))
            Dc = Db * int(Zb) + Da
            N = round(X * Dc)
            frac = N / int2float(Dc)
            print(f"  {N}/{Dc} = {frac} (diff: {X-frac})")
            if float(N) / float(Dc) == float(X):
                return (N, Dc)
            Da, Db = Db, Dc
            Za = Zb

    print("solution:", to_frac(Q))
Here's the output for 0.263157894737

  % python frac.py 0.263157894737 --method float
  Using float64 for '0.263157894737'
    1/3 = 0.3333333333333333 (diff: -0.0701754385963333)
    1/4 = 0.25 (diff: 0.01315789473700002)
    4/15 = 0.26666666666666666 (diff: -0.003508771929666643)
    5/19 = 0.2631578947368421 (diff: 1.5792922525292852e-13)
    87714233939/333314088968 = 0.263157894737 (diff: 0.0)
  solution: (87714233939, 333314088968)

  % python frac.py 0.263157894737 --method decimal
  Using decimal for '0.263157894737'
    1/3 = 0.3333333333333333333333333333 (diff: -0.0701754385963333333333333333)
    1/4 = 0.25 (diff: 0.013157894737)
    4/15 = 0.2666666666666666666666666667 (diff: -0.0035087719296666666666666667)
    5/19 = 0.2631578947368421052631578947 (diff: 1.578947368421053E-13)
    87719298244/333333333327 = 0.2631578947370000000000030000 (diff: -3.0000E-24)
  solution: (87719298244, 333333333327)

  % python frac.py 0.263157894737 --method fraction
  Using fractions for '0.263157894737'
    1/3 = 1/3 (diff: -210526315789/3000000000000)
    1/4 = 1/4 (diff: 13157894737/1000000000000)
    4/15 = 4/15 (diff: -10526315789/3000000000000)
    5/19 = 5/19 (diff: 3/19000000000000)
    87719298244/333333333327 = 87719298244/333333333327 (diff: -1/333333333327000000000000)
  solution: (87719298244, 333333333327)

For the pi = 3.14159265359 test case the solutions are all 226883371/72219220. The sequences for the different representations diverge at the point marked with "<---- here".

  Using float64
    22/7 = 3.142857142857143 (diff: -0.0012644892671427321)
    333/106 = 3.141509433962264 (diff: 8.321962773605307e-05)
    355/113 = 3.1415929203539825 (diff: -2.667639824593948e-07)
    103993/33102 = 3.1415926530119025 (diff: 5.780975698144175e-10)
    104348/33215 = 3.141592653921421 (diff: -3.3142111277584263e-10)
    208341/66317 = 3.1415926534674368 (diff: 1.2256329284809908e-10)
    312689/99532 = 3.1415926536189365 (diff: -2.893640882462023e-11)
    833719/265381 = 3.141592653581078 (diff: 8.922196315097608e-12)
    1146408/364913 = 3.141592653591404 (diff: -1.403765992336048e-12)
    5419351/1725033 = 3.1415926535898153 (diff: 1.8474111129762605e-13) <---- here
    6565759/2089946 = 3.141592653590093 (diff: -9.281464485866309e-14)
    11985110/3814979 = 3.141592653589967 (diff: 3.2862601528904634e-14)
    18550869/5904925 = 3.1415926535900116 (diff: -1.1546319456101628e-14)
    30535979/9719904 = 3.1415926535899943 (diff: 5.773159728050814e-15)
    49086848/15624829 = 3.141592653590001 (diff: -8.881784197001252e-16)
    226883371/72219220 = 3.14159265359 (diff: 0.0)
  solution: (226883371, 72219220)
For 0.10000000000000002 it gives 562949953421313/5629499534213129 (for float64) or 500000000000000/4999999999999999 (using fractions):

  Using float64 for '0.10000000000000002'
    1/9 = 0.1111111111111111 (diff: -0.011111111111111086)
    1/10 = 0.1 (diff: 1.3877787807814457e-17)
    562949953421313/5629499534213129 = 0.10000000000000002 (diff: 0.0)
  solution: (562949953421313, 5629499534213129)

  Using fractions for '0.10000000000000002'
    1/9 = 1/9 (diff: -4999999999999991/450000000000000000)
    1/10 = 1/10 (diff: 1/50000000000000000)
    500000000000000/4999999999999999 = 500000000000000/4999999999999999 (diff: -1/249999999999999950000000000000000)
  solution: (500000000000000, 4999999999999999)
FWIW, the exact solution is:

  >>> import fractions
  >>> fractions.Fraction("0.10000000000000002")
  Fraction(5000000000000001, 50000000000000000)


The Pascal code stops when it hits a certain accuracy (passed in as an argument).


The main issue I see is that the algorithm, unlike the mediant version, does not generate every successive best approximation.

Yes, you can stop the algorithm at a certain accuracy, but that doesn't mean you can't get better for that given accuracy.

Consider the pi = 3.14159265359 case, and you want it precise to 1/1,000,000. The float64 algorithm gives 1146408/364913 or 5419351/1725033 because:

     ...
    1146408/364913 = 3.141592653591404 (diff: -1.403765992336048e-12)
      --- want a solution here ---
    5419351/1725033 = 3.1415926535898153 (diff: 1.8474111129762605e-13)
     ...
The mediant method, on the other hand, gives an intermediate solution:

  >>> import fractions
  >>> fractions.Fraction("3.14159265359").limit_denominator(1000000)
  Fraction(3126535, 995207)
  >>> float(fractions.Fraction("3.14159265359").limit_denominator(1000000))
  3.1415926535886505
That's a difference of -1.3495871087343403e-12 which is more accurate than 1146408/364913, and is not a solution found by the other algorithm.

Or, if you want a denominator of 364913 or 1725033 you can do that with mediants:

  >>> fractions.Fraction("3.14159265359").limit_denominator(364913)
  Fraction(1146408, 364913)
  >>> fractions.Fraction(3.14159265359).limit_denominator(1725033)
  Fraction(5419351, 1725033)
Another issue is the numerical range. Consider the input "1.5e-318". It causes overflow in the float, decimal, and fraction implementations I gave:

  % python frac.py 1.5e-318 --method float
  Using float64 for '1.5e-318'
  Traceback (most recent call last):
    File "frac.py", line 40, in <module>
      print("solution:", to_frac(Q))
    File "tmp.py", line 31, in to_frac
      Dc = Db * int(Zb) + Da
  OverflowError: cannot convert float infinity to integer


  % python frac.py 1.5e-318 --method fraction
  Using fractions for '1.5e-318'
   1/666666666... many 6s removed ...6666 = 1/666666666... many 6s removed ...6666
      (diff: -1/666 .. more 6s removed ... 6600.. even more zeros removed ...00)
  Traceback (most recent call last):
    File "frac.py", line 40, in <module>
      print("solution:", to_frac(Q))
    File "frac.py", line 35, in to_frac
      if float(N) / float(Dc) == float(X):
  OverflowError: int too large to convert to float
while with the mediant solution I don't need to worry about the input range beyond infinity and NaN:

  >>> from fractions import Fraction
  >>> Fraction(1.5e-318).limit_denominator(1000000)
  Fraction(0, 1)
(I used the float() calls to ensure the fraction and decimal methods stop at the limits of float64. If I remove them I still end up with numbers like 3/2E319 which would not be representable as a Turbo Pascal integer, while the Turbo Pascal mediant implementation would not have a problem.)

Finally, the mediant solution is easier to implement.


