
A sunscreen that blocks UVB, harms marine life [1] and can cause allergic airway inflammation [2] when breathed in (inevitable once it dries on your skin) is hardly the best sunscreen in my opinion.

My personal choice for vitamin D synthesis is Avobenzone stabilized with ubiquinone.

[1] https://oceanservice.noaa.gov/news/sunscreen-corals.html [2] https://www.sciencedirect.com/science/article/pii/S0041008X1...


Non-nano zinc oxide, the traditional kind that is white, like OP is talking about, is fine as far as I know (https://www.vogue.com/article/reef-safe-sunscreens-oxybenzon...).

The nano mineral sunscreens are bad, but they can be coated, which sounds like it makes them less harmful.


The issues I enumerated affect both nano and non-nano sunscreens, to different degrees.

The most common concerns specific to the nano version are skin absorption and free radical creation when exposed to UV radiation -- not exclusive to mineral sunscreens though; avobenzone is also highly unstable.

We often hear about studies done with the nano crystals because they are worse: experiments done with them are more likely to produce a negative result. But this doesn't mean the "macro" version isn't affected as well.

The aspiration concern is a very serious one. Some countries ban spray and powder mineral sunscreens of any kind.

Just to be clear, I'm not advocating against using them. Every sunscreen has its place; saying one option is better than another is reductive.

For instance, I think an Avobenzone + antioxidant chemical-only sunscreen is a good option to minimize UVA damage while still letting the body produce vitamin D. But it wouldn't be my choice when going to the beach.

Similarly, mineral sunscreens don't degrade in the sun and block a wide range of UV radiation, but they are bad for marine wildlife. They are a good choice for a daily facial sunscreen, especially for skin sensitized by "anti-aging" treatments.

Again. There is no such thing as the best sunscreen. Each is best in different use cases.


The most commonly used sunscreen chemicals have hormone-like activity [1].

https://www.ewg.org/sunscreen/report/the-trouble-with-sunscr...


Sunscreen sensitivity is definitely a real thing, too:

https://www.cbc.ca/news/health/sunscreen-sensitivity-1.42009...


In practice, subnormals are very rarely used. Most compilers disable subnormals when compiling with anything other than -O0. A subnormal operation can take over a hundred cycles to complete.

Demo:

    #include <stdio.h>

    int
    main ()
    {
    
        volatile float v;
        float acc = 0;
        float den = 1.40129846432e-45;
    
        for (size_t i; i < (1ul<<33); i++) {
            acc += den;
        }
    
        v = acc;
    
        return 0;
    }
With -O1:

    $ gcc float.c -o float -O1 && time ./float
    ./float  8.93s user 0.00s system 99% cpu 8.933 total

With -O0:

    $ gcc float.c -o float -O0 && time ./float
    ./float  20.60s user 0.00s system 99% cpu 20.610 total


I fixed your example a bit and here is what I get

  /tmp>cat t.c
  #include <stdio.h>
  int main() {
      float acc = 0;
      float den = 1.40129846432e-45;
      for (size_t i = 0; i < (1ul<<33); i++) acc += den;
      printf("%g\n", acc);
      return 0;
  }
  /tmp>gcc -O3 t.c && time ./a.out
  2.35099e-38
  ./a.out  5.94s user 0.00s system 99% cpu 5.944 total
  /tmp>gcc -O3 -ffast-math t.c && time ./a.out
  0
  ./a.out  1.50s user 0.00s system 99% cpu 1.502 total
So subnormal numbers are supported in -O3 unless you specify -ffast-math. And it definitely makes a difference (gcc 9.3, Debian 11, Ryzen 7 3700X).

EDIT That one is interesting too

  /tmp>clang -O3 t.c && time ./a.out
  2.35099e-38
  ./a.out  6.04s user 0.00s system 99% cpu 6.044 total
  /tmp>clang -O3 -ffast-math t.c && time ./a.out
  0
  ./a.out  0.10s user 0.00s system 99% cpu 0.101 total
clang (9.0.1) performs about the same without -ffast-math; but with it, it managed to optimize the loop away.
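For what it's worth, if you only want the subnormal-flushing part of -ffast-math, on x86 you can set the FTZ/DAZ bits in MXCSR yourself. A minimal sketch, assuming an SSE-capable target and the usual GCC/clang intrinsics headers:

    /* Flush subnormals to zero without -ffast-math (x86/SSE only).
       FTZ flushes subnormal results, DAZ treats subnormal inputs as zero. */
    #include <stdio.h>
    #include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */
    #include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE */

    int main(void) {
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);

        float acc = 0;
        float den = 1.40129846432e-45f;   /* smallest subnormal float */
        for (size_t i = 0; i < (1ul << 33); i++)
            acc += den;

        printf("%g\n", acc);   /* 0 with DAZ on; 2.35099e-38 without it */
        return 0;
    }

With DAZ on, the subnormal addend is treated as zero, so this should print 0 and lose the slowdown even at plain -O3.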


I looked at the asm generated from my original example and they produce very different code; gcc applies other optimizations when compiling with -O1.

I've been fighting the compiler to produce a minimal working example of the subnormal slowdown, but haven't had any success.

Some things need to be taken into account (off the top of my head):

- Rounding: you don't want to get stuck on the same number.

- The FPU has accumulator registers that are larger than the floating-point registers.

- Using more registers than the architecture has is not trivial, because of register renaming and code reordering; the CPU might optimize in a way that the data never leaves those registers.

Trying to make an MWE, I found this code:

    #include <stdio.h>
    
    int
    main ()
    {
        double x = 5e-324;
        double acc = x;
    
        for (size_t i; i < (1ul<<46); i++) {
                acc += x;
    
        }
    
        printf ("%e\n", acc);
    
        return 0;
    }

Runs in a fraction of a second with -O0:

    gcc double.c -o double -O0
But takes forever (killed after 5 minutes) with -O1:

    gcc double.c -o double -O1
I'm using gcc (Arch Linux 9.3.0-1) 9.3.0 on i7-8700

I also managed to create code that sometimes runs in 1s, but other times takes 30s. It didn't matter if I recompiled.

Floating point is hard.


Shouldn't i be initialized?


lol, how did the compiler not warn about the uninitialized variable?


Compiler warnings are not straightforward and depend on many things: compiler version, optimization settings, warning settings. (gcc's uninitialized-variable warnings rely on data-flow analysis that it mostly only does when optimizing, which is why -O0 stays silent here.)

  $ gcc-9 -Wuninitialized -O0 float.c  # NO WARNINGS!!!

  $ gcc-9 -O1 float.c  # NO WARNINGS!!!
  
  $ gcc-9 -Wuninitialized -O1 float.c
  float.c: In function 'main':
  float.c:9:5: warning: 'i' is used uninitialized in this function [-Wuninitialized]
      9 |     for (size_t i; i < (1ul << 33); i++) {
        |     ^~~


Weirdly, these two pieces of code behave differently:

    $ gcc hn.c -O0 -o hn

    for (size_t i; i < (1ul<<46); i++) {
        printf ("%zd\n",i);
        acc += x;
        break;
    }
Without break:

    $ gcc hn.c -O0 -o hn

    for (size_t i; i < (1ul<<46); i++) {
        printf ("%zd\n",i);
        acc += x;
    }


Shouldn't i be initialized?


Also a great format for counting how many representations are wasted on redundant values (zeros, NaN, +inf, -inf).


What is redundant about that? It "wastes" a completely negligible part of the representation space, and the consistency gains are enormous.


When talking about tiny 8-bit floats, it does waste a lot: if your exponent is only 3 bits, you've "wasted" 1/8 of all 256 possible values, which is a lot. With normal-sized floats, it's much less of an issue: 1/256 of the billions of possible 32-bit values, and 1/2048 of all possible 64-bit values.

(Also, the real "waste" is only on the multiple NaN values, since the zeros always "waste" only a single value for the "negative zero", and the infinities always "waste" only two values; AFAIK, both negative zero and the infinities are necessary for stability of some calculations.)
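To put a number on the 8-bit case, a toy sketch (assuming an IEEE-style 1/3/4 sign/exponent/mantissa split, which is only one possible layout):

    #include <stdio.h>

    int main(void) {
        /* Hypothetical 8-bit minifloat: 1 sign, 3 exponent, 4 mantissa bits. */
        int total   = 1 << (1 + 3 + 4);   /* 256 bit patterns              */
        int special = 2 * (1 << 4);       /* all-ones exponent: inf + NaNs */

        printf("%d of %d patterns (%.1f%%) go to inf/NaN\n",
               special, total, 100.0 * special / total);
        return 0;
    }

which prints 32 of 256 (12.5%), i.e. the 1/8 mentioned above.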


Ackchyually...

IEEE-754 has a lot of redundant representations. Not where you would expect, though.

Caveat: Those features are invaluable for some niche applications, but not for the average joe.

To start: every IEEE-754 float has two zero representations, one for positive zero and another for negative zero.

The special numbers are another source of redundancy. The double format has about 9,007,199,254,740,992 different combinations to encode three states that production-ready software shouldn't reach: NaN, +inf and -inf.

Other than the redundancy, the double has many rarely used combinations, for instance the subnormal representations. Unless you are compiling your program with -O0 or with some exotic compiler, they are disabled by default. One subnormal operation can take over a hundred cycles to complete. Therefore, another 9,007,199,254,740,992 wasted combinations.
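These counts are easy to double-check by looking at the raw bits. A small sketch (plain C, nothing library-specific):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        double pz = 0.0, nz = -0.0;
        uint64_t a, b;
        memcpy(&a, &pz, sizeof a);
        memcpy(&b, &nz, sizeof b);

        /* Two distinct encodings that compare equal. */
        printf("+0.0 = %016llx, -0.0 = %016llx, equal: %d\n",
               (unsigned long long)a, (unsigned long long)b, pz == nz);

        /* All-ones exponent: mantissa 0 is +/-inf, anything else is a NaN,
           so 2^53 of the 2^64 bit patterns are spent on inf/NaN. */
        printf("inf/NaN patterns: %llu\n", (unsigned long long)(1ull << 53));
        return 0;
    }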

If that wasn't bad enough, since the magnitude of the numbers in practice follows a roughly normal distribution (some law whose author's name I forget), the most significant bits of the exponent field are very rarely used. The IEEE-754 encoding is suboptimal.

The posit floating-point format addresses all those issues. It uses a tapered encoding for the exponent field.
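To make "tapered" concrete, here is a toy decoder for an 8-bit posit with es = 0. This is my own sketch of the encoding, not code from any posit library; negative values are skipped and NaR is only checked trivially:

    #include <stdio.h>
    #include <stdint.h>
    #include <math.h>

    /* Decode a non-negative posit<8,0>: sign | regime (unary run) | fraction.
       The regime is the tapered, Golomb-Rice-like part of the exponent. */
    static double posit8_es0_decode(uint8_t p) {
        if (p == 0x00) return 0.0;
        if (p == 0x80) return NAN;          /* NaR */

        int bits  = p & 0x7f;               /* drop sign bit (assume positive) */
        int first = (bits >> 6) & 1;        /* value of the regime bits */
        int run = 0, i = 6;

        while (i >= 0 && ((bits >> i) & 1) == first) { run++; i--; }
        i--;                                /* skip the regime terminator */

        int k = first ? run - 1 : -run;     /* regime run length -> exponent */

        double frac = 1.0, w = 0.5;
        for (; i >= 0; i--, w /= 2)         /* remaining bits: the fraction */
            if ((bits >> i) & 1) frac += w;

        return ldexp(frac, k);              /* frac * 2^k */
    }

    int main(void) {
        /* 0x40 = 1, 0x60 = 2, 0x20 = 0.5, 0x7f = 64 (largest posit<8,0>) */
        printf("%g %g %g %g\n", posit8_es0_decode(0x40), posit8_es0_decode(0x60),
               posit8_es0_decode(0x20), posit8_es0_decode(0x7f));
        return 0;
    }

The longer the regime run, the bigger the scale and the fewer fraction bits remain, which is exactly the taper: lots of precision near 1, little at the extremes.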


I'm mixed on Gustafson's posit stuff. For me, the only thing I'd change for fp would be:

1. -0 now encodes NAN.

2. +inf/-inf are all Fs with sign: 0x7FFFFFFF, 0xFFFFFFFF.

3. 0 is the only denorm.

Which does four good things:

1. Gets rid of the utter insanity which is -0.

2. Gets rid of all the redundant NANs.

3. Makes INF "look like" INF.

4. Gets rid of "hard" mixed denorm/norm math.

And one seriously bad thing:

1. Lose a bunch of underflow values in the denorm range.

However, as to the latter: who the fuck cares! Getting down to that range using anything other than divide-by-two completely trashes the error rate anyways, so why bother?

The rest of Gustafson's stuff always sounds like crazy-people talk, to me.


He also proposes the use of an opaque accumulator register (the quire), in contrast to the transparent float registers (which are a mess; each compiler does what it thinks is best).

When working with numbers that exceed the posit representation, you use the quire to accumulate. At the end of the computation you convert back to posit to store in memory, or store the quire itself in memory.

In C, it would look something like:

    posit32_t a, b;
    quire_t q;
    
    q = a;     // load the posit into the quire
    
    q = q + b; // accumulate in the quire
    
    a = q;     // round the quire back to a posit
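The quire idea can be faked in plain C with a wide fixed-point accumulator: add exactly, round once at the end. A toy sketch (not the real quire format; it reuses the 2^33 additions of the smallest subnormal float from the earlier subnormal example):

    #include <stdio.h>
    #include <stdint.h>
    #include <math.h>

    int main(void) {
        /* Fixed-point accumulator counting multiples of 2^-149
           (the smallest subnormal float), i.e. a tiny pretend quire. */
        uint64_t q = 0;
        const uint64_t den_units = 1;        /* 1.4e-45f == 1 * 2^-149 */

        for (uint64_t i = 0; i < (1ull << 33); i++)
            q += den_units;

        float result = (float)ldexp((double)q, -149);

        /* Prints 1.20371e-35 (= 2^-116, the exact sum). A float-by-float
           loop stalls at 2.35099e-38 because each addend ends up being
           only half an ULP of the running sum. */
        printf("%g\n", result);
        return 0;
    }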

> The rest of Gustafson's stuff always sounds like crazy-people talk, to me.

I've read all his papers on posits and agree. But I do believe the idea of encoding the exponent with Golomb-Rice is actually very good and suits most users. The normalization hardware (used in the subtraction operation) can easily be repurposed to decode and shift the exponent.

But the quire logic (fixed-point arithmetic) might use more area than a larger floating-point unit. Maybe it pays off in power usage, though.


I come from GPU-land, and a quire always brings a chuckle from the fp HW folk. They like the rest of Gustafson’s stuff, though.


Yeah. Gustafson is like, memory is cheap, just use a few hundred KB to store the quire. To be fair, he is not a HW guy.


Interesting. One issue is treatment of 1 / -inf. This would be -0 in traditional IEEE 754 but would now be +0 IIUC.

This would imply that 1 / (1 / -inf) would now be +inf instead of -inf.


0 is unsigned. I would reject 1/inf — it would be NAN. If the user wants to play silly games with derivatives, computer algebra systems are that way: —>.


NaN is NaR (not a real) in posit notation.


> If that wasn't bad enough, since the magnitude of the numbers follows a normal distribution (someone whose name I forgot's law), the most significant bits of the exponent field are very rarely used. The IEEE-754 encoding is suboptimal.

But isn't that accounted for by the fact the floating point number distribution is non-uniform? Half of all floating point numbers are between -1 and 1.
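That claim is easy to check by brute force over all 2^32 single-precision bit patterns. A sketch (takes a few seconds to run):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <math.h>

    int main(void) {
        uint64_t in_unit = 0, total = 1ull << 32;

        for (uint64_t bits = 0; bits < total; bits++) {
            uint32_t u = (uint32_t)bits;
            float f;
            memcpy(&f, &u, sizeof f);
            if (isfinite(f) && fabsf(f) <= 1.0f)
                in_unit++;
        }

        /* Expect about 0.496: roughly half of all encodings land in [-1, 1]. */
        printf("%llu of %llu (%.3f)\n", (unsigned long long)in_unit,
               (unsigned long long)total, (double)in_unit / (double)total);
        return 0;
    }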


Hm. I don't know.

My reasoning is about how much information can be encoded in the format.

The IEEE-754 double format has 11 bits to encode the exponent and 52 bits to encode the fraction.

Therefore, the multiplying factor of a double is in the range 2^-1022 to 2^1023. To give an idea how large this is, scientists estimate there are about 10^80 atoms in the universe; in base 2 that is a "little" less than 2^266.

Most applications don't work with numbers of this magnitude. And the ones that do don't care so much about precision.

Let me know if there is something wrong with my logic.
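For what it's worth, the 10^80 vs 2^266 arithmetic checks out; a quick sanity check:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* log2(10^80) = 80 * log2(10) ~= 265.75, so 10^80 < 2^266. */
        printf("%.2f\n", 80.0 * log2(10.0));
        return 0;
    }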


I don't think I see a problem.

The set of floating point values is intentionally biased towards smaller numbers. So yes, while few people have a need to deal with such large numbers, there are also far fewer such large numbers.

I think your flaw is that you are looking at the total set of all possible floating point numbers, and seeing a chunk that few people will use. Don't do that. Look at the 63 bits, and point out which ones you would like to remove/compress/etc. Yes, the combination of having an exponent of all 1's is rare, but none of those bits individually is "rare". The MSB in the exponent is used to represent all the numbers from 1 to 2, for example.

I don't doubt one can come up with a clever scheme that provides a different encoding which is even more heavily biased towards more "reasonable" numbers, but it's not clear what the gain would be. You'd have to come up with novel algorithms for all the floating point operations (addition, division, etc) - and would they be as fast as the current ones?

I've yet to find real world problems for which the current encoding is pretty poor. Contrived ones, sure - but real problems? Rare.


The solar zenith angle is the most important factor. It is what changes from winter to summer and from morning to night.

Vitamin D is only produced in the UVB range, which is highest at midday. Both UVA and UVB cause erythema. Counterintuitively, it is better to sunbathe at midday than early in the morning.

If you sunbathe at 8 you might need twice as much time to produce the same amount of vitamin D, and you will be exposed to much more UVA, thus more likely to get burned.


I've read a few papers on this subject and you are absolutely correct; I don't know why you have been downvoted.

Vitamin D is only produced within the UVB wavelengths [1]. No vitamin D is produced above 318 nm. I wonder if that's the reason they divided the range into UVA, UVB and UVC.

The solar ray incidence angle, pollution and altitude (higher means more UVB) affect the amount of UVB that hits the surface [2].

Therefore, the time of day you sunbathe is important to optimize vitamin D synthesis. Midday is the best time. If you live below 25 degrees latitude, the UVA/UVB ratio in winter is about the same as in summer. Of course, you should take the weather and the UV index into consideration; I'm talking only about the ratio.

This paper [3] has sun exposure guidelines for Australians. It is possible to correlate the data to other places at the same latitude -- taking into consideration the differences in UV index from Australia.

[1] https://www.direct-ms.org/wp-content/uploads/2018/01/Vit-D-s... [2] https://www.researchgate.net/publication/285056396_Vitamin_D... [3] https://staging.mja.com.au/system/files/issues/194_07_040411...


Not for vitamin D production.

Vitamin D is produced in the UVB spectrum [1, Fig. 1], which decreases with the solar azimuth angle [2].

[1] https://www.direct-ms.org/wp-content/uploads/2018/01/Vit-D-s...

[2] https://www.researchgate.net/publication/285056396_Vitamin_D...


Zenith angle, not azimuth.


Adding some information for those interested in more precise numbers.

This paper [1] has the recommended exposure times for a few cities in Australia. As long as you live at the same latitude, it can be used as a guideline.

https://staging.mja.com.au/system/files/issues/194_07_040411...


I haven't kept up with the Ozone Hole lately, but I believe it's not uniformly distributed so keep that in mind too if it's still a problem. I'm pretty sure it was usually more of a problem in Australia than South America for example.

I couldn't believe it the first time I went to Singapore, in 35C heat all day in the sun at a theme park, and didn't even get a minor blush on my pale skin due to the humidity and latitude. Definitely eye opening coming from NSW, Australia.


That's a really interesting link.

From the table there, for example, in Townsville in summer, with 11% of your body exposed, it takes just 6 minutes to synthesise 1000 IU of vitamin D.

A fair-skinned female friend who studied at James Cook University used to complain she got sunburnt walking between classes... So DWG, if you read this, apologies for telling you off for exaggerating!


The page says it's twice the traditional Cloud.

