Hacker News | jamasb's comments

Several years ago I spent some time in a lab studying sleep in fruit flies.

They've put out a number of interesting publications since my time there.

[1] Evidence of very large variability in sleep duration between flies, with limited consequences for very long duration sleep deprivation (wrt lethality)

[2] Evidence of context-dependent modulation of the effects of sleep deprivation. It appears the effects of sleep deprivation on males can be affected by sexual arousal/activity.

It makes me wonder whether there are similar effects in humans. A night spent socialising too late does affect one differently from one spent working too late.

[1] https://www.science.org/doi/10.1126/sciadv.aau9253

[2] https://elifesciences.org/articles/27445


It absolutely is when you consider the strength of the research communities surrounding cancers and their treatments in the UK.


I’ve heard that “charities” such as Cancer Research UK are actually businesses that invest very little of their income in actual research, instead spending relatively more on media and payroll?


£545 million was spent on research and £42 million on public information, out of £672 million raised. So approximately 13% went on administration.
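For what it's worth, the ~13% figure follows directly from the numbers above (a quick sanity check; the assumption that everything outside research and public information counts as "administration" is my own reading):

```python
# All figures in £ millions, as quoted above.
raised = 672
research = 545
public_info = 42

# Assume the remainder is administration/overhead.
admin = raised - research - public_info
admin_share = admin / raised

print(f"£{admin}m on administration, {admin_share:.1%} of income")
# → £85m on administration, 12.6% of income
```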


Yes, it should be better, but I would like to see a full comparison.


Here is the actual study for your perusal

https://www.sciencedirect.com/science/article/pii/S147020451...

Unfortunately, it only covers the seven countries that funded the study, but it is much more in-depth than the news article.


I've been doing some work on link prediction in knowledge graphs recently, with poor results on real-world data. These methods don't necessarily require a huge amount of data, but they are very sensitive to noise and to the 'density' of the dataset. The benchmark datasets are, in essence, very easy to get good performance on. It's a real shame that metrics for these methods' tolerance of noise and sparsity are not reported, because both are going to be present in almost any real-world dataset in far greater quantities than in current benchmarks.
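To make the robustness point concrete: none of this comes from any published benchmark protocol, but one simple check is to corrupt a fraction of the training triples and re-measure MRR/Hits@k on the clean test set. A minimal sketch (the function and data here are hypothetical):

```python
import random

def add_noise(triples, entities, noise_frac=0.1, seed=0):
    """Corrupt a fraction of (head, relation, tail) triples by swapping the
    tail for a different random entity, to probe a link predictor's noise
    tolerance. Returns a new list; the input is left untouched."""
    rng = random.Random(seed)
    noisy = list(triples)
    for i in rng.sample(range(len(noisy)), int(noise_frac * len(noisy))):
        h, r, t = noisy[i]
        # Pick a replacement tail guaranteed to differ from the original.
        noisy[i] = (h, r, rng.choice([e for e in entities if e != t]))
    return noisy

triples = [("paris", "capital_of", "france"),
           ("berlin", "capital_of", "germany"),
           ("rome", "capital_of", "italy"),
           ("madrid", "capital_of", "spain")]
entities = ["france", "germany", "italy", "spain",
            "paris", "berlin", "rome", "madrid"]

noisy = add_noise(triples, entities, noise_frac=0.25)
print(sum(a != b for a, b in zip(triples, noisy)))  # → 1 corrupted triple
```

Training the same model at several values of `noise_frac` and plotting the degradation curve gives exactly the tolerance metric that papers tend not to report.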


Well, the landscape is still quite fluid (new models are proposed in the literature at every major conference). Processing real-world graphs is obviously more challenging, for a number of reasons (multi-modality, scale, etc.), though benchmarks are catching up and becoming harder (see FB15k-237 or WN18RR).

As a general rule of thumb, it is important that your graph have enough redundancy in it, i.e. the more relations, the better. Also, bear in mind that these models do not support multi-modality: literals such as numbers, strings, geo coordinates, and timestamps are simply treated as entities. In most cases it is probably better to filter literals out before generating the embeddings.
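The literal-filtering step above can be sketched as a simple preprocessing pass. This is only an illustration, not from any particular toolkit; the detection patterns are hypothetical heuristics and would need adapting to your data:

```python
import re

def is_literal(node):
    """Heuristic: treat numbers, ISO dates/timestamps, and quoted strings
    as literals rather than entities. (Hypothetical patterns.)"""
    return bool(
        re.fullmatch(r"-?\d+(\.\d+)?", node)            # numbers
        or re.fullmatch(r"\d{4}-\d{2}-\d{2}.*", node)   # ISO dates/timestamps
        or node.startswith('"')                         # quoted string literals
    )

def drop_literal_triples(triples):
    """Keep only triples whose head and tail are both entities."""
    return [(h, r, t) for h, r, t in triples
            if not is_literal(h) and not is_literal(t)]

triples = [("alice", "knows", "bob"),
           ("alice", "age", "34"),
           ("bob", "born_on", "1990-05-01"),
           ("bob", "works_at", "acme")]

print(drop_literal_triples(triples))
# → [('alice', 'knows', 'bob'), ('bob', 'works_at', 'acme')]
```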


