
> I'm surprised there isn't a constant archive of what news sites put into the "memory hole". Maybe Google being shit isn't them failing, but deliberate (I don't seriously believe this).

archive.is


There are newer, non-amphetamine drugs for treating ADHD, like Strattera, an SNRI (selective norepinephrine reuptake inhibitor). It will be interesting to see what happens if and when these drugs displace the amphetamines (the current front-line treatment for ADHD) to the point that getting a prescription for Adderall, Vyvanse, or some other form of medicinal speed because you have (or think you have) ADHD becomes difficult or impossible.


Why would that be a desirable state of affairs? Drugs have varying effects on people, including side effects. Making it difficult to get a prescription for stimulants guarantees that a lot of people with ADHD (it seems like 50% don't respond to Strattera) won't have access to medication that would help them.


> Why would that be a desirable state of affairs?

You can't see why an ADHD diagnosis (which isn't hard to get) no longer being a quick ticket to getting amphetamines--one of the most commonly sought-after and regularly abused classes of drugs since the 1930s--would be a desirable state of affairs? Are you not aware of the adverse health effects they have (cardiovascular; neurological, e.g., Parkinson's; dental; bone density)?

> it seems like 50% don't respond to Strattera

It seems more like people take longer to respond to Strattera and either don't want to wait, or just want the amphetamines.


I think you'd be throwing people with ADHD under the bus, which is infinitely worse than letting some fakers get their hands on stimulants. Improve the diagnostic process if needed, but first you have to prove that it's insufficient. I'm not aware of any studies supporting these claims.

> It seems more like people take longer to respond to Strattera and either don't want to wait, or just want the amphetamines.

No. Not all drugs are effective for everyone; this includes different stimulants.


> when you have below average body fat %

I can't speak to their choice of taking the drug, but it's wild how warped people's perceptions are now of what constitutes "healthy" and "fat" thanks to the obesity epidemic. People remark on how George Costanza on Seinfeld was once considered fat (because he was), or how Homer's scale-tipping 300 lbs. in the King-Size Homer episode of The Simpsons was considered comically obese (because it was). Never mind the fact that people almost always underestimate how fat they actually are and are almost always disappointed by their DEXA scans. Even if the OP's estimate is correct that they're just a little north of 20% BF (as a man), they're still overweight, and specifically overfat, and probably look soft and doughy.


Interesting that in this thread discussing a new Mozilla EULA/AUP (among other things) banning pornography, not one person has mentioned that their image library was once called libpr0n: https://bugzilla.mozilla.org/show_bug.cgi?id=66984


People aren't having kids because of stagnant real wages and soaring home prices. In the US, the median home price is now $450k. In Canada, it's $650k. And when people do have children, they're on average having fewer, later in life (with a greater risk of complications): https://www.northwell.edu/news/the-latest/geriatric-pregnanc...

I doubt banning porn or abortion or engaging in cultural engineering will fix this.

And then there's this phenomenon, discussion of which was once verboten in goodthink circles (like HN) due to its anti-feminist and "incel" optics, but has since grown enough in strength and scale to shove its way through the Overton Window so that even respectable, MSM sources cover it: https://thehill.com/blogs/blog-briefing-room/3868557-most-yo...


Top income brackets aren't really having more than 2 children either, which is a requirement for a growing population. Most studies have shown that, in general, education for women, freedom of choice, etc. negatively impact birthrates. It's the same thing everywhere. Sure, income, less social pressure, and so on affect it somewhat, but there's just no real need in general to have 3 kids in this day and age. Asking a woman to give away at a bare minimum 6 years of her youth won't cut it nowadays. And honestly, I don't blame them; I think exactly the same way.


The best way to have more kids is to increase the size of the middle class, while lowering housing, food and childcare costs.


I have no idea why people keep saying it's monetary reasons. Why would anyone have 3 kids nowadays? There are no real incentives, other than "I want a big family". Society actively discourages large families as well. The amount of people in their 20s aiming for that is getting smaller and smaller too.

The best way to have more kids, unironically, is making everyone as poor as possible, removing any other method of entertainment, and making "having kids" the only choice. That's how it worked for most of history, and some people want a percentage of the population to go back to that, so it keeps propping up the current established system.


I didn't say 3. Perhaps I should have said, any.


If everyone has 1 or 2 kids, the outcome is the same as having no kids, just with more years to get there. That's Japan's biggest problem right now. People are having kids. Tokyo is fairly kid friendly, and the infrastructure/culture is there. But nobody wants to have 3 kids.


Simply not true. 2 is the replacement rate; 3 is a 1.5x increase generation to generation (3 children per 2 parents). Our population is out of whack with the resource load. Your model is orders of magnitude too simplistic.


I don't think creating the illusion of an imaginary middle class ever helped anything. I believe it only makes things worse, as now a lot of people think they are not working class just because they have an above-median wage. Snap it, some even hold to the illusion that they are rich, just because they have a house with a mortgage and a private pension.

What you need to have a modern, western country instead of a dog-eat-dog wild west is welfare, including universal health care.

But welfare is considered an evil communist plot in the US, and the people who are led to believe that they are somehow above the working masses keep voting against their own interests. Not just in the US, unfortunately.


> People aren't having kids because of stagnant real wages and soaring home prices.

That's probably a non-insignificant factor, but unlikely to be the only one. Poorer people have never had problems reproducing in any society.

I think media exposure plays a much bigger part. Not porn exactly, but anything that glorifies a "free" lifestyle over settling down.


> But it is much easier to read an article like this, where the same point is repeated multiple times, with some “they said this,” “they said that,” etc. than to understand even a small portion of a field. More people will do the former, and then apparently call for executions without any way to judge who will be executed. And I sit down to write code for my experiments, click on one link, and see what I perceive as harmful information, and there goes apparently half an hour. I can either let this kind of stuff lead to my funding being cut, or reply to it and slow down my research

Am I unreasonable to think that funding ought to be redirected elsewhere, given that 1) we already have effective anti-amyloid monoclonal antibodies, 2) they don't seem to work that well, and 3) there are alternatives, like the chronic inflammation hypothesis, that have supporting evidence? (e.g., https://www.nature.com/articles/s41591-024-03201-5)


There are many reasons anti-amyloid mAbs might not work that well besides "the amyloid hypothesis is bunk." You are giving them to very late-stage patients; they don't actually work that well if you look at the data; and some people also think that if you disaggregate fibers without a molecule that also inhibits new aggregation, you are just creating more "seeds" that can grow into more amyloids. I think the fact that they do SOMETHING despite all these unanswered questions, and also their terrible side effects, actually suggests we might be on the right track.


> 1) we already have effective anti-amyloid monoclonal antibodies, 2) they don't seem to work that well

I fear you may be falling into the exact trap that the person you replied to is warning against.

There is not just one thing called "amyloid", so not only are the "anti-amyloid monoclonal antibodies" not effective against all amyloid, the amyloid against which they are effective may well not be disease-contributing amyloid.

The state of the field is much more complicated than deciding between pop-sci summaries of "amyloid bad" and "amyloid irrelevant" and directing funding accordingly.


Yeah, this reminds me of "asbestos," which is not a single thing and has many non-dangerous examples, but has been banned because a few of them (and contamination is a worry) are a significant hazard. If the different structures were just called something different, they might have significant commercial applications, but "asbestos" dooms them.

Same thing with MRI, where the "Nuclear" was dropped from NMR. Sure, the 3D imaging is cool, but you know they had to remove that N from the name for marketing.

Ok, and then there's EUV lithography. Don't call it X-Ray lithography even though it's 13nm, because there were decades of expensive failures with that marketing.


To step back for a second, why is the question always about redirecting funding, when science funding overall has been steadily eroded by inflation and cuts since the 60s? A few months ago, I looked up the percentage of the American population that has PhDs and how it has changed over time. We have so, so, so much more advanced work and technical needs now that would benefit from them.

To answer your question more specifically, our lab is actively trying to collaborate with people working on questions related to chronic inflammation (I am not sure what the funding status is, but I know someone from our lab is working there), in particular, microglial activation. Not to get lazy with cancer analogies, but if you look at what causes even a single cancer, it's often hundreds of events lining up perfectly. The question is, which one do we target to make an effective drug? The honest answer I have to this is that it's wishful thinking to think that that decision should be made at the level of basic science.

How the funding and research ecosystem is supposed to work is that academia explores all kinds of avenues, and then industry exploits and pushes the most promising ones. The problem is that when there's a crunch on science funding, "this is an unexplored avenue" is not enough, and you basically begin to take on the role of industry without any of the money.

I think fraud is heavily incentivized by the funding crunch, too, actually. In the case of Eliezer Masliah, my impression is that his fraud was photoshopping images and faking experiments to bolster points from collaborators' experiments that WERE actually conducted and did make those points. I assume he was trying to get a very high impact factor for funding and promotions, but was trying to minimize the chances that he would get caught by making sure his fabrications were part of works he assumed would be reproducible, given that he assumed the others' research was legitimate. I haven't kept up with the case since it initially broke, though, and I didn't go through it as closely as I should have (I also do not focus much on alpha-synuclein).

But, if you really want an answer as to why I think there is more funding on amyloids than chronic inflammation, it is that amyloids are Rome. Imagine a funnel. Chronic inflammation is near the entry to it, along with many other things. But shingles is not the only thing that might cause chronic inflammation that might cause Alzheimer's. Herpes has also been linked. So basically everything is at the very top of the funnel. But the reason people study amyloids is that they are at the very bottom of it. Chronic inflammation seems to cause them to misfold, but so do certain mutations, so if we can develop a drug that disaggregates and/or inhibits the growth of amyloids, we are stopping more causes than just looking at chronic inflammation, which is one of many.


Thanks for your thoughtful replies


I suspect the copycat effect is a large part of it: https://en.wikipedia.org/wiki/Mass_shooting_contagion

The media has historically, starting with Columbine, been extremely irresponsible when it comes to school shootings, showing little of the discretion it exercises when covering youth suicide (for which it has adopted professional standards informed by CDC, WHO, etc. recommendations: https://afsp.org/ethicalreporting/), to the point that it's given perpetrators fame that's endured decades after their demise: https://www.nytimes.com/2018/05/30/us/school-shootings-colum...

And they're doing this not just out of recklessness, but out of a pretty clear bias and desire to leverage these events to produce support for gun control.


You're showing a lot of cognitive bias here. It's very reasonable for the media to cover mass casualty events; that's definitely their job.

Another thing that has changed (which you haven't addressed at all) is that there are communities of mass shooting enthusiasts online who collect data on them, lionize the perpetrators, spread their manifestos, and encourage others to commit similar acts. People write guides with a mixture of justification for motives and sharing of practical techniques and advice, similar in format to the magazines periodically published by Al Qaeda. At least one such outfit has been designated as a terrorist group in several countries and several of its members have been arrested and are facing criminal charges.


>Another thing that has changed (which you haven't addressed at all) is that there are communities of mass shooting enthusiasts online who collect data on them, lionize the perpetrators, spread their manifestos

Which is directly enabled by the coverage mentioned in GP.


I understand it's an emotional topic, but the article is just dry data, and flagging it (along with half the comments in this thread) was unnecessary. I wouldn't even care that much if it weren't for the fact that HN clearly penalizes accounts based on how their submissions and comments are flagged by other users. @dang could you please unflag it?


BTW the point of my submission was to highlight the anomaly that the number of school shootings (and victims) is paradoxically surging even though gun ownership is declining and murder rates--despite spiking after 2020--remain well below historic highs: https://www.nytimes.com/2023/06/26/briefing/murder-rate.html


Note that the surge in school shootings is despite a marginal secular decline in household gun ownership: https://www.vpc.org/studies/ownership.pdf


> > > > > David Howells did a patch set in 2018 (I believe) to clean up the C code in the kernel so it could be compiled with either C or C++; the patchset wasn't particularly big and mostly mechanical in nature, something that would be impossible with Rust. Even without moving away from the common subset of C and C++ we would immediately gain things like type safe linkage.

> > > That is great, but that does not give you memory safety and everyone would still need to learn C++.

> > The point is that C++ is a superset of C, and we would use a subset of C++ that is more "C+"-style. That is, most changes would occur in header files, especially early on. Since the kernel uses a lot of inlines and macros, the improvements would still affect most of the existing kernel code, something you simply can't do with Rust.

I have yet to see a compelling argument for allowing a completely new language with a completely different compiler and toolchain into the kernel while continuing to bar C++ entirely, when even just a restricted subset could bring safety- and maintainability-enhancing features today, such as RAII, smart pointers, overloadable functions, namespaces, and templates, and do so using the existing GCC toolchain, which supports even recent vintages of C++ (e.g., C++20) on Linux's targeted platforms.
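
To make "RAII" concrete for C programmers, here is a minimal sketch of a lock guard; mutex_lock/mutex_unlock are hypothetical stand-ins for the kernel's primitives, and this is an illustration, not proposed kernel code:

    #include <cerrno>

    // Hypothetical stand-ins for kernel locking primitives.
    struct mutex;
    void mutex_lock(mutex*);
    void mutex_unlock(mutex*);

    // RAII guard: the destructor releases the lock on every path out of
    // scope, replacing C's "goto out_unlock" cleanup chains.
    class scoped_lock {
    public:
        explicit scoped_lock(mutex* m) : m_(m) { mutex_lock(m_); }
        ~scoped_lock() { mutex_unlock(m_); }
        scoped_lock(const scoped_lock&) = delete;
        scoped_lock& operator=(const scoped_lock&) = delete;
    private:
        mutex* m_;
    };

    int update_state(mutex* lock, int* state) {
        scoped_lock guard(lock);   // unlocked automatically on return
        if (*state < 0)
            return -EINVAL;        // early return: the lock still gets dropped
        ++*state;
        return 0;
    }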

Greg's response:

> But for new code / drivers, writing them in rust where these types of bugs just can't happen (or happen much much less) is a win for all of us, why wouldn't we do this? C++ isn't going to give us any of that any decade soon, and the C++ language committee issues seem to be pointing out that everyone better be abandoning that language as soon as possible if they wish to have any codebase that can be maintained for any length of time.

side-steps this. Even if Rust is "better," it's much easier to address at least some of C's shortcomings with C++, and it can be done without significantly rewriting existing code, sacrificing platform support, or the incorporation of a new toolchain.

For example, as pointed out (and as Greg ignored), the kernel is replete with macros--a poor substitute for genuine generic programming that offers no type safety and the ever-present possibility for unintended side effects due to repeated evaluation of the arguments, e.g.:

#define MAX(x, y) (((x) > (y)) ? (x) : (y))

One need only be bitten by this kind of bug once to have it permanently color one's perception of C.
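
To see the repeated-evaluation bug in action, here's a hypothetical snippet (not from the kernel), plus the type-safe template equivalent:

    #define MAX(x, y) (((x) > (y)) ? (x) : (y))

    // Type-safe alternative: arguments evaluated exactly once, and
    // mixed-type calls fail to compile rather than silently converting.
    template <typename T>
    constexpr T max_of(T x, T y) { return x > y ? x : y; }

    int demo() {
        int a = 1, b = 0;
        int m = MAX(a++, b);  // expands to (((a++) > (b)) ? (a++) : (b)):
                              // a is incremented twice, and m ends up 2,
                              // not the intended max of 1 and 0
        int n = max_of(a, b); // no side-effect surprises
        return m + n;
    }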


> Even if Rust is "better," it's much easier to address at least some of C's shortcomings with C++

This simply forgets all the problems C++ has as a kernel language. It's really an "adopt a subset of C++" argument, but even that has its flaws. For instance, no one wants exceptions in the Linux kernel and for good reason, and exceptions are, for better or worse, what C++ provides for error handling.


> It's really an "adopt a subset of C++" argument, but even that has its flaws. For instance, no one wants exceptions in the Linux kernel and for good reason

Plenty of C++ codebases don't use exceptions at all, especially in the video game industry. Build with GCC's -fno-exceptions option.
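
A hypothetical invocation (file names invented) for building one translation unit that way:

    g++ -std=c++20 -fno-exceptions -fno-rtti -c driver.cpp -o driver.o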

> and exceptions are, for better or worse, what C++ provides for error handling.

You can use error codes instead; many libraries, especially from Google, do just that. And there are more modern approaches, like std::optional and std::expected:

https://en.cppreference.com/w/cpp/utility/optional

https://en.cppreference.com/w/cpp/utility/expected
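
A minimal sketch of that style, assuming a C++23 compiler (the function and error messages here are invented for illustration):

    #include <expected>
    #include <string>

    // Exception-free error handling: the error travels in the return value,
    // and the caller branches on it rather than relying on unwinding.
    std::expected<int, std::string> parse_port(const std::string& s) {
        if (s.empty())
            return std::unexpected("empty input");
        int v = 0;
        for (char c : s) {
            if (c < '0' || c > '9')
                return std::unexpected("not a number");
            v = v * 10 + (c - '0');
            if (v > 65535)
                return std::unexpected("port out of range");
        }
        return v;
    }

    // Usage:
    //   auto p = parse_port("8080");
    //   if (p) connect(*p); else log_error(p.error());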


> You can use error codes instead; many libraries, especially from Google, do just that. And there are more modern approaches, like std::optional and std::expected:

Even if we are to accept this, we'd be back to an "adopt a subset of C++" argument.

You're right in one sense -- these are more modern approaches to errors, which were adopted in 2017 and 2023 respectively (with years for compilers to implement...). But FWIW we should note that these aren't really idiomatic C++, whereas algebraic data types are a baked-in, 1.0 feature of Rust.

So -- you really don't want to adopt C++. You want to adopt a dialect of C++ (perhaps the very abstract notion of "modern C++"). But your argument is much more like "C++ has lambdas too!" than you may care to admit. Because of course it does. C++ is the kitchen sink. And that's the problem. You may want the smaller language inside of C++ that's dying to get out, but C++'s engineering values are actually "we are the kitchen sink!". TBF Rust's values are sometimes distinct too, but I'm not sure you've really examined just how different C++'s values are from kernel C, and why the kitchen sink might be a problem for the Linux kernel.

You say:

> RAII, smart pointers, overloadable functions, namespaces, and templates, and do so using the existing GCC toolchain

"Modern C++" simply doesn't solve the problem. Google has been very clear Rust + C++ codebases have worked well. But the places where it sees new vulnerabilities are mostly in new memory unsafe (read C++) code.

See: https://security.googleblog.com/2024/09/eliminating-memory-s...


Isn't "Rust without panics" a subset of Rust?


> Isn't "Rust without panics" a subset of Rust?

I'm not sure there is much in your formulation.

It would seem to me to be a matter of program design, and programmer discretion, rather than a "subset of the language". Re: C++, we are saying "Don't use at least these dozen features, because they don't work well at many cooks scale, and/or they combine in ways which are non-orthogonal. We don't want you to use them because they complect[0] the code." Re: no panic Rust, we are saying "Don't call panic!(), because obviously you want a different program behavior in this context." These are different things.

[0]: https://www.youtube.com/watch?v=SxdOUGdseq4


And -fno-exceptions, while being de-facto standard e.g. in gamedev, still is not standard C++ (just look at how much STL stuff in n4950.pdf is specified as throwing, much of it required for freestanding too (16.4.2.5)).

And you cannot just roll your own library in a standard compliant way, because it contains secret compiler juice for, e.g. initializer_list or coroutines.

And once you use your own language dialect (with -fno-exceptions), who is to stop you from "customizing" other stuff, too?


> And -fno-exceptions, while being de-facto standard e.g. in gamedev, still is not standard C++

So? The Linux kernel has freely relied on GCC-specific features for decades, effectively being written in "GCC C," with it only becoming buildable with Clang/LLVM in the last two years.

>(just look how much STL stuff

No one said you have to use the STL. Game devs often avoid it or use a substitute (like EASTL) more suitable for real-time environments.


> So? The Linux kernel has freely relied on GCC-specific features for decades

That is unironically admirable. Either they have their man on the GCC team, or they have been fantastically lucky. In the same decades there have been numerous GCC extensions and quirks that have been removed [edit: from the gcc c++ compiler] once a new standard proclaims them non-conformant.

So, which C++ dialect would provide tangible benefits to the freestanding, self-modifying code that is the Linux kernel, without bringing enough problems to outweigh it all completely?

RAII and templates are nice, but they come at the cost of making code multiple orders of magnitude harder to reason about. You cannot "simply" add C++ to sparse/coccinelle. And unlike rust, the c++ compiler does not really care about memory bugs.

I mean, the c++ committee introduced "start_lifetime_as", effectively declaring all existing low-level c++ programs invalid, and made lambdas that by design can capture references to local variables and then be passed around. Why would you set yourself up to have the rug pulled out on the next C++ revision if you are not forced to?
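
That lambda hazard in concrete form (a made-up snippet; it compiles without complaint):

    #include <functional>

    // Returns a closure capturing a local variable by reference.
    std::function<int()> make_counter() {
        int n = 0;
        return [&n] { return ++n; };  // n is destroyed when make_counter returns
    }
    // Any later call through the returned object reads a dangling
    // reference: undefined behavior, accepted silently by the compiler.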

C++ is a disability that can be accommodated, not something you do to yourself on purpose.


> I mean, the c++ committee introduced "start_lifetime_as", effectively declaring all existing low-level c++ programs invalid

Did it? Wasn't that already the case before P2590R2?

And yes, a lot of the C++ lifetime model is insanity (https://en.cppreference.com/w/cpp/language/lifetime). Fortunately, contrary to the committee, compiler vendors are usually reasonable folks allowing needed low-level idioms (like casting integer constants to volatile ptr) and provide compiler flags whenever necessary.


Thank you for the correction! Indeed, the "magic malloc" part (P0593R6, a heroic effort by the way) looks to have gone in earlier, in C++20. As you say, no sane compiler was affected by that change; the committee, like a boss, went in, saw everyone working, said "you all have our permission to continue working", and left.


isn't that why you pick a particular subset, and exclude the rest of the language? It should be pretty easy to avoid using try/catch, especially in the kernel. A subset of C probably doesn't make much sense, but for C++, which is absolutely gigantic, it shouldn't be hard. Getting programmers to adhere to it could be handled 99% of the time with a linter; the other 1% can be caught by code reviewers.


> isn't that why you pick a particular subset, and exclude the rest of the language?

If the entire natural inclination of the language is to use exceptions, and you don't, beginning with C++17 and C++23, I'm less sure that is the just-right fit some think it is.

> Getting programmers to adhere to it could be handled 99% of the time with a linter, the other 1% can be code by reviewers.

What is the tradeoff being offered? Additional memory safety guarantees, but less good than Rust, for a voluminous style guide to make certain you use the new language correctly?


> If the entire natural inclination of the language is to use exceptions, and you don't, beginning with C++17 and C++23

I've personally written libraries targeting C++20 that don't use exceptions. Again, error codes, and now std::optional and std::expected, are reasonable alternatives.

> What is the tradeoff being offered? Additional memory safety guarantees, but less good than Rust, for

It's not letting the perfect be the enemy of the good. It's not having to rewrite existing code significantly, or adopt a new toolchain, or sacrifice support for any platform Linux currently supports with a GCC backend.


> For example, as pointed out (and as Greg ignored), the kernel is replete with macros--a poor substitute for genuine generic programming that offers no type safety and the ever-present possibility for unintended side effects

I never thought I would say that C++ would be an improvement, but I really have to agree with that.

Simply adopting the generic programming bits with type safety without even objects, exceptions, smart pointers, etc. would be a huge step forward and a lot less disruptive than a full step towards Rust.


At this point, I think that would be a misstep.

I'm not sure I have an informed enough opinion of the original C++ debate, but I don't think stepping to a C++ subset while also exploring Rust is a net gain on the situation; it has the same kinds of caveats that people upset at R4L complain about (muddling the waters), while also being almost entirely new and untested if introduced now[1].

[1] - I'm pretty sure some of the closed drivers that do the equivalent of shipping a .o plus a locally compiled shim layer have C++ in them somewhere, sometimes, but that's a rounding error in terms of complexity testing compared to the entire tree.


Yeah, it's rather baffling; it would be a solid improvement, and they can easily ban the parts that don't work for them (exceptions, the STL, self-indulgent template wankery).

On a memory safety scale I'd put C++ about 80% of the way from C to Rust.


There are clear case studies, like the ones by Google (on Android) and Microsoft, where introducing Rust reduced vulnerabilities by 70%. In the case of Android, there were zero vulnerabilities discovered in new Rust code. Are there similar case studies showing such clear success from adopting C++ over C?

The answer appears to be no: https://grok.com/share/bGVnYWN5_29baa93d-e774-45ec-898b-19bb...

