This feels like a spectator sport to me - elite classes fighting amongst themselves while ordinary people are caught in the crossfire.
Sure, I do feel bad for the well-meaning professors, scientists, and researchers (some of whom are personal friends) who are losing their grant funding - techies of all people should be sympathetic to a sudden disruption in funding. But if the elites are asking us to take sides, they need to make a stronger case that they're really on _our side_ - that requires more than appealing to mutual self-interest. Academia must reconcile a litany of shortcomings and broken promises.
For the promise of education, we see a large and growing contingent of debt holders outnumbering the available employment opportunities. Tuition is at an ATH, yet more and more students are walking away with worse outcomes. A grim forecast for the younger generation.
For the promise of research, we've seen a shift from truth-seeking to grant-seeking, a punitive culture of ideological conformity, and a pipeline to brain-drain the rest of the world, favoring foreign talent over local. Hardly the unbiased bastion of free thought we've been taught is the foundation of good science.
As for ideology, I'm not sure the intelligentsia has ever been neutral, if that's even possible, or if neutrality itself isn't another ideology. As far as I can tell, this is just the pendulum swinging back. Academia is experiencing the consequences of its failure to live up to its promises to society and hold itself accountable for the negative externalities of its own agenda.
So... Burn down academia with no replacement? Shutter and board research and leave America with no future?
And, of course, not even in the guise of reform. But an outright and open political witch hunt to ban all those who oppose "dear leader" and dare to allow dissenting free speech, progressive anthropological research, and objective science on unpopular topics like climate change?
That's your preferred approach? That is what "academia deserves" for... What? Charging too much? And that's not even an objective statement - the market bears this cost! If the US government were actually worried, maybe it could put all this effort into actual reform, or modernized scholarships / financing, or tighter accreditation requirements.
Not to mention, US education and research is still one of our strong points, drawing in the brightest minds from the entire world.
> US education and research is still one of our strong points, drawing in the brightest minds from the entire world.
The Average Joe graduates with enormous student debt and ends up in a low-paying job, one that's entirely unrelated to their degree. They never experienced the benefits of education or research. In fact, they feel quite ripped off by the whole affair, so this point holds no weight for them.
> So... Burn down academia with no replacement? Shutter and board research and leave America with no future?
This is a bit of a hysterical reaction to "budget cuts." Academia isn't going anywhere and probably won't change much; it's just going to have less money. In fact, K-12 education is seeing far more severe budget cuts as part of the bill, but there are fewer prophecies of doom in its honor. Academics argue passionately for their own bread, but not for those who are starving. (They also send their kids to private school.)
Median student loan debt is about $25,000.
That's a lot, but not so much that it wouldn't be covered by a very modest increase in lifetime earnings.
The mean is closer to $40,000, which is still going to be covered by a modest improvement in earnings.
I think we should be working on changing things so that students graduate with less debt, and working on ensuring that time spent studying is worthwhile to the individual, but it's weird to treat extreme cases as if they are universal.
Your math is incomplete. The whole formula includes interest rates and total amortized cost, as well as the average/median income, cost of living, time to employment, and employment rates of new graduates. Additional exercises for the reader: find the average time to full loan repayment (~20 years) and the percentage of graduates whose debt exceeds their earnings (~20%).
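To put rough numbers on the amortization point - a back-of-the-envelope sketch, assuming a 6% fixed rate (my assumption) and the ~20-year repayment window above:

```typescript
// Standard amortization formula: payment = P * r(1+r)^n / ((1+r)^n - 1)
function monthlyPayment(principal: number, annualRate: number, years: number): number {
  const r = annualRate / 12;      // monthly interest rate
  const n = years * 12;           // total number of payments
  return (principal * r * Math.pow(1 + r, n)) / (Math.pow(1 + r, n) - 1);
}

for (const principal of [25_000, 40_000]) {   // median and mean debt cited above
  const pmt = monthlyPayment(principal, 0.06, 20);
  console.log(`$${principal}: ~$${pmt.toFixed(0)}/mo, ~$${(pmt * 240).toFixed(0)} repaid over 20 years`);
}
// Roughly $43k repaid on the $25k median and ~$69k on the $40k mean - before
// factoring in time to employment or the ~20% whose debt exceeds earnings.
```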
This isn't to say that degree attainment isn't statistically better on average, much like paying off ransomware. The price of tuition seems to reflect the lack of reasonable alternatives more than the intrinsic value of the education.
> The U.S. National Science Foundation (NSF) froze all outgoing funding, including new awards and scheduled payments on active grants. Over 1,000 NSF research projects were abruptly canceled in a few days, resulting in roughly $739 million in halted research funding. The directive, issued with little explanation, has created chaos across the academic research ecosystem, part of a broader trend Nature described as an unprecedented assault.
Just normal budget cuts. Everyone, close your eyes and don't look at Columbia or Harvard, or the refusal of foreign students (and the requirement of loyalty to the administration in your social media to be admitted), or the deportation of dissidents (now to South Sudan!)
And completely ignore that people are actively upset about the cuts to K-12, because it's not the focus of an article specifically about higher education. It's not like one of the top posts on HN when it happened was the announcement of killing the Department of Education. Or that the same people upset about the gutting of higher education are upset about the voucher program designed to funnel taxpayer money to private and religious institutions while destroying the quality of public education, further widening the wealth gap.
You may even be shocked to find out that I also care about children and think they have an intrinsic right to food, and that school lunches should be free and nutritious and tasty. If I don't include that in a footer on every post, will you accuse me of siding with Republicans - the party who's currently saying kids should get jobs while in primary school to pay for their food needs?
This is a very odd way to put this, as a battle between elites. Scientific research is a general benefit to the entire US economy, one of its greatest strengths that makes the US economy so much better than anywhere else in the world.
How does framing this as a competition between elites even affect the common person? Shouldn't I care about what helps the US the most? I don't give a damn about elites or taking sides or being on a side of that battle. I want what's best for the US.
> Academia is experiencing the consequences of its failure to live up to its promises to society and hold itself accountable for the negative externalities of its own agenda.
I'm sorry, but could you explain any of this? There are a lot of assumptions in here that I cannot even begin to unpack, and any possible meaning I can assign to your statements is easily disproven with even a minimum of research, so I don't want to misunderstand you.
Feel free to unpack at your leisure, it's all in my original comment. I think it's both clear and concise, though you're welcome to be specific about what's left you feeling confused.
There's no great definition of elite but a reasonable approximation is the one often used for intellectual: people whose income or status in society depends on their production of ideas, without needing to test if the ideas actually work.
This definition captures academics, university administrators, journalists, writers, analysts and other talking heads on TV, Hollywood screenwriters, many kinds of civil servant, people rich enough to be philanthropists, and frequently but not always politicians. Those are the classes of people that are usually being handwaved towards when the word elite is used.
> people whose income or status in society depends on their production of ideas, without needing to test if the ideas actually work.
In general I'd agree -- but academic researchers (and the projects previously funded by the National Science Foundation) are not this. By definition, they are testing "if the ideas actually work"; that testing is what the funding is paying for.
These folks aren't really "elite" -- not in perception, nor in class or wages. (They usually make less than the average programmer or CS graduate.)
The definition of elite people seem to be using doesn't depend on wages. It's not a synonym for well paid.
There's no requirement to test if your ideas work if you're an academic. The funding pays for papers to get published, and that's all that's checked. To publish a paper you might need to at least pretend to test your ideas, maybe, but only in some of the better fields and journals. Nobody will check if you actually did test them though, so just making data up is a perfectly viable strategy. Worst case, you might get caught ten years from now if some random independent researcher chooses to investigate and manages to go viral on social media. In very extreme cases you might get fired, but that won't stop you from immediately getting another job doing exactly the same thing at another university (cf. Brian Wansink).
And that's in the best, most scientific fields. In others you can just write papers all day that say all kinds of things, never test or even predict anything, and still have a successful career. That's how you get papers like the famous "feminist glaciology" paper [1], or thousands of COVID papers that present model outputs but never validate whether the results matched reality, or thousands of string theory papers that don't make any predictions.
None of these problems will stop academics from being cited by journalists in high profile news outlets as if their ideas are already validated, nor being consulted by powerful politicians who then transcribe their policy demands into law, nor having their claims be automatically taken as gospel on forums like HN. If their ideas do become law and then cause a disaster, nothing will happen to them and they won't even suffer loss of reputation. For example, Einstein was a big supporter of Soviet-style planned economies, as were many academics of his era. That was a disastrous idea, and his supporting it - as late as 1949 no less - should have harmed his reputation for being a genius. But nobody remembers this today.
In these respects, they are the very epitome of what it means to be elite.
> you can just write papers all day that say all kinds of things, never test or even predict anything, and still have a successful career. That's how you get (snip) thousands of COVID papers that present model outputs but never validate if the results matched reality...
Disliking some papers is not really a fair or relevant metric by which to judge academia.
This would be like saying, "programmers are elite people, because they never have to test their work. Just look at the thousands of broken, shoddily programmed repositories on GitHub full of junk code that's been abandoned! Half of it doesn't build, and isn't even testable. Why are we paying for hackathons just to get this garbage? Why do we hire interns and new CS grads if they won't be seasoned veterans on day one?"
A paper can be seen as kind of like the "pull request" of the academic world -- not every PR gets merged in, and a PR isn't always a waste just because it didn't get merged. Some number of bad papers does not mean "science" is "elite".
---
> That's how you get papers like the famous "feminist glaciology" paper [1]
That's probably a bad example, because the feminist glaciology paper isn't even bad or wrong. (Did you actually read it? Or did you only read "feminist" and "glacier" and then get outraged?)
From a quick scan of the actual paper, Carey's argument seems to be that geologists and historians have been neglecting or ignoring how glaciers impact women (both the women in science doing these studies, and the women whose lives have been impacted by glaciers, melt flow, flooding, disaster recovery, etc.).
That's...like objectively true? And very well sourced. This isn't some crackpot theory, this is 100% a "validated idea" whose "results match reality".
Carey is an Environmental Historian and Professor of History. His papers are research and recordings of history; they won't be "testable" (under your definition or rubric) because he's reporting on things that have already happened. And this title specifically represents one (1) single paper that focuses on the impact to women, out of the 25(+?) he's published on Glaciers, Climate Change, and Societal impacts over the past two decades.
If you want to remove Sociology and History from Science, so be it. (That would be Wrong, but I won't argue the point.) But it should probably be expected that if you fund an Environmental Historian, you're going to get a lot of research on Geological History and its societal impacts -- that's kind of the whole point of History as a discipline.
Your argument is that elitism is good, not that academics don't fit the criteria. Getting paid by the government to write entirely subjective, untestable ideas about the interaction between glaciers and feminist ideology is a perfect fit for the definition. Pull requests, on the other hand, are not. Their correctness is often tested automatically!
There is a definite psychological component to depression - CBT and meditation are effective treatments - yet a point often absent is how one becomes depressed in the first place. We tend to moralize a lack of resilience and a bad attitude. A more empirical perspective is to look at studies using animal models which show that depressive symptoms can be induced via factors like chronic stress, social defeat, and general deprivation of wants and needs.
I'm concerned by the pervasive thought pattern held by many, but especially engineers: that you can think your way out of any problem. This bias leads to preferential seeking and over-prescribing of therapeutic treatments.
Yet how many people who self report depression are also satisfied with the various external factors in their life? Perhaps the correct framework is: a healthy environment is the cornerstone of a healthy mind.
I'm sure it's possible with enough meditation and medication to achieve enlightenment whether you're exiled in the Siberian wilderness or locked up in a castle dungeon, but to do so is working against the grain of nature. Unhealthy environments have a negative impact on individual health. The Foxconn "safety nets" come to mind.
Though I could be mistaken, perhaps zoo animals are only depressed because we haven't fully translated "mindfulness" into ASL.
"smart" could be defined as: the act of consistently making good decisions - which can further be defined as: effective optimization towards an outcome. (defining the outcome is itself a matter of making a "good decision")
This requires all of: being aware of a given problem, being sufficiently informed of the relevant context (which is further a matter of curiosity, discerning between trustworthy sources, and robust sense making), and finally caring enough to apply any attention and effort to the issue in the first place.
In this regard, almost everyone is "stupid" about everything most of the time. If anyone manages to achieve "smartness", it's usually in a very narrow decision space.
In terms of AI and education, the problem is: the path of least resistance is an optimal one - at least in a greedy sense.
The usefulness of the tool and "smartness" of the user are irrelevant to the core issue - general education is rapidly eroding. This is strongly correlated to (if not outright caused by) the ongoing rapid changes in technology.
The issue is that structured education originally meant: relying on your own wits, which in turn strengthened them. No cheat codes allowed.
This is no longer the case. Not only because of students using AI, but because "the path of least resistance" applies to educators and administrators as well.
Technology will change but educating people remains a fundamental good - to that end, institutions must adapt to make sure every student gets the proper enrichment they deserve. Get cheat codes out of education.
tl;dr it was commissioned, constructed, and closed due to local safety concerns in the wake of the Three Mile Island accident.
Interesting excerpt from the wiki:
> In 2004, the Long Island Power Authority erected two 100-foot, 50 kW wind turbines at the Shoreham Energy Center site,[18] as part of a renewable-energy program.[19][20] At a ceremony, chairman Kessel stated, "We stand in the shadow of a modern-day Stonehenge, a multibillion-dollar monument to a failed energy policy, to formally commission the operation of a renewable energy technology that will harness the power of the wind for the benefit of Long Island's environment." The turbines generate 200 MWh per year, or 1/35,000th of the energy the nuclear plant would have produced.[21]
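Quick sanity check on that 1/35,000th figure - the ~820 MW rating and ~90% capacity factor below are my assumptions, not from the excerpt:

```typescript
const turbineOutputMWh = 200;                 // per the excerpt, per year
const plantOutputMWh = 820 * 0.9 * 8760;      // assumed MW rating * capacity factor * hours per year
console.log(Math.round(plantOutputMWh / turbineOutputMWh));  // ~32,000 - same ballpark as 1/35,000th
```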
> your browser shares a surprising amount of information, like your screen resolution, time zone, device model and more. When combined, these details create a “fingerprint” that’s often unique to your browser. Unlike cookies — which users can delete or block — fingerprinting is much harder to detect or prevent.
Ironically, the more fine-tuned and hardened your device, OS, and browser are for security and privacy, the worse your fingerprint liability becomes.
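For anyone curious what "combining these details" looks like in practice, a minimal sketch - real trackers fold in far more signals (canvas, WebGL, audio, fonts), but the principle is the same:

```typescript
// Join whatever the browser freely exposes and hash it into a stable identifier.
async function fingerprint(): Promise<string> {
  const signals = [
    navigator.userAgent,
    navigator.language,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    String(navigator.hardwareConcurrency),
  ].join("|");
  const digest = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(signals));
  return Array.from(new Uint8Array(digest), b => b.toString(16).padStart(2, "0")).join("");
}
```

A hardened setup shifts several of those values away from the crowd's defaults, which is exactly what makes it stand out.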
More idle thoughts - it's strange and disappointing that in the vast space and history of FOSS tools, a proper open source browser never took off. I suppose monopolizing from the start was too lucrative to let it be free. Yet there really is little recourse for privacy enthusiasts. I've entertained the idea of using my own scraper so I can access the web offline, though it seems like more trouble than it's worth.
That's... not accurate at all. Firefox was extremely popular at one point, and completely ate the lunch of everything else out there. (And then Google used anticompetitive practices to squash it, but that came later.)
> then Google used anticompetitive practices to squash it
Not exactly. Apple happened.
Every "web designer" had to work on a macbook to be different like every one else. And firefox had dismal performances on those macbooks so said designers turned to the only browser with good tools and good enough performances: Chrome.
Next time you're told "performances don't matter", remember how it can be a differentiating feature and could cost you your market share.
All the front-end devs I knew at the time switched to Macbooks after the Intel switch, because you could get a Unix-based machine that could run Safari and Firefox natively, and Internet Explorer in a VM. Chrome wasn’t even released at that point.
Google didn't use anticompetitive practices to squash it. They just made a better browser. When Chrome came out it was significantly better than Firefox. That's why people switched.
To be honest it's still better (at least if you ignore the manifest V3 nonsense).
I think it's pretty debatable that Chrome is currently better, but you're definitely correct. When Chrome first debuted (and for years afterwards) it was clearly superior to Firefox.
> Ironically, the more fine-tuned and hardened your device, OS, and browser are for security and privacy, the worse your fingerprint liability becomes.
1. You could (however, I doubt the effectiveness) use something like Brave, which tries to randomize your fingerprint.
2. You could "blend in with the crowd" and use Tor.
2. is almost immediately fingerprintable even with JS enabled: 0.00% similarity for canvas, 0.09% similarity for font list, 0.39% for "Navigator properties", 0.57% for useragent. With JS disabled (best practice for Tor) it's even worse. Maybe this works for Windows users?
(Debian, latest Tor Browser 14.5.3, no modifications)
If there's 0.00% similarity for canvas, then I think there would be some issue with the letterboxing. You shouldn't resize your Tor window from 1400x900. Tor pretends it's Windows, so I don't know why it would do that for the useragent.
I've always used it inside of Whonix, and when I tested it, it seemed like everything was fine.
When you disable JS you need to do so by setting Tor to Safest.
The font list should be spoofed by Tor?
Anyway, you can fix all of that just by using Whonix and setting Tor to Safest.
What's surprising is that, over time, Firefox has done virtually nothing to reduce the impact of fingerprinting.
Why on earth are we, in 2025, still sending overly detailed User Agent strings? Mozilla/5.0 (X11; Linux x86_64; rv:139.0) Gecko/20100101 Firefox/139.0 .... There are zero legitimate reasons for websites to know I'm running X11 on x86_64 Linux. Zero.
Why are Refer(r)ers still on by default?
Why can JS be used to enumerate the list of fonts I have installed on my system?
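(There isn't even a dedicated API - the usual trick is just comparing rendered text widths against a fallback font. Roughly, with a made-up candidate list:)

```typescript
// If "Candidate, monospace" renders at a different width than plain monospace,
// the candidate font must be installed and taking precedence.
function detectFonts(candidates: string[]): string[] {
  const ctx = document.createElement("canvas").getContext("2d")!;
  const sample = "mmmmmmmmmmlli";
  const widthWith = (font: string) => {
    ctx.font = `72px ${font}`;
    return ctx.measureText(sample).width;
  };
  const fallbackWidth = widthWith("monospace");
  return candidates.filter(f => widthWith(`"${f}", monospace`) !== fallbackWidth);
}

console.log(detectFonts(["Arial", "Comic Sans MS", "DejaVu Sans", "Helvetica Neue"]));
```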
We need way more granular permission controls, and more sensible defaults. There are plugins to achieve this, but that's a big hassle.
Because the users of web browsers expect compatibility. If one vendor unilaterally decides to stop supporting some browser APIs, the result isn't better privacy. The result is that people switch to other browsers.
If you have Firefox with "resist fingerprinting" enabled then you are feeding it some dummy data. People worry about the fact that this might make you "unique," but fail to grasp that if you look differently unique every time you're not necessarily identifiable.
I think it's a matter of "least common denominator" - as in, the sum of all fields will surely be unique, but what's the _minimum_ number of fields needed to isolate one user? You can download the JSON from each test and compare the diffs yourself - there's a lot of noise from "cpt" and "ratio" fields, but some that stand out are the "referer" and "cookie" fields as well as a few SSL attributes. Not sure if controlling for those is all it takes to de-anonymize, but either way it's not great.
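If you want to reproduce the comparison, something like this rough diff helper is all I mean - the "cpt" / "ratio" names come from the test's JSON, and the noise list is just my guess at what churns between runs:

```typescript
// List the fields that actually differ between two fingerprint dumps, ignoring noisy ones.
const NOISE = ["cpt", "ratio"];

function diffFields(a: Record<string, unknown>, b: Record<string, unknown>): string[] {
  const keys = new Set([...Object.keys(a), ...Object.keys(b)]);
  return [...keys].filter(
    k => !NOISE.some(n => k.includes(n)) && JSON.stringify(a[k]) !== JSON.stringify(b[k])
  );
}

// Usage: load two downloaded dumps and see which stable fields separate them, e.g.
// console.log(diffFields(JSON.parse(dumpA), JSON.parse(dumpB)));
```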
FOSS is a flexible term, but it carries the connotation of community ownership, and therefore independence from for-profit interests. That was an original selling point of FF, and to this day the user base is mainly composed of individuals who were, at one point or another, seeking free and open alternatives. Sadly, Mozilla as an organization has made increasingly user-hostile decisions (deals with Google, recent changes in privacy policy, some telemetry on by default) and FF no longer lives up to the original promise. But yes, thanks to the code being open source there are off-shoots like LibreWolf and Waterfox that may be worthwhile (I haven't vetted them), but it's the same dilemma as with Chrome: the upstream code is captured and controlled by an organization that I don't trust to respect user privacy.
> FOSS is a flexible term but carries the connotation of community ownership, and therefore independence from for-profit interests.
That's certainly not true. Unless Red Hat, MongoDB, Chef, etc. are not open source.
While I love to believe that the FOSS world is an anarchist utopia that believes in wellbeing for all, I think there are plenty of profit driven people there. They just don't sell access to the code/software.
At one point, Firefox (3.5 specifically) was #1, for a brief moment:
> Between mid-December 2009 and February 2010, Firefox 3.5 was the most popular browser (when counting individual browser versions) according to StatCounter, and as of February 2010 was one of the top 3 browser versions according to Net Applications. Both milestones involved passing Internet Explorer 7, which previously held the No. 1 and No. 3 spots in popularity according to StatCounter and Net Applications, respectively - https://en.wikipedia.org/wiki/Firefox_3.5
Then Chrome appeared and flattened both IE and Firefox.
The problem with the reasoning is: either we already know who's a smart decision maker, in which case the mechanism isn't efficient, or else we don't know, in which case the logic is circular - whoever gets the most tokens must be the best decision maker.
In any case, I think all smart decision makers have learned to stay away from crypto.
Everyone knows we live in the Information Age; not everyone realizes we're also living in an Age of Psychology - and of the two, we have a much clearer understanding of information and technology.
You can argue we're seeing a continuation of a pattern in our relationship with media and its evolution, but in doing so you affirm that the psyche is vulnerable under certain circumstances - for some more than others.
I think it's a mistake to err on the side of casual dismissal, that anyone who winds up insane must have always been insane. There are well-known examples of unhealthy states being induced in otherwise healthy minds. Soldiers who experience a war zone might develop PTSD. Similar effects have been reported for social media moderators after repeated exposure to abusive online content. (Trauma is one example; I think delusion is another, less obvious one, w.r.t. things like cults, Scientology, etc.)
Yes, there are definite mental disorders like schizophrenia and bipolar disorder; there's evidence these conditions have existed throughout history. And yes, some of us are more psychologically vulnerable while others are more robust. But in the objective sense, all minds have a limit and are vulnerable under the right circumstances. The question is a matter of "threshold."
I'm reminded of the deluge of fake news which, only a few years ago, caused chaos for democracies everywhere. Everything from QAnon to alien spaceships - people fell for it. A LOT of people fell for it. The question then is the same question now: how do you deal with sophisticated bullshit? With AI it's especially difficult, because it's convincing and tailor-made just for you.
I'm not sure what you would call this metric for fake news and AI, but afaict it only goes up, and it's only getting faster. How much longer until it's too much to handle?
"Shared" as in shareholder?
"Shared" as in piracy?
"Shared" as in monthly subscription?
"Shared" as in sharing the wealth when you lose your job to AI?