I'm a bit behind the times, but I'm failing to see how the philosophy of effective altruism and cryptocurrency are related. Do you know how the two got intertwined?
My (outsider's) understanding is that the "longtermist" sub-school of EA has grown over the years, and has developed increasingly convoluted and speculative rationales for behavior that would otherwise be considered unethical.
The example I'm most familiar with is calculating utils in terms of the expected total human population of the galaxy, were we to achieve interstellar travel: it's not inconceivable that orders of magnitude more humans would be alive in the distant future than are alive now, which makes their combined future interest more important than the interests of any (or even all) living human beings. Consequently, we should divert money away from the welfare of people living today and towards efforts that maximize the likelihood of a galaxy filled with colonized planets.
Galactic utility is ridiculous. But so is the other extreme, acting only on short-term utility. The asymmetry is that most humans don't need any encouragement to think short-term, but they do need some impetus to think long-term. It takes wisdom to balance the two concerns. And yes, it is a huge downside of any strategy that delays gratification that it can be "hacked" for evil, a problem more general than EA or "longtermism".
Assuming utilitarian ethics for a moment, I agree!
Maybe someone more familiar with the EA movement can answer this: why has rule utilitarianism disappeared from EA rationales for behavior? That seems to me a very natural check on these otherwise extreme (long and short-term) leaps of reasoning.
I don't know about "EA rationales", but rule utilitarianism is unpopular among philosophers largely because it's believed to collapse into act utilitarianism. The phil101 caricature is that rule utilitarianism says "no stealing, because 'no stealing' is a rule that increases utility" and act utilitarianism says "steal if you think it maximizes utility" - but on the one hand "no stealing, except for bread to feed your starving child" is an even better rule, and so on; and on the other hand not incorporating uncertainty into your decision procedure isn't being a good act utilitarian, it's just being an idiot. And so in practice they endorse the same decision procedures in pursuit of the same end.
Honestly, I think this is even more perverse than the "normal" hedonic trap of the ends justifying the means: this particular branch of EA expects us to make world-altering decisions based on what mostly comes off as speculative science fiction.
I would understand it more if it was purely hedonic: at least I could then argue about whether the ends actually do justify the means! But instead it's this sort of unfalsifiable, "but what if" reasoning that's actively diverting money away from worthwhile causes.
That phrase refers to doing immoral acts and justifying them by their supposedly positive impact in the long run. It does not refer to acting in the long-term future's interest instead of the short-term future's.
My understanding is that the EA argument for cryptocurrency boils down to obtaining the means to do the most good: they want to do the most good, and that requires financial resources. This lets them justify just about any financial activity, so long as they can convince themselves that the long-term utility of their behavior outweighs its downsides. Which gets us cryptocurrencies and oil company investments.
> Consequently, we should divert money away from the welfare of people living today and towards efforts that maximize the likelihood of a galaxy filled with colonized planets.
If that sounds ridiculous, it's because it is.
It sounds ridiculous to you because you framed it in an uncharitable way. Another way of saying "maximize the likelihood of a galaxy filled with colonized planets", so far as contemporary people are concerned, is "minimize the likelihood of civilizational collapse". Even a marginal reduction in the risk of nuclear war, for instance, would be a very good thing.
> Even a marginal reduction in the risk of nuclear war, for instance, would be a very good thing.
Nobody disagrees with this. But since you brought it up, I think it's worth asking: what precisely has the EA movement done to reduce, even marginally, the risk of a nuclear war?
I believe the general consensus is that nuclear war is a great risk and important to prevent, but that the effectiveness of philanthropic efforts there is low. Fighting malaria is still a more effective form of altruism.
> what precisely has the EA movement done to reduce, even marginally, the risk of a nuclear war?
Nothing that I'm aware of. In general I think that most of the current "longtermist" organizations are useless - but that doesn't mean they're incorrect.
Here’s how I think about it: they aren't necessarily incorrect, but they are behaving in a misleading manner by labeling speculative ethical behavior as “EA” when it’s neither effective nor particularly altruistic (in the sense that Singer uses).
I don’t belong to the EA movement, so maybe it just isn’t my place to say what should or shouldn’t be EA. But this kind of bootstrapping of classroom ethical dilemmas into actually diverting donated money away from human welfare efforts astounds me.
> misleading manner by labeling speculative ethical behavior as “EA” when it’s neither effective
I agree that they're ineffective, but I don't see any evidence that the MIRI types are lying - they're trying to help, they think they're helping ... and they're mostly wrong. It happens.
> But this kind of bootstrapping of classroom ethical dilemmas into actually diverting donated money away from human welfare efforts astounds me.
Are you sure that's what's actually going on? Donations and especially effort are not fungible. The techy libertarian futurist types in the MIRI orbit, for instance, are so far as I can tell simply not as bothered by poverty as Singer et al. A world in which they're not worried about AI risk is not a world in which they give just as much to other causes; it's a world in which they give less.
In principle, a greater number of future lives matter more than a lesser number of present lives. The principle itself seems sound to me.
The problem is you can't actually be certain those future lives will come to exist. Assuming they will, and being so confident in that assumption that you deprioritize the lives of the currently living, is hubris, but only because it overestimates one's powers of prognostication.
If we agree the principle can't be applied - why would we consider it sound? That's like saying the principle behind 1 + 1 = 3 is sound, with the small caveat that you get the wrong answers. If something looks good on paper and turns sour when we try to use it, that's the indication that we've miscalculated.
Another perspective to consider: the best way to help people in the future is to help people today. Solving the challenges of our time is an investment that will compound into the distant future. Speculating on what life could be like in a hundred or a thousand years, and tailoring your efforts to that imagined future, is much like creating a product without a customer in mind. If you end up with something useful, it's almost certainly by accident and not because your speculation was accurate.
A principle can be sound while not being applicable. Principles and applications are two separate things and there's no reason to conflate them.
> That's like saying the principle behind 1 + 1 = 3
That's not sound in principle.
> Another perspective to consider: the best way to help people in the future is to help people today.
That is an entirely valid, and I would argue probably correct, perspective, yet it doesn't invalidate the principle that a greater number of future lives matter more than a lesser number of current lives. It may be that the best course of action for both future lives and current lives is to pursue the best interest of the currently living.
The philosophies aren't related beyond the fact that EA seems to view making as much money as possible as a virtue, provided you use the money effectively, and the crypto space has happened to be the place where people are making the most money the fastest.
"Using the money effectively" is the operative point: as more and more of the money in the EA sphere comes from cryptocurrencies (and other civically corrosive sources), the EA movement's notion of "effective use" seems to loosen.
As someone kind of on the outside of all these movements, I think this is mostly a "founder effects" thing.
A lot of the initial people in EA came from the world of rationality. Classic LessWrong had 3 main topics that were actively discussed - AI risk, Rationality, and EA. It also happened that cryptocurrency became known and somewhat popular among some LessWrong visitors.
We're probably talking like 1% of LessWrong visitors, mind you - but a few of those became crypto-millionaires.
So that's why a bunch of people, who were interested in both AI risk and EA, and now have millions, are infusing the EA movement with cash. Or at least, that's the story as I understand it.
Effective altruism and cryptocurrency are related because SBF's pledge to donate his wealth represented a significant portion of the total wealth pledged to EA, and the FTX Future Fund represented a large portion of the present funding available for longtermist causes.
OP is complaining about SBF coming into EA and using his crypto money to donate huge amounts to speculative and sometimes even harmful causes, such as AI safety and that Oregon House seat, instead of meat-and-potatoes EA causes like global health/poverty and animal rights.
From everything I've seen, Org Mode is basically everything that I've ever wanted in a productivity/note-taking app. The only problem is that I would have to use Emacs...
I know Org Mode is extremely tied into Emacs' core, but if someone could figure out how to separate it out into a VS Code extension or something, that'd be really cool.
That'd be difficult, since Org isn't super well defined, though there is an Orgdown project that aims to fix that by defining levels of compatibility.
You don't need much Emacs for Org Mode, especially if you keep the (admittedly dated) context-sensitive toolbar. Here's a cool (but short) infographic that gives you more than enough to go off of:
lol just get a pdf cheat-sheet for common commands and learn emacs! it is really not that big of a deal. i think you need a few hours before you can start using it effectively and independently. ignore the hype. it is just a tool.
I can't help but have a sinking feeling that this is a result of the wider trend of using only Chromium-based browsers. If Chromium-based browsers were the ones affected by this issue, I feel like it would have been solved almost immediately.
I was seeing problems in Chrome a couple hours ago. In the fancy editor it was taking seconds to echo, and the cursor would jump to the start of the text, and placeholder text was not being replaced with typed text. The older editor where you have to manually enter markdown worked fine.
I assumed they had done something that botched the JavaScript of the site. Maybe some update went badly today, and it manifests differently in Firefox and Chrome?
Anyway, whatever it was, either they didn't test well in Chrome either, or they don't use the fancy editor themselves.
It was fixed within an hour or so. I didn't even realize at first: I was just mindlessly surfing Reddit like I normally do, and then it clicked, "oh cool, Reddit is back up".
I think GP's point was that Reddit breaking in Chrome would cause a big uproar and result in the issue being fixed quickly, not that Chrome has more engineers.
I had a good chuckle putting my colleagues' names in it - everyone was a superstar, famous poet, or swimmer of some sort. Any good ones you wouldn't mind sharing?
Hey HN! I've been lurking for a while now and I've finally created something that I feel is worth sharing.
I've called this project "Tensorpedia." At its core, Tensorpedia takes a title and uses it as a prompt for GPT-2 to synthesize the introductory part of a Wikipedia article. The machine learning stuff is written using a wonderful library called aitextgen [0], with Wikipedia's "Vital Articles" as the data set [1]. The server is written in Node, and it uses Redis as an article cache. If you want to read my article about it (for some reason), you can check it out here [2].
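If you're curious what the generation step looks like, here's a minimal sketch of the aitextgen side (heavily simplified: it falls back to the stock GPT-2 rather than the model trained on Vital Articles, and generate_intro is just an illustrative helper, not the actual server code):

```python
# Simplified sketch of Tensorpedia's generation step. The real service
# sits behind a Node server with a Redis article cache; this only shows
# the core prompt-to-introduction idea.
from aitextgen import aitextgen

# With no arguments, aitextgen loads the stock 124M GPT-2; in practice
# you'd point it at the model trained on the Vital Articles data set.
ai = aitextgen()

def generate_intro(title: str) -> str:
    # The article title becomes the prompt, and the model continues it
    # into something shaped like a Wikipedia introduction.
    return ai.generate_one(prompt=title, max_length=256)

print(generate_intro("Tensorpedia"))
```

The Node side mostly just checks the Redis cache for a previously generated intro before falling back to a generation like this.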
I created this project to get more experience with server technologies. While I wouldn't say it's a complicated application, I learned quite a lot from it.
Additionally, this project is mostly for fun; I was inspired by all of those this-x-doesn't-exist projects from a while back. As such, I don't know how much practical use it has, but I've generated some pretty hilarious articles with it.
I actually remember seeing this about 9 or 10 years ago! It no doubt inspired me to pursue a degree in Computer Science. Thanks for the nostalgia!