
AGI = Artificial General Intelligence, watch this for the main idea around the goal alignment problem: https://www.youtube.com/watch?v=EUjc1WuyPT8

They're explicitly not political. LessWrong is a website/community, and rationality is about trying to think better by being aware of common cognitive biases and correcting for them. It's also about trying to make better predictions and understand things more accurately by applying Bayes' theorem, when possible, to account for new evidence: https://en.wikipedia.org/wiki/Bayes%27_theorem (and being willing to change your mind when the evidence changes).

It's about trying to understand and accept what's true no matter what political tribe it could potentially align with. See: https://www.lesswrong.com/rationality
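To make the Bayes' theorem bit concrete, here's a minimal Python sketch (not from the thread, and the numbers are made up for illustration) of what a single Bayesian update looks like:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E): the updated probability of hypothesis H
    after observing evidence E, via Bayes' theorem."""
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator

# Example: start 30% confident in H; the evidence is 4x as likely
# under H (0.8) as under not-H (0.2).
posterior = bayes_update(prior=0.3, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(round(posterior, 3))  # 0.632
```

The point the comment is making is just this mechanical step: strong evidence moved a 30% belief to about 63%, and the same rule tells you how far to move it back if contrary evidence shows up later.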

For more reading about AGI:
- Superintelligence (Nick Bostrom; I find his writing style somewhat tedious, but this is one of the original sources for a lot of the ideas): https://www.amazon.com/Superintelligence-Dangers-Strategies-...

- Human Compatible: https://www.amazon.com/Human-Compatible-Artificial-Intellige...

- Life 3.0 (a lot of the same ideas, but its writing style is the opposite extreme from Superintelligence, which makes it more accessible): https://www.amazon.com/Life-3-0-Being-Artificial-Intelligenc...

Blog Posts:

- https://intelligence.org/2017/10/13/fire-alarm/

- https://www.lesswrong.com/tag/artificial-general-intelligenc...

- https://www.alexirpan.com/2020/08/18/ai-timelines.html

The reason these groups overlap so much with AGI is that Eliezer Yudkowsky started LessWrong and founded MIRI (the Machine Intelligence Research Institute). He also formalized a lot of the thinking around the goal alignment problem and the existential risk of discovering how to create an AGI that can improve itself without first figuring out how to align it with human goals.

For an example of why this is hard: https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden... and probably the most famous example is the paperclip maximizer: https://www.lesswrong.com/tag/paperclip-maximizer

Great, yeah, that sounds like something I wish I'd known existed.

It's been very hard to find people who can separate their emotions from an accurate description of reality, even when it sounds like it comes from a different political tribe. Or rather, people are quick to assume you're part of a political tribe if some of your words don't match their tribe's description of reality, even when what was said was the most accurate.

I’m curious what I will see in these communities

I recommended some of my favorites in another comment: https://news.ycombinator.com/item?id=25866701

I found the community around 2012 and I remember wishing I had known it existed too.

In that list, the LessWrong posts are probably what I'd read first, since they're generally short (Scott Alexander's are usually long) and you'll get a feel for the writing.

Specifically this is a good one for the political tribe bit: https://www.lesswrong.com/posts/6hfGNLf4Hg5DXqJCF/a-fable-of...

As an aside about the emotions bit, it’s not so much separating them but recognizing when they’re aligned with the truth and when they’re not: https://www.lesswrong.com/tag/emotions
