Eliezer Yudkowsky and the LessWrong forum popularized AI safety/alignment ideas. (The Effective Altruism community was originally mostly populated by people from LessWrong.) I think this article is a little awkwardly infatuated with LessWrong (and I say that as a fan), but it fits this discussion well as a picture of where they're coming at the subject from: https://unherd.com/2020/12/how-rational-have-you-been-this-y...
While MIRI is prominently connected to Yudkowsky, I wouldn't treat it as defining the AI alignment community. There are many people not involved with it who post substantive discussions on LessWrong and the Alignment Forum, and there are other organizations too. OpenAI considers alignment important and has researchers working on it, though Yudkowsky argues the company doesn't prioritize it enough relative to the pace of its AI progress. Anthropic is an AI company that prioritizes AI safety through interpretability research.
I think I'm at the edge of my ability with AI, since I notice I'm arguing against the usefulness of the concepts rather than against the concepts themselves. At the very least I'm not smart enough to casually read these sites (LessWrong, the Alignment Forum) at this time.
I remember feeling like this (brain CPUs pegged at 100%) trying to slog through HPMOR the first time too, to be fair; it's just too many concepts to take in in one sitting. I'll get there eventually if I keep at it, but not on my first read.
I'll consider my opinions on AI safety void for now due to lack of knowledge; I always try to jump past the first stage of competence. I'll start with the Wikipedia page for AI alignment, haha.
Thank you for your responses in any case, I'll dig into this further!
Oh, I think my last post was more about the people concerned with AI safety than about the topic itself. If you want to get closer to the topic, this article is a surprisingly great resource: https://www.vox.com/future-perfect/2018/12/21/18126576/ai-ar...