Can you back up those claims? I don't believe your comment captures all of the nuance.

I'm looking into MIRI; if that's not "the AI safety community", please do correct me.

Ever since ChatGPT started confidently saying inaccurate things, I've become very aware that humans have a much worse hit rate.

Seriously, keep an eye out for it; you'll see it everywhere. At least ChatGPT will double-check if you ask whether it's sure. People tend to just get annoyed when you don't blindly trust their "research" haha.




Eliezer Yudkowsky and the LessWrong forum popularized AI safety/alignment ideas. (The Effective Altruism community was originally mostly populated by people from LessWrong.) I think this article is a little awkwardly infatuated with LessWrong (and I say that as a fan), but it conveniently fits this discussion of where they're coming at the subject from: https://unherd.com/2020/12/how-rational-have-you-been-this-y...

While MIRI is prominently connected to Yudkowsky, I wouldn't treat it as defining the AI alignment community. There are many people not involved with it who post substantive work and discussion on LessWrong and the AI Alignment Forum. There are other organizations too. OpenAI considers alignment important and has researchers working on it, though Yudkowsky argues the company doesn't prioritize it enough relative to the pace of AI progress it drives. Anthropic is an AI company that prioritizes AI safety through interpretability research.


I think I'm at the edge of my ability with AI: I'm noticing that I'm trying to argue against the usefulness of the concepts rather than against the concepts themselves. At the very least, I'm not smart enough to casually read these sites (LessWrong, the Alignment Forum) at this time.

In fairness, I remember feeling like this (brain CPU pegged at 100%) the first time I tried to slog through HPMOR too; it's just too many concepts to take in in one sitting. I'll get there eventually if I keep at it, but not on a first read.

I'll consider my opinions on AI safety void for now due to lack of knowledge; I always try to jump over the first stage of competence. I'll start with the Wikipedia page for AI alignment, haha.

Thank you for your responses in any case, I'll dig into this further!


Oh, I think my last post was more about the people concerned with AI safety than about the topic itself. If you want to get closer to the actual topic, this article is a surprisingly great resource: https://www.vox.com/future-perfect/2018/12/21/18126576/ai-ar...


All good mate, I've just caught myself in unconscious incompetence haha

I need to know a lot more about this subject before dismissing it! I'll give that a read too, thank you :)



