Hacker News
MIRI: Want to get paid to spend all day learning more about ML? Apply here! (intelligence.org)
2 points by lyavin 5 days ago | past | web | discuss
A Reply to François Chollet on Intelligence Explosion (intelligence.org)
115 points by lyavin 10 days ago | past | web | 85 comments
Security Mindset and Ordinary Paranoia (intelligence.org)
4 points by rsaarelm 21 days ago | past | web
MIRI got awarded $3.75M from the Open Philanthropy Project (intelligence.org)
1 point by lyavin 39 days ago | past | web
New Paper: “Functional Decision Theory” (intelligence.org)
4 points by JoshTriplett 56 days ago | past | web
New MIRI Paper: “Functional Decision Theory” (intelligence.org)
5 points by ASipos 56 days ago | past | web
AlphaGo Zero and the Foom Debate (intelligence.org)
1 point by JoshTriplett 58 days ago | past | web
There's No Fire Alarm for Artificial General Intelligence (intelligence.org)
219 points by MBlume 65 days ago | past | web | 206 comments
Why so many very smart people are worried about AI [pdf] (intelligence.org)
3 points by chakalakasp 154 days ago | past | web
A.I. Alignment: Why It's Hard, and Where to Start (intelligence.org)
2 points by davedx 221 days ago | past | web
Vingean Reflection: Reliable Reasoning for Self-Improving Agents [pdf] (intelligence.org)
1 point by cl42 223 days ago | past | web
Coalescing Minds [pdf] (intelligence.org)
1 point by tvural 244 days ago | past | web
Ensuring smarter-than-human intelligence has a positive outcome (intelligence.org)
2 points by rbanffy 249 days ago | past | web
Ensuring smarter-than-human intelligence has a positive outcome (intelligence.org)
2 points by apsec112 249 days ago | past | web
Cheating Death in Damascus – The escaping death dilemma (intelligence.org)
3 points by titusblair 255 days ago | past | web
2016 in Review – Machine Intelligence Research Institute (intelligence.org)
4 points by JoshTriplett 264 days ago | past | web
Cheating Death in Damascus [pdf] (intelligence.org)
2 points by apsec112 274 days ago | past | web
Response to Ceglowski on superintelligence (intelligence.org)
11 points by apsec112 337 days ago | past | web
AI Alignment: Why It’s Hard, and Where to Start (intelligence.org)
5 points by apsec112 354 days ago | past | web
Reducing Long-Term Catastrophic Risks from Artificial Intelligence (intelligence.org)
1 point by davedx 360 days ago | past | web
Logical Induction (intelligence.org)
157 points by apsec112 461 days ago | past | web | 65 comments
Safely Interruptible Agents [pdf] (intelligence.org)
28 points by wallflower 526 days ago | past | web
Algorithmic Progress in Six Domains [pdf] (intelligence.org)
1 point by apsec112 530 days ago | past | web
A formal solution to the grain of truth problem (intelligence.org)
93 points by ikeboy 534 days ago | past | web | 15 comments
New paper: “A formal solution to the grain of truth problem” (intelligence.org)
2 points by JoshTriplett 535 days ago | past | web
Safely Interruptible Agents – Accessible Paper on Google's AI 'Kill Switch' [pdf] (intelligence.org)
3 points by hunglee2 554 days ago | past | web
DeepMind's “Safely Interruptible Agents” [pdf] (intelligence.org)
4 points by vonnik 557 days ago | past | web
Interruptibility, AI and the big red button [pdf] (intelligence.org)
1 point by pilooch 558 days ago | past | web
A 'Big Red Button' for AI to interrupt its harmful sequence of action [pdf] (intelligence.org)
3 points by auza 559 days ago | past | web | 3 comments
Safely Interruptible Agents [pdf] (intelligence.org)
3 points by neverminder 562 days ago | past | web