Hacker News
Shutdown Resistance in Reasoning Models (lesswrong.com)
1 point by ben_w 14 hours ago | discuss
A path from autonomy V&V to AGI alignment? (lesswrong.com)
1 point by yoav_hollander 2 days ago | discuss
Race and Gender Bias as an Example of Unfaithful Chain of Thought in the Wild (lesswrong.com)
9 points by ibobev 3 days ago | discuss
Dialects for Humans: Sounding Distinct from LLMs (lesswrong.com)
5 points by nebrelbug 3 days ago | 1 comment
Proposal for making credible commitments to AIs (lesswrong.com)
1 point by surprisetalk 5 days ago | discuss
Machines of Faithful Obedience (lesswrong.com)
1 point by kiyanwang 5 days ago | discuss
X explains Z% of the variance in Y (lesswrong.com)
15 points by ibobev 7 days ago | 1 comment
X explains Z% of the variance in Y (lesswrong.com)
4 points by surprisetalk 8 days ago | discuss
A case for courage, when speaking of AI danger (lesswrong.com)
2 points by ibobev 9 days ago | discuss
The Best of LessWrong (lesswrong.com)
3 points by sebg 10 days ago | discuss
Situational Awareness: A One-Year Retrospective (lesswrong.com)
2 points by fofoz 10 days ago | discuss
The V&V method – A step towards safer AGI (lesswrong.com)
1 point by yoav_hollander 11 days ago | discuss
My Pitch for the AI Village (lesswrong.com)
1 point by ibobev 11 days ago | discuss
Foom and Doom 1: "Brain in a box in a basement" (lesswrong.com)
1 point by ibobev 11 days ago | discuss
Metaprogrammatic Hijacking: A New Class of AI Alignment Failure (lesswrong.com)
1 point by Hiyagann 12 days ago | discuss
Orienting Towards Wizard Power (lesswrong.com)
1 point by desmondwillow 12 days ago | discuss
A deep critique of AI 2027's bad timeline models (lesswrong.com)
90 points by paulpauper 13 days ago | 61 comments
X explains Z% of the variance in Y (lesswrong.com)
5 points by kmm 15 days ago
Does RL Incentivize Reasoning Capacity in LLMs Beyond the Base Model? (lesswrong.com)
2 points by fzliu 15 days ago
A deep critique of AI 2027's bad timeline models (lesswrong.com)
5 points by iNic 16 days ago | 1 comment
A Technique of Pure Reason (lesswrong.com)
2 points by ibobev 17 days ago
Intelligence Is Not Magic, But Your Threshold For "Magic" Is Pretty Low (lesswrong.com)
4 points by optimalsolver 20 days ago
Beware General Claims about "Generalizable Reasoning Capabilities" of AI Systems (lesswrong.com)
3 points by mkl 21 days ago
A Straightforward Explanation of the Good Regulator Theorem (lesswrong.com)
49 points by surprisetalk 22 days ago | 5 comments
Corporations as Paperclip Maximizers (lesswrong.com)
12 points by busssard 23 days ago | 9 comments
Broad-Spectrum Cancer Treatments (lesswrong.com)
2 points by surprisetalk 25 days ago
Read the Pricing First (lesswrong.com)
2 points by surprisetalk 26 days ago
Repairing Yudkowsky's anti-zombie argument (lesswrong.com)
4 points by Bluestein 26 days ago
Reference Works for Every Subject (lesswrong.com)
4 points by surprisetalk 30 days ago
A Technique of Pure Reason (lesswrong.com)
4 points by ibobev 31 days ago