This is pretty interesting and reminds me of the classic “Bitter Lesson” essay by Sutton: early efforts to micromanage LLM behavior parallel early chess programmers painstakingly encoding human heuristics, only to lose out to brute computational scaling/cheaper intelligence (rekt).
Your transition from rigidly engineered workflows to systems embracing raw intelligence feels a lot like the shift from handcrafted speech models to deep neural nets.
wonder how much more human intuition you have to scrape out before you're future-proofed
That said, there’s still a subtle tension here—human workflows encode intention in a way chess doesn’t, so purely scaling compute might underestimate how much structured intent matters. Perhaps the final answer isn’t “more computation” alone, but rather more computation guided by a minimal yet essential scaffolding of human intent.