Hacker News

Per Kernighan, debugging is 2X harder than writing the code in the first place. If the AI jockeys don't understand what they're being given, man, it's going to be humorous watching them put out fires. And how can statistically fit models exceed their training set without going random? How is AI going to string together equations to do physics or engineering? To it they're just squiggly symbols and letters.

And the marketplace still isn't interested in fixing bugs over "oooh, shiny", so my concerns might never be addressed.




Imagine if the AI can provide both a program and a machine-verifiable proof of correctness though. Then all you have to manually verify is that the proof proves the right thing.
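One well-known shape this takes is the "certifying algorithm" pattern: the untrusted code returns an answer plus evidence, and a small trusted checker validates the evidence, so the only thing you audit is the checker. A minimal sketch, with sorting as the (assumed, illustrative) task:

```python
# Certifying-algorithm sketch: trust the checker, not the producer.
from collections import Counter

def untrusted_sort(xs):
    # Stand-in for opaque AI-generated code we don't want to audit.
    return sorted(xs)

def check_sort(original, result):
    # Small, auditable checker: result must be ordered AND be a
    # permutation of the input (same multiset of elements).
    ordered = all(result[i] <= result[i + 1] for i in range(len(result) - 1))
    same_items = Counter(original) == Counter(result)
    return ordered and same_items
```

A machine-checked proof is the same idea taken further: the "evidence" covers all inputs, not just the one you ran.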


So the only thing a developer has to deliver is an unambiguous specification of what the program is supposed to do!


That sounds hard to do in a natural language. Maybe they could deliver the unambiguous specification in a formal language designed specifically for the purpose of specifying what programs are supposed to do.


If I could get unambiguous specifications, then my job might actually be in trouble. It would cut my workload down considerably.


I was half joking, because software development is largely the process of developing an unambiguous specification - just in the form of executable code.


There would be a premium on the ability to express requirements clearly and logically, on the fly rather than as part of a cycle of paper-pushing.


How will the AI overcome the halting problem?


You can't in general determine whether an arbitrary program will halt. I believe it is, however, possible to construct a program that provably halts. I have no idea if you can write most useful programs that way though.


Almost all programs that people write are written so that they provably halt (or provably don't halt - as continuing forever is sometimes the desired behavior).

How do we do that? By using known-halting parts and composing them in known-halting ways. How did we find those parts in the first place? Well, computer science. But boiling that down to a codified process is where it gets murkier. I think we don't really know exactly how human ingenuity discovers new useful halting parts in the face of the halting problem.
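The standard "known-halting part" is a loop with a termination measure: some non-negative integer quantity that strictly decreases on every iteration, so the loop cannot run forever. A minimal sketch using Euclid's gcd as the (illustrative) example:

```python
def gcd(a: int, b: int) -> int:
    # Termination measure: b. For b > 0 we have a % b < b, so b is a
    # non-negative integer that strictly decreases every iteration;
    # it must reach 0, so the loop provably halts.
    a, b = abs(a), abs(b)
    while b != 0:
        a, b = b, a % b
    return a
```

Composing such parts (sequencing them, bounded loops over them, calling them from each other without recursion) preserves the halting guarantee, which is roughly how most everyday code ends up provably terminating without anyone writing a formal proof.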


Easy - it will come up with a solution too complicated for mere humans to understand, at which point everyone will assume it is correct.


The same way that humans do. By redefining "doesn't halt" as "didn't halt before my patience ran out".


Well humans haven't overcome it, and we muddle through. Same with AI, it seems.


"Oooh, shiny" that doesn't work has a very short run in the marketplace.



