Hacker News

> And computers can't, yet.

Yes they can. Computers have been writing safe code in various languages for decades: they're called compilers/transpilers. Maybe this is one of those situations where we define AI as "things we don't know how to program yet": we said being good at chess would require intelligence, but once computers got good at chess it became 'just tree search'. It sounds like you're imagining that automating programming requires some currently-unknown approach, when others would say it's 'just compiling'.

Of course, we need to tell the computer what code to write. We do that using a programming language.

It could be the same language (C++), but that seems a bit pointless. Furthermore, since C++ allows unsafe code, using C++ to tell the computer what we want means that we're able to ask for unsafe things (if we want to). That makes the computer's job ambiguous:

- Should it always do what it's told, and hence write unsafe code when asked? (In which case, it doesn't solve the "write safe code" task we're talking about)

- Should it always write safe code, and hence not always do what it's told? (In which case, where do we draw the line? Would it be valid to ignore all input and always write out `int main() { return 0; }`?)

We can avoid this ambiguity if we forbid ourselves from asking for unsafe things. In other words, providing a guarantee about the code that the computer writes is the same as providing that guarantee for the instructions we give it (i.e. the input programming language).

For example, if we don't want the computer to ever write memory-unsafe code, that's the same as saying we can't ever tell it to write memory-unsafe code, which is the same as saying that we should instruct it using a memory-safe language.

That way, it's possible for a computer not only to write safe C++ in a codebase of arbitrary complexity, but (more importantly) to write safe C++ code that does something we want, where the instructions specifying what we want can themselves be arbitrarily complex.
