It seems an appalling number of people agree with Dijkstra here - "we not only don't need natural language, we don't even want it."
I'm sorry, but if I can express a set of directions for filling out a form to a human being in a minute or two, while expressing the same thing to a computer takes much longer and is more error-prone, then that is an inefficiency in software development which it is extremely desirable to address.
There is nothing magical about human brains that would make them theoretically impossible to express in software in such a way that we can give a program natural language directions, and the fact that so many people want to dismiss this endeavor out of hand is ridiculous to me.
This should be our holy grail, something to strive toward, not something to ignore.
In fact I would suggest that it will most likely be the only way out of the mess of such a wide variety of software standards (ever have fun moving your things to a new system and having to re-learn many things just because you changed PC or phone operating systems?), whereas natural language is a standard we already have and works fine.
This way you essentially wouldn't need to learn a new set of incantations - just tell the damn thing what you want the damn thing to do, dammit.
The thing is, natural language suffers from many of the same problems. I'd say it's even worse, because it requires a common cultural background between the interlocutors.
Ever moved to a new job and had fun figuring out the local terminology, and learning the in-jokes (and figuring out when you were the butt of the joke but didn't quite get it), and when your boss told you to "just do your damn job and quit pestering me", did you immediately know he was just having a bad day and it wasn't your fault at all?
A mistake in understanding these "natural language incantations" will lead to you doing the wrong thing, or feeling embarrassed or depressed. Not so different from a system crash, when you think about it.
Thankfully machines never feel embarrassed or confused about the "damn thing" you want them to do. They only do the things we tell them, in a painfully literal way.
After all, when people want to be really precise, they use mathematical notation :)
Of course you are technically correct; however, I return to my main point: we do this every day and it works just fine.
Certainly not perfectly of course, but go and ask people who use some sort of CRUD system that used to be manual what they like and what they dislike about the new system.
Likely very few of them will say it is easier to use than just asking Sharon in the next cubicle to verify something.
They will have some likes, sure - but they will be things like the almost infinitely increased speed or the fact that they can access the system 24/7 while Sharon needs to sleep and take a break to eat the occasional Ding-Dong, but the ease of use that comes with dealing with another human being is a sacrifice that they make for these benefits.
(or to put it the reverse way, imagine the same office and a new employee named Eliza comes in and behaves exactly like a computer, "only doing the things you tell her in a painfully literal way." How quick would you want to give her the boot?)
> After all, when people want to be really precise, they use mathematical notation :)
Again, this is absolutely true. The problem is when you're dealing with a simple CRUD app for your insurance house or just copying files over to your iPod, you're not interested in being really precise - you're interested in the shortest path to get a relatively simple thing done.
If that path is blocked by the fact that you don't know the particular menu item, keyboard shortcut, or command switch for something that you can express in English without even thinking about it, then I regard that as a huge opportunity for technology in general and our industry in particular.
I fully agree conciseness is valuable! If a formal system doesn't have a direct route to the action you want, it probably can be improved. I don't think shorthand and formalism are opposites.
I also agree that a lot of people tend to shy away from formalism. I think that's what Dijkstra was lamenting.
By the way, what is a CRUD? John Doe doesn't understand the word. What do you mean, Create-Read-Update-Delete? He understands some of those words, but I'm unsure they mean what he thinks they mean. Are you sure that, without some training on the formalisms of the system (which Sharon obviously had!), you want John Doe to delete something from the system? He might try to unplug the hard drive, maybe that's what he thinks "deleting" means.
Yes, it's easier to teach John Doe to use a limited UI instead of, say, teaching him SQL. But he'll be able to do less complex stuff with just the UI. (And the fact SQL has some English-sounding keywords is helpful, but SQL is an extremely formal system with few parallels to natural language).
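To illustrate how formal SQL actually is despite its English-sounding keywords, here is a minimal sketch using Python's built-in sqlite3 module (the "accounts" table and its columns are invented purely for illustration):

```python
import sqlite3

# In-memory database; table and column names are made up for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, owner TEXT, balance REAL)")

# "Create" - every keyword, comma, and placeholder must be exactly right,
# or the statement is rejected outright.
conn.execute("INSERT INTO accounts (owner, balance) VALUES (?, ?)", ("Sharon", 100.0))

# "Read" - sounds like English, but WHERE, =, and the parameter syntax
# are rigid formal notation, not prose.
row = conn.execute("SELECT balance FROM accounts WHERE owner = ?", ("Sharon",)).fetchone()

# "Update" and "Delete" complete the CRUD quartet, with the same rigidity.
conn.execute("UPDATE accounts SET balance = balance + 50 WHERE owner = ?", ("Sharon",))
updated = conn.execute("SELECT balance FROM accounts WHERE owner = ?", ("Sharon",)).fetchone()[0]
conn.execute("DELETE FROM accounts WHERE owner = ?", ("Sharon",))
remaining = conn.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
```

Misplace a single comma or misspell WHERE and the whole statement fails; no human listener is that unforgiving.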
> you want John Doe to delete something from the system? He might try to unplug the hard drive, maybe that's what he thinks "deleting" means.
Highly unlikely that someone familiar with the system (even in an informal way) would do this, for the same reason that you have no trouble understanding me when I say "fruit flies like a banana". Is there technically a chance that I mean "all pieces of fruit fly through the air in the same manner that a banana flies through the air"? Sure. But it's so ridiculously low that you simply ignore it and are willing to accept the infinitesimal risk of misunderstanding.
Should all programming or user interface work be done this way? Of course not. I'm just saying it would be a very effective level of abstraction for a great many use cases, in the same way that I don't need formal mathematical notation to write a Python statement to print "hello, world".
Of course, the formalism has to be there once you get down far enough, in the same way that showing Sharon how to do an account credit in the CRUD system means that you are altering neural structures with electrical and chemical signals.
However, the person who trains Sharon doesn't need to know exactly what neurons to stimulate in Sharon's brain with exactly what voltage in order to teach her that system - the relatively lofty level of abstraction provided by English works just fine.
I agree with some of what you say. The fruit flies sentence, for example. We humans are decent at disambiguating those, I'll grant you that.
Allow me to add some random thoughts:
- Python's print "hello, world" IS a formal notation. It's just that this particular notation and this particular task are so simple that we can delude ourselves into thinking it's English. But when you move to actual Python scripts, the only ones who believe "it sounds like English" are programmers :) I wouldn't trust my mom to write a Python script, after all.
- Let's go back to our CRUD/office situation example, and allow me to make it a bit more realistic (but still funny):
"Sharon, please print the report."
"Which report?".
"The one I asked you about yesterday."
"Uh, you asked about two reports yesterday. Do you mean the one about fruit flies or about bananas? Or do you want both?"
"Yes."
"Sorry, yes what? I asked you multiple questions!"
"Yes, both reports. I forgot about the other one, but I want it too."
(...)
"Ok, even though I have a terrible headache, I printed your reports. Here they are."
"Oops, sorry Sharon. I didn't mean you had to print them now. Tomorrow would have been fine. Also, please don't get mad, but I didn't want them on my desk. They are actually for Jane on the fifth floor... Didn't I mention that? Also, why did you print them using the expensive printer?"
----
My point is that just doing CRUDs with English is probably fine, but as the complexity of the task approaches that of a general purpose programming language, the level of precision you must use with your language approaches that of a formal system. Which is what programming languages are...
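To make the Python point concrete: a one-liner can pass for English, but even a modest real script stops sounding like it almost immediately (this snippet is invented purely for illustration):

```python
print("hello, world")  # this much really does read like English

# But a typical fragment of an "actual Python script" does not:
reports = {"fruit_flies": 12, "bananas": 7}
total = sum(n for name, n in reports.items() if not name.startswith("_"))
```

A non-programmer can guess at the first line; the generator expression in the last one is pure formal notation.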
You've hit on an important point here. When we instruct our machines in natural language, they are surely going to have to be able to ask us questions to resolve ambiguities and fill in details we haven't specified.
I don't think EWD's point is that we don't need natural language at all; just that it's not a good way to program a computer. Which I do believe to be true.
We can explain to a human how to fill out a form because the human has probably seen a lot of forms before. Humans make a lot of assumptions that often end up being wrong: but enough of them are correct that they're still useful.
We trust computers to be unimpeachably accurate because we as humans are not. If computers need to make the types of assumptions that humans do, then they lose a good deal of their accuracy (and their usefulness).
Human language is also visually difficult to read. The biggest improvement that symbolic languages (specifically, modern programming languages) make is the use of spacing and symbols to break apart complex processes into sub-sections, loops, etc.
Furthermore, I disagree with your statement that "natural language is a standard we already have and works fine." Language is not static, nor is it standard. Sure, we may have "standard" grammar rules, but even those can vary from region to region and many people don't follow the rules on a day to day basis. It's not a static target, so developing something that could interpret natural language means developing an artificial intelligence capable of taking nuance, context and the like into account.
EWD was simply claiming that such a system applied to general purpose computing would be so complicated as to be wildly impractical.
>> This way you essentially wouldn't need to learn a new set of incantations - just tell the damn thing what you want the damn thing to do, dammit.
I'm going to guess that you've never given a set of requirements to a programmer before. Programmers ARE the human interface to computers, as they are often writing software to specifications created by someone else. Many of the problems arise in the ambiguity of the human-human communication. Another part comes from a lack of specificity combined with different ideas about how to handle unspecified cases. Your idea of what is obvious is not the only one. Some people lack domain knowledge that is assumed in requirements, which leads to poor choices where specifications are incomplete. In the end, natural language assumes a broad swath of "common sense" that computers do not have yet.
I'll respond to your points, but was the ad-hominem really necessary?
> In the end, natural language assumes a broad swath of "common sense" that computers do not have yet.
Absolutely. I'm not at all saying that we'd have had this last Tuesday if we'd just take our heads out of our asses - I'm saying it's something we should strive toward and not ignore.
> Many of the problems arise in the ambiguity of the human-human communication.
Human-Human communication works, and works well - once again, we do it every single day, all the time.
Do we encounter problems with ambiguity? Sure. But they are by far the exception, not the norm. After all, forms get filled out, driver's licenses get renewed, complicated Starbucks orders get filled - these common use cases work.
By contrast, have you ever had this fun experience with a terminal program?
> quit
Unknown command: "quit"
> exit
Unknown command: "exit"
> shutdown
Unknown command: "shutdown"
or my favorite:
> quit
Unknown command "quit." If you want to close the program, type "exit."
Simple English statements like these work extremely well in human-human communication and hardly work at all in human-computer communication. I'd just like us to get from A to B, that's all.
> There is nothing magical about human brains that would make them theoretically impossible to express in software in such a way that we can give a program natural language directions
I think we are still a long way from understanding what really happens inside our brains.
About telling a computer what to do: They don't 'do' anything. They run programs. (Free after Weizenbaum)
> I think we are still a long way from understanding what really happens inside our brains.
Agreed 100% - I'm in no way saying that it's easy or will be done in our lifetime or the next 10 lifetimes - just that it's not impossible because we're not magic and we do it every damn day.
> About telling a computer what to do: They don't 'do' anything. They run programs. (Free after Weizenbaum)
What I want is (to reference an example in another comment) to go to a command line and type "Copy the report to the share" and (if I'm on a linux box) have my computer translate that to "cp /path/to/report.pdf /path/to/share/" without my ever having to know what it's doing behind the scenes.
Whatever categorical bucket someone wants to put that in doesn't matter to me whatsoever - all I'm saying is that's what I want to see, and that's what I think we should work toward.
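A toy sketch of that kind of front end (the phrase patterns are entirely invented, and a real system would need vastly more than string matching, but it shows the interaction I mean):

```python
import re

# Invented lookup table mapping a few canned English phrasings to shell
# commands. This only illustrates the desired interaction; genuine
# natural-language understanding is the hard, unsolved part.
PATTERNS = [
    (re.compile(r"copy the (\w+) to the (\w+)", re.IGNORECASE),
     lambda m: f"cp /path/to/{m.group(1)}.pdf /path/to/{m.group(2)}/"),
    (re.compile(r"delete the (\w+)", re.IGNORECASE),
     lambda m: f"rm /path/to/{m.group(1)}.pdf"),
]

def translate(request: str) -> str:
    """Return the shell command for a recognized English request."""
    for pattern, build in PATTERNS:
        m = pattern.fullmatch(request.strip())
        if m:
            return build(m)
    raise ValueError(f"Sorry, I don't understand: {request!r}")

cmd = translate("Copy the report to the share")
# cmd is now "cp /path/to/report.pdf /path/to/share/"
```

The gap between this table lookup and actually understanding arbitrary requests is, of course, exactly the gap the thread is arguing about.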
What'd help get there is an object store instead of a file system. We continue to hang files on the ceremonial file tree like Xmas ornaments and call it an OS feature, but nearly zero real apps are happy with that. Everybody implements an object store inside a file and keeps all their crap organized in there (email folders; docx files; project databases; and on and on).
When will we get an OS that lets me persist my objects, uniquely identify them with a uuid plus arbitrary attributes (print date??? give me a break), migrate and cache them anywhere and sign them for authenticity? That would be a real OS feature.
Sure all that can be cobbled together on one machine with different libraries. But to be an OS feature, I need servers that understand and respect all that. Object browsers that let me create a relation to view pertinent objects. Security managers that limit access to apps with digital authority etc. All on the network.
The biggest showstopper here is: how do you email your objects to some client after you are done with them? Do you use a different representation? If so, why don't you just use the network representation all the time?
It's not a new idea, nobody ever was able to make that kind of storage work.
Strange claim. Network representation is always different from local representation; I don't know what that could mean.
As for making it work, there's no obstacle. Implementation is straightforward. And since any current file system API is trivially implementable on top of it (create a relation using parentDir, filename, {dates}) there should be little integration issue.
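A minimal sketch of that claim: objects keyed by UUID with arbitrary attributes, where the traditional file-system view is just one relation over those attributes (the API and the example attributes are invented for illustration):

```python
import uuid

class ObjectStore:
    """Sketch of the proposed store: opaque objects keyed by UUID,
    with arbitrary attributes attached. Invented API, illustration only."""

    def __init__(self):
        self._objects = {}  # uuid -> (data, attributes)

    def put(self, data: bytes, **attributes) -> str:
        """Persist an object; return its freshly minted UUID."""
        oid = str(uuid.uuid4())
        self._objects[oid] = (data, dict(attributes))
        return oid

    def get(self, oid: str) -> bytes:
        return self._objects[oid][0]

    def find(self, **attributes):
        """Return UUIDs of objects matching all given attribute values."""
        return [oid for oid, (_, attrs) in self._objects.items()
                if all(attrs.get(k) == v for k, v in attributes.items())]

# The familiar directory/filename lookup becomes one relation among many:
store = ObjectStore()
oid = store.put(b"report contents", parentDir="/reports", filename="q3.pdf")
(found,) = store.find(parentDir="/reports", filename="q3.pdf")
```

Anything a path can express is just a `find` on parentDir and filename attributes, while other attributes (signatures, cache hints, arbitrary tags) ride along on the same objects.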
This could be a personal shortcoming, but I am frequently left clueless as to what was meant. For low-volume interactive situations, asking for clarification is fine as long as the domain is simple.
Gathering requirements is generally done in natural language, but it's a very slow error-prone process. Even after sign-off on requirements it's pretty standard for them to be wrong in critical ways. Frankly this is the part of many software development projects that dooms them to failure.
Ignoring the difficulty of actually getting precise natural language, you'll still get to the point where no one can understand the language. If you don't believe me, go read some Kant.
I'd appoligize for the rambling comment with grammatical and spelling mistakes, but they further my point :)