The web/cloud does not automatically make software "smart". Programmers make software smart. Everything else is an implementation detail.
Your addressbook does not automatically hook up with trusted friends to update because you just glossed over about five or six Certifiably Hard Problems such as a) proving identity, b) disambiguating names, c) trust, and d) doing it all with a GUI which will not cause a big-thinker-no-technical-skills-marketing-consultant to say "Why do the freaking engineers make it so I need a PhD in graph theory to use my freaking address book? How hard does an address book need to be? I put a name in once, I type it again, it comes back out? Sheesh, do your jobs, people!"
Sometimes the assumptions that you make to make software smart will make users go postal.
I once had a coworker who was a smart guy. He helped design the guidance system for the HARM missile. But one day he called me over to his office to ask for help. Turns out Microsoft Word was automatically inserting text while he was typing, and he didn't know how to get it to stop.
The smart software's insertions would have been correct for the kind of document most of the general populace writes, but not for his edge case.
Some of what Seth was talking about was just bad or lazy programming; the AM/PM default is a good example. Some of it could be taken care of with Bayesian filters or something at that level (the AM/PM default could be handled that way too). Make things mildly adaptive and unobtrusive, and users come out ahead.
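To make "mildly adaptive" concrete, here is a minimal sketch, not anything from the article, of a calendar that learns an AM/PM default from what the user has actually scheduled before; the class and method names are made up for illustration:

```python
from collections import Counter

# Minimal sketch: learn an AM/PM prior from the user's past meetings
# and use it as the default for an ambiguous entry like "2:00".
class MeridiemGuesser:
    def __init__(self):
        # counts[hour] tallies how often the user really meant AM vs PM
        self.counts = {h: Counter() for h in range(1, 13)}

    def record(self, hour, meridiem):
        """Remember what the user actually meant for this hour."""
        self.counts[hour][meridiem] += 1

    def guess(self, hour):
        """Default for an ambiguous time; falls back to PM for typical
        working hours when there is no history yet."""
        seen = self.counts[hour]
        if seen:
            return seen.most_common(1)[0][0]
        return "PM" if hour in (12, 1, 2, 3, 4, 5, 6) else "AM"

guesser = MeridiemGuesser()
guesser.record(2, "PM")        # the user keeps scheduling 2 PM meetings...
print(guesser.guess(2))        # ...so a bare "2:00" now defaults to PM
```

Nothing here nags the user; the default just quietly drifts toward what they usually mean.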
As for addresses, something like Facebook or LinkedIn could be tremendously useful. Instead of storing bits in a database, you could basically subscribe to another person's address info. In fact, accounts on these websites amount to subscribing to others' personal or professional info, and letting them subscribe to yours.
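A rough sketch of that subscribe-don't-copy idea, assuming each contact publishes a small JSON card at a URL they control (the URL and field names below are made up):

```python
import json
from urllib.request import urlopen

# Sketch: the address book stores subscriptions (URLs), not copies of
# other people's data, and refreshes from the owner's latest version.
subscriptions = {
    "alice": "https://example.com/alice/card.json",   # hypothetical URL
}

def refresh(subs):
    book = {}
    for name, url in subs.items():
        with urlopen(url) as resp:
            book[name] = json.load(resp)   # e.g. {"email": ..., "phone": ...}
    return book
```

Identity and trust, which the earlier comment calls Certifiably Hard Problems, are exactly what this sketch waves away.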
All the problems he's describing could be solved in about five minutes, if he were using Emacs' org-mode, bbdb, and VM-mode or Gnus to read mail. But only if he'd spent a couple hours learning them first.
There are tradeoffs with every kind of software. That little screenshot he posted is beautiful; time spent making a program beautiful is time not spent making it functional.
The ability to run these little utility programs on the command line is a great virtue of Unix, and one that is unlikely to be duplicated by pure GUI operating systems. The wc command, for example, is the sort of thing that is easy to write with a command line interface. It probably does not consist of more than a few lines of code, and a clever programmer could probably write it in a single line. In compiled form it takes up just a few bytes of disk space. But the code required to give the same program a graphical user interface would probably run into hundreds or even thousands of lines, depending on how fancy the programmer wanted to make it. Compiled into a runnable piece of software, it would have a large overhead of GUI code. It would be slow to launch and it would use up a lot of memory. This would simply not be worth the effort, and so "wc" would never be written as an independent program at all. Instead users would have to wait for a word count feature to appear in a commercial software package.
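On the "few lines of code" point: a bare-bones word counter really is tiny when it only has to read standard input and print numbers. A rough Python stand-in for wc (not the real implementation, which is C):

```python
import sys

# Rough stand-in for `wc`: newline, word, and byte counts from stdin.
data = sys.stdin.buffer.read()
words = data.decode("utf-8", errors="replace").split()
print(data.count(b"\n"), len(words), len(data))
```

Run as `echo "hello world" | python wc.py`. The GUI version of the same thing would dwarf these three lines, which is exactly the quoted passage's point.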
Normal people don't use command lines: they're not discoverable, most people can barely type, they're a pain to use, easy to forget, and so on.
And compared to 1998, when the article you cite was written, normal people now have supercomputers sitting in their living rooms, where the launch time for thousands of lines of code is, for all intents and purposes, instant.
We have APIs, Flash, Silverlight, WPF, Java and many, many more to take away the pain of writing GUI code.
"time spent making a program beautiful is time not spent making it functional"
Try telling that to all the people who bought an iPhone. It's time to man up and accept the fact that a good UI, and hence a good UX, is essential to modern programming.
But feel free to go on living in 1998. I see the appeal, you get to point at your emacs screen and do the old 'all I see now is blonde, brunette, redhead...' ;)
Command Line or GUI is a false dichotomy. For example, both of you cite Emacs as a command line tool, but I haven't used Emacs in a terminal for years. The Emacs I use today is a Cocoa application with pull down menus, drag and drop, etc. (I still wouldn't call it pretty, or a good example of user interface, but not because it is a CLI.)
On the contrary, I think the command line was reborn in 1999 in the form of Google. It should be relatively straightforward to work out how Google succeeded where terminals failed at being effective interfaces for normal people.
> Compiled into a runnable piece of software, it would have a large overhead of GUI code. It would be slow to launch and it would use up a lot of memory. This would simply not be worth the effort, and so "wc" would never be written as an independent program at all. Instead users would have to wait for a word count feature to appear in a commercial software package.
Stuff like this is all through the Symbolics Lisp machine OS. Any object that could print itself out to a stream of text automagically got a hot-link back to an "inspector" of itself in any text window it printed itself out to. No extra GUI programming was necessary; text utilities simply became part of that GUI navigation. (The magic of :around methods in the MOP.)
We have low expectations of operating systems today, expectations formed in the days of green-screen machines that were many orders of magnitude slower, had many orders of magnitude less memory, and forced you to count every byte. We have low expectations of GUIs, formed by GUIs designed when machines struggled to run them.
It is stupefying that this article was written now, with not a single reference to the semantic web or to artificial intelligence research.
This problem was identified decades ago, years of research have gone into it, and some solutions have been identified. Granted, most of those solutions are not workable, precisely because they try to be too smart, which ends up not working either.
But just saying "why is my meeting scheduled at 2am?" or "why doesn't it recognize names?" or calling out "This is the end of dumb software!" is being dumb yourself. Seth could've at least done a little bit of research.
Unfortunately, often it ends up being very annoying when the computer tries to be smart. Not to say it can't be done, but the PC's helping hand is not always welcome.
Yeah, when a program tries to be smart and repeatedly, helpfully "corrects" what you're doing, it often just gets in the way or corrupts your work.
Word is the first example that comes to mind (for me, and for many people). A contrasting case is the way git just punts on hard merge decisions: coming up with accurate heuristics for complex merging is very difficult, but the programmer responsible for the merged content can usually make the decisions easily. This frees git up to focus instead on storing and propagating those decisions well.
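A toy illustration of that punt-to-the-human policy (nothing like git's real merge machinery, which diffs and aligns hunks; this just shows the decision rule on pre-aligned lines):

```python
# Toy three-way merge over already-aligned lines: take whichever side
# changed, and hand anything genuinely ambiguous back to the human.
def merge3(base, ours, theirs):
    merged = []
    for b, o, t in zip(base, ours, theirs):
        if o == t:            # both sides agree (or neither changed)
            merged.append(o)
        elif o == b:          # only their side changed
            merged.append(t)
        elif t == b:          # only our side changed
            merged.append(o)
        else:                 # both changed it differently: punt
            merged.append(f"<<<<<<< ours\n{o}\n=======\n{t}\n>>>>>>> theirs")
    return merged

print("\n".join(merge3(["a", "b"], ["a2", "b"], ["a", "b2"])))
```

The interesting work (and the reason the real thing is hard) is all in deciding which lines correspond in the first place.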
A part of the problem is the top-down, mandated idea of what is "correct." If these things were adaptive in a non-intrusive way, every correction could be as uncontroversial as fixing "teh" or "MIstake." I would love to stop telling new word processors to stop "correcting" Smalltalk. I wonder if there's any way to carry information like that around with you. I could try to do 100% of my word processing in Google Docs, but am I ever really going to get there?
Preferences like this need to be tended by operating systems. If I tell one text editor widget about "Smalltalk," there's no reason why all of them across the system shouldn't know about it.
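A minimal sketch of what "tended by the operating system" could mean in the small: one shared word list that every text widget consults and appends to. The file path is hypothetical; a real OS service would own the store and expose an API rather than a bare file:

```python
from pathlib import Path

# Sketch: a system-wide custom dictionary shared by every editor widget.
# The path is made up for illustration.
DICT = Path.home() / ".local" / "share" / "custom-words.txt"

def learned_words():
    return set(DICT.read_text().split()) if DICT.exists() else set()

def learn(word):
    DICT.parent.mkdir(parents=True, exist_ok=True)
    with DICT.open("a") as f:
        f.write(word + "\n")

learn("Smalltalk")                       # tell the system once...
print("Smalltalk" in learned_words())    # ...and every editor can know it
```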
Fully agree - I'm much more receptive if I get a little notification saying 'new feature, wanna see?'.
I actually thought Microsoft's Clippy was a really good idea in principle; where they went wrong was in giving Clippy's options the appearance of a modal dialog, which people thought they needed to respond to. Also, screen resolutions were typically lower so Clippy took up an undue amount of visual real estate, to the point of being intrusive. But the context-sensitive task helper is now seen on many applications, as is some kind of anthropomorphic assistant on many web pages.
No kidding. I've scheduled stuff at 2am before. Sometimes it's just "fetch laundry from dryer before bed" but once or twice it's been a phone call with people traveling on the other side of the world. "Intelligence" would be at best unhelpful and at worst disastrous.
When there are useful defaults, or usefully filtered and arranged options that we can override if we like, we see the software as smart. When the software thinks it knows better than we do, then it's not really "helping."
My point is that there is a person who created the software who deserves the blame. You can't talk about a computer being smart at all. Either the programmer was smart or they weren't.
If only Stalin's goons had had access to the top five people you talk with and where and when you meet them, all on a centralized network. They could have gone home early to spend quality time with their families rather than stake people out and interrogate them.
This has been solved. The solution is called DCI.
Data Context Interaction. A new paradigm where you design your software according to the end user's mental model. Understand DCI and you should be able to craft software that makes more sense from a usability standpoint. Trygve Reenskaug has explained why the traditional MVC is flawed.
> A new paradigm where you design your software according to the end user's mental model. Understand DCI and you should be able to craft software that makes more sense from a usability standpoint.
If you understand the end-user's mental model, your biggest gains will be from sales and marketing.
There is no contradiction - understanding the user's mental model helps both marketing and code structure. It pays to structure code in ways that ease the absorption of new requirements, and the DCI claim is that they have a better idea of how to do this than MVC or other approaches. I recommend reading the linked article; it's long but worthwhile.
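For readers who haven't met DCI before, a bare-bones Python sketch of its shape (a simplified take on the money-transfer example that shows up in DCI writing, not Reenskaug's canonical formulation): data objects stay dumb, and a context casts them into roles and runs the interaction for one use case.

```python
# Sketch of the DCI shape: dumb Data, a Context per use case, and the
# Interaction living in the context rather than in the data classes.
class Account:                      # Data: knows its balance, nothing else
    def __init__(self, balance):
        self.balance = balance

class TransferMoney:                # Context: the use case users think in
    def __init__(self, source, sink):
        self.source, self.sink = source, sink      # role assignment

    def execute(self, amount):      # Interaction: reads like the mental model
        if self.source.balance < amount:
            raise ValueError("insufficient funds")
        self.source.balance -= amount
        self.sink.balance += amount

checking, savings = Account(100), Account(0)
TransferMoney(source=checking, sink=savings).execute(40)
print(checking.balance, savings.balance)   # 60 40
```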
I mean users like the author of that blog, who thinks that just because he can think of a feature it must be easy to implement. I mean users who create a meeting at 2am instead of 2pm and get mad at the software for not warning them of their mistake. I mean dumb like people who create 10,000 contacts and get mad at the software for not automatically deleting the ones they don't care about anymore.
What he is suggesting is just absurd. If it actually did do what he wanted, he'd get pissed off like the guy the other day who got mad because Apple's Time Machine automatically deleted some year-old backups of his system to make room for the latest backup.
His examples tend to focus on sources of data. Arguably, a desktop program doesn't have to care where its data comes from, and it doesn't even directly need web features to be smart.
For instance, what is the difference, really, between using address book data that's entered entirely manually by the user, and data that may have been partially synchronized from somewhere on the web? As long as it ends up in the format the program expects, it can appear "smart". The program itself doesn't need a sync feature, as long as something can sync that understands its formats.
So the issue, to me, is that programs just need more open data formats, and there need to be more handy services (like sync programs) that deal with those formats.
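A sketch of that division of labor, assuming a hypothetical plain JSON contacts file that the address book reads and a separate little sync tool that merges in records from elsewhere (file names and fields are made up; "newest record wins" is just one possible policy):

```python
import json
from pathlib import Path

# Sketch: the program only ever reads contacts.json; an external tool
# does the "smart" syncing by merging another source into that file.
def merge_contacts(local_path, incoming_path):
    local = {c["email"]: c for c in json.loads(Path(local_path).read_text())}
    for card in json.loads(Path(incoming_path).read_text()):
        current = local.get(card["email"])
        # "updated" is assumed to be an ISO date string; newest wins
        if current is None or card["updated"] > current["updated"]:
            local[card["email"]] = card
    Path(local_path).write_text(json.dumps(list(local.values()), indent=2))
```

Because the format is open, the program never has to know the sync tool exists.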
I honestly don't see what developing for the web does to encourage smarter applications. Is it things like "if I do this, I have to round-trip to the server over the WAN" or "if I do this, I have to ask the asshole middle-tier developer for something"?
In fact, more and more desktop applications ARE web applications; they just don't use a web browser as the client. I suspect that Godin is a little bit behind the curve here, and the momentum is going to shift back toward the desktop (AIR and other sorts of things), where developers can build rich applications without having to fight through the browser compatibility issues that can cripple teams.
One of the apps I'm working on solves his Address Book problem. It aggregates your interactions with others using the Spotlight database and by crawling your chat logs and (probably not in 1.0) interactions on sites.
His point, for the second issue, is that the option should be on by default, and that makes sense.
However, the sad thing about the article is that it shows Seth does not understand software development. There is no separate group of people who make "desktop" software versus "web" software. Mostly it is the same group of people: software developers, and we are all alike; the only difference is the delivery platform.
And the specifics of the chosen delivery platform do not make any qualitative difference to the code that runs on it. Basically, you can make crappy desktop apps and crappy web apps. An address book does not magically become good because it is rendered as an HTML page.
To finish my point: the default state of software is "crap". Anything better must be engineered on top of that underlying "crap". The "why doesn't my software do (blindingly obvious) thing X" articles are amusing (they function as bug reports, or suggest insightful features), but unfortunately they are orthogonal to the actual development of software.
Software will continue to evolve slowly; there is no qualitative leap coming.
A leap may come when we can reduce the cost of development by off-loading more of the work onto computers. However, that is not just around the corner, and it has nothing to do with web development.