The computer in TOS was almost horrifyingly robotic. It was used sparingly by the crew, and technology was generally seen as a necessary evil (there was even an episode where a new computer system threatened to take away Kirk and co.'s jobs).
It wasn't until TNG that the computer became a friendly, integral part of the Enterprise. The crew widely acknowledged and were comfortable with the fact that the computer could do all of their jobs, but that never became an issue.
This shift corresponds with the move away from mainframes toward personal computers, and with blinking lights giving way to GUIs in film and television depictions of computers. I will now resist the urge to credit Apple for helping usher in such a shift in culture.
I'm not sure the computer could do all of their jobs. A lot of their jobs were judgement calls, often based on "gut feel". I never got the impression the people there were completely redundant.
"The crew widely acknowledged and were comfortable with the fact that the computer could do all of their jobs, but that never became an issue."
...until Barclay became the computer (and could presumably do all of their jobs at the same time), though that didn't last too long.
https://www.youtube.com/watch?v=g4AEBiyn8Rs
How many of you use voice commands? (e.g. Siri, recording voice notes with Evernote, etc.)
The primary problem with copying the Star Trek computer is the user interaction. One of the biggest reasons all of the Star Trek series use a voice-command computer is ...
For directing.
Otherwise, when the crew entered a command, recorded a log, etc., they'd either always be shouting it out or making us watch/read the creation of text. It was quite brilliant from a directing perspective, but do you want everyone around you knowing how often you go to Facebook/Hacker News/Gmail? The same goes for personal log entries. Yes, it has its place when you're in isolation, but it's quite awkward when you're around others.
I use Google Now's voice search almost daily, along with other voice commands like "what song is this?" and "set a timer...". Personally, I wish my desktop had Google Now and voice search.
I've especially found voice search to be the more socially acceptable way to search for something when conversing with a group of people. For instance, during a discussion about movies at a bar with friends, tapping the phone and asking it a question fits into the dynamic far better than putting your head down, typing it into a keyboard, and waiting for results.
Asking the question aloud shows everyone that you didn't suddenly start texting someone or checking Facebook mid-conversation, but rather were searching for something to add to that conversation. Same goes for "what song is this?".
I almost forgot my all-time favorite while driving (whether alone or with a car full of people): "Navigate home."
I use Siri all the time for setting appointments and adding notes/reminders. It's also good for finding out when games will be on at the local sports bar (so I can get a seat before the rush).
So for me, it's become exactly like the computer from TNG.
The only place I could really see myself using it would be in the car; as much as I love Google Now, the only time I use the voice commands is when I'm showing someone how great they are.
As with most things Google nowadays, their vision of the future seems to be at odds with my desire to remain anonymous and maintain my privacy. As I Swype this on my Galaxy S3, the Google Search app remains un-updated because it wants all sorts of pervasive permissions that I just don't want to give it. Why does the Google Search app want access to my text messages and contacts? I'm sure there is some googly answer, but it just doesn't sit well.
The Google Search app is what handles the voice actions. It needs permission to access your texts and contacts so you can send texts to your contacts by voice.
External access to your personal data is a problem that has to be acknowledged by Google or anyone else offering assistant-type solutions. I see no problem in collecting public information and combining it with one's private information in order to serve one's needs. It's just that the resulting combined data has to stay out of strangers' hands, so I guess it has to be kept private and local (and the tech that does the job open-source, in order to be trusted). I would therefore have no problem if I could be sure that my digital assistant, after being entrusted with my intimate data, would be loyal only to me. The solution provider could still profit by being my supplier of public information.
Google, like many companies, wants the biggest market share, therefore it goes mostly for idiots. The rest will just have to find a way to fit into all that.
Is anyone else turned off by the idea of ubiquitous voice interfaces? I can think, type and process visual information much faster than I can vocalize or process sound, and an interface that mimics natural conversation seems unnecessarily inefficient for a conversation that is, essentially, me talking to myself.
Not to mention it just feels weird. Have you ever used Siri in the middle of a crowded subway platform? It's a strange feeling.
Not that I recall. They only used voice commands when on a starship and communicating with the "main computer." I don't even think they used voice commands on shuttle craft.
In DS9/Voyager, the shuttlecraft have talking computers. On occasion, they use comm badges to execute remote commands like having the computer use the shuttle's short range transporter to beam them back into the shuttle.
I believe this is less about the input method (in this case, speech) than it is about situational awareness. The reason the Star Trek computer is perceived as so smart is because of the thousands of sensors it employs inside and outside of the ship, the other seemingly limitless data sources it has to pull from, and of course the ability to parse all of that data into something usable when needed.
If you're a member of the Enterprise crew, Star Trek's computer knows your vital signs, your medical history, your food preferences (replicator), your entertainment preferences (holodeck), your sleep patterns, your work history, your travel history, your current whereabouts (combadge), your relationships (Riker's combadge sure does spend a lot of time around Troi's combadge), your psychological status (log entries), etc.
This is where Google and many others are headed, particularly with the advent of wearable computing. If only we could rest assured that their intent is as ostensibly benevolent and altruistic as that of the Enterprise computer.
This is a nice goal; I appreciate the answers to questions that are increasingly showing up on DDG, Google, etc. But I find the idea of a Star Trek computer inadequate as the overall vision of what a search engine should be.
Most of us search for lots of things other than just information. We search for experiences: a video to watch, a story to read, a game to play, funny comics, music, a wacky interaction with a stranger. We also search for capabilities: a site that lets me buy something, enter a contest, execute some code, sign a petition, communicate with people, create an image.
And even when we search for information, often it is not simply a one-time answer to a question. I might search for a site that will give me ongoing updates on something: the weather, a webcam. Or I might want to read an article to help me master a topic.
(Surely the folks at Google know all this. I have to wonder how much of his own ideas the writer is projecting onto them.)
I'm all for this. You can talk much faster than you can type, and that's at a normal human speaking pace. We can actually talk much faster than that and still be pretty understandable to each other, and there's no reason a computer couldn't understand us talking faster than that.
I've recently been testing this theory out by talking to computers as fast as possible to see if they can understand, usually through customer service lines, but sometimes with Siri. The last time was when I activated my credit card and had to read off the numbers. It had no trouble understanding me at a pretty ridiculous speed, much faster than a person could write down or remember.
In the interest of science: I tested out Siri just now, talking as fast as I can. I asked for: "directions to a violin", "make an appointment for 2:15 am", "what is the temperature". 3 for 3. That thing can handle some pretty fast input!
Interesting data I found surfing around:
Typing speed: 40 wpm
Talking speed: 150 wpm
Reading speed: 250 wpm
(up to 1000 wpm with 50% reading comprehension if you believe speed reading websites)
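To put those numbers side by side, here's a rough back-of-the-envelope comparison; the 600-word message length is an arbitrary example of mine, not from the figures above:

    # Rough time to get a 600-word message across at the rates listed above.
    rates_wpm = {"typing": 40, "talking": 150, "reading": 250}
    words = 600  # arbitrary example length

    for mode, wpm in rates_wpm.items():
        print(f"{mode:8s}: {words / wpm:.1f} min")

    # typing  : 15.0 min
    # talking : 4.0 min
    # reading : 2.4 min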
Clearly pictures are worth a thousand words, so words per minute don't hold a candle to maps, charts, and other visualizations. This makes me think reading will still be a big part of the interfaces of the future. Perhaps we'll input with voice but receive results "on screen." Haha. Ok, maybe Star Trek jokes still aren't mainstream yet. When this is all a reality, will they be funny then? =)
Meh, I hit about 400 wpm without "speed reading" techniques; that's just how fast I read. If I don't really understand the material or if my short-term memory isn't working well, I'll go back and re-read it.
The thing is, slowing down between words hurts comprehension rather than helping it. The bottleneck in my system is short-term memory. The translation of letters to words to sentences to concepts is largely automatic; but if there's a pause, that automatic process stops. If I'm talking in person with someone and they say half a sentence, then stop and "uhm" for a while, I completely lose the thread of the conversation.
I find it pretty believable that others are twice as fast as I am at the letters on a page->concepts translation.
That bit about re-reading is the primary reason I think reading would be superior, even if I could speed up speech: saying "what?" is way easier when you are reading than when you are listening.
(As an interesting aside, I know blind people often use text-to-speech software set to run much faster than a normal human can talk. Apparently they start out at a normal speed and then slowly increase it. I've been talking for quite some time about getting some of that gear and learning how to use it, but I haven't done so yet. I suspect it would suffer from the same "difficult to repeat just the bit you want" problem, but I haven't tried.)
Exactly. It's easy to trick yourself into thinking you're reading, when all you're doing is scanning through a lengthy passage and skipping past words you assume are insignificant.
Maybe because some have tried it and discovered they didn't really retain what they read. This might easily be a personal thing, not an indictment of speed reading at all, only that some people can't absorb printed matter quickly.
In other words, the same training with the same goal, but completely different outcomes.
Another issue is content. Some of what I read requires great concentration and repeated scanning of complex passages. Other content can be read quickly simply because it doesn't require much from the reader. So one might say that not all reading is created equal.
Example: how quickly can one absorb "It was a dark and stormy night"? I would guess pretty quickly. On the other hand, "Time flies like an arrow, but fruit flies like a banana" might take a little longer. :)
The rate at which I can consume material definitely varies depending on the material. I read philosophy a good 2-3x slower than literature.
It's a little bit disingenuous to suggest those speaking of speed reading are saying, "1,000 WPM ALL THE TIME ALL THE ABSORPTION IRRESPECTIVE OF MATERIAL!" - I don't think anybody is saying that.
This is a great example of an internally sold and shared vision. I think that can be a powerful thing. I wish more places I've worked had this strong a vision of what they're aspiring towards down the line.
Siri and the Google app with speech recognition are already pretty interesting, but the biggest reason for not using them is that you need to touch the device to get them started.
If there was a way to keep such an app always open and just say "computer" and then ask the question instead of having to press a button, it would be much more useful.
I don't see why it wouldn't be possible.
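For what it's worth, an always-open app triggered by the word "computer" is roughly what wake-word (hotword) detection does. A minimal sketch of the idea, assuming Python's third-party SpeechRecognition package and a hypothetical handle_query() hook; none of this is what Siri or Google actually expose:

    import speech_recognition as sr  # third-party SpeechRecognition package (assumption)

    WAKE_WORD = "computer"

    def handle_query(query):
        # Hypothetical hook: hand the query to whatever assistant/search backend you like.
        print("Query:", query)

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)
        while True:                      # keep the "app" open indefinitely
            audio = recognizer.listen(source)
            try:
                text = recognizer.recognize_google(audio).lower()
            except (sr.UnknownValueError, sr.RequestError):
                continue                 # no intelligible speech / service unreachable
            if text.startswith(WAKE_WORD):
                handle_query(text[len(WAKE_WORD):].strip())

In practice you'd want a small, low-power wake-word detector rather than continuous full recognition like this, both for battery life and for privacy.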
I remember my Motorola StarTAC phone, which I fancied as Capt. Kirk's "communicator," and its follow-up a few years later, the Razr, was a definite improvement, so I fully expect a Star Trek-type computer in the not-too-distant future.