Looking at screens to get key information distracts me from my surroundings and seems archaic.
My wife is a sound designer who has opened my eyes to the importance of sound, both in film and in the world. It's not that I was unaware of sounds, but I didn't realize how important they are to centering me in this world and in the made-up worlds of films and games. Try watching a scary movie with the sound turned off; it turns into a comedy.
I think it's unexplored territory that has huge potential to impact the way we interact with the real world, even more so than Glass or HoloLens.
When I listen to music as I walk down the street, I change: my mood, my posture, and the way I look at the world. The music augments the reality around me in a way that visual UX never can, because visual UX is a lens between my eyes and the world.
The iPhone is catching up fast too...my wife's taken to sending emails via Siri (to avoid strain on her hands), and most of the time it gets things perfectly.
The biggest problem is privacy. One of the nice things about touchscreens is that you have a personal dialog with the device that can't be overheard by anyone nearby. That doesn't apply to voice recognition systems, and it can be pretty awkward to dictate an e-mail to a phone in a crowded place.
It would be nice for voice recognition platforms to start being built in. I know there's training data that's needed, but there's some convenience afforded.
The privacy concern _isn't_ necessarily about having something to hide. It's about the consistent hacking of major systems, and exposure of personal data.
Historically this probably wasn't much of an issue, but given that most people now spend hours at a desk on a keyboard, it's likely to become more of a problem. Think of it as akin to paying attention to your posture.
Real easy to say: "Okay Google... navigate to California Academy of Sciences."
What's missing for me is spotify/app specific integration.
For that to really happen in a robust way, I think Google needs to open up Custom Voice Actions.
I agree discovery of these magic phrases needs work.
"Okay Google... Play music" will start the Music app.
"Okay Google... Start radio" will start the NPR app.
Perhaps if I used Google Music the integration would be built out.
I've done a fair bit of interface engineering for the web. Between that and using so much software over the course of my life, I'd say that this applies to GUIs just as much as voice interfaces.
(I'd suggest this is actually a combination of the still-non-trivial nature of NLP, combined with a lack of feedback, combined with the fact that giving instructions is quite hard. Humans overestimate human language's ability to communicate clear directions, as anyone who has done tech support over a phone understands.)
I wonder if NLP research should have started as our ancestors did, with grunts and hoots and cries. Instead it's focused on recognizing full words and sentences while almost completely ignoring inflection.
Another dimension to add with vocal input is directional. If you have mics in all corners of a room, which direction you speak in can affect whether "turn off" operates your TV, your lights or your oven.
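As a toy illustration of that idea (the device names and bearings here are entirely hypothetical), an ambiguous command could be resolved by mapping the estimated direction of arrival to the nearest registered device:

```python
# Toy sketch: route an ambiguous command like "turn off" to a device based
# on the estimated direction the speech came from. Device names and
# bearings are hypothetical.

DEVICE_BEARINGS = {"tv": 0, "lights": 120, "oven": 240}  # degrees

def angular_distance(a: float, b: float) -> float:
    d = abs(a - b) % 360
    return min(d, 360 - d)

def route_command(command: str, speech_bearing: float) -> str:
    """Pick the device whose bearing is closest to the speech direction."""
    device = min(DEVICE_BEARINGS,
                 key=lambda name: angular_distance(DEVICE_BEARINGS[name],
                                                   speech_bearing))
    return f"{command} -> {device}"

print(route_command("turn off", 115))  # turn off -> lights
```

Estimating the bearing itself would be the hard part (beamforming across the mic array); the routing step on top of it is cheap.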
There is a lot of room for improvement in voice processing, along several dimensions.
Taking sections of the last response (or hell, even having every response essentially be wrapped up in some sort of object you can reference in your next query) is what all of these lack.
Even Android's "Search this artist" doesn't quite get there. The lack of context between queries is what murders Siri for me. That, and her seemingly random selection of what goes to Google and what goes to Wolfram Alpha. Sometimes even the "wolfram" verb prepended to a query just doesn't go to Wolfram, no matter what.
A text field in contrast doesn't need any intelligence, nor do buttons.
This is particularly important for people living in non-English-speaking countries but using English in specific contexts (work, gaming, minor hobbies, etc.). Switching languages in audio applications is generally a PITA. And even when you do switch between languages every time, the engines still have huge performance gaps between languages.
Software has become extremely tolerant of multiple languages, IMO. Voice recognition interfaces are not so mature yet, in my experience.
Now, granted, this is a specific use case, but, you know... "explore the space" and all that. (more cowbell!)
After 111 failed attempts :)
Still, it's a hell of an achievement.
EDIT: to be fair, Ornstein & Smough is a very tough fight even with normal controls.
Also notice the voice recognition fails to recognise some words like "item" even though they are spoken clearly. Almost gets the guy killed at one point.
A VUI breakdown would be inability to understand accents, or non-responsiveness to commands. As a user input, Alexa is pretty well buttoned up.
Geordi: Computer, subdued lighting.
(computer turns the lights off)
Geordi: No, that's... that's too much. I don't want it dark. I want it cozy.
Computer: Please state your request in precise candlepower.
(The scene: https://www.youtube.com/watch?v=OPZnR3Ue1n4)
Computers train users on how to use the computer all the time. It's less ideal than having the computer know everything, but once you know what you can expect from a computer, it's easier to get a good result.
Which would you rather do? Be forced to state your lighting preferences in candlepower, or have the computer learn that when you say "subdued lighting", you mean "12"?
But if I type in "how fast do I need to go to travel 100 miles in 6000 seconds", now it has no idea what I'm talking about and instead gives me a comparison of time from 6000 seconds to the half life of uranium-241.
Now, when I get that result, I don't usually just give up on trying to figure out the answer. Instead I try to figure out what the computer expects me to say. Through some trial and error, I can shorten the query to "100 miles in 6000 seconds" and boom, I get the answer of 60 miles per hour. Instead of natural language, I'm using the search engine like a calculator.
The computer has just taught me how to use it. Ideal? No, but we work within the reality we're given. 12 candlepower is dim for you but for someone with decreased vision, that might be completely dark. The computer doesn't know unless it's taught, and we know from looking at history that users would rather the computer train the user than the user having to train the computer.
What you should have asked is: "100 miles per 6000 seconds to miles per hour", which it will happily convert from the rate you gave to the one you really wanted.
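For what it's worth, the arithmetic behind the quoted query is trivial; a quick Python sanity check:

```python
# 100 miles in 6000 seconds, expressed as miles per hour.
miles = 100
seconds = 6000
mph = miles / (seconds / 3600)  # 3600 seconds in an hour
print(round(mph, 6))  # 60.0
```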
I guess what you're saying is that it should be able to figure that out, but at some point the old phrase "garbage in, garbage out" surfaces. You never told it to convert the unit.
Some phrases exist as a "wow, 1 million people phrase this problem this way, let's throw that in." The fact it can take an easily dictated, albeit strictly phrased problem, and get you your answer is really what I love about it. Now if Siri would just stop sending stuff to Google. -_-
Example format: "Computer, define X as Y"
"Computer, define subdued lighting as set lighting to candle power twelve"
Then the VUI just adds a new entry to the voice commands where saying X results in Y.
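A minimal sketch of how such a "define X as Y" layer could work, assuming a plain alias table consulted before the built-in command grammar (everything here is hypothetical, not an actual Alexa or Google API):

```python
# Minimal sketch of a user-taught alias layer, consulted before the
# built-in command grammar. All names here are hypothetical.

class VoiceMacros:
    def __init__(self):
        self.aliases = {}

    def handle(self, utterance: str) -> str:
        utterance = utterance.strip().lower()
        if utterance.startswith("define ") and " as " in utterance:
            # "define X as Y": learn a new alias instead of executing.
            name, _, expansion = utterance[len("define "):].partition(" as ")
            self.aliases[name.strip()] = expansion.strip()
            return f"learned: {name.strip()}"
        # Expand a known alias; otherwise pass the phrase through unchanged.
        return self.aliases.get(utterance, utterance)

vui = VoiceMacros()
vui.handle("define subdued lighting as set lighting to candlepower 12")
print(vui.handle("subdued lighting"))  # set lighting to candlepower 12
```

The interesting design questions (persisting aliases per user, resolving conflicts with built-ins) are left out, but the core is just a dictionary lookup.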
It still feels a lot like the old text-based RPGs, in that you spend most of your time trying to figure out how to phrase something to accomplish a basic need, while angrily thinking "it would have just been easier/faster to pick up my phone."
It's 2016. How are we still OK with the unreasonable constraints of technology that make us jump through a hoop like a trained poodle to get the treat?
We don't have an audio equivalent of the search engine yet, but that day is not far off.
Devil's advocate though: this seems more like a case of the guy being good enough at the game to win in spite of the voice controls rather than because of them. Compared to a regular controller/keyboard+mouse/whatever there's just no contest in terms of input speed and precision. Not all genres are a good fit for this either. I'd be really interested to see if anyone could make it work with, say, a competitive FPS game.
I can recall the first time I ever saw a computer and how primitive they now look.
Now we have little bots that listen to you and reply with info.
When my two-year-old is forty, we will have Ghost in the Shell.
It's crazy, beautiful, and scary to me that so many of us grew up reading cyberpunk fiction and watching anime (not all of us did), and yet pretty much all of us are actually building that future.
There is a balance between dystopia and utopia though.
We are all working at the Great Game - and the future is going to be interesting, but we can never turn back. So hopefully we keep the balance and get it right.
Controlling the apparatus of government requires thinking in advance. I personally feel that the tech sector's vision is myopically focused on today's profits and not on the future, where it should be looking, with the exception of this most recent case between Apple and the FBI. At least Cook's comments were salient, forward-thinking, and truly for the greater good... Let's hope that invigorates the tech industry as a whole to think about where we are headed.
However, we seem to still be pretty far from natural language interfaces that make sensible inferences about the actions you're requesting and perhaps join multiple data sources to answer your query. There have been a lot of advances, don't get me wrong. But it's a very hard problem that has been worked on for a very long time.
The CLI has the same issue, but at least you can run "man xxx", which I imagine works a lot better in text than it does in audio.
I think Google is quickly getting there with their search interface. I'm always amazed at what a good job Google does when I ask it a question like "what's the name of the instrument powered by steam" and milliseconds later it's showing me info about calliopes.
I wonder if the smartphone age will go away as quickly as it came. I picture a world where we just have smart wearables, like a watch with a tiny visual interface but a powerful audio one (speaker, earpiece, put the watch up to your ear, etc.). It seems a lot less intrusive. I imagine that as we get better with AI and voice recognition, it'll be as practical as a phone. What I'm able to do with Google Now on my watch is fairly impressive today. We already have the technology to understand things in context: "Navigate to Katz's deli" brings up Google Maps directions to the deli, as opposed to a Google search results page about navigating to a cat-themed deli, which was the status quo not too long ago with voice search.
I imagine carrying around this big selfie/Facebook machine, constantly charging it, whipping it out all the time, etc. will be pretty gauche if wearable-only solutions become competitive.
Not to say that content can't shift for the medium, just as it always does. What would an audio Facebook sound like?
For teens and such I can see the big phone never going away, but for most adults, having an inconspicuous wearable just seems like a more refined experience. I imagine there's a logical progression here from desktop > traditional laptop > ultrabook laptop/convertible > tablet > mobile > wearable. You lose functionality with every step, but depending on the use case, it doesn't really matter. For people in my peer group, a wearable that could work without a phone would sell like hotcakes.
It's also less accessible. I'm sure auditory UI is useful in many cases, but it also seems to be more cumbersome in others. In any case, I hope that pervasive auditory UI doesn't become any sort of standard without an accompanying visual/physical interface.
> Try watching a scary movie with the sound turned off, it turns into a comedy
Allow me to be pedantic and say that it is being fully immersed in the context of the movie that really matters. You could probably achieve a similar suspenseful effect with silence plus subtitles, although I'm sure the experience isn't identical. Otherwise, the deaf could never enjoy scary movies, including me.
For whom? To the blind this would be a godsend. From a practical medical perspective, audio is superior because we have decades of experience with effective ear implants to help the hard of hearing and the deaf, but the visual equivalent still eludes us.
Actually, I'd imagine that a good old-fashioned tty is pretty good for a blind person: it's TUIs and GUIs that get progressively more painful.
Source: am blind without my glasses; can imagine preferring ed to emacs, vim, Atom, SublimeText if I had to use an audio interface.
For sure. Different interfaces disadvantage different classes of people. There is no silver bullet; I'm trying to point out that an exclusively audio/voice-driven UI would not be desirable.
> we have decades of experience with effective ear implants
The problem is multi-faceted. Hearing loss, especially from a young age, often leads to difficulty speaking -- it is no use if a voice-driven system can't understand you in the first place.
And while cochlear implant technology has helped a lot of people, it is by no means a cure, and there are many, many others that don't benefit enough from assistive technology to achieve functional equivalence (which is the key phrase when talking about accessibility). I have a cochlear implant and haven't worn it in years, because it really doesn't help.
Well, I think blind people would disagree with you.
> I hope that pervasive auditory UI doesn't become any sort of standard without an accompanying visual/physical interface.
Any speech interface could be trivially translated to a text interface, right?
> Any speech interface could be trivially translated to a text interface, right?
Pretty much, which is why UIs should not be exclusively auditory, that is, delivered without an accompanying visual interface (text or otherwise). Ordering the Echo Dot verbally is a cute gimmick given its premise, but it would really suck if otherwise useful products and services were only usable through audio.
Hopefully the audio UI trend does not follow the obsession over touch screens: a rapidly adopted, de facto standard driven by tastemakers that leave little consideration for others that might prefer an actual keyboard or other physical affordances.
I hope I'm not being a super pedantic ass for pointing out that the 'immersive' component of film is the audio, not the visuals.
That's a non-falsifiable opinion, really (even if it does apply to the majority of the population). I'm living proof you can enjoy movies without the audio.
It's the sum of our experience that colors our perception -- almost irrevocably in this case, since I imagine it would be difficult for the typical person to really be able to enjoy something in complete and utter silence.
I am not looking to equate immersion with enjoyment, and by no means do I intend to disrespect the manner by which you enjoy a type of media. My apologies for coming off that way!
When I refer to 'immersive media' I am referring to the 360-degree, omnidirectional dispersion pattern of sounds and our similarly omnidirectional hearing of those sounds. This is an 'immersive experience' as opposed to the 2-dimensional or stereoscopic experience we get with visual media. Television and film screens fire light directly at the eyes; even in IMAX situations the film is never experienced behind us. That isn't immersive, whereas, say, a VR headset can potentially offer this type of immersion. But since that technology is still in its infancy, I think it's too early to call it fully immersive like audio is.
Then that is splitting hairs over a definition of immersion, and quite unrelated to how the word was used in my original comment. Had I instead said "fully engrossed," my point would still hold, and you would not have one.
I understand you were being "super pedantic," but if you're going to do that, then you should be super precise in the pedantry, otherwise you're arguing a strawman.
Don't you mean oral or aural?
no thank you... i will use my hand
TV remotes are awesome because they have physical buttons, and they're fairly dumb... almost no chance of issues.
For the few times when I may need to walk around the house a bit more, it's a non-issue.
This is something I've been thinking is becoming more problematic as well as an opportunity for real ubiquity. I have 3 separate devices nearby that are Google Now voice activated (the newer devices support this even if the screen is off), and they will sometimes trigger at the same time accidentally.
Since the processing is cloud based, and they know my identity, why don't the devices recognize this fact and cooperate? Instead of just 7 beam-forming mics in the Echo, if you have two units within hearing distance you could have the benefit of 14 and a unified response. Don't tie the request and response to a particular device; instead, think of it as a ubiquitous network that moves with you as you walk around the household. You should be able to continue your conversation from one room to the next seamlessly.
The echo and noise reduction software that I'm aware of can't really do that in a reasonable fashion.
With current solutions, you've got one DSP that's receiving all the audio streams simultaneously, and they need to be exactly synchronized in time. Then, using basically pattern-matching, it figures out what direction the user's voice is coming from, and combines some/all of the audio streams together to eliminate environmental noise and make the speech as clear as possible.
To do this with separate devices, you'd want extremely precise time synchronization. Which is possible, but I wouldn't want to implement it.
The extra processing and synchronization would take longer, and delay input to the speech recognition engine. I don't think it would enhance the user experience.
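A toy numerical illustration of why that synchronization matters: with a delay-and-sum combiner, two perfectly aligned streams reinforce each other, while even a sub-millisecond clock skew between devices can cancel parts of the signal. (The 1 kHz tone standing in for speech, and the 0.5 ms skew, are made up for demonstration; this is not a real DSP pipeline.)

```python
# Toy delay-and-sum illustration: a 1 kHz tone stands in for speech; a
# 0.5 ms clock skew between two "devices" turns constructive summing into
# near-total cancellation (0.5 ms is half the tone's period).
import math

RATE = 48_000  # samples per second

def signal(t):
    return math.sin(2 * math.pi * 1000 * t)

samples = range(480)  # 10 ms of audio
synced = [signal(n / RATE) + signal(n / RATE) for n in samples]
skew = 0.0005  # 0.5 ms offset between the two devices' clocks
skewed = [signal(n / RATE) + signal(n / RATE + skew) for n in samples]

def power(xs):
    return sum(x * x for x in xs) / len(xs)

print(power(synced) > power(skewed))  # True: skew costs you the array gain
```

Real speech is broadband, so the cancellation is messier than this worst case, but the point stands: cross-device beamforming needs tight, shared timing.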
Ah yes, the rallying cry of the person not doing the actual development work... In my experience, rarely is _anything_ "so simple and easy to implement".
Baidu trains the voice recognizer by adding all kinds of noise to the training data. I think it might be easier to do that than use multiple microphones. The neural net learns to do the difficult process of separation of useful data from noise.
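That augmentation idea is straightforward to sketch: mix noise into clean training audio at varied signal-to-noise ratios so the recognizer learns to separate speech from noise itself. (The waveform and SNR values below are made up for illustration; real pipelines use recorded noise, not just Gaussian noise.)

```python
# Sketch of noise-based data augmentation for speech training data.
import random

def add_noise(clean, snr_db, seed=0):
    """Return `clean` mixed with white noise at roughly `snr_db` dB SNR."""
    rng = random.Random(seed)
    signal_power = sum(x * x for x in clean) / len(clean)
    noise_power = signal_power / (10 ** (snr_db / 10))
    scale = noise_power ** 0.5  # standard deviation of the noise
    return [x + rng.gauss(0, scale) for x in clean]

clean = [0.1 * ((n % 20) - 10) for n in range(2000)]  # stand-in waveform
# Train-time augmentation: the same clip at several noise levels.
augmented = [add_noise(clean, snr) for snr in (20, 10, 5)]
print(len(augmented), len(augmented[0]))  # 3 2000
```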
I want to have an Echo in every room, and I don't want to have to remember all their different names!
> Since the processing is cloud based, and they know my identity,
Interesting, so everything said in that room gets processed and potentially sent to Google for indefinite storage? What a 1984-style luxury.
Having all of the devices listening all the time would be a bandwidth and power nightmare, if not for the sender, for the receiver.
https://history.google.com/history/audio has a list of all audio recorded
Harder for things with cellular data, though.
Exactly why I think none of this is worth it (Echo, Google Now, Siri, smart TVs, etc.). Especially given the current applications of the third-party doctrine, you are giving up the right to privacy for everything that is said in your home.
I am very much hoping they fix it in the future and add a software layer to combine/route commands with one single wake word.
If all of "Alexa" was included in a disconnected local database I bet it would still be as appealing.
Rosie on the Jetsons didn't have to "phone home".
I think they are rather creepy, because it's so obvious there is (or, could be) a hidden agenda.
If Amazon can somehow monetize my primary use of Echo as a glorified kitchen timer I will be impressed.
It occurs to me that the background noise in your home actually reveals a whole lot about the self:
- What you're listening to and when
- What you're watching and when
- What type of gentleman's material you enjoy and when
- When you leave home and get home
- When you wake up, when you go to bed
Some of these can be limited by the size of your house, but the trend in urban dwellings has been towards smaller so one unit could presumably capture every sound in your home.
The tech insanity has really gone far...
When I wake, sleep, leave, and come home could be monitored by Echo, but it's also already being monitored by other devices I own, and it's data I'm not particularly concerned about at the moment.
Not sure how other people feel about talking out loud at home, but as someone who also lives alone (in a 250 sqft apartment) and always wears headphones, I can't really imagine talking out loud. Just seems weird for some reason. I never use Siri either.
Wonder if that's a living alone thing, or a small apartment thing, or ...?
I'm less than enthusiastic about a $180 kitchen timer that uploads everything I say to the cloud for analysis, even if I understand that the analysis is to some degree necessary to improve the voice recognition.
FREE vs. $99? No contest there my friend
Aside from that, I didn't purchase the Echo with the intent of it being primarily a kitchen timer. It just so happens that after owning it for over a year, my usage of it is mostly limited to that.
My usage is probably around 85% timers and alarms, 10% streaming music, 4% shopping lists, and 1% everything else.
Do you think you've used it like you thought you would, or did you have ideas about how you might use it that didn't pan out, or that the device didn't work very well for?
I really didn't have a particular use case in mind at the start, but I was (and still am) impressed by the sound quality from such a small speaker. It's nice to be looking in the fridge and say "Alexa add X to my shopping list" or when my hands are covered with flour say "Alexa set a timer for 30 minutes" or whatever. And for those things it's worth the cost to me.
Most of the features that have rolled out just seem gimmicky, though. Take the news briefing: It either provides too little info to be useful, or it drones on and I get annoyed by the voice which, while it sounds natural compared to Microsoft Sam, still feels cold and artificial. In general I like having more control over my internet actions. I'll never use it to order a pizza or anything from Amazon because I don't know what happens if it misinterprets me or I make a mistake. And the third party apps are clunky ("Alexa, ask X to do Y").
To sum it up, aside from the very basic features I've used since day one it just feels like a toy.
It's a deflationary race to the bottom. The bottom is a hell where everything watches you and sells absolutely everything about you to whomever can afford to buy the data.
Human personal assistants were connected to the outside world -- how else would they make appointments and reservations, book flights, find out what the weather would be, etc.? The whole point is to be connected to the outside world, automatic or no.
There's a difference between the "always on" communication these devices have and communication the user specifically requests.
When I want to make an airline reservation, I'm requesting the device to send the booking information to the airline. I'm not asking it to send a recording to the mothership of everything that happened in my home for the last 5 hours, which a human assistant would never do.
You have to be able to trust that Echo isn't recording everything you say, unless you prefix it with "Alexa", and that this behavior will never change (say this is the behavior for the average user, but with a police warrant, they're able to tap your Echo).
I'm part of the group that thinks the tradeoff is worth it for the convenience, but I understand why many people would disagree.
Besides, any contact with the outside world would need communication. So you can't have an entirely standalone gadget.
I wonder if the internet archive has a record of the size required minus images.
a) it's not open source so we can't be sure (aside from monitoring network traffic, which is probably encrypted)
b) if the FBI is successful in compelling Apple to develop a backdoor for the iPhone there's nothing stopping them from compelling Amazon to do the same with Echo.
c) better hope you don't say "Alexa" or something Echo mistakes for it.
It would also be possible to take a look at the hardware design and determine the linkage between the "mic mute" button light being on and power going to the mics.
The customer can set the device to provide both audio and visual indication when it "wakes up" and begins streaming to the cloud. And, of course, the customer can also press the mic mute button to avoid accidental wake up.
Yes, the FBI could try the same approach with Amazon as they are trying with Apple. For all of our sake, let's hope that Apple wins.
How would the mics listen for the wake word if they aren't always on?
My point was that you could check to see if the linkage between that red indicator light and the power going to the mics was in software or hardware.
This is analogous to the warning light that many laptops have for when the built-in webcam is on.
No backdoor needed if the information is sent to Amazon. All that is needed is a court order for Amazon to hand it over.
I'm suggesting in order for the FBI to use Echo (or any other internet connected device that has a microphone) as a wiretap, the FBI could try to compel the manufacturer to write, sign, and push an update that causes the device to transmit audio to the FBI at any point.
That would have seemed a little far fetched in the past, but the current FBI/Apple situation could set a precedent.
I do worry about said court orders being rubber stamps, and about surveillance that doesn't require a court order.
Otherwise we can make no technological advancement.
One of the worst offenders is Dropcam. They have a super camera, easy to set up and use. Great picture quality. Would be an awesome baby monitor or "closed circuit TV replacement". But why the goddamn hell does it need to connect to the Internet? Why is the only option available to needlessly stream video out of my home network to the cloud, only so that I can then stream it back into my home network for viewing??? WTF? That's both a waste of outbound bandwidth and a waste of inbound bandwidth. I should be able to put it on my network, switch off the cable modem, and still be able to view video locally. How hard is that? I could do that with a webcam and a really long USB cable!
My guess is: if they offered the version you describe, they'd need to make it much more expensive. Which many consumers would find odd: the one with fewer features would cost much more. Granted, those consumers wouldn't be looking at the big picture...but I find many consumers don't. Up front costs matter a lot to consumers.
A webcam that sends the data out to the internet then back would avoid the discovery issue by using an external webserver as a rendezvous point.
I don't think people spend a lot of time thinking about their home networking. You could imagine most people just plug in their home routers and it is a crapshoot whether or not the router will support the necessary functionality, whereas a router will always enable communication to the outside world (or people would return it ASAP).
With that said, this seems like a straightforward technical problem that may have technical solutions.
If you're willing to configure a NAS server somewhere, you can even record the video locally.
If you, as a customer, want to, you can go to Amazon.com and delete all your voice history (or any single interaction).
... Then you gain life experience.
These things would be a lot less "Big Brother" for me if I had a mic key in my pocket that would only turn the mic on when I squeezed it.
I am sure you would not have a problem using these kinds of systems if it were assured that you could not be tracked or monitored because the devices and systems were secured in overlapping ways.
So now this cool audio controlled personal assistant is just another gadget to buy more stuff from Amazon, instead of something you control.
Voice recognition is done on some Amazon server. If it goes down or changes API in five years, it will render this thing a brick.
"Echo Dot is available in limited quantities and exclusively for Prime members through Alexa Voice Shopping. To order your Echo Dot, use your Amazon Echo or Amazon Fire TV and just ask: 'Alexa, order an Echo Dot.'"
Also, this makes me sad. I'd kind of like to try this out, but I have no Alexa voice service currently (I don't think)
> Built-in speaker for voice feedback when not connected to external speakers
> Includes a built-in speaker so it can work on its own as a smart alarm clock in the bedroom, an assistant in the kitchen, or anywhere you might want a voice-controlled computer
> With its built-in speaker, you can place Dot in the bedroom and use it as a smart alarm clock that can also turn off your lights, or use Dot in the kitchen to easily set timers and add items to your shopping list using just your voice
See the technical details:
> Built-in speaker for voice feedback when not connected to external speakers
My Echo news is a mix of text-to-speech and audio, so I'm not sure that it would work for News.
Um, and the insane underlying voice API?
Is this the future of tech? Like do I need to have some kind of urban-go-getter lifestyle to find use in any of this? When can I get something useful, rather than "thing I already do, but in a new package"?
I want to be a fly-on-the-wall when someone sets one of these up in their home. I can't picture it fitting in with my lifestyle, so I'm curious to see how others would actually use it. Or would it just gather dust and become a conversation piece?
After that, it's Uber, schedule, and weather on my way out the door. As I leave I ask it to turn off the lights.
So I use at least 5 of its features (and stream Pandora/NPR on it, so 7?), and find it useful. I don't think I would miss it, but I do find myself wishing for it a bit when I'm at a friend's house that doesn't have one.
by 'we', I mean my busy family of four. it acts as everything from shopping lists to homework timers to streaming pandora/spotify to telling jokes -- and more. we easily talk to her (she is basically part of the family) a dozen times a day.
i can totally see how someone who doesn't have all this commotion and such would think it useless. for us tho, it's not useless. it's both fun and functional.
Yes, you can check the weather a million ways, but those usually require some kind of dedicated screen time: watching TV, loading up a website, checking an app on your phone. Whereas with the Echo, you just ask it while you are doing something else and it gives you the report.
That being said, they announce partnerships with more and more services every month. Things are looking up.
More importantly, they have done a good job (leagues better than the competing voice services) of opening their service to developers through Alexa Skills, which has enabled hundreds of added features, including things like ordering an Uber.
Alexa, Ask recipes how do I make an omelet?
Alexa, how do I make an omelet?
I imagine it's to prevent conflicts but I'd like the option to put some services in the default namespace as it were.
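A rough sketch of what that opt-in "default namespace" might look like, with hypothetical skill names (this is not how Alexa actually routes requests): explicit "ask <skill> ..." invocations are parsed first, and a user-promoted skill gets a crack at bare phrases before the built-ins.

```python
# Hypothetical router: explicit "ask <skill> ..." phrasing wins; otherwise
# a skill the user promoted to the default namespace handles bare phrases.

SKILLS = {
    "recipes": lambda q: f"recipes: {q}",
    "uber": lambda q: f"uber: {q}",
}
DEFAULT_NAMESPACE = ["recipes"]  # skills the user chose to promote

def route(utterance: str) -> str:
    words = utterance.split()
    if len(words) >= 2 and words[0] == "ask" and words[1] in SKILLS:
        # Explicit invocation: "ask <skill> <query>"
        return SKILLS[words[1]](" ".join(words[2:]))
    if DEFAULT_NAMESPACE:
        # Bare phrase: let the first promoted skill handle it.
        return SKILLS[DEFAULT_NAMESPACE[0]](utterance)
    return f"builtin: {utterance}"

print(route("how do I make an omelet"))    # recipes: how do I make an omelet
print(route("ask uber to get me a ride"))  # uber: to get me a ride
```

The conflict problem is visible even in this toy: once two promoted skills both want bare phrases, you're back to needing some disambiguation scheme.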
So the product is quite open. That being said, the third-party experience could be smoother - it is a minor pain to have to specify "with Spotify" every time I want the Echo to play music.
Overall I'm happy with my purchase, though.
Alexa and AWS Lambda are two of the things I'm most interested in these days (disparate, I know) but they're also things without open source equivalent. I'd love to see that change.
Ahh -- the Tap is a portable device with wifi speaker.
(Probably wouldn't call an audio monitoring box the "Tap".)
Dot - $90 external-speaker-port echo.
Tap - $130 battery powered speaker, wifi, Bluetooth with echo, for portable use. (I'll get one if it works great in hotel rooms; otherwise won't.)
I wrote up a little post on it here: https://medium.com/@MathiasHansen/hacking-an-amazon-echo-and...
Obviously, actually having bluetooth speakers with the Echo Dot is a much better solution, but after using the Sonos setup for 3-4 weeks I must say that it works surprisingly well, and despite the audio hack the sound quality is excellent on my Play 1's.
My soundbar would work well, but Alexa would get muzzled every time I turned on the TV to watch something. On the other hand, my portable bluetooth speaker will run out of battery if left on its charger.
The AUX connection is almost a better option, but then am I supposed to leave my amp turned on all the time? There's also the same problem where Alexa loses her voice when I switch the amp over to the Bluray player.
'Everyone be quiet so I can shout across the room to change my music.'
1) Can you do this on the Alexa servers efficiently?
2) Do you want to? Seems like setting it up could be a hassle. Right now there is zero friction and it just works.
The Alexa iOS app has a good drop down to manage each device separately.
* A U.S. Amazon account
* A U.S. shipping address (50 United States and the District of Columbia only)
* An annual Amazon Prime membership or 30-day Amazon Prime free trial
* A payment method issued by a U.S. bank with a U.S. billing address in your 1-Click settings
* A device with access to the Alexa Voice Service (such as Amazon Echo)"
The Google Glass problem: the interface is me yelling publicly. So I'm not super sure that is going to be adopted well.
Echo is one of those things where it became magically awesome by being somewhat more accurate than I'd expect. Also, Amazon is updating the service back ends, and it is now extensible.