However, interconnectivity between devices (and software) from different vendors seems to get worse and worse; standards seem to have become irrelevant. Time to market is the only thing that matters; long-term customer satisfaction and durability no longer seem to be of any importance. Vendors just don't care about integration with each other any more.
When I saw Minority Report (2002) a few years ago, I thought dragging windows and applications across devices with a gesture would be a possibility in the not-too-distant future. Now, in 2016, this not-too-hard-to-develop feature seems almost impossible to imagine. Sharing content between devices is utterly painful or even impossible: copying large files between computers on the same Wi-Fi network without going through the Internet; playing a video from your Android phone on a Samsung TV; moving application state from your laptop to your desktop PC when you leave work; playing music from your Android phone in a brand new Audi via Bluetooth... All of these things would be absolutely achievable if vendors worked together or standards were developed and followed. Right now, though, it just looks like technology fragmentation is getting worse every day.
Brands are what's missing from all the sci-fi films where this stuff works. In Minority Report's D.C. crime lab, the displays aren't Samsung monitors hooked up to iPads, and THAT is what makes them work so seamlessly. No vendor wants to allow you to buy from other vendors, so naturally the only time you get a seamless experience is when you sell your soul to the devil of your choice (in my case, Apple), and then your iPhone, your MacBook, your iPad and your Apple TV all work miracles right before your eyes.
If I ever get an Android phone, it will never sync up properly like the iPhone does. And I just have the garden-variety stuff; if you buy any of this IoT crap you damn well better hope that your given manufacturer will be around long enough to make all the things you want, and that they offer good support.
I have a lot of plans to make our home smart once we buy one, but when I do it will be built on open source modules that I can modify and control, and more importantly SERVICE myself when they inevitably break down after a time.
No one cares about interoperability; no one cares whether all the data and content we're generating today will be accessible in a few decades or centuries.
Standards bodies are being taken over by company interests, and we're still piling stuff on top of a technology stack designed with a ~50-year-old technology landscape in mind.
No one wants to think further than next quarter's profits.
Our industry has gone to shit.
> No one cares about interoperability; no one cares whether all the data and content we're generating today will be accessible in a few decades or centuries.
It's not a consequence of capitalism alone. Capitalism wouldn't result in that if its actors were rewarded for interoperability and future-proof access to data. Sadly, customers don't care about these two aspects (or about the long-term durability of a product, or its serviceability), so almost no company makes any effort to provide them.
> Our industry has gone to shit.
It's hardly specific to IT/consumer electronics. You have the same with cars, tools, clothing, everything really.
Customers have no realistic way of assessing those things, especially in a market where all the products are obsoleted every year or so. It's Akerlof's "Market for Lemons" all over again.
You can't expect that everyone should, before buying a product, perform their own accelerated-life testing and code security audit.
(I think we've yet to have the statutory rights lawsuits where people's IoT devices stop working after a couple of years and they claim this is a "manufacturing defect", which under UK law must be warrantied for six years.)
But I agree, there is woefully little information even about what I was talking about, which was interoperability, or even just basic functionality. Pretty much your only recourse is to look at reviews of a manufacturer's existing products and hope that those reviews weren't bought and paid for.
And long term loyalty is even less a concern: It's all about the new gadget/social platform/etc. of the day.
(This is in response to the general angst over interoperability, not the IOT space, which I agree is underwhelming at the moment. However, in what I assume to be opposition to most people in this thread, I am hugely optimistic (if not yet invested).)
Sure, every once in a while something that would appear to be easy turns out not to be, and that's really annoying. But most things work well most of the time. Transferring state between devices? That's the cloud. My email is in sync 100% of the time between 4-5 different computers and devices of different brands and OSs. Dropbox takes care of my personal files, box.com of those for work. I frequently collaborate on documents in both Google Docs and Quip. Not too long ago, transferring files between Mac and Windows was hard (I think Macs had a proprietary compression program that wasn't available on Windows?); luckily that's entirely in the past. USB is ubiquitous; FireWire, PS/2, ADB, serial, all gone. A Mac keyboard works on a Windows box, and vice versa. Bluetooth definitely has kinks, but it also has a lot of "just works" along very long stretches. With a few annoying exceptions, people using Linux, Macs and Windows can work together seamlessly. Websites (with a few annoying exceptions) generally work well in all major browsers on all major platforms, on desktop, tablets or mobile (iOS, Android and Microsoft alike). I got a new wifi printer a few days ago; after joining the network, it was automatically available on our computers. Took maybe five minutes, no messing about with IP addresses and drivers.
I can cast to a Chromecast on my TV from my Mac, my Android, and random guests' iPhones. (I can also "cast" YouTube directly to the TV's YouTube app, but it's pretty flaky; and that one is some fancy standard, as opposed to the Chromecast, so it's just that Sony is shit at implementing the standard.) I can play music from Spotify from my laptop or smartphone on a Denon speaker in the kitchen, a "GramoFon" wifi sound device attached to my "dumb" stereo, and my dad's "smart" Marantz stereo.
Interoperability is doing fine, but yes, it's messier than it might have been if some magic omniscient body had come up with clean standards for all this. That doesn't mean that it's not there.
> Transferring state between devices? That's my butt.
No, it isn't. It's for transferring state between instances of the same application (or a group of applications from the same vendor) across devices.
> Dropbox takes care of my personal files, box.com of those for work. I frequently collaborate on documents in both Google Docs and Quip.
Did you try to make them work together? Oh, you can't really, because Google Docs decided to be cloud-first and you no longer have files with actual data on your hard drive. You can't open them in a third-party application any more.
Here's the thing: seamless operation is getting worse than it was a few years ago. A lot of that came from the push to cloud and mobile. Even with proprietary formats on the desktop you could work with the files using third-party apps, because the data was actually on your hard drive. Now the cloud services have locked your data in, and they give you access only through a means of their choosing (which is usually a lowest-common-denominator webapp).
What we see now is a lot of companies trying to commoditize each other. Third-party software developers try to commoditize the platform makers by routing the data through the Internet. So sure, Spotify works and syncs up nicely between your iPhone, Android tablet, MacBook and Windows PC. But why on Earth do I have to use the Spotify app to listen to music, instead of, I don't know, Foobar2000? That's right: because the files are not there.
> Using a sci-fi film as the yardstick for what interoperability should look like is silly.
Actually, I think it's very good and sane. It's a perfect yardstick, because we get to ignore all the market forces and imagine how things could work if they were designed to be actually useful. And then we can ask ourselves why things are not like this, and how to make them more like this.
Indeed I did. Tech news is less bad for your sanity with it enabled.
I used Spotify to illustrate interoperability between brands, but all of the mentioned devices support (many) other sources as well, including DLNA.
> we get to ignore all the market forces and imagine how things could work if they were designed to be actually useful.
Movies show things that are pretty, not useful. It's a flat-out cliché that practically anything that happens in any movie (not just in tech, but in pretty much any field) looks ridiculous to people who actually know a little about what's going on.
And all we had to give up was security, privacy, reliability, longevity, speed and more money. :-(
Unfortunately, as with so many adverse consequences when IT goes wrong, most non-technical people don't really understand the risks until something bad happens to them, and by then it's too late. In fact, these days with the trend for trying to outsource IT instead of maintaining in-house expertise, even a lot of technical staff don't seem to understand or properly control the risks. Just look at how many businesses grind to a halt every time one of the major cloud services has a significant outage.
The move to Internet-hosted services and subscription-based products is entirely understandable from the industry's point of view: it gives them lots of new ways to exploit their customers and make more money.
However, from the customer's point of view, I think we would be much better off if we invested more effort in decentralisation, standardisation and interoperability, and in "private clouds" and VPNs. There are few advantages for customers in having important functionality reliant on a very small number of huge service providers, as opposed to having many smaller providers able to offer compatible variations, and having options for self-hosting with decent remote access and backup provisions.
Unfortunately, we seem to have reached a kind of equilibrium now where the huge players are so utterly dominant in their industries that disruption is all but impossible. Their worst case is that they buy out any potential serious threats before they're big enough to become actual threats, but much of the time, the lock-in effects create sufficient barriers to entry to protect the incumbent anyway. There is no longer effective competition or disruption in many IT-related markets, just a lot of walled gardens where you pick your poison and then drink as much of it as they tell you.
I'm sorry to say I don't see any easy way to break the stranglehold the tech giants now have and get some competition and interoperability back into the industry. It's going to take someone (or possibly a lot of someones) offering products and services that are both competitive in their own right and built with a more open culture in mind to disrupt the status quo now, and it's hard to see either startup businesses or community-led efforts achieving escape velocity any time soon.
Even knowing I'm going to hell, one button press to chuck my twitch stream from my phone onto mum's apple tv makes it feel worth it.
This might just be me, but I just don't have that good an experience with Android, which is why I switched.
I had phone->computer file transfer working via Bluetooth back on a sketchy old pre-Android Motorola; if such a simple workflow fails now, what hope is there for anything more complicated? </rant>
edit: sorry, this is slightly OT; it's irritated me for a while and this felt like a good time to share/vent!
Their product integration is the most obvious example, but it's far from the only one. Everyone knows that Apple computers "just work", and that when they don't you throw them in the bin because everything is too connected for easy repair.
Even bureaucratically, I once spent two months trying to convince Apple that my MBP existed. They swore no machine with that serial number had been made, and flatly refused to service it until I gave them the 'real' number.
None of these things get resolved without forcing a real human to acknowledge the issue you're facing. The one thing I'd say in Apple's defense is that it's mere hubris, which I find far more pleasant than Google's support outlook of "yeah, it's broke, now go to hell."
But even then, I noticed that the Apple equivalent tended to be, "...". Not much different today.
It was a tough adjustment.
Bluetooth is not fast and reliable enough for modern file sizes, and there is also no guarantee that two phones near each other will share a wifi network.
And then there are the security requirements.
My point is that it's very understandable that such a feature would be useful for some people, and that they would very much like that workflow. It's just that if they are a minority, the industry will not prioritize such development, and that's a good thing: it means more-needed features get developed instead.
In a way, everything you describe already exists in the form of web apps. Log into Gmail on your PC, do some things there, then switch to an iPad and open the same address there, and voilà: you have the same app on multiple devices. It will even sync flawlessly...
Regarding file sharing, I can only speak for the technologies that I've been using personally, but Samba, Dropbox, FTP, SFTP, Bluetooth file sharing and Bluetooth audio have all been working amazingly well for me... Each seems like a perfect example of a developed and implemented standard which most devices agree on. What am I doing wrong?
I disagree that this is a good thing. IT is an extreme example of the fact that people don't know what they want until you show it to them. For all you can tell, seamless transfer of files between multiple devices could be a feature people couldn't imagine living without if they had it. But you won't see mass complaints about the lack of such features, because people have stuff to do, and they adapt their workflows to the capabilities of the tools they know - not the other way around.
Sure, it didn't include direct LAN sync to begin with (it does now), but that's the kind of "perfect is the enemy of good" implementation detail that the vast majority of people couldn't care less about.
Then it went away and never came back. Why? Because Microsoft released OneDrive, a Dropbox-like cloud storage system, and the 100% free FolderSync was a competitor to it. Microsoft can make money selling OneDrive; they can't make money selling FolderSync, so it's gone.
Basically, the product whose absence you're lamenting used to exist, but no longer does, because nobody can make money from it.
I used to use FolderSync to exchange multi-gigabyte video files with my friends while we were doing video editing, and it was an amazing awesome product. It was dead-easy to set up, traversed NAT and firewalls without any troubles, maxed-out whatever internet connection it had access to, and used the LAN connection if possible. Now we'll never see anything like it again.
... anyway, TLDR: the problem doesn't "remain unsolved", it was solved but is now no longer solved.
Seamless transfer among my machines only is a different problem.
> I thought dragging windows and applications across devices with a gesture would be a possibility in the not-too-distant future. Now, in 2016, this not-too-hard-to-develop feature seems almost impossible to imagine.
Moving a running process from one machine to another, seamlessly and instantly, is not at all easy on current architectures. Not that it's impossible (well, it might be impossible to do it instantly in all cases), but it would be a hell of a lot of work, even across a single platform, and have a lot of unpleasant snags to deal with. (What happens when important app/system settings are different on your desktop and laptop?)
More recently, all enterprise application UIs are actually web UIs. So that's just moving a browser window from one screen to another; essentially a refined version of Apple's "Handoff".
In more hardcore fashion, we can move a whole VM almost seamlessly. At some point that could be the case with containers too, so moving a single process may not be that far-fetched after all.
Let’s see how many of these words sound familiar: Xinerama, Zaphod, XRandR.
Not to mention network transparency. So slow, so fragile, so never ever going to support audio or USB devices or video acceleration. I see Microsoft RDP and I weep. I wonder how the Sun Ray protocol compares.
Moving processes is possible, though impractical when devices have different processors. See the enduring appeal of things like Continuum for smartphones. Moving just parts of application state, like Apple’s Continuity, tends to have vendor lock-in and third-party adoption issues. So, similar problems as what a practical IoT ecosystem faces.
Yes, and so are most consumer apps, and "moving state" to another device is simply a question of IM'ing a link. Sure, the UI in the movie looks way cooler, but it's a movie.
The point isn't that this is uncharted water technically, but that too many companies have decided a good user experience isn't compatible with their desired profit margins. In some cases, like security and bug fixes, that might change due to regulation, but that's far from certain, and it's really hard to imagine it extending to broad interoperability.
But for webapps that want to be able to transfer state, the mechanism is the URL, and for those apps this works perfectly and unceremoniously well today.
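To make that concrete, here's a minimal sketch (the viewer app and its parameters are invented for illustration) of UI state living entirely in the URL, so "moving state" to another device is just sending the link:

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical document-viewer webapp whose whole UI state (document,
# page, zoom) is encoded in the URL, so any device can resume it.
def state_to_url(base, state):
    return base + "?" + urlencode(state)

def url_to_state(url):
    return {k: v[0] for k, v in parse_qs(urlparse(url).query).items()}

url = state_to_url("https://viewer.example.com/doc",
                   {"doc": "report-2016", "page": 12, "zoom": 150})
print(url)                # https://viewer.example.com/doc?doc=report-2016&page=12&zoom=150
print(url_to_state(url))  # {'doc': 'report-2016', 'page': '12', 'zoom': '150'}
```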
> > all enterprise application UIs are actually web UIs
> Yes, and so are most consumer apps, and "moving state" to another device is simply a question of IM'ing a link.
That's a great aspirational goal but it's simply not something which most people can assume will work – I still routinely find apps from major companies where you can't even use the back button within the same session!
If you're looking at back-end-mediated stuff, this is pretty much what happens when you synchronise browser sessions across devices, modulo the rate of transfer. The key is managing state intelligently.
This is also done, mostly, on server-side infrastructure, where front-end systems have little to no state on them: individual client requests come through, and state is usually managed in the datastore itself.
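A minimal sketch of that stateless-front-end pattern (all names invented): the handler keeps nothing in memory between requests, and all session state lives in a shared datastore, so any front-end instance (and hence any device) can serve the next request.

```python
import json

DATASTORE = {}   # stand-in for Redis, SQL, etc.

def handle_request(user_id, action):
    state = DATASTORE.get(user_id, {"cart": []})   # load state
    if action.startswith("add:"):
        state["cart"].append(action[4:])           # mutate
    DATASTORE[user_id] = state                     # persist
    return json.dumps(state)                       # respond

print(handle_request("alice", "add:book"))   # {"cart": ["book"]}
print(handle_request("alice", "add:pen"))    # {"cart": ["book", "pen"]}
```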
But that's a relevant point. The main question in the automation debate is not whether it happens, the question is whether job destruction due to automation happens fast enough to outpace the usual job creation mechanisms (appearance of new market segments etc.).
The sad thing is I would totally be up for IoT products that cost more, because then I would at least have some assurance that whoever built it built it to last, instead of with the accountant standing over their head.
Interconnectivity is a double-edged sword: it sometimes precludes innovation. To interconnect, you need an agreed-upon spec, which constrains what you can do. If you can think of a better way to do something, you may not be able to implement it.
As an example, IMAP lets you use any client app with any server. But in IMAP, a message belongs to a single folder. Gmail, on the other hand, lets you apply multiple labels to an email, which doesn't map well to IMAP. Gmail also lets you star a particular mail in a thread while applying a label to the entire thread; these don't map well to IMAP either. Neither does priority inbox, for example. And so on. Which is why you get a second-rate Gmail experience if you use IMAP.
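For what it's worth, Gmail papers over part of the mismatch with non-standard IMAP extensions such as X-GM-LABELS; a sketch of fetching labels via Python's imaplib (credentials here are placeholders), which stock clients that don't know the extension never see:

```python
import imaplib

conn = imaplib.IMAP4_SSL("imap.gmail.com")
conn.login("user@gmail.com", "app-password")     # placeholder credentials
conn.select('"[Gmail]/All Mail"', readonly=True)

typ, data = conn.search(None, "ALL")
latest = data[0].split()[-1]                     # most recent message
# X-GM-LABELS is a Gmail-only FETCH attribute; plain IMAP has no
# equivalent, which is why labels degrade to duplicated folders.
typ, labels = conn.fetch(latest, "(X-GM-LABELS)")
print(labels)
conn.logout()
```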
Standards and protocols sometimes preclude innovation.
I don't want to live in a world where everything is interoperable, because that's a world where everyone is forced to conform to a straitjacket. That doesn't mean, of course, that interoperability is completely useless. It's a matter of balance. I don't want too much interoperability or too little.
The point I'm making is that interoperability has a cost. It's not all good.
Edit: The technology industry needs things not to work for it to be profitable. Imagine if things just worked. Dropbox, Box, Google Drive, and hundred other solutions would never be paid for and would not be needed. Waste creates jobs.
Android had Beam, which just worked so well to transfer practically anything. Recently it fails to transfer photos about 50% of the time on the latest and greatest Android devices.
Btw, Apple is probably just as bad as everyone else; the only reason things seem to work in the Apple universe is that the life expectancy of an Apple device is 2 years max, so they can just focus forward.
The best way to transfer files to another computer on the same wifi is still `python -m SimpleHTTPServer`.
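(For anyone on Python 3, the module was renamed; a quick sketch of both the one-liner and its programmatic equivalent:)

```python
# Python 2:  python -m SimpleHTTPServer 8000
# Python 3:  python3 -m http.server 8000
# Or programmatically, serving the current directory to anyone on the
# same network:
from http.server import HTTPServer, SimpleHTTPRequestHandler

HTTPServer(("0.0.0.0", 8000), SimpleHTTPRequestHandler).serve_forever()
```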
Old things always seem to work better than new ones. If we deprecate the 3.5mm jack, we are doomed.
Basically, it started with light switches decades ago, at least in Europe. In the past, you had switches that themselves indicated what state they were in, so if you clustered them on a board it was incredibly easy to find the one that was switched on at the moment.
Then, some moron came up with switches that don't show anything any more.
Nowadays you're lucky if you get switches on anything at all. This is what a standard stove looks like in new Swiss apartments nowadays. Good luck explaining this to your grandma. You idiots, it has one job: getting more or less hot!
Recently I took a residential elevator that just had an empty touch field when you came in. No indication of what you could do whatsoever. This immediately gave me anxiety, and I'm just 31, goddamnit. Anyway, what happened was that once the elevator door closed, it gave me a selection of floors to go to. *facepalm*
Please, for the love of what's holy, stop improving what doesn't need improvement! In German we have a word for this: "Verschlimmbessern" (a combination of verschlimmern = 'making it worse' and bessern = 'improving').
I feel like you could drop "in household appliances" from that statement and still be telling the truth. I'd be willing to bet that anyone reading this could think of examples in software where those making product decisions seem to be operating off of these principles:
* Refine relentlessly (vs stop improving what doesn't need improvement)
* Be heavily state-dependent as a way to minimize presentation of options. But don't call attention to state.
* Minimize any affordances. Rely on implicit interaction patterns you assume the user has already learned.
* Minimize everything. Remove options. Hide what you can't remove. The less the software does, the less the user has to think, right?
This is one of my least favorite design patterns, and I keep seeing more of it. It feels like half the products I use have bizarre state rules with no documentation. Key features disappear as I scroll, or are only available from certain (unrelated) screens, or are under one of six distinct "options" menus in different locations.
Since I spend a large part of my life dealing with this, I memorize the tricks and find it merely annoying. But I still regularly discover features in products I don't use often, buried behind some utterly incoherent state dependency.
With three things that can each be switched on or off there are 2^3 = 8 combinations: 7 with at least one on, plus all off.
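(A trivial way to check the count, if you want to see all eight states:)

```python
from itertools import product

# Enumerate the 2**3 = 8 on/off combinations of three elements:
# seven have at least one element on, plus the all-off state.
for state in product(("off", "on"), repeat=3):
    print(state)
```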
> It's awful.
The alternative being three separate toggle switches.
While this would be easier to scan, and would make decisions easier if you wanted granular control every time you cooked something, 90% of the time the oven will be set to the same setting: ALL ON. I think that having a single switch rather than three is a more efficient UI for this.
The problem is that the people designing UIs for products don't actually know how to use the products. They don't have any concept of how a user uses the UI. So they just assume that less is better without stopping to consider why someone would want to control the temperature of their stove.
You have to go out of your way to buy this kind of stuff, and it's more expensive, but it's still out there. The low end of the appliance market competes on features and price ("race to the bottom"), while the mid and high end compete on component and build quality and reliability.
Devices are produced not to be useful, but to be sold. People want more value for money, so they select the thing with more buttons (one visible button per function) or more flashing lights. Usually the cheapest devices in a lineup are simple, but may not be energy efficient; the flashiest are typically mid-tier; and the most expensive sometimes look like the plain cheapest option, but are more energy efficient.
I am thinking of getting a big-screen display. What I would prefer is a computer monitor of around 40-50 inches connected to my computer and a Chromecast. For audio I would like to use a separate appliance. The cheapest, most effective solution is probably to buy a TV (and it seems all of them are marketed as Smart to some extent), even if I don't intend to use most of it. The other option I'm considering is a 24-30" monitor on a wheeled stand, so I can easily pull it closer to the couch.
(I only semi-jest... I've seen this process IRL.)
Did you carefully couch your statement because of iOS 7 specifically? Gossamer font faces and weights, and "abstract art" choices for icons and palette, really felt like "change for the sake of change" to me: an overcompensating reaction to the "skeuomorphism" backlash.
I will say this much: that seems totally intuitive to me. You tap the button for the burner you want, then tap the +/- buttons to get your temperature, right? I like having each burner's setting readout all together in one place, as well.
Everything else about it is terrible. Oven controls need to be tactile, they need to be immediate, and they need to not be three inches from a hot pan. That placement will cause enough burns just from fumble-fingering in normal use; now imagine that your pan has caught fire and you're trying to select the left front burner and press "-" 8 times while your hand is showered with flaming grease. Who could possibly have thought this was a good idea?
That does in fact sound like a really bad idea, but in the European countries where I have lived, the typical "good ol' knob" stove had the knobs placed vertically right above the oven, not on the top of the stove.
So the problem you mention doesn't exist, because the knobs aren't something you have to clean after cooking; you're OK cleaning them once a week.
I personally prefer the knobs, because stoves with touch buttons tend to beep and turn themselves off when water or oil gets on the controls, which is quite common when they're 2 or 3 cm away from a boiling pot. I don't know who had the great idea of designing controls that don't work when wet and putting them in a place that will very often get wet. OK, if you are careful you can avoid it, but I shouldn't need to be careful about that while cooking. Sometimes (e.g. for cooking a big crab) it's very convenient to fill a pot almost to the brim with water for boiling and just wipe up the water that comes out afterwards; with the touch buttons you just can't do that.
The real problem with these new UIs is lack of choice. I can live with the touch buttons, but my grandma can't; she's 86 and she just doesn't learn new interfaces at that age. Her stove recently broke and we had a really hard time finding another vitroceramic one with knobs, because apparently they don't make them any more. We finally found an old model somewhere and probably paid quite an inflated price for it.
But yes, overall I do agree with you. My one major annoyance is that changing temperatures is slower than with a rotating knob.
I ended up going to a showroom and twisting all the knobs of all the stoves to find out which ones didn't have that limitation. I must have looked like a crazy person, but I got one I liked in the end.
The stove I ended up buying had one variable-size burner, and that knob had a bounce-back switch at the end, so you have to turn that knob the wrong way around to max it, but I could live with that. Why a regular burner would have that limit beats me. The only thing I could think of would be some sort of extremely crude and ineffective child protection, but all stoves have a child lock anyway, so what's the use?
It's absolutely amazing what people can manage to screw up.
Edit: To clarify - aglio e olio is what they cook in prison in Goodfellas. It's both the simplest pasta dish and the easiest to screw up: overcook the garlic a bit and you get garlic chips instead of melting its flavour into the sauce.
Power knob (the kind that clicks to exact positions) and timer knob. The only interface. Set the power (or don't, if it's already where you want it), turn the timer to where you want it. Done. Pull the door to open; not even a button for that. 100x better than the interface on modern microwaves.
Nothing improved, just the same interface with the same two buttons, but now in a touchscreen that was more expensive and can crash.
Try that with a touch screen.
It's always interesting to see devices that should be simple, dedicated hardware taken down by a Windows 10 update or some revealing error message (why does the mall's you-are-here screen use Internet Explorer?!).
Oh boy, how fugly it is. I had the same xD
Still, it did an excellent job and was almost indestructible.
For some reason I can only find the remotes from Japanese sources. I suppose they're afraid of English customer-support calls...
If it becomes a thing where only that oven can be bought, and all ovens have those insane controls?
I'll quit eating. I swear to God, I will give up eating altogether.
We need a movement similar to RMS' for this kind of thing.
Don't give up eating, just give up cooking.
There was a public restaurant on the top floor, so the lifts frequently contained panicky people riding up and down until they eventually got to the ground floor.
Also the lift systems occasionally crashed mid-journey. The lift would stop, the lights would go out and the LCD screen would blank and run through a boot sequence before everything came back up.
Stephen Fry once got stuck in one of them.
These were private office buildings though, I can definitely see the public restaurant causing havoc with its endless stream of noobs...
The building's now been turned into posh flats - there's no way I'd consider living there (apart from being almost infinitely out of my price range) without being sure they'd been replaced.
That wouldn't require changes to the user interface, and the doors would only open if someone can actually get on.
That, plus an assumption that most trips are between the ground/garage floors and one of the higher, occupied floors should be sufficient to reduce congestion without requiring someone to push a floor button before entering the cab. Adding a clock to the elevator controller would also help, such as by stationing empty cabs at the exit floors in the mornings, and distributing them among the occupied floors in the evenings.
I can think of a lot of ways to improve elevator scheduling without changing the user interface, and even if I did change that, I could certainly provide some mechanism to select a specific floor without installing a touchscreen GUI. Wave your employee access RFID badge at the button panel to pick the floor where your cubicle is, for instance.
Also, a surprisingly large number of people make a mistake in selecting the floor they need to go to (visitors, new joiners...).
A traditional elevator only knows where you are. This type of elevator also knows where you want to go, and can optimise for passenger throughput.
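A toy sketch of how that extra information helps (the grouping heuristic here is invented for illustration, not any vendor's real algorithm): knowing destinations up front lets the controller batch riders heading to nearby floors into the same cab, so each cab makes fewer stops.

```python
def assign_cabs(destinations, n_cabs):
    """Group requested destination floors into n_cabs contiguous batches,
    so each cab serves a narrow band of floors (fewer stops per rider)."""
    floors = sorted(destinations)
    size = -(-len(floors) // n_cabs)   # ceiling division
    return [floors[i:i + size] for i in range(0, len(floors), size)]

# Eight riders enter their destinations in the lobby:
print(assign_cabs([7, 3, 12, 3, 8, 12, 2, 7], 3))
# -> [[2, 3, 3], [7, 7, 8], [12, 12]]  (one cab per band of floors)
```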
The funny thing is that less prestigious brands like Citroën go all "futuristic", trying to be fancy, and in doing so remove all tactile interfaces. It's one of the reasons I would never buy a car like this.
One had a knob for the oven, but that was maybe notable.
So the 'functional' marketplace hasn't abandoned sense for nice looking stuff with complicated hidden controls.
Siemens still uses knobs on their gas cooktops.
Presumably the mechanical valve works better than an electric one.
I'm German, too, and I've only ever seen these switches in contemporary settings UIs, used instead of checkboxes.
I'm not sure how common multiway switching is in the US, but it's certainly super common in Europe, and I guess that's why we use non-state-indicating switches even in circuits with a single switch: so many circuits are controlled from multiple switches that there's no point in having them indicate state.
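The underlying reason, as a quick sketch: in a multiway circuit the light state is effectively the XOR (parity) of all the switch positions, so no single switch's position can tell you whether the light is on.

```python
# Light is on when the parity of switch positions is odd (or even,
# depending on how the travelers are wired), so "up" on any one
# switch means nothing by itself.
def light_on(*positions):          # each position is 0 or 1
    return sum(positions) % 2 == 1

print(light_on(0, 0))   # False
print(light_on(1, 0))   # True  (flip the hallway switch)
print(light_on(1, 1))   # False (flip the other one; first is still "up")
```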
It’s the best solution that still allows showing state, and is done commonly here.
IMO you can make things smart, but connectivity is only the answer once you've figured out all the other stuff. Otherwise it's like getting the internet on an 8-bit Atari.
Anecdote: We have those in my apartment but they're wired up to be on when the circuit is off. This is implemented in what is likely the worst possible way. We haven't quite figured them out but when off, there's still juice on the line - enough to make some cheap LED bulbs flicker, but not enough to light up a "classic" light bulb or an LED with high resistance. It's great for making electricians cry about the stupidity, as I can take out one of the LEDs in the hallway (when it's off) and the others start to do a strobe effect.
They work nicely for home automation purposes, since you can set them up so that the LEDs on the side match up to whatever the current light status is (dim to bright from bottom to top).
The problem: in the evening it gets dark, you want to turn on the light, and you come to the dark corner with the panel of 20 buttons, and you have no idea which of the 20 will turn on the damn light (to add insult to injury, it's one in the middle, of course; the ones in the corners do something else entirely). And as punishment, if you hit the wrong buttons you raise or lower the blinds, or change the heating, or turn on the fans in the ceiling.
I asked those who installed the system: can you make the LED by the button turn on when the light is off (i.e. invert the logic)?
"Can't do, the whole module (20 buttons and the LCD display) is made in the factory and can't be tweaked."
A "smart" control panel from hell.
My other favorite comparably bad button is an actual button on the remote control for a TV-on-computer device. It was probably some generic remote customized by the company for that device. The problem: one button, easy to hit by accident, that just blocks the whole device and leaves the software running in the background. Of course, it can't be reprogrammed.
If the light server crashes and the lights reset to maximum brightness while you're the only one in the office... I hope you brought your sunglasses because they're staying like that until morning.
The "damn thing" is a "smart" as it controls the light over the network cable -- there aren't even "normal" cables there with which the "normal" light switch would be possible to be installed.
The state of the lightbulb is decoupled from the state of the switch: the switch itself is always on.
Yeah, this isn't great, because idle power is >0W. This is where a centralised lighting controller with mechanical switches would help, but that's more invasive than just putting in some new bulbs.
That's odd — the word 'persphinctery' seems to appear only in Monday Note articles. Is this some sort of trap street for medium-form articles?
The end result is certainly unlovely though: rather colonoscopacetic, if you will.
How do you even beat that? And what do you offer beyond that?
It feels like we're reaching the flat part of the logarithmic progress curve (a much more appropriate curve for progress than the hockey stick curve) when it comes to what personal computing is going to bring to the daily lives of consumers.
Of course, there are still many areas not explored by computing. Computer-aided medical procedures and diagnoses, monitoring and upkeep of crops, and so many more fields will grow in the years and decades to come. And improvements to personal transportation, through e.g. self-driving cars, are arguably consumer technology.
But the whole IoT movement is just a parody of itself. The vast majority of people do not want internet-connected water cups or juicers or microwave ovens; and those who do soon get frustrated by the real-world logistics that come with these things (higher costs, more frequent failures, lack of interoperability, etc.). I had Philips Hue bulbs for a while, and the girlfriend I lived with at the time hated them with a passion - for understandable reasons. When it comes to turning lights on and off, you can't beat a light switch, and the same logic applies to every single item we interact with daily. For instance, the Nest we had in our apartment would randomly turn on and off, or suddenly stop being visible to the app, etc.
At some point, you really just want to rip the thing out, slap the old fashioned, bimetal thermostat back in, and hit the Nest with a high-powered electromagnet.
My own smart thermostat — from another company — has mostly been very good, but it randomly wants me to re-enter my (high-entropy, unmemorisable) password on their site to use it. Why can't I just connect to my thermostat and set up the authentication I want? Why can't I use a client certificate, or an SSH key, or just have a $&% token which lasts approximately forever?
It's *my* device; I should be able to do whatever I want with it. Give it (not the vendor's site) a clean API, and I can do anything.
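For illustration only, here's the kind of clean local API I mean; every endpoint, address, and token below is hypothetical, since no such standard interface actually exists:

```python
import requests

# All of this is hypothetical: a thermostat that serves its own HTTP API
# on the LAN, authenticated with a long-lived token (or client cert) that
# I provisioned, with no vendor cloud in the loop.
THERMOSTAT = "https://192.168.1.40"                     # assumed local address
AUTH = {"Authorization": "Bearer <token-I-generated>"}  # assumed auth scheme
CA = "thermostat-ca.pem"                                # device's own CA cert

# Read the current state.
state = requests.get(f"{THERMOSTAT}/api/v1/state", headers=AUTH, verify=CA).json()
print(state)  # e.g. {"temperature_c": 19.5, "target_c": 18.0}

# Set a target temperature.
requests.put(f"{THERMOSTAT}/api/v1/target", headers=AUTH, verify=CA,
             json={"target_c": 20.5})
```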
HVAC is certainly enhanced by the ability to control it remotely: first, because of the potential effort saved in being able to control it from anywhere in your house without having to seek out the control panel or remote; and second, because you can adjust the temperature from outside the house, which is good for those of us who don't want the house at 18 C all day long while we're not home. You can set it to your desired temperature when you head home from work, which may not be at a set time.
IOT toasters though? That's just taking the piss.
The problem is that in the current context, almost any IOT device has an additional negative value attached to whatever positive value its features give, because basically every company doing it makes overcomplicated crap.
In a perfect world, you might be able to plug in an IOT toaster and have it automatically connect to your Amazon/Google/Apple/whatever hub (with a simple "I found <yourname>'s hub, is this correct?" interface). At that point the hub would track the toaster status and the household context, and then do stuff like say, via your phone or a set of discreet speakers throughout the house, "The toast was burning, so I stopped the heat" or "You left your toaster on when nobody was home, so I turned it off".
None of this would be impossible to implement today, but it would require thinking about long-term holistic benefits instead of being able to slap "IoT device" on something to try and sell more to nerds.
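A toy sketch of that hub logic (every device name, event, and method here is invented for illustration): the hub, not the toaster, holds the household context and decides when to intervene.

```python
from dataclasses import dataclass

@dataclass
class Event:
    device: str
    kind: str

class Hub:
    """Stand-in for the household hub; all names are made up."""
    def __init__(self, anyone_home):
        self.anyone_home = anyone_home
    def send(self, device, command):
        print(f"-> {device}: {command}")
    def notify(self, message):
        print(f"[phone] {message}")

def on_event(hub, event):
    if event.device == "toaster" and event.kind == "smoke_detected":
        hub.send("toaster", "stop_heat")
        hub.notify("The toast was burning, so I stopped the heat.")
    elif (event.device == "toaster" and event.kind == "left_on"
          and not hub.anyone_home):
        hub.send("toaster", "power_off")
        hub.notify("You left your toaster on when nobody was home, "
                   "so I turned it off.")

on_event(Hub(anyone_home=False), Event("toaster", "left_on"))
```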
And it's possible for a toaster to detect burning without requiring an internet connection.
Start toast, go and grab the mail, get distracted by a neighbor, forget about the toast, go over to look at their new riding lawnmower.
> And it's possible for a toaster to detect burning without requiring an internet connection.
Nothing about the scenarios I described would require an internet connection for anything but communicating outside the house (e.g. phone notifications).
Some days I'm home at 1800, other days I won't get home until 2200, and I often won't know until sometime during the day.
Now, I don't use heating, so it's not an issue for me, but I could foresee it being an issue.
The ability to use your phone as a universal remote is alluring as well. Being able to control the lights, TV, and heating all with one device is pretty cool.
The curve you are looking for is called the logistic. A lot of apparently exponential curves are actually logistic.
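For reference, the textbook logistic function, which looks exponential early on and then flattens toward its ceiling:

```python
import math

# Standard logistic: L is the ceiling (carrying capacity), k the growth
# rate, x0 the midpoint. Early on it looks exponential; then it saturates.
def logistic(x, L=1.0, k=1.0, x0=0.0):
    return L / (1 + math.exp(-k * (x - x0)))

for x in (-4, 0, 4, 8):
    print(x, round(logistic(x), 3))   # 0.018, 0.5, 0.982, 1.0
```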
Once something more useful comes around, companies will invest more resources, but I mean, who wants a smart toaster? You pop some toast in there and let it do its thing. Simple.
We need some major breakthroughs in, for example, 3d printed gourmet meals or something.
I refuse to buy a smart TV. When my TV finally dies, I will probably be forced to buy one, and the first thing I will do is either disable, or just refuse to set up and use, the smart part.
I will then plug a Chromecast in. That cost me $35. That actually works with the services I pay for correctly.
Even those Android TV-based smart TVs are useless, because Google does not control them and cannot force the OEM to push Android updates... although Android TV does let you Chromecast to it, which is probably what I would use it for.
Although, if I wanted Android TV, I'd buy an Nvidia Shield TV and use that instead, since it actually has a reasonable amount of horsepower and supports H.265 Main10 and the Rec. 2020 colorspace in hardware (Android TV software support for it is upcoming), and thus true 4K, not merely 2160p using 8-bit Rec. 709/sRGB, which is all that a lot of so-called 4K devices and TVs actually deliver. For a historical perspective, see all the TVs that could do 720p and 1080i but not 1080p, and thus weren't actually HDTV/Blu-ray compatible at all; this is the same thing all over again.
Technology advances too quickly for a TV to ever stay smart. I'd pay more for a dumb TV that instead has two more HDMI ports, with the smart TV SoC ripped out along with the (extremely useless) cable tuner.
Side note: the cable tuner is useless on cable, because all cable companies are moving to encrypting all channels, or to IPTV platforms entirely, thus always requiring a box (CableCARD is a dead standard and was a mistake, just as smart TV is a mistake and should be just as dead). The cable tuner is always useless on satellite. And if you're trying to do OTA, many TVs, even ones produced today, do not have sufficient sensitivity to tune into channels for many reasons (distance, obscured line of sight, reflections); even with a large enough antenna, you'll get better performance (and sometimes the ONLY performance) out of a dedicated OTA box.
As for which OTA box: if you need signal performance for extreme OTA situations, Channel Master's tuners kind of suck as set-top boxes, but they are often the only ones that can coherently decode a signal.
Heh. That's what happened to me. Got a "smart" Vizio TV 4 years ago. It had a Skype + camera option. One night I saw the camera light come on when I wasn't using it. Quickly yanked that out. Then at some point the Amazon "apps" stopped working on it. So that's when I disabled its networking, plugged in a Fire Stick, and now just use it as a dumb large screen for Netflix.
Actually, that is an interesting niche to play in: sell a high-quality TV display, but with lots of USB and power ports in the back so people can plug in their favorite streaming devices. It would be lighter and thinner as well. More power efficient.
You could even make a play on "this is secure, unlike other such devices", and of course "you save by not paying for extra crap you don't want to use". Advertise that to a few subreddits that do XBMC/Kodi development, and to cord cutters. Maybe get Costco on board as well.
All possible things are counted as a TV receiver: any smartphone, tablet, or PC (including every kind of *book), and of course all actual TVs. The "reasoning": you can still watch stuff online.
And there might be a sub-fee for radio-only reception, in case you really don't possess any video-displaying device. Again, any of those devices (receiving over the internet, for example) counts.
I don't pay them where I live currently (Switzerland). The private company with the government mandate to extract the fees (Billag) is of questionable legality in imposing any fees (which can go up to 5000 CHF, roughly 5000 USD), and they need to physically inspect your apartment before taking any action. Of course, there is no force in the universe that would make me let them in. Reason: I haven't watched any broadcast TV station (or any other) for the last 5 years. Plus we already pay our internet provider quite a hefty sum for TV channels (part of the package, unused). Even if it's not illegal, it's immoral to extract those fees.
You could buy a short-throw projector instead. These are usually Internet-free and have many inputs for discrete media sources, which may or may not have Internet.
Maybe on your market. In Germany, unencrypted DVB-C is alive and kicking, especially for public broadcasting which everybody has to pay anyway. (I think private broadcasters have switched to an encrypted model or are in the process, but I don't care about them because German private TV is utter crap.)
I figure that if basic infrastructure stuff, like automatically reconnecting to WiFi, doesn't even work, what chance do we have of harder things like security working properly?
That said, when they do work they work a lot better than their price would suggest.
Then I came to my senses and reminded myself that if these were truly important they'd be hardwired. The cameras are capable, I just have no desire to run the cables.
At the same time, the software is usually just a mix of bad to meh. A lot of companies have retrofitted themselves as SaaS Industrial IoT outfits after having spent 15 years in industrial automation doing something that sort of looks like IoT but isn't. You cannot pivot so easily: tech, culture, operations, etc. I've seen factory control-plane agents (the stuff used to make the touch screens in factories that workers monitor to see stats or operate machines) get retrofitted into massive data-collection tools that are completely arcane to modify or extend.
I don't have anything IoT in my home. Why spend hundreds or thousands to replace my thermostat with an app, make all of my lights remote-controlled, motorize my blinds, and what else is there?
It's just spending a lot of money and time for hardly any perceived advantage. Combine this with the fact the average consumer is very cash-strapped in these times.
The fact that getting something this simple this set up takes technical chops and a ton of cash is a massive failure on the part of the companies making these devices.
Is that true though? Isn't the massive influx of cash into these proving the opposite?
I see this sentiment a lot. "I don't see the point of smart things, so nobody must want them!" But that just isn't true. There's a lot of benefits to be had, and a lot of people want them.
1) is still a mess. Things that are hooked to line power ought to talk over the power line. There are lots of standards for that, from the old X10 (1980s, low bandwidth, poor noise immunity, no security), bidirectional X10, Echelon (1990s, low bandwidth, very good noise immunity, some security), HomePlug (2000s, high bandwidth, some security), plus some proprietary systems. X10 refuses to die, and HomePlug's bandwidth is overkill for lighting. Echelon mostly gave up on the home and went on to become the standard for subway and rail automation (signs, lighting, HVAC, doors, etc.) because of the good noise immunity. They're working on a new approach to lighting, where LED lights run on 48VDC. This is a bit radical for home automation.
If you have any of these, it's useful to have a whole-house RF filter where power enters the house or apartment to isolate your network segment from everybody else on the same pole transformer. These are cheap ($6 or so) but have to be installed by an electrician. This is a big obstacle to power line networking.
Then there are the RF-based networks. WiFi, Zigbee, etc. These have range limitations and may not work through walls. Despite all the headaches of RF networking, most of the IoT vendors are going that way because they get to dump the range problem on the user.
I considered X10 a few years back, and when this issue came up I scrapped that idea.
I think it would work better if these controllers used out of band communication, but that would require running an extra set of wires.
I'm a recent and quite happy Hue owner. I decided to go for the expensive good stuff because I learned the hard way that cheap consumer electronics is always shitty and not worth the hassle; all those trivial annoyances add up to your stress level. Also, I know a bit too much about cheap Chinese LED bulb manufacturing quality, and I don't trust them not to burn my house down.
Like I said in the comment you responded to: on its own, it's pretty borderline whether the Echo is useful at all, and it can't do many things your phone assistant can.
My point being: just because your combination works with your Echo and not your phone assistant doesn't mean that the Echo is a device most people should buy.
If a device works with Samsung SmartThings, Philips Hue, Belkin Wemo, Insteon, Lutron, Wink, Nest, HomeSeer, Almond, LIFX, GE Link, TCP, iHome, Leviton, Honeywell Lyric or TotalConnect, Ecobee, Haiku, Keen, Garageio, Z-wave or Zigbee (via a hub)... then it also works with Alexa, out of the box, because Amazon has relationships with all of them already.
You just say "Alexa, discover my devices" and she finds whatever you happen to own on your network on her own, instantly knows their names (that you gave them if applicable) and capabilities, and can address them through natural language.
> This is something Amazon did; "it doesn't say much about Echo itself" is completely mistaken.
You're misunderstanding what I mean by that sentence, which I guess could be clearer. What I meant is; it doesn't say anything about Echo's independent functionality apart from any other devices. Does that help clarify?
My point is that the ability to integrate with other devices is great, but it's only one thing that a digital assistant has to do. Echo is missing other things that assistants on phones can do, not only because the phone has hardware capabilities the Echo does not, but also because our phones are so integrated into our daily lives.
Anyway this discussion has really gone off the rails. My original point was this article is clearly just an Echo ad and it's a bad one at that.
EDIT: how did IOT end up there?
Evocative term ("persphinctery"), what does it mean? A web search mostly references mondaynote.
So "persphinctery" would be a synonym of "excretory", were it actually a word.
"persphinctery", one word to capture the brokenness of waiting for updates the provider has no incentive to create.
I laughed because it's a very erudite sounding word with a very vulgar interpretation: "persphinctery" == "as they are shat out".
If it's Linux based, this just means that the SoC vendor hasn't kept the board support package (BSP) up to date.
These kinds of things are hardly ever mainlined, so it becomes difficult to update drivers, kernel versions, and then even software versions (think X11).