The downside is that at 30fps your bitrate is heavily limited, and thus so is your address space. This can be alleviated by geo-localization, or by combining IR beacons with BLE beacons: either use the BLE beacons to localize, or synchronize the BLE beacon broadcast with an IR pulse picked up on camera.
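To make the bitrate limit concrete, here is a back-of-envelope sketch. The numbers (one bit per frame, a one-second capture window, 10 bits of sync/error-correction overhead) are illustrative assumptions, not measurements from any real beacon system.

```javascript
// Back-of-envelope: how many distinct IR beacon IDs can a 30fps camera
// distinguish, assuming one bit per frame (beacon on/off) over a fixed
// capture window? All parameters here are illustrative assumptions.
function beaconAddressSpace(fps, windowSeconds, overheadBits) {
  // Raw bits captured during the window, minus sync/ECC overhead.
  const rawBits = Math.floor(fps * windowSeconds);
  const payloadBits = Math.max(0, rawBits - overheadBits);
  return 2 ** payloadBits; // number of distinguishable beacon IDs
}

// A 1-second window at 30fps with 10 bits of overhead leaves
// 20 payload bits, i.e. about a million distinct IDs.
console.log(beaconAddressSpace(30, 1, 10)); // 1048576
```

So even a modest capture window gives a usable ID space, which is why localization (narrowing which IDs can plausibly be nearby) stretches it much further.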
I would love to chat with you more. I'm a recent MIT alum and still in the Boston area.
How does your team try to combat the cost and effort required? Can you make this economical, and easy enough to use, that a beginner with technology (smartphones, computers, etc.) could use it?
The moment real products that make use of the Reality Editor are on the market, you will just power them up and point your phone at them. It should not be more complicated than that.
Assuming we'll have rooms, buildings, and blocks absolutely packed with low-bandwidth radios like this: what radio standard is most promising at the moment?
I've seen some work out of Stanford (10.1109/VLSIC.2014.6858380) which looks interesting, but I'm not sure passive radios can harvest enough energy to do auth, which is a deal-breaker.
Apologies I haven't looked at the technical stuff yet (so may be an obvious question), but I was wondering:
Are the "APIs" (what an object can be programmed to do) for the devices/objects (e.g. the switches) accessible locally?
As in is there enough embedded within the QR-like code to understand the API (or at least connect to the device, and it can "tell me more") or will it require cloud-based interaction?
Example case: if I'm in a room I've never been to before, with no internet connection, and I see a switch, would I still be able to reprogram it then and there? Or do you see the (I)nternet part of IoT as necessary?
The Reality Editor does not need to know a fixed API, because the interface is basically a webpage that comes from the object itself. In that sense you could say the API is HTML5.
We have found a way to break abstract standards down into simple numbers:
You can read here why it is so relevant:
And here you can read why we will increasingly use the physical world instead of the touch screen:
The Reality Editor is a digital screwdriver. You only need it from time to time.
Most of the time you will operate physical things around you.
Minor beef, however. The IoT will never take over if it isn't easy, easy, easy to use, and this has that in spades.
This is something we continuously work on with the open hybrid development tools.
On the other side, the marker limits the visual shape of an object. But that limitation also provokes new shapes; it has some interesting properties.
Long story short: the technology is on its way, and the marker will disappear. ;-)
There is so much more that we want to implement and some of it would give answers for your situation.
I think the Reality Editor will grow incrementally, and every version will be a bit better than the one before. In a few years, when your car actually has this functionality, your scenario will be solved.
I had a talk at the last Solid Conference. Maybe that was the event you know me from?
However, the reason that this is interesting to people is that it makes device discovery easy, and effectively makes the node-red interface builder available as an AR thing.
Setting up a webpage to control a network of lights, monitors, garage doors, thermostats, cameras, etc. might seem simple to you and to most of HN, because many here are familiar with the hardware, the languages, the ISAs, the basic device networking principles and best practices that make these things work. But you are in a very small minority of the 300m people in the US and the 7bn people in the world. For everyone else, visually dragging and dropping will be a much easier way to visualize and set up networks.
This product is not for you - it's for the vastly larger market outside of computer scientists/embedded engineers.
When you can demonstrate this value prop to the mass consumer market, under the MIT brand no less, big companies like Audi will start to pay attention, because they see an avenue to market an increasingly heterogeneous product (the luxury sedan) to the rich, older folks who buy these things based on cool new features: automated avoidance, tying volume into acceleration, "Intelligent Drive", projected HUDs, etc.
As we see from the video, Audi has granted this researcher access to APIs that would be completely inaccessible otherwise. This is another key - API access. Could you control the features in your car through a webpage? No, because you don't have access to your windows or your stereo, unless you do some intensive and illegal hacking.
M2M and IoT have been confined mostly to the realm of industrial/auto applications so far. So once big companies start to see the consumer market opening up through a simpler, more intuitive interface, they might start opening up APIs on new machines and products.
Of course, allowing access to these APIs opens up a whole new can of safety worms...but that's going to be an increasingly large issue that we will have to deal with as we continue to integrate software and connectivity into new products.
I would agree though with the view that it's a bit convoluted as shown. Without even building out prototypes you might arrive at this as a V1 in an iterative thought experiment and quickly move on.
A slightly more user-friendly iteration might use image recognition to detect devices in your surroundings and build a database of them for use in a different, more suitable UI. If Glass had taken off, this data collection could happen seamlessly, with visual feedback, while you are just looking at stuff. RFID tech may be a good alternative to image recognition. Ownership (authorization, authentication) sounds like a real challenge in any case. To work with minimum friction, digital ownership might need to be assigned at the time of purchase, in which case you wouldn't need image recognition or RFID to build a database of your internet things.
Ok so then you wouldn't even need to connect your stuff together because there would be a database of stuff and everything would be able to self connect. You'd just be left with managing preferences similar to phone notifications. "Your chair would like to interface with your lights; Allow(Y/n)?".
Transferring ownership.. Lots of problems to work out.
I think the big thing for AR is hands-free/HUD devices. There are niche use cases for phones, but they're described as CV problems rather than AR.
It's a commercial-like video to attract attention - but fair enough if it doesn't impress you. It's hard for me to judge UI without using it myself.
It's cool that they're at least trying to solve the problem of a better UI, because I believe that's a problem that exists. It reminds me of one of Bret Victor's talks.
People don't want to "rewire" their products, they don't want to hack on this kind of thing. Except for geeks and technical people, who can also handle normal apps and interfaces.
The analogy with the Internet breaks down because the Internet is mainly successful because it connects people. That's what people care about, other people. The telephone was a success because it lets you talk to other people.
Unless this thing helps people deal with other people, it's too complex and uninteresting for the masses and probably too simple and tedious for technical people.
The most successful webpages are those where people are authors.
People want tools that empower them to engage with their environment.
To say people don't want to "rewire" their products is like saying you don't want to connect your electric guitar to the amplifier with a cable.
Exactly. But not because they enjoy tweaking some website settings or enjoy the control over the layout of a page. No, the reason is connection with other people. They write a blog so that they get feedback from people. They post pictures to Instagram and Facebook to get likes and show off.
Connecting the electric guitar and repairing things with screwdrivers is seen by most people as a chore. Sure, techie, geeky tinkerers enjoy messing around with things; they're the same ones who enjoy configuring Linux, for example.
I just haven't seen a single use case that would make sense in an everyday setting. Having a knob that controls the lamp? Well, lamps already have that, without being radio controlled.
If a car dashboard is designed well, then there's no need to remap the buttons. Very few people like to customize their stuff that much. If there is a trend in software (and hardware) it's that they remove options. We have less features and customizations than 10 years ago. Developers realized that people just mess their settings up and they call support. Also, the more options there are, the more bugs there will be. So now instead of supporting a myriad of settings, they just go with a default and leave it at that. Why don't we have the option of vertical browser tab arrangement in Chrome? Well, because they decided it's too complex and unnecessary for most users.
People just want to get their things done. They want to get business things done with other business people and they want to have fun with their friends.
The success of the Internet is all about human communication, sharing experiences (mainly through photos), and gossip.
Imagine building becomes so easy that it is more of an "amazing" experience for you than a long-lasting "geeky" frustration.
I'm an AR developer and it's been disheartening over the past 12 months to see how little people actually want to do anything other than "like" things. Even posting comments seems to be a big leap for most people.
People like things to just work, and people also like to be able to customize things. I could see this concept being used as a superpowered version of IFTTT if it gets baked into products. People can tinker with building things, but there are also premade recipes that you can load and use.
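An IFTTT-style recipe can be nothing more than a trigger/action pair stored as data, which is what makes "premade recipes" loadable by non-technical users. A minimal sketch of that idea follows; all the device and event names are hypothetical examples, not part of the Reality Editor or IFTTT.

```javascript
// Minimal IFTTT-style recipe engine: recipes are plain data, so they can
// be shipped premade and loaded, while tinkerers can author new ones.
// All device/event names are hypothetical.
const recipes = [
  { trigger: "doorbell.pressed", action: "lights.flash" },
  { trigger: "sunset",           action: "lights.on" },
];

// Map from action name to the code that performs it.
const actions = {
  "lights.flash": (log) => log.push("flashing lights"),
  "lights.on":    (log) => log.push("lights on"),
};

// Fire an event and run every matching recipe.
function fireEvent(eventName, log = []) {
  for (const r of recipes) {
    if (r.trigger === eventName) actions[r.action](log);
  }
  return log;
}

console.log(fireEvent("doorbell.pressed")); // one matching recipe fires
```

The point of keeping recipes as data rather than code is exactly the "load and use" workflow: sharing a recipe is sharing a small JSON object, not a program.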
Being anti-IoT seems like sour grapes; thinking "I don't want this in my life so no one should have it" is completely irrational.
Awesome, maybe, but why? This is IoT in a nutshell. Let's add connectivity where there is articulable value, and where it actually makes the core task simpler. Usually, adding a remote interface makes things more complex even as it adds functionality. So have a good reason.
If you can run nodejs in your object, you will be able to make it into a Hybrid Object that the Reality Editor can talk to.
The inspiration is the World Wide Web.
It was created with a clear simple and open foundation.
You can still read the first webpages ever made.
You should be able to still interact with the first Hybrid Objects ever made.
Maybe I'm too old for this. There are millions of important things to be solved with computers and the internet, yet big names like Google and MIT push this typical high-cost, low-return technology. For what? To sell chips and standards to factories?
I agree that the examples we are usually given are not massively impressive, but neither is the ability to post info, or share your videos online.
I think IoT has been oversold in many ways for the short-term, exactly as you are pointing out. But in the long-term, I think it will be very impressive.
For example, my fridge knowing that I'm out of milk and reminding me to buy some. Sure, that's a pretty useless case. But what happens when we aggregate that over a large population and then we connect that with the grocers and dairies. Will we have JIT milk processing and production, saving significant energy and resources?
We just went from a very big 'so what' to potentially having a significant impact on the environment.
What troubles me more is not IoT but the storytelling. For years, people from this segment have put out boring, meaningless videos, talks, and articles that usually feature microwaves, doors, and windows. Steve Jobs sold the iPhone with three functions in a matter of hours. Yet after billions spent and thousands of smart minds working for years, we still have this?
I'm not sure IoT has potential in the long term. The most important thing about the internet is that it connects people, minds, creativity, imagination, and consumers' pockets. I think as an industry we have many more important and meaningful things to do, directly with people. I'm willing to bet that a well-thought-out, carefully made video on YouTube will make more positive impact on the environment than the whole IoT segment combined today. If so, why bother?
If IoT really has potential, at this point it's very badly executed.
I recognize that this is an opinion and I don't have any proof, but I've spent some time in/around the media lab and I just thought I'd share my impression of the place.
"What will be the digital equivalent of a screwdriver in that future? That's what the Reality Editor is! A powerful tool."
No comments on this one...
What do you think he thinks the word thoughtless means? What do YOU think the word thoughtless means?
Did you seriously register an account just to tell him that you don't think he thinks the word thoughtless means what you think it means?
But companies like Intel are doing it because they want to sell the servers that will eventually back all of this stuff.
Exposing IoT functions as services seems brilliant to me (has it been discussed before?). What will we end up with? An overly redundant mess of IoT functions (e.g., how many devices have clocks)? Or maybe an IoT server in each home/office/car/phone (or, more likely, in the cloud) that provides a central source for basic functions like timers, monitors, etc.?
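The "central source for basic functions" idea can be sketched as a small service registry: devices look functions up by name from a local hub instead of each carrying their own. This is only a thought experiment in code; the service names and registry shape are hypothetical.

```javascript
// Sketch of "IoT functions as services": one hub-level registry, so the
// home has one clock and one timer service instead of one per device.
// Service names and shapes here are hypothetical.
const services = new Map();

function registerService(name, impl) {
  services.set(name, impl);
}

function getService(name) {
  if (!services.has(name)) throw new Error(`no such service: ${name}`);
  return services.get(name);
}

// One shared clock and timer for the whole home.
registerService("clock", { now: () => Date.now() });
registerService("timer", { after: (ms, fn) => setTimeout(fn, ms) });

// A device that needs the time just asks the hub:
const t = getService("clock").now();
```

The redundancy question then becomes a lookup-vs-local tradeoff: a shared clock removes duplication, but every device now depends on the hub being reachable.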
You can read more about it here:
The interfaces themselves are web pages.
You can read a bit more about this problem here:
I'm trying to think of a complex physical interface: not one where the abstractions are buttons rather than a touch screen, but one without abstractions (if I understand the meaning on the webpage). Most man-made physical interfaces are simple, probably for good reasons. A large sailing vessel operated manually is the best example I can think of, with its many sails, ropes, the rudder, etc. On one hand, my impression is that such systems are easier to grasp through their physical interface; on the other, an abstract UI could hide a lot of the complexity. For example, it could present only the part of the interface I need for a particular task, hiding other parts and underlying mechanisms; it could also show helpful things a physical interface would not, such as something blocked from view, or information like the stress on a mast or the wind speed.
But I'm just thinking about it now; I assume someone has researched and thought about this issue ...
Most likely, the switch controls the lights, which is perfect when you want to turn them off and go to sleep.
But maybe when you wake up, you'd actually want it to open the windows, or start the coffee maker.
And if someone else comes into your room, maybe they'd prefer it if it started the audio system.
If we could manipulate all the physical objects around us with software, can we produce an environment which we can manipulate not just physically, but also electronically, to map our intents and needs as we have them?
And if we could, how would we go about "reprogramming" the world around us?
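The switch scenario above can be sketched in a few lines: the physical switch stays the same, and the "wiring" that decides what it controls is just data that software can rewrite. The device names are hypothetical placeholders, not any real API.

```javascript
// Sketch of the remappable switch described above. The hardware sends one
// event; what happens is decided by a software mapping that anything
// (a Reality Editor, a schedule, a guest profile) could change.
const devices = {
  lights:      (log) => log.push("lights toggled"),
  windows:     (log) => log.push("windows opened"),
  coffeeMaker: (log) => log.push("coffee started"),
  audio:       (log) => log.push("audio playing"),
};

// The "wire" from switch to device is just a string of data.
let switchTarget = "lights";

function pressSwitch(log = []) {
  devices[switchTarget](log);
  return log;
}

pressSwitch();                 // evening: controls the lights
switchTarget = "coffeeMaker";  // morning remap
pressSwitch();                 // now the same switch starts the coffee
```

"Reprogramming the world" in this framing is just editing that mapping, which is why a visual drag-and-drop tool can do it without anyone touching code or hardware.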
It's the ultimate vision of AR and "IOT".
Right idea, but its implementation was very, very clunky.
It was an earlier project by the same researcher.
Integration with Nest-style devices seems like an obvious first choice for a consumer product.
On maintenance: It seems like there'd be a bit of difficulty around managing a nest of "wires" between devices that are only visible under a 5" smartphone glass. So, you'd want something head-mounted (glasses), preferably with zoom. Microsoft HoloLens? That said, you'd definitely need an abstract schematic view to manage a dense network.
It would be a really curious development if, in addition to plumbers and electricians, new construction required "reality editors" to wire up behaviors between such interfaces. "This won't do; your reality isn't up to code."
Final thought: lose the digital tattoos and integrate Bluetooth/ZigBee beacons for device advertisement. There's no way those patterns are going to become fashionable. Or use something else that's easy to print but outside our visual range.
My main beef? Frankly, it's those stickers that the camera needs to recognize the device. Ugly, ugly, ugly.
I feel old-fashioned saying this, but I think I prefer my objects stateless.
Also, HRQR just looks awesome.
The problem with all the other standards is that they force you to think in terms of complex abstractions.