Hacker News
Reality Editor – MIT Media Lab (realityeditor.org)
201 points by piyushmakhija on Dec 13, 2015 | 97 comments

Hi, it's Valentin, the guy in the video. Let me know if you have any questions. :-)

Hi Valentin, Nice work here. I've played around with a very similar thing in my apartment before, but instead of using visible codes, I thought about various ways to make them invisible to the naked eye. One relatively simple way is using IR LED beacons. Cell phone cameras often pick up near-IR (in fact, the crappier the cell phone, the better). They can continually flash codes that can be read directly by the phone over 30fps video.

The downside is that, being 30fps, your bitrate is heavily limited, thus your address space is limited, but this can be alleviated by geo-localization or combining IR beacons with BLE beacons, either by using BLE beacons to localize, or by synchronizing the BLE beacon broadcast with an IR pulse picked up on camera.
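The beacon scheme described above can be sketched in a few lines. This is a minimal decoding-side sketch, assuming one bit per video frame, a fixed 4-bit preamble, and 8-bit beacon IDs; all of those parameters are my own illustrative assumptions, not something the commenter specified, and a real system would need synchronization, error correction, and exposure handling.

```python
# Sketch: decoding an IR beacon ID from per-frame brightness samples.
# One bit per frame at 30 fps; preamble and ID width are hypothetical.

PREAMBLE = [1, 0, 1, 1]  # assumed start-of-ID marker
ID_BITS = 8              # 8-bit IDs -> only 256 addresses per room

def frames_to_bits(brightness, threshold=128):
    """Threshold one brightness sample per video frame into a bit."""
    return [1 if b > threshold else 0 for b in brightness]

def decode_beacon_id(bits):
    """Find the preamble and read the ID bits that follow it."""
    n = len(PREAMBLE)
    for i in range(len(bits) - n - ID_BITS + 1):
        if bits[i:i + n] == PREAMBLE:
            id_bits = bits[i + n:i + n + ID_BITS]
            return sum(bit << (ID_BITS - 1 - k) for k, bit in enumerate(id_bits))
    return None  # no beacon found in this window

# At 30 fps, 4 preamble + 8 ID bits take 0.4 s per broadcast, which is
# why the address space is so limited without BLE or geo assistance.
frames = [0, 0, 200, 10, 190, 210, 0, 200, 0, 0, 0, 0, 200, 220, 0, 0]
beacon = decode_beacon_id(frames_to_bits(frames))
```

The 0.4 s per broadcast is exactly the bitrate ceiling the comment points out; pairing the preamble with a synchronized BLE advertisement would let the ID live in the BLE packet instead.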

I would love to chat with you more. I'm a recent MIT alum and still in the Boston area.

This is very interesting, and great work, but I feel the problem being solved doesn't benefit people enough that they'll be willing to spend a lot of money or effort configuring the things around them. The interface is easy for most people, but from doing home automation myself, I know it takes a higher skill set to configure physical buttons or input devices to do what you want on another physical device.

How does your team address the cost and the effort required? Can you make this economical and easy enough to use that a person who is a beginner with technology (smartphones, computers, etc.) can use it?

With all due respect, I think your comment is a bit misguided. The applied nature of this work might lead some to believe this is a developed product, but ultimately this is research. It's a unique idea and one that may inspire a product that is economical/easy to use. In a few years that is.

You can check out openhybrid.org for some of these questions. We do research in the Media Lab, not product development. The Reality Editor is a projection into the future. But have a look at the cost of SoCs; it's dropping so rapidly that in two years your question will probably be answered.

The moment real products that make use of the Reality Editor are on the market, you will power them up and just point your phone at them. It shouldn't be more complicated than that.

The openhybrid.org example of sharing a timer between a toaster and a food processor _using a phone, which has its own built-in timer_ is non-compelling bordering on comical. Do you have more realistic examples that make meaningful use of the components of a physical object?

I would imagine this could be a good fit for roadies, who have to set up and tear down audio/lighting/video installations on a daily basis.

I was wondering: clearly not everything can have its own WiFi radio; it'd be expensive and pollute the spectrum.

Assuming we'll have rooms, buildings, and blocks absolutely packed with low-bandwidth radios like this; what radio standard is most promising at the moment?

I've seen some work out of Stanford (10.1109/VLSIC.2014.6858380) which looks interesting, but I'm not sure passive radios can harvest enough energy to do auth, which is a deal-breaker.

I think the new Bluetooth specifications with IPv6 and mesh networking are interesting. But the Open Hybrid platform builds on the most common ground. Even if WiFi pollutes the spectrum, there are more and more SoCs with WiFi on board pushing into the IoT market; maybe the WiFi consortium will take care of the pollution with a low-energy standard update.

Hi Valentin, great video!

Apologies I haven't looked at the technical stuff yet (so may be an obvious question), but I was wondering:

Are the "API"s (what an object can be programmed to do) for the devices/objects (eg the switches) accessible locally?

As in is there enough embedded within the QR-like code to understand the API (or at least connect to the device, and it can "tell me more") or will it require cloud-based interaction?

Example case: if I'm in a room I've never been in before, with no internet connection, and see a switch, would I still be able to reprogram it then and there? Or do you see the (I)nternet of IoT as necessary?

Yes it would. The Marker (for visually identifying the object) and all interfaces are stored in the object itself. When you start the Reality Editor, it automatically finds all objects on the same network.

The Reality Editor does not need to know a fixed API, because the interface is basically a webpage that comes from the object itself. In that sense you could say the API is HTML5.
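The "API is HTML5" idea can be sketched as an object running a tiny web server that hands the editor its own interface page. This is a hedged illustration only: the path, markup, and handler names here are my assumptions, not the actual Open Hybrid protocol (the openhybrid.org platform itself is Node.js-based).

```python
# Sketch: an object serves its own HTML5 interface, so a generic
# editor needs no device-specific API, just the object's address.
from http.server import BaseHTTPRequestHandler, HTTPServer

INTERFACE_HTML = b"""<!DOCTYPE html>
<html><body>
  <h1>Desk Lamp</h1>
  <input type="range" id="brightness" min="0" max="100">
</body></html>"""

class ObjectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/interface":  # hypothetical endpoint name
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(INTERFACE_HTML)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve(port=8080):
    # Everything the editor needs (markup, controls, behavior)
    # travels with the page itself.
    HTTPServer(("", port), ObjectHandler).serve_forever()
```

A new device ships a new page, and every existing editor can render it, the same way a browser renders a site it has never seen before.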

We have found a way to break down abstract standards into simple numbers: http://openhybrid.org/how-to-connect-everything.html

You can read here why it is so relevant: http://openhybrid.org/direct-mapping.html

And here you can read why we will increasingly use the physical world instead of the touch screen: http://openhybrid.org/learn%2c-setup%2c-operate.html

The Reality Editor is a digital screwdriver. You only need it from time to time. Most of the time you will operate physical things around you.

Hey Valentin. I love this concept. Do you have any notions to get rid of The Marker? I can see a lot of potential applications for this where the Marker would possibly interfere with a product's visual identity.

Minor beef, however. The IoT will never take over if it isn't easy, easy, easy to use, and this has that in spades.

Yes, this is a big thing. Our goal is to make the creation of objects so easy that it can become a simple part of the workflow of a product + web designer. A very, very low entry barrier.

This is something we continuously work on with the open hybrid development tools.

On the other hand, the marker limits the visual shape of an object. But limitations provoke new shapes as well; it has some interesting properties.

Long story short. Technology is on its way and the marker will disappear. ;-)

Why did you choose an HTML5 API and the relatively heavy HTTP(S) protocol instead of the more lightweight MQTT or CoAP? And what about security?

As most people have pointed out, this doesn't seem like anything new. If all of those devices are IoT, you can have an app that controls them. It seems pretty pointless to have to use your camera and drag between objects; you could achieve this much more easily with icons in an app. Do you personally think this is a good product?

How would you connect two objects in physical space with your app-centric interface? This is not a trivial task; it's very complex and has a lot of abstract steps, steps you would not be able to explain to an 8-year-old child in 5 minutes. We put the Reality Editor into the hands of an 8-year-old and did not even need to explain how to use it. He just ran with it.

Try to make him do it while watching a 5-year-old and an 8-year-old as they get into a car. That's a more realistic scenario for when you need a knob to work as expected.

That is a good scenario. What you see in the video and on Open Hybrid is really as much as we can put into code at this point, and I believe that it is a very good beginning.

There is so much more that we want to implement and some of it would give answers for your situation.

I think the Reality Editor will grow incrementally, and every version will be a bit better than the version before. In some years, when your car actually has this functionality, your scenario will be solved.

It's useful for the same reasons that a 4yo can use the touch screen on an iPad but can't use the track pad on a computer: It removes one additional step of indirection and thereby reduces cognitive load.

No questions, but wanted to show my support. It looks like a great way to bring programming to the masses.

It was a pleasure meeting you at the TechCrunch Disrupt conference. I was one of the judges for your demo!

I was never at TechCrunch Disrupt, but I watched the Silicon Valley show. :-P

I gave a talk at the last Solid Conference. Maybe that was the event you know me from?

Someone please enlighten me, maybe I'm missing something, but I don't get why this augmented reality interface is better than having a centralized web page where I can manage these devices using the good old textual interface. It feels like a very convoluted Rube Goldberg machine approach to me. For example, the guy says something like "the car knows because I connected my chair to my car." Imagining how he would have done this makes me doubt the convenience. I can imagine he would have started dragging when he left the office and kept his thumb on the screen until he got to the car and finally connected it to the knob. He could have just opened a web page and connected these two devices without having to go through all this trouble.

I actually agree with you, I think that this is really interesting, but I don't really think this is much more useful than: http://nodered.org/

However, the reason that this is interesting to people is that it makes device discovery easy, and effectively makes the node-red interface builder available as an AR thing.

This is important because it represents the next step in allowing regular, non-technical (but increasingly tech-familiar) consumers to set up and maintain their own networks and systems.

Setting up a webpage to control a network of lights, monitors, garage doors, thermostats, cameras, etc. might seem simple to you and to most of HN, because many here are familiar with the hardware, the languages, the ISAs, and the basic device-networking principles and best practices that make these things work. You are in a very small minority of the 300m people in the US and the 7bn people in the world. For everyone else, visually dragging and dropping will be a much easier way to visualize and set up networks.

This product is not for you - it's for the vastly larger market outside of computer scientists/embedded engineers.

When you can demonstrate this value prop to the mass consumer market, under the MIT brand no less, big companies like Audi will start to pay attention, because they see an avenue to market an increasingly heterogeneous product (the luxury sedan) to the rich, older folks who buy these things based on cool new features: automated avoidance, tying volume into acceleration, "Intelligent Drive" [1], projected HUDs [2], etc.

As we see from the video, Audi has granted this researcher access to APIs that would be completely inaccessible otherwise. This is another key - API access. Could you control the features in your car through a webpage? No, because you don't have access to your windows or your stereo, unless you do some intensive and illegal hacking.

M2M and IoT have been contained mostly to the realm of industrial/auto applications so far. So once big companies start to see the consumer market opening up through a simpler, more intuitive interface, they might start opening up APIs to new machines and products.

Of course, allowing access to these APIs opens up a whole new can of safety worms...but that's going to be an increasingly large issue that we will have to deal with as we continue to integrate software and connectivity into new products.

[1] https://www.mbusa.com/mercedes/technology/videos/detail/titl...

[2] https://www.youtube.com/watch?v=sg3mmfVILRA

I think you're assuming a lot of things about mainstream users. If AR were that intuitive, it would already be mainstream. Many apps have tried to port the existing good old text-based interface into AR, but none has gained traction yet. You need to remember that mainstream users do know how to use a computer and do know how to access websites and fill in forms. But they do not know how this geeky augmented-reality technology works. Also, my point here was not criticizing AR itself; I'm just saying his approach makes things needlessly convoluted for the sake of making it look cooler. Read my comment about how someone would drag and drop his chair in his office to his car.

I believe the research has value in its own right though.

I would agree though with the view that it's a bit convoluted as shown. Without even building out prototypes you might arrive at this as a V1 in an iterative thought experiment and quickly move on.

A slightly more user-friendly iteration might use image recognition to detect devices in your surroundings and build a database of them for use in a different, more suitable UI. If Glass took off, this data collection could happen seamlessly, with visual feedback, while you are just looking at stuff. RFID tech may be a good alternative to image recognition. Ownership (authorization, authentication) sounds like a real challenge in any case. To work with minimum friction, digital ownership might need to be assigned at the time of purchase... in which case you wouldn't need image recognition OR RFID to build a database of your internet things.

Ok so then you wouldn't even need to connect your stuff together because there would be a database of stuff and everything would be able to self connect. You'd just be left with managing preferences similar to phone notifications. "Your chair would like to interface with your lights; Allow(Y/n)?".

Transferring ownership... lots of problems to work out.

I think the AR UI in the video is pretty cool. The trouble is, AR itself hasn't taken off. My theory is people really dislike reality. Reality is ugly, it's dirty, and when people operate upon information, they(we) do want abstraction over reality, want simplicity over complexity, desire focus over distractions.

What AR technology exists, and how good is it really?

I think the big thing for AR is hands-free/HUD devices. There are niche use cases for phones, but they're described as CV problems rather than AR.

It's so you can use the 'codes' presented on the objects and create these connections live. There are fewer steps, and it's much more fluid than taking a picture of this code, saving that object, clicking on a features dropdown, and selecting another object's feature you want this one attached to.

I would argue this was on the wrong track the moment it started out with QR code. "Why don't we bring two geekiest technologies--QR code and Augmented Reality--together? Sure! That will make it mainstream!"

The challenge of innovation is to see the possibilities. That's what makes it hard; anyone can see potential flaws - if that's what it took everyone would do it. I'd much prefer that we focus on the former.

What new possibility does this bring to the table? Connected devices concept has been around like forever. Sure it looks cool (like flying cars, or those impractical UIs from minority report) but how does this make things more convenient? I just was put off by how this guy makes it sound like he came up with all these concepts. "Imagine you could ... " ==> all the things he mentioned with this phrase are nothing new. The only thing that's novel in the video is the UI and I was pointing out how he talks as if it's his new UI that makes all these imaginations possible, but in fact in my opinion it's actually worse than existing solutions (just looks cooler).

The UI is the pitch - technology people can't use is useless to people (tautology).

It's a commercial-like video to attract attention - but fair enough if it doesn't impress you. It's hard for me to judge UI without using it myself.

It's cool that they're at least trying to solve the problem of a better UI because I believe that's a problem that exists. It reminds me of one of Brett Victor's talks.

I don't see what value that adds to the discussion; why do I care?

Most people don't want to reprogram stuff, they don't want to customize stuff. They want things to "just work", they don't want to choose which button controls the car windows and which knob controls the bass. This is the job of the designer. Most people want a finished product.

People don't want to "rewire" their products, they don't want to hack on this kind of thing. Except for geeks and technical people, who can also handle normal apps and interfaces.

The analogy with the Internet breaks down because the Internet is mainly successful because it connects people. That's what people care about, other people. The telephone was a success because it lets you talk to other people.

Unless this thing helps people deal with other people, it's too complex and uninteresting for the masses and probably too simple and tedious for technical people.

I disagree! The internet is successful because authoring and publishing became as easy as consuming. The first web browser was an editor and a browser at the same time.

The most successful webpages are those where people are authors. People want tools that empower them to engage with their environment.

To say people don't want to "rewire" their products is like saying that you don't want to connect your electric guitar to the amplifier with a cable.

> The most successful webpages are those where people are authors.

Exactly. But not because they enjoy tweaking some website settings or enjoy the control over the layout of a page. No, the reason is connection with other people. They write a blog so that they get feedback from people. They post pictures to Instagram and Facebook to get likes and show off.

Connecting the electric guitar and repairing things with screwdrivers is seen by most people as a chore. Sure, techie, geeky tinkerers enjoy messing around with things, the same ones who enjoy configuring linux, for example.

I just haven't seen a single use case that would make sense in an everyday setting. Having a knob that controls the lamp? Well, lamps already have that, without being radio controlled.

If a car dashboard is designed well, then there's no need to remap the buttons. Very few people like to customize their stuff that much. If there is a trend in software (and hardware) it's that they remove options. We have less features and customizations than 10 years ago. Developers realized that people just mess their settings up and they call support. Also, the more options there are, the more bugs there will be. So now instead of supporting a myriad of settings, they just go with a default and leave it at that. Why don't we have the option of vertical browser tab arrangement in Chrome? Well, because they decided it's too complex and unnecessary for most users.

People just want to get their things done. They want to get business things done with other business people and they want to have fun with their friends.

The success of the Internet is all about human communication, sharing experiences (mainly through photos), and gossip.

I would say that there are a ton of people in the world who love to build things, because that's how we got to the point where we are. This is the sole reason the world around you exists.

Imagine building becomes so easy that it is more of an "amazing" experience for you than a "geeky", long-lasting frustration.

I think the original poster is trying to say that 99% of people aren't builders and don't even get to the point of geeky frustration - and I have to say I agree.

I'm an AR developer and it's been disheartening over the past 12 months to see how little people actually want to do anything other than "like" things. Even posting comments seems to be a big leap for most people.

I can definitely agree with both sides of this argument.

People like things to just work, and people also like to be able to customize things. I could see this concept being used as a superpowered version of IFTTT if it gets baked into products. People can tinker with building things, but there are also premade recipes that you can load and use.

I don't think publishing is as easy as consuming. Have you heard of the 1% rule [1]? Out of 100 people, approximately only one is willing to create anything. People are generally lazy; they want to consume content rather than publish it. Therefore, non-technical people would probably not be happy to rewire things they're used to.

1: https://en.wikipedia.org/wiki/1%25_rule_%28Internet_culture%...

Seems like a lot of people aren't really convinced of the value of this kind of tech (and maybe of IoT in general), and I'll admit that I'm a little skeptical of parts of it as well. I don't really see the value in linking devices to each other (eg I don't need my shower to automatically turn on after my alarm goes off), and I don't need to be able to read HN on my fridge, but I do think there's a lot of value in making the smart phone a remote control for the physical world. I think it'd be awesome if none of my devices (thermostat, TV, TV peripherals, lights, door lock, sound system, etc) needed their own physical interfaces but I could just control them all with my phone. My phone is already my remote control for a bunch of other things, like transportation (Uber), food (Seamless), money (Venmo), shipping stuff (Shyp), love (Tinder - just kidding?), etc, so why not have it control my physical devices too?

Agreed. I really don't understand people who are against this sort of research. A light switch will still work as a light switch does now if you don't program it. There's another layer of complexity in the product design to go wrong, and add cost, but so long as when things break they still work in "dumb" mode and the cost added is minimal then a smart switch that you can program is a win for those people who want it and has practically no effect on those who don't.

Being anti-IoT seems like sour grapes; thinking "I don't want this in my life so no one should have it" is completely irrational.

I agree. This is just research, and not a market ready product. And research is really necessary in IoT, not just to figure out what is useful and what isn't, but also to figure out security and privacy concerns with everything being connected.

> I think it'd be awesome if none of my devices (thermostat, TV, TV peripherals, lights, door lock, sound system, etc) needed their own physical interfaces but I could just control them all with my phone.

Awesome, maybe, but why? This is IoT in a nutshell. Let's add connectivity where there is articulable value, and where it actually makes the core task simpler. Usually, adding a remote interface makes things more complex even as it adds functionality. So have a good reason.

The idea that everything can be integrated via AR is fantastic... it's the next step from IFTTT. The challenge is replacing everything you have with IoT versions that can be connected to these apps, and hoping those things won't go obsolete in a couple of years. Technology is moving so fast that not everyone can catch up, and thus there are no standards.

We designed the platform behind the Reality Editor in such a way that it can adapt. You can read more about it here: http://openhybrid.org/how-to-connect-everything.html

If you can run Node.js in your object, you will be able to make it into a Hybrid Object the Reality Editor can talk to.

The inspiration is the World Wide Web. It was created with a clear, simple, and open foundation. You can still read the first webpages ever made.

You should be able to still interact with the first Hybrid Objects ever made.

I'm not sold. The whole IoT thing fails miserably at storytelling. Until now, I have yet to read/hear/watch one compelling example. Who really cares if my chair is connected to the light so that I can get the best lighting angle? Who cares if my refrigerator is connected to my microwave to defrost meat according to my online menu?

Maybe I'm too old for this. There are millions of important things to be solved by computer and internet, yet big-names like Google and MIT push this typical high cost, low return technology, for what? to sell chips and standards to factories?

Your comment reminds me of how people first looked at blogging, twitter, youtube, and many other technologies.

I agree that the examples we are usually given are not massively impressive, but neither is the ability to post info, or share your videos online.

I think IoT has been oversold in many ways for the short-term, exactly as you are pointing out. But in the long-term, I think it will be very impressive.

For example, my fridge knowing that I'm out of milk and reminding me to buy some. Sure, that's a pretty useless case. But what happens when we aggregate that over a large population and then we connect that with the grocers and dairies. Will we have JIT milk processing and production, saving significant energy and resources?

We just went from a very big 'so what' to potentially having a significant impact on the environment.

Thank you for a serious reply.

What troubles me more is not IoT, but the storytelling. For years, people in this segment have been putting out boring, meaningless videos, talks, and articles, usually featuring microwaves, doors, and windows. Steve Jobs sold the iPhone with 3 functions in hours. Yet after billions spent and thousands of smart minds working for years, we still have this?

I'm not sure IoT has potential in the long term. The most important thing about the internet is that it connects people, minds, creativity, imagination, and consumers' pockets. I think as an industry we have many more important and meaningful works to do, directly with people. I'm willing to bet that a well-thought-out, carefully made video on YouTube will make more positive impact on the environment than the whole IoT segment combined today. If so, why bother?

If IoT really has potential, at this point it's very badly executed.

I think that like 3D printers, this stuff is useful to a fraction of the population, as a tool with which to prototype products for everyone else.

What happens is that you exchange both privacy and agency for no real return. It's a bad deal.

Can you expand/clarify what you are referring to? Is that in regard to my example of milk?

The thing to understand about the Media Lab is that it produces research, not products (although, the line is blurry, I give you that, and some groups at the Media Lab have spun off separate companies.) People at the Media Lab are working on figuring out what is possible, not necessarily what is useful. By inventing or exploring new techniques, it's possible for others to apply those techniques to "real" problems.

I recognize that this is an opinion and I don't have any proof, but I've spent some time in/around the media lab and I just thought I'd share my impression of the place.

I agree. At this point it really makes no sense to use this. But hey people said that about the internet back then. ;-)

The big deal of the internet is that it connects people. Before AI takes over the planet, or an alien invasion arrives, the most powerful thing is communication among human beings, whose implications we have not fully explored.

Comparing that to the internet? Don't even go there!

Why not? The Reality Editor is the equivalent of a web browser for physical space. We are spatially oriented creatures, operating the world around us with muscle memory and physical tools. The real power of technology comes when we combine digital and physical tools seamlessly. What is in front of us will change the world around you even more than the internet has done up to this point. How do you want to engage with all the self-driving cars and digital/physical content and objects around you? Browse through 1000 apps with all the drop-down menus? Who wants to remember all of these physical<->digital links? My question for that future is: what will be the digital equivalent of a screwdriver? That's what the Reality Editor is! A powerful tool.

I can understand where you come from, but honestly I don't have a 1000 app to deal with and I don't want to connect my office chair to my fancy car seat. I feel sad that the world is what it is today, and it's mostly because of people like you.

"What will be the digital equivalent of a screwdriver in that future? Thats what the Reality Editor is! A powerful tool."

No comments on this one...

But this is what makes us humans: the development of tools. You would not be able to write these messages if not for all the tools built along our path. Simple physical tools like a screwdriver enable us to fix and create objects in physical space. Throughout history we built new tools for every new development, to the point that we were able to build the internet. There are many things that are not quite right with the internet today; maybe this is where your sadness comes from? But as long as we build tools that empower individual humans to expand their horizons, I think it is worth building and using new tools. The Reality Editor opens up a new category of tools; this is what makes it powerful. It is a power for your own hands to enable you, similar to what a screwdriver did for generations before.

How disappointing that this is the top post on an HN thread: a thoughtless rant dismissing all of IoT in a few sentences.

Why not counter it with a use-case that proves OP wrong instead of a meta-argument?

You use that word thoughtless, but I don't think it means what you think it means.


What do you think he thinks the word thoughtless means? What do YOU think the word thoughtless means?

Did you seriously register an account just to tell him that you don't think he thinks the word thoughtless means what you think it means?

Well, I'd imagine that this person is doing it because they think it's cool.

But companies like Intel are doing it because they want to sell the servers that will eventually back all of this stuff.

Do I understand this correctly? It's exposing the functionalities of IoT devices as advertised services (IoTaaS?), allowing them to be linked together (e.g. the timer from the microwave and the light switch), and providing an intuitive interface that, hopefully, end users might utilize.

Exposing the IoT functions as services seems brilliant to me (has it been discussed before?). What will we end up with? An overly redundant mess of IoT functions (e.g., how many devices have clocks)? Or maybe an IoT server in each home/office/car/phone (or, more likely, in the cloud) that provides a central source for basic functions like timers, monitors, etc.?

It goes beyond that. The interesting thing is that the moment you have a visual representation for an object, you can break it down into all its components. From that moment on, the abstract object is represented by numbers only.

You can read more about it here: http://openhybrid.org/how-to-connect-everything.html

The interfaces themselves are web pages.
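The "represented by numbers only" idea can be made concrete with a small sketch: every object exposes its I/O points as values normalized to a plain number, and a link simply copies one point's value to another, so any output can drive any input. The class and attribute names here are my own illustration, not Open Hybrid's actual data model.

```python
# Sketch: objects reduced to numeric I/O points, connected by links.

class IOPoint:
    """One input or output of an object, as a normalized 0.0-1.0 value."""
    def __init__(self, name):
        self.name = name
        self.value = 0.0  # everything is just a number

class Link:
    """Route one object's output into another object's input."""
    def __init__(self, source, target):
        self.source, self.target = source, target

    def propagate(self):
        self.target.value = self.source.value

# A knob and a lamp reduced to numbers: turning the knob to 70%
# drives the lamp's brightness to 70%, whatever the devices are.
knob_position = IOPoint("knob.position")
lamp_brightness = IOPoint("lamp.brightness")
link = Link(knob_position, lamp_brightness)

knob_position.value = 0.7
link.propagate()
```

Because both ends are just numbers, the same link could connect a timer to a mixer or a chair sensor to a car seat without either device knowing what the other is.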

The story written there doesn't make sense. Why the heck would I want to connect a button on my toaster to my mixer? Why not have a "Timer" object in my phone that can be connected to the mixer instead?

Because this kind of new visual interface is scalable. You can operate thousands of objects without a problem, whereas your app-centric user interface would start looking like Times Square. The desktop metaphor does not scale to physical space, because you need to remember the digital<->physical links in your own brain.

You can read a bit more about this problem here: http://openhybrid.org/learn%2c-setup%2c-operate.html

Regarding the webpage: That's an interesting perspective. I'm sure it's been considered before, but could the difference between the physical and digital interfaces result from complexity and not whether they are digital or physical?

I'm trying to think of a complex physical interface - not one where the abstractions are buttons rather than a touch screen, but one without abstractions (if I understand your meaning on the webpage). Most man-made physical interfaces are simple, probably for good reasons. A large sailing vessel, operated manually, is the best example I can think of, with its many sails, ropes, the rudder, etc. On one hand, my impression is that it's easier to conceive of such systems when using their physical interface; on the other, I think an abstract UI could hide a lot of the complexity. For example, it could present only the part of the interface I need for a particular task, hiding other parts and underlying mechanisms; it could also show helpful things that would be hidden in a physical interface, such as something blocked from view, or info such as the stress on a mast or the wind speed.

But I'm just thinking about it now; I assume someone has researched and thought about this issue ...

This desperately needs someone who can write in plain English to actually explain it without the MIT imagineering jargon. The video helps somewhat, but I suspect if someone described this in plain English it'd seem a lot less exciting than they want it to be.

Imagine you had a switch next to your bed.

Most likely, the switch controls the lights, which is perfect when you want to turn them off and go to sleep.

But maybe when you wake up, you'd actually want it to open the windows, or start the coffee maker.

And if someone else comes into your room, maybe they'd prefer it if it started the audio system.

If we could manipulate all the physical objects around us with software, can we produce an environment which we can manipulate not just physically, but also electronically, to map our intents and needs as we have them?

And if we could, how would we go about "reprogramming" the world around us?

It's the ultimate vision of AR and "IOT".
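The "reprogramming" described above boils down to making the binding between a physical input and its action a piece of data rather than wiring. A minimal sketch in Python (all names here are hypothetical illustrations, not the Reality Editor's actual API):

```python
# Hypothetical sketch: a bedside switch whose action is a soft binding
# that can be remapped at runtime, per person or per time of day.

def lights_off():
    return "lights off"

def open_windows():
    return "windows opening"

def start_coffee():
    return "coffee brewing"

# The binding lives in data, not in the wiring of the switch.
bindings = {"bedside_switch": lights_off}

def press(switch):
    """Look up and run whatever action the switch is currently bound to."""
    return bindings[switch]()

print(press("bedside_switch"))        # lights off

# "Reprogram reality": remap the same switch for the morning.
bindings["bedside_switch"] = start_coffee
print(press("bedside_switch"))        # coffee brewing
```

The Reality Editor's contribution is letting you edit that `bindings` table by pointing a camera and drawing lines, instead of editing code.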

IoT functions as services? Check out Java Jini developed way back in 1998: https://en.wikipedia.org/wiki/Jini

Right idea, but its implementation was very, very clunky.

I'm a nerd. What's the command-line API for this? I really don't want to be screwing with Remote 2.0; I'd rather write my own programs and call the actions on these objects.
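Since the objects speak HTTP, scripting them from the command line should be possible in principle. A hedged sketch with `urllib` - the host, port, endpoint path, and JSON shape below are invented for illustration, so check the object's real HTTP API before relying on any of it:

```python
# Hypothetical sketch: driving an Open Hybrid-style object from a script
# instead of the phone UI. Endpoint and payload are made up.
import json
import urllib.request

def build_request(host, object_name, node, value):
    """Build the HTTP request we would send to set a node's value."""
    url = f"http://{host}:8080/object/{object_name}/node/{node}"
    body = json.dumps({"value": value}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("lamp.local", "lamp", "brightness", 0.5)
print(req.full_url)   # http://lamp.local:8080/object/lamp/node/brightness
# urllib.request.urlopen(req)   # uncomment to actually send it
```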

"We change, but interfaces haven't, therefore we need to change the interfaces" is a flawed argument. Change has a cost. The interfaces may be good enough. The new interfaces may be worse. Change may result in many standards where there used to be one. Change is fresh and nice in other ways, but to me, the foundation for this research seems to be more along the lines of "Glorified knobs for everyone!", which is cool in its own right.

Did they invent that labyrinth pattern just for this, or is it an existing technology?

We use natural feature tracking. The pattern is really just a design choice; it can be any kind of visual structure.

I'm pretty sure Valentin also invented it, but perhaps as a separate project and is using it for this.

The homepage for the QR-ish codes is here: http://hrqr.org/

It was an earlier project by the same researcher.

This is an interesting project that practically begs for the availability of smart, pluggable, physical control surfaces able to be reprogrammed using this technique.

Integration with Nest-style devices seems like an obvious first choice for a consumer product.

On maintenance: It seems like there'd be a bit of difficulty around managing a nest of "wires" between devices that are only visible under a 5" smartphone glass. So, you'd want something head-mounted (glasses), preferably with zoom. Microsoft HoloLens? That said, you'd definitely need an abstract schematic view to manage a dense network.

It would be a really curious development if, in addition to plumbers and electricians, new construction required "reality editors" to wire up behaviors between such interfaces. "This won't do; your reality isn't up to code."

Final thought: lose the digital tattoos and integrate Bluetooth/ZigBee beacons for device advertisement. There's no way those patterns are going to become fashionable. Or use something else that's easy to print but outside our visual range.

Maybe I did not get the example? I would not need to get up to turn off the light b/c I can use a trigger like touching something on my night table. But if that something and the light switch are somehow networked, can't I just use my phone and some app to turn it off directly? Same with the timer example: isn't my phone already capable of running all sorts of abstractions? Why would I need to borrow the timer function from the TV? Finally, as far as discovering objects and their capabilities, WiFi + some sort of IoT protocol should do, right? No need to point a camera; these gadgets will always be broadcasting their APIs (and ACLs, I hope). Sorry, maybe I did not get the product at all.

I think with connected devices, a central machine that figures out the patterns for me and others in my house and then automagically creates them (the drawing of lines in the app) would be way cooler.

Valentin's idea has a lot of interesting components to it. The ability to stitch together a few different physical interfaces ad hoc using the camera + smartphone combination is particularly brilliant. I like where his head's at: integrating a bunch of discrete IoT devices should be simple, simple, simple.

My main beef? Frankly, it's those stickers that the camera needs to recognize the device. Ugly, ugly, ugly.

Computers were huge (ugly?) machines before; now they are sexy and small. Screens were heavy; now it's all in your pocket. All of it has been marketed and sold wonderfully. Visual markers (stickers) for augmented reality are a design challenge that can lead to new looks. It's natural feature tracking, so anything can be a "sticker". This can actually help companies differentiate from each other. It's something new to think about. I would not say that a radio with just a touch screen is particularly beautiful, but it sells too. ;-) Anyhow, one thing is sure: we will get rid of the marker. There is so much technological development going on in that field.

Thanks for answering! I really like the comparison to a "digital screwdriver". I think way too many companies set the setup bar for their IoT devices way too high. If you can make it as easy as Reality Editor, you should!

Anyone else reminded of http://reactable.com/ (Reactable) at all?

This is awesome. Being a coder, I can see myself writing IoT services without a problem (just simple REST APIs). And who cares about the masses? Code or be coded is what I say. However, I tried using the iPhone app and couldn't get it to work. Made me feel dumb :( It'd be nice if there were a help section in the app, or a tutorial on how to hook the phone up to other devices.
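For anyone wondering what "just a simple REST API" for an IoT object can look like, here is a minimal sketch using only the Python standard library. The lamp state, endpoints, and port are all made up for illustration and are not the actual Open Hybrid API:

```python
# Hypothetical sketch of a minimal "IoT service" as a plain REST API.
# GET / returns the lamp state as JSON; POST / merges in new values.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

STATE = {"power": "off", "brightness": 0.0}

class LampHandler(BaseHTTPRequestHandler):
    def _send(self, payload, code=200):
        body = json.dumps(payload).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def do_GET(self):
        self._send(STATE)

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        STATE.update(json.loads(self.rfile.read(length)))
        self._send(STATE)

    def log_message(self, *args):  # keep the demo quiet
        pass

def run(port=8090):
    HTTPServer(("127.0.0.1", port), LampHandler).serve_forever()
```

Call `run()` and poke it with `curl http://127.0.0.1:8090/` to read the state, or POST JSON to change it.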

Thanks for sharing an interesting project. I am sharing this at my college; some have concerns about adoption, but I feel that as this project expands it will feel seamless. I am glad it is so readily available and already showing support for newer boards like the Raspberry Pi Zero. Thank you for your work, I hope to see this and similar projects expand!

A knob that changes functions..

I feel old fashioned saying this but.. I think I prefer my objects stateless.

I think the more valuable contribution is not Reality Editor, but Open Hybrid. Right now there are many crappy home automation standards, and none/few outside the home.

Also, HRQR just looks awesome.

True and not true. Open Hybrid is not useful without the visual representation that breaks down the object into its components. Short write-up about it here: http://openhybrid.org/how-to-connect-everything.html

The problem with all the other standards is that they require you to think in complex abstract terms.

I see objects with beautiful tactile affordances being turned into tactile-free and affordance-free touch interfaces. What's the point of that?

Why don't you just manipulate the physical objects with your hands? They're right in front of you. I think I am missing something.

Tune a .vimrc of my own life.

Are you going to build The Reality Editor for Android any time soon?

Wow, imagine this applied to manufacturing or assembly lines!
