
Reality Editor – MIT Media Lab - piyushmakhija
http://www.realityeditor.org/
======
valentinheun
Hi, it's Valentin, the guy in the video. Let me know if you have any questions.
:-)

~~~
agp2572
This is very interesting, and great work, but I feel that in reality the
problem being solved doesn't benefit people enough that they will be willing
to spend a lot of money or effort configuring things around them. The
interface looks easy to most, but from doing home automation myself, I know it
requires a higher skill set to configure physical buttons or input devices to
do what you want on another physical device.

How does your team try to combat the cost and effort required? Can you make
this as economical as possible, and also easy enough to use that a beginner at
technology (smartphones, computers, etc.) can use it?

~~~
valentinheun
You can check out openhybrid.org for some of these questions. We do research
in the Media Lab, not product development. The Reality Editor is a projection
into the future. But have a look at the cost of SoCs: it's going down so
rapidly that in two years your question will probably be answered.

The moment real products that make use of the Reality Editor are on the
market, you will power them up and just point your phone at them. It should
not be more complicated.

~~~
srtjstjsj
The openhybrid.org example of sharing a timer between a toaster and a food
processor _using a phone, which has its own built-in timer_ is non-compelling,
bordering on comical. Do you have more realistic examples that make meaningful
use of the components of a physical object?

~~~
cellularmitosis
I would imagine this could be a good fit for roadies, who have to set up and
tear down audio/lighting/video installations on a daily basis.

------
volaski
Someone please enlighten me, maybe I'm missing something, but I don't get why
this augmented reality interface is better than a centralized web page where I
can manage these devices using the good old textual interface. It feels like a
very convoluted Rube Goldberg approach to me. For example, the guy says
something like "the car knows because I connected my chair to my car."
Imagining how he would have done this makes me doubt the convenience. I can
imagine he would have started dragging when he left the office and kept his
thumb on the screen until he got to the car and finally connected it to the
knob. He could have just opened a web page and connected these two devices
without going through all this trouble.

~~~
roymurdock
This is important because it represents the next step in allowing regular,
non-technical (but increasingly tech-familiar) consumers to set up and
maintain their own networks and systems.

Setting up a webpage to control a network of lights, monitors, garage doors,
thermostats, cameras, etc. might seem simple to you and to most of HN, because
many here are familiar with the hardware, the languages, the ISAs, and the
basic device networking principles and best practices that make these things
work. You are in a very small minority of the 300m people in the US and the
7bn people in the world. For everyone else, visually dragging and dropping
will be a much easier way to visualize and set up networks.

This product is not for you - it's for the vastly larger market outside of
computer scientists/embedded engineers.

When you can demonstrate this value prop to the mass consumer market, under
the MIT brand no less, big companies like Audi will start to pay attention,
because they see an avenue to market an increasingly heterogeneous product
(the luxury sedan) to the rich, older folks who buy these things based on cool
new features: automated avoidance, tying volume to acceleration, "Intelligent
Drive" [1], projected HUD [2], etc.

As we see from the video, Audi has granted this researcher access to APIs that
would be completely inaccessible otherwise. This is another key - API access.
Could you control the features in your car through a webpage? No, because you
don't have access to your windows or your stereo, unless you do some intensive
and illegal hacking.

M2M and IoT have been contained mostly to the realm of industrial/auto
applications so far. So once big companies start to see the consumer market
opening up through a simpler, more intuitive interface, they might start
opening up APIs to new machines and products.

Of course, allowing access to these APIs opens up a whole new can of safety
worms...but that's going to be an increasingly large issue that we will have
to deal with as we continue to integrate software and connectivity into new
products.

[1]
[https://www.mbusa.com/mercedes/technology/videos/detail/titl...](https://www.mbusa.com/mercedes/technology/videos/detail/title-safety/videoId-fc0835ab8d127410VgnVCM100000ccec1e35RCRD)

[2]
[https://www.youtube.com/watch?v=sg3mmfVILRA](https://www.youtube.com/watch?v=sg3mmfVILRA)

~~~
volaski
I think you're assuming a lot about mainstream users. If AR were that
intuitive, it would already be mainstream. Many apps have tried to port the
good old text-based interface into AR, but none has gained traction yet. You
need to remember that mainstream users _do_ know how to use a computer and
_do_ know how to access websites and fill in forms. But they do not know how
this geeky augmented reality technology works. Also, my point here was not to
criticize AR itself; I'm just saying his approach makes things needlessly
convoluted for the sake of making it look cooler. Read my comment about how
someone would drag and drop his chair in his office to his car.

~~~
Rapzid
I believe the research has value in its own right, though.

I would agree with the view that it's a bit convoluted as shown. Without even
building out prototypes, you might arrive at this as a V1 in an iterative
thought experiment and quickly move on.

A slightly more user-friendly iteration might use image recognition to detect
devices in your surroundings and build a database of them for use in a
different, more suitable UI. If Glass had taken off, this data collection
could happen seamlessly, with visual feedback, while you are just looking at
stuff. RFID tech may be a good alternative to image recognition.
Ownership (authorization, authentication) sounds like a real challenge in any
case. To work with minimum friction, digital ownership might need to be
assigned at the time of purchase, in which case you wouldn't need image
recognition OR RFID to build a database of your internet things.

OK, so then you wouldn't even need to connect your stuff together, because
there would be a database of stuff and everything would be able to
self-connect. You'd just be left with managing preferences, similar to phone
notifications: "Your chair would like to interface with your lights;
Allow (Y/n)?"

Transferring ownership... lots of problems to work out.

------
johann28
Most people don't want to reprogram stuff, they don't want to customize stuff.
They want things to "just work", they don't want to choose which button
controls the car windows and which knob controls the bass. This is the job of
the designer. Most people want a finished product.

People don't want to "rewire" their products, they don't want to hack on this
kind of thing. Except for geeks and technical people, who can also handle
normal apps and interfaces.

The analogy with the Internet breaks down: the Internet is mainly successful
because it connects people. That's what people care about, other people. The
telephone was a success because it lets you talk to other people.

Unless this thing helps people deal with other people, it's too complex and
uninteresting for the masses and probably too simple and tedious for technical
people.

~~~
valentinheun
I disagree! The internet is successful because authoring and publishing became
as easy as consuming. The first web browser was an editor and a browser at the
same time.

The most successful webpages are those where people are authors. People want
tools that empower them to engage with their environment.

To say people don't want to "rewire" their products is like saying you don't
want to connect your electric guitar with a wire to the amplifier.

~~~
johann28
> The most successful webpages are those where people are authors.

Exactly. But not because they enjoy tweaking some website settings or enjoy
the control over the layout of a page. No, the reason is connection with other
people. They write a blog so that they get feedback from people. They post
pictures to Instagram and Facebook to get likes and show off.

Connecting the electric guitar and repairing things with screwdrivers is seen
by most people as a chore. Sure, techie, geeky tinkerers enjoy messing around
with things, the same ones who enjoy configuring Linux, for example.

I just haven't seen a single use case that would make sense in an everyday
setting. Having a knob that controls the lamp? Well, lamps already have that,
without being radio controlled.

If a car dashboard is designed well, then there's no need to remap the
buttons. Very few people like to customize their stuff that much. If there is
a trend in software (and hardware), it's that they remove options. We have
fewer features and customizations than 10 years ago. Developers realized that
people just mess up their settings and then call support. Also, the more
options there are, the more bugs there will be. So now, instead of supporting
a myriad of settings, they just go with a default and leave it at that. Why
don't we have the option of vertical browser tab arrangement in Chrome? Well,
because they decided it's too complex and unnecessary for most users.

People just want to get their things done. They want to get business things
done with other business people and they want to have fun with their friends.

The success of the Internet is all about human communication, sharing
experiences (mainly through photos), and gossip.

~~~
valentinheun
I would say there are a ton of people in the world who love to build things,
because that's how we got to the point where we are. This is the sole reason
the world around you exists.

Imagine building becomes so easy that it is more of an "amazing" experience
for you than a "geeky", long-lasting frustration.

~~~
AndrewKemendo
I think the original poster is trying to say that 99% of people aren't
builders and don't even get to the point of geeky frustration - and I have to
say I agree.

I'm an AR developer and it's been disheartening over the past 12 months to see
how little people actually want to do anything other than "like" things. Even
posting comments seems to be a big leap for most people.

------
lhh
Seems like a lot of people aren't really convinced of the value of this kind
of tech (and maybe of IoT in general), and I'll admit that I'm a little
skeptical of parts of it as well. I don't really see the value in linking
devices to each other (e.g. I don't need my shower to automatically turn on
after my alarm goes off), and I don't need to be able to read HN on my fridge,
but I do think there's a lot of value in making the smart phone a remote
control for the physical world. I think it'd be awesome if none of my devices
(thermostat, TV, TV peripherals, lights, door lock, sound system, etc) needed
their own physical interfaces but I could just control them all with my phone.
My phone is already my remote control for a bunch of other things, like
transportation (Uber), food (Seamless), money (Venmo), shipping stuff (Shyp),
love (Tinder - just kidding?), etc, so why not have it control my physical
devices too?

~~~
onion2k
Agreed. I really don't understand people who are against this sort of
research. A light switch will still work as a light switch does now if you
don't program it. There's another layer of complexity in the product design to
go wrong, and it adds cost, but so long as things still work in "dumb" mode
when they break, and the added cost is minimal, a smart switch that you can
program is a win for those people who want it and has practically no effect on
those who don't.

Being anti-IoT seems like sour grapes; thinking "I don't want this in my life
so no one should have it" is completely irrational.

~~~
kapad
I agree. This is just research, and not a market ready product. And research
is really necessary in IoT, not just to figure out what is useful and what
isn't, but also to figure out security and privacy concerns with everything
being connected.

------
dfar1
The idea that everything can be integrated via AR is fantastic... it's the
next step from IFTTT. The challenge is replacing everything you have with IoT
versions that can be connected to these apps, and hoping those things won't go
obsolete in a couple of years. Technology is moving so fast that not everyone
can catch up; thus there are no standards.

~~~
valentinheun
We designed the platform behind the Reality Editor in such a way that it can
adapt. You can read more about it here: [http://openhybrid.org/how-to-connect-everything.html](http://openhybrid.org/how-to-connect-everything.html)

If you can run node.js in your object, you will be able to make it into a
Hybrid Object the Reality Editor can talk to.

The inspiration is the World Wide Web. It was created on a clear, simple, and
open foundation. You can still read the first webpages ever made.

You should be able to still interact with the first Hybrid Objects ever made.
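The Hybrid Object idea can be sketched as a tiny in-memory model. This is
illustrative only, not the actual Open Hybrid API (see openhybrid.org for
that); all names here are made up. Objects expose named input/output points,
and a link routes values between them, like drawing a line in the Reality
Editor:

```javascript
// Hypothetical sketch of the hybrid-object linking concept (not the
// real Open Hybrid API): objects expose named I/O points, and links
// route values from one object's output to another object's input.
class HybridObject {
  constructor(name) {
    this.name = name;
    this.inputs = {};   // input name -> handler function
    this.outputs = {};  // output name -> list of subscriber callbacks
  }
  addInput(name, handler) { this.inputs[name] = handler; }
  addOutput(name) { this.outputs[name] = []; }
  emit(name, value) {
    for (const fn of this.outputs[name] || []) fn(value);
  }
}

// A link is the "line" drawn between two objects in the editor.
function link(srcObj, outName, dstObj, inName) {
  srcObj.outputs[outName].push((value) => dstObj.inputs[inName](value));
}

// Example: a chair that reports occupancy, linked to a lamp's brightness.
const chair = new HybridObject("chair");
chair.addOutput("occupied");

let lampLevel = 1.0;
const lamp = new HybridObject("lamp");
lamp.addInput("brightness", (v) => { lampLevel = v; });

link(chair, "occupied", lamp, "brightness");
chair.emit("occupied", 0.2);  // sitting down dims the lamp
console.log(lampLevel);       // 0.2
```

The point of the sketch is that neither object knows about the other; the
link table is the only place the connection lives, which is what makes it
rewireable from the outside.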

------
wangii
I'm not sold. The whole IoT thing fails miserably at storytelling. Until now,
I have yet to read/hear/watch one compelling example. Who really cares if my
chair is connected to the light so that I can get the best lighting angle? Who
cares if my refrigerator is connected to my microwave to defrost meat
according to my online menu?

Maybe I'm too old for this. There are millions of important things to be
solved by computers and the internet, yet big names like Google and MIT push
this typical high-cost, low-return technology. For what? To sell chips and
standards to factories?

~~~
pedalpete
Your comment reminds me of how people first looked at blogging, twitter,
youtube, and many other technologies.

I agree that the examples we are usually given are not massively impressive,
but neither is the ability to post info, or share your videos online.

I think IoT has been oversold in many ways for the short-term, exactly as you
are pointing out. But in the long-term, I think it will be very impressive.

For example, my fridge knowing that I'm out of milk and reminding me to buy
some. Sure, that's a pretty useless case. But what happens when we aggregate
that over a large population and then we connect that with the grocers and
dairies. Will we have JIT milk processing and production, saving significant
energy and resources?

We just went from a very big 'so what' to potentially having a significant
impact on the environment.

~~~
wangii
Thank you for a serious reply.

What troubles me more is not IoT, but the storytelling. For years, people in
this segment have been putting out boring, meaningless videos, talks, and
articles, usually featuring microwaves, doors, and windows. Steve Jobs sold
the iPhone with 3 functions in hours. Yet after billions spent and thousands
of smart minds working for years, we still have this?

I'm not sure IoT has potential in the long term. The most important thing
about the internet is that it connects people, minds, creativity, imagination,
and consumers' pockets. I think as an industry we have many more important and
meaningful things to do, directly with people. I'm willing to bet that a
well-thought-out, carefully made video on YouTube will make more positive
impact on the environment than the whole IoT segment combined today. If so,
why bother?

If IoT really has potential, at this point it's very badly executed.

~~~
jkestner
I think that like 3D printers, this stuff is useful to a fraction of the
population, as a tool with which to prototype products for everyone else.

------
hackuser
Do I understand this correctly? It's exposing the functionalities of IoT
devices as advertised services (IoTaaS?), allowing them to be linked together
(e.g. the timer from the microwave and the light switch), and it provides an
intuitive interface that, hopefully, end users might utilize.

Exposing IoT functions as services seems brilliant to me (has it been
discussed before?). What will we end up with? An overly redundant mess of IoT
functions (e.g., how many devices have clocks)? Or maybe an IoT server in each
home/office/car/phone (or, more likely, in the cloud) that provides a central
source for basic functions like timers, monitors, etc.?

~~~
valentinheun
It goes beyond that. The interesting thing is that the moment you have a
visual representation for an object, you can break it down into all its
components. From that moment on, the abstract object is represented by numbers
only.

You can read more about it here: [http://openhybrid.org/how-to-connect-everything.html](http://openhybrid.org/how-to-connect-everything.html)

The interfaces themselves are web pages.

~~~
striking
The story written there doesn't make sense. Why the heck would I want to
connect a button on my toaster to my mixer? Why not have a "Timer" object in
my phone that can be connected to the mixer instead?

~~~
valentinheun
Because this kind of new visual interface is scalable. You can operate 1000s
of objects without a problem. With an app-centric user interface, this would
start looking like Times Square. The desktop metaphor does not scale with
physical space, because you need to remember the digital<->physical links in
your own brain.

You can read a bit more about this problem here:
[http://openhybrid.org/learn%2c-setup%2c-operate.html](http://openhybrid.org/learn%2c-setup%2c-operate.html)

~~~
hackuser
Regarding the webpage: that's an interesting perspective. I'm sure it's been
considered before, but could the difference between physical and digital
interfaces result from complexity, and not from whether they are digital or
physical?

I'm trying to think of a complex physical interface - not one where the
abstractions are buttons rather than a touch screen, but one without
abstractions (if I understand your meaning on the webpage). Most man-made
physical interfaces are simple, probably for good reasons. A large sailing
vessel, operated manually, is the best example I can think of, with its many
sails, ropes, the rudder, etc. On one hand, my impression is that it's easier
to conceive of them when using their physical interface; on the other, I think
an abstract UI could hide a lot of the complexity. For example, it could
present only the part of the interface I need for a particular task, hiding
other parts and underlying mechanisms; it could also show helpful things that
would be hidden in a physical interface, such as something blocked from view,
or info such as the stress on a mast or the wind speed.

But I'm just thinking about it now; I assume someone has researched and
thought about this issue...

------
pnathan
I'm a nerd. What's the command line API for this? I really don't want to be
screwing with remote 2.0; I'd rather write my own programs and call the
actions on these objects.

------
xyproto
"We change, but interfaces haven't, therefore we need to change the
interfaces" is a flawed argument. Change has a cost. The interfaces may be
good enough. The new interfaces may be worse. Change may result in many
standards where there used to be one. Change is fresh and nice in other ways,
but to me, the foundation for this research seems to be more along the lines
of "Glorified knobs for everyone!", which is cool in its own right.

------
veli_joza
Did they invent that labyrinth pattern just for this, or is it an existing
technology?

~~~
jlrubin
I'm pretty sure Valentin also invented it, perhaps as a separate project, and
is using it for this.

~~~
mkeeter
The homepage for the QR-ish codes is here:
[http://hrqr.org/](http://hrqr.org/)

It was an earlier project by the same researcher.

------
politician
This is an interesting project that practically begs for the availability of
smart, pluggable, physical control surfaces able to be reprogrammed using this
technique.

Integration with Nest-style devices seems like an obvious first choice for a
consumer product.

On maintenance: It seems like there'd be a bit of difficulty around managing a
nest of "wires" between devices that are only visible under a 5" smartphone
glass. So, you'd want something head-mounted (glasses), preferably with zoom.
Microsoft HoloLens? That said, you'd definitely need an abstract schematic
view to manage a dense network.

It would be a really curious development if, in addition to plumbers and
electricians, new construction required "reality editors" to wire up behaviors
between such interfaces. "This won't do; your reality isn't up to code."

Final thought: lose the digital tattoos and integrate Bluetooth/ZigBee beacons
for device advertisement. There's no way those patterns are going to become
fashionable. Or use something else that's easy to print but outside our visual
range.

------
bikamonki
Maybe I did not get the example? I would not need to get up to turn off the
light b/c I can use a trigger like touching something on my night table. But
if that something and the light switch are somehow networked, can't I just use
my phone and some app to turn it off directly? Same with the timer example:
isn't my phone already capable of running all sorts of abstractions? Why would
I need to borrow the timer function from the TV? Finally, as far as
discovering objects and their capabilities goes, wifi + some sort of IoT
protocol should do, right? No need to point a camera; these gadgets will
always be broadcasting their APIs (and ACLs, I hope). Sorry, maybe I did not
get the product at all.

------
kapad
I think that with connected devices, a central machine that figures out the
patterns for me and others in my house and then automagically creates the
patterns (the drawing of lines in the app) would be way cooler.

------
cushychicken
Valentin's idea has a lot of interesting components to it. The ability to
stitch together a few different physical interfaces ad hoc using the camera +
smartphone combination is particularly brilliant. I like where his head's at:
integrating a bunch of discrete IoT devices should be simple, simple, simple.

My main beef? Frankly, it's those stickers that the camera needs to recognize
the device. Ugly, ugly, ugly.

~~~
valentin_heun
Computers used to be huge (ugly?) machines; now they are sexy and small.
Screens used to be heavy; now it's all in your pocket. All of it has been on
the market and sold wonderfully. Visual markers (stickers) for augmented
reality are a design challenge that can lead to new looks. It's natural
feature tracking, so everything can be a "sticker". This can actually help
companies differentiate from each other. It's something new to think about. I
would not say that a radio with just a touch screen is particularly beautiful,
but it sells too. ;-) Anyhow, one thing is sure: we will get rid of the
marker. There is so much technological development going on in that field.

~~~
cushychicken
Thanks for answering! I really like the comparison to a "digital screwdriver".
I think way too many companies set the setup bar for their IoT devices way too
high. If you can make it as easy as Reality Editor, you should!

------
devin
Anyone else reminded of [http://reactable.com/](http://reactable.com/)
(Reactable) at all?

------
las3rprint3r
This is awesome. Being a coder, I can see myself writing IoT services without
a problem (just simple REST APIs). And who cares about the masses? Code or be
coded is what I say. However, I tried using the iPhone app and couldn't get it
to work. Made me feel dumb :( It'd be nice if there were a help option in the
app, or a tutorial on how to hook the phone up to other devices.

------
giancarlostoro
Thanks for sharing an interesting project. I am sharing this at my college;
some share concerns about adaptability, but I feel that as this project
expands it will feel seamless. I am glad it is so readily available and
already showing support for newer boards like the Raspberry Pi Zero. Thank you
for your work, and I hope to see this and similar projects expand!

------
RobertoG
A knob that changes functions...

I feel old-fashioned saying this, but... I think I prefer my objects
stateless.

------
microcolonel
I think the more valuable contribution is not Reality Editor, but Open Hybrid.
Right now there are many crappy home automation standards, and none/few
outside the home.

Also, HRQR just looks awesome.

~~~
valentinheun
True and not true. Open Hybrid is not useful without the visual representation
that breaks the object down into its components. Short write-up about it
here: [http://openhybrid.org/how-to-connect-everything.html](http://openhybrid.org/how-to-connect-everything.html)

The problem with all the other standards is that they have to think in terms
of complex, abstract standards.

------
intrasight
I see objects with beautiful tactile affordances being turned into
tactile-free and affordance-free touch interfaces. What's the point of that?

------
plg
Why don't you just manipulate the physical objects with your hands? They're
right in front of you. I think I am missing something.

------
Puriney
Tune a .vimrc of my own life.

------
leebz
Are you going to build The Reality Editor for Android any time soon?

------
adamredwoods
Wow, imagine this applied to manufacturing or assembly lines!

