
Sketching Interfaces - adriand
https://airbnb.design/sketching-interfaces/
======
namuol
Some similar experiments, dating back as far as 1996:

SILK:
[https://www.youtube.com/watch?v=VLQcW6SpJ88](https://www.youtube.com/watch?v=VLQcW6SpJ88)

Denim:
[https://www.youtube.com/watch?v=tCVYKgewDXc](https://www.youtube.com/watch?v=tCVYKgewDXc)

~~~
sizzle
Thanks for the links

------
programmarchy
This is really incredible.

I've always loved the idea of "box and arrow" programming for flows, but in
practice the interface was always too cumbersome -- typing was faster.

But coming up with a design language that a human can sketch and a machine can
interpret really breathes new life into that idea.

Love it!

------
kidfiji
Seeing things like this just makes me want to get into ML. I'm sure there's a
ton of complexity behind it, but it seems so rewarding.

~~~
harrisjt
Just know that ML jobs are often just statistician jobs with sexier names

~~~
gipp
Depends 100% on where you are. Data Science/ML engineers can mean dramatically
different things at different companies.

------
2474
I'd be inclined to agree with Adam Michela when he says design systems
restrain creativity. It is the industrialization of user interfaces.

~~~
tannerc
Roads certainly limit where one can drive, but the alternative is pretty
disastrous.

As with anything: there is a time and place. Design systems are wildly
powerful for scaling, creating a cohesive language across a brand or product,
and moving quickly.

Just because you have a design system doesn't mean the buck stops there. You
can keep building new roads if you know where it is you need to go that the
existing roads won't take you.

------
ryanmarsh
Software development is applied semiotics.

I’ve been saying this for a while, but software development has never really
felt like symbol making (it feels more like an engineering profession), so
people dismiss the notion. I’m beginning to feel vindicated.

------
mattferderer
Correct me if I'm wrong, but isn't this basically doing what Adobe & others
have already done? The only difference I noticed is they're using a camera &
image recognition as the input, instead of the designer dragging & dropping 1
of the 150 components onto a digital canvas in a software application.

I do get that it's a demo that shows the possibility. I just feel I'm missing
something that makes this stand out. I did only watch the first video & skim
the article.

~~~
arstin
In practice there's a big difference in thought process and workflow between
drawing freehand on paper and flipping through a UI library with a drag and
drop interface. I think this is true even if your task is just composing from
the same well defined set of components.

If they can actually pull this off, I can see it being very useful. And if
it works really well in conjunction with other experiments currently ongoing,
I can imagine it even affecting the way teams are structured and how a
designer's day to day work goes.

~~~
seanmcdirmid
The recognizer seems to be fixed to a limited UI language, so is it fair to
call this freehand? Rather, there is a palette that you access by roughly
sketching out a predefined form.

This won’t be that useful for many designer sketches, where the visual
language is mostly undefined.

------
sirtimbly
A perfect example of several complementary technologies being combined in novel
ways to create something that opens up entirely new ways of working. Design
systems + ML computer vision. Incredible.

------
enturn
Looking at the embedded YouTube video, the article might have been published
after Sep 7, 2017. I can only guess, though, since they don't include dates on
any articles they publish.

------
otto_ortega
This is insanely great. An ideal next step would be to develop something like
this to do PSD to HTML, instead of having it identify predefined components.

------
Cilvic
If I wanted to build something like this as a copy-paste nerd with some Python
experience, where would I start?

~~~
postit
From my 5-minute understanding of the problem, a non-deep-learning solution:

Step 0: you need some training data. Draw it yourself <I believe 50 examples
of each class you want to detect should be enough to start, but you can think
about doubling it>

First you need to look at the OpenCV library, specifically the findContours
function
[https://docs.opencv.org/2.4/modules/imgproc/doc/structural_a...](https://docs.opencv.org/2.4/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=findcontours#findcontours)
\- I'm linking to the C++ API, but the Python API has the exact same function.

Make sure your contours are clean of rubbish data because this can affect the
outcome of the classifier.

With the extracted attributes of each individual contour, you pass this data
to a multiclass classifier ([http://scikit-
learn.org/stable/modules/multiclass.html](http://scikit-
learn.org/stable/modules/multiclass.html))

Parameterize your model, split your data into training and testing ... yada
yada ...

With the generated model, you can take each individually detected contour and
get its fitted probability for each class; use that to feed an API which
provides data to a React component that renders HTML.

>> ps: Ah, you also need an image capture app which reads the image in real
time, applies the filters, and detects contours before passing them to the
classifier.

>> ps2: I'm available for projects :P
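
To make the shape of that pipeline concrete, here's a minimal toy sketch in
plain Python. It stands in for the contour-features-plus-scikit-learn step
with a simple nearest-centroid classifier over invented features (aspect
ratio, corner count, stroke density); all class names and numbers here are
made up for illustration, and a real version would extract features with
cv2.findContours and train a proper scikit-learn multiclass model instead.

```python
from math import dist
from statistics import mean

def centroid(vectors):
    # Component-wise mean of a list of feature vectors.
    return tuple(mean(col) for col in zip(*vectors))

def train(samples):
    # samples: {class_name: [feature_vector, ...]} -> one centroid per class.
    return {label: centroid(vecs) for label, vecs in samples.items()}

def classify(model, features):
    # Nearest-centroid classification: return (best_label, distance).
    return min(((label, dist(features, c)) for label, c in model.items()),
               key=lambda t: t[1])

# Made-up training data: (aspect_ratio, corner_count, stroke_density)
# per hand-drawn component class.
training = {
    "button": [(3.0, 4, 0.9), (2.8, 4, 0.85), (3.2, 4, 0.92)],
    "image_placeholder": [(1.0, 4, 0.5), (1.1, 4, 0.45), (0.9, 4, 0.55)],
    "text_line": [(8.0, 2, 0.1), (7.5, 2, 0.12), (9.0, 2, 0.08)],
}

model = train(training)
label, _ = classify(model, (3.1, 4, 0.88))  # a freshly sketched wide rectangle
print(label)  # -> button
```

A real classifier would also give you calibrated per-class probabilities
(which this toy distance does not), but the overall flow — featurize each
contour, train per class, predict on new sketches — is the same.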

~~~
raihansaputra
How much time should this take to build? I'm really interested in trying to
build this. Or maybe creating an open source package?

~~~
postit
I suspect someone with enough experience can hack this in a couple of days.

~~~
Cilvic
How can I contact you?

~~~
raihansaputra
Are you interested in building this? I'm interested too if you're looking for
collaborators!

~~~
Cilvic
Yes!

------
drwl
The video demo showing their sketches turned into a high-fidelity design was
really neat

------
nikkig
It's great!

------
timthelion
Very interesting concept. It reminds me of the discussion I've seen around
Smalltalk, in which Smalltalk fans say that dev environments should be live
and should immediately react to changes. The idea of being able to sketch
something up on a whiteboard and having the UI come to life on a screen next
to me instantaneously really feels similar.

On the other hand, who is the guy doing the talk at the bottom of the page?
That was totally cringy. I felt like he didn't know what he was talking about.

Edit: wording. tone.

