Yo - I did the Airbnb project linked ^^^, which I believe was the first of this wave of deep-learning-powered sketch->UI projects (though standing on the shoulders of decades of R&D).

Our take was that we really do design on paper or whiteboard first & foremost, which is why our project emphasized the webcam + sharpie thing rather than drawing in-browser etc.

Here's a related thing I wrote about the need for design tools to design the real thing, rather than facsimiles of the thing: https://jon.gold/2017/08/dragging-rectangles/ - so so so much process waste is because developers have to re-implement static pictures of designs.

In our case, we didn't get buy-in to keep developing the project, but I'm kinda jazzed that so many people are running with the idea.




> ...so so so much process waste is because developers have to re-implement static pictures of designs.

Okay, but did you attack that problem in a way that actually is more efficient than established UI paradigms?

Let's split the problem into two parts:

- "Semantic" Design (Checkbox, ImageView, TextInput...)

- Visual Design (fonts, colors, margins, ratios)

Your solution covers only the "semantic" part. Just look at the data: it's basically a simple component tree, and it would be more efficient to just type it up. It would also be more efficient to drag rectangles from a tool shelf than to define the type of a rectangle by drawing extra hints.
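To make that concrete, here's roughly what such a component tree looks like typed out (a hypothetical sketch in TypeScript; the node types mirror the examples above, not any actual tool's output format):

    // Hypothetical "semantic" component tree, typed by hand.
    // A few lines capture what a sketch plus drawn hints would encode.
    type UINode =
      | { kind: "Checkbox"; label: string }
      | { kind: "ImageView"; src: string }
      | { kind: "TextInput"; placeholder: string }
      | { kind: "Stack"; children: UINode[] };

    const form: UINode = {
      kind: "Stack",
      children: [
        { kind: "ImageView", src: "logo.png" },
        { kind: "TextInput", placeholder: "Email" },
        { kind: "Checkbox", label: "Remember me" },
      ],
    };

Fifteen lines, no webcam required - which is the point: the semantic half of the problem is already cheap to express in text.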

As for the visual design, that's where you use a design tool like Illustrator or Photoshop. Typing that up (e.g. in CSS) is surely a pain, but sketching it all up is out of the question. I certainly do see room for improvement in the workflow here, but a sketchy interface isn't helping.
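For comparison, the visual half as typed data might look like this (again a hypothetical sketch; the property names are mine, not from any real tool):

    // Hypothetical visual-design spec: fonts, colors, margins as typed data.
    // Tedious to write by hand, and not recoverable from a sharpie sketch.
    interface VisualStyle {
      fontFamily: string;
      fontSizePx: number;
      color: string;                               // hex color
      marginPx: [number, number, number, number];  // top, right, bottom, left
    }

    const inputStyle: VisualStyle = {
      fontFamily: "Helvetica Neue",
      fontSizePx: 16,
      color: "#333333",
      marginPx: [8, 0, 8, 0],
    };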

You have to question a lot of assumptions here, but also consider how designers are most efficient with the tools they already know and have used for years. Don't mistake something that you want to create for something that users will actually want to use.


Hey! I'm a big fan. Obviously this project as it's currently constructed isn't much, but the idea that machines can learn the code behind what artists draw is especially intriguing. I also think we'll get there in due time, with the great work being done on GANs and better scene-understanding algorithms coming out. I was inspired by your team's idea and even won a hackathon with it. Thanks for your contribution!



