Hacker News
Autodraw – Fast Drawing for Everyone (blog.google)
732 points by kbyatnal on Apr 11, 2017 | 143 comments



It's not really "auto-draw" as much as it's a visual search in which you suggest shapes and it looks across the collection for visually similar icons. Impressive and fun, but not yet a huge advancement over just typing "house" or "cake" to search the image library.


Their description made it sound like a really cool lower-level tool, so obviously it ends up being a letdown.

Drawing/art programs generally have a line smoothing feature - just smoothing your wobbly lines as you draw, using relatively simple algorithms. The description here made me hope for something more "medium-level", half-way between the two. It wouldn't just smooth your lines - it would adjust them according to context, based on a corpus of more precise line drawings, and perhaps predict/suggest the next strokes. It might be difficult to pull off, though; implemented naively, it would probably just work against the artist.
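For anyone curious what "relatively simple" means here, a moving-average pass over the sampled stroke points is about the simplest version. A minimal sketch, not any particular program's implementation:

```python
def smooth_stroke(points, window=5):
    """Smooth a wobbly hand-drawn stroke with a moving average.

    points: list of (x, y) tuples sampled as the user draws.
    Endpoints are kept fixed so the stroke still starts and
    ends exactly where the user put it.
    """
    if len(points) <= 2:
        return list(points)
    half = window // 2
    smoothed = [points[0]]
    for i in range(1, len(points) - 1):
        # Average each interior point with its neighbors.
        lo = max(0, i - half)
        hi = min(len(points), i + half + 1)
        xs = [p[0] for p in points[lo:hi]]
        ys = [p[1] for p in points[lo:hi]]
        smoothed.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    smoothed.append(points[-1])
    return smoothed
```

Real apps typically do something fancier (Catmull-Rom or Bezier fitting), but this already takes the jitter out of a zig-zag stroke while preserving its overall path.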


This kind of functionality sort of exists in a basic form in the default mail on iOS using a feature called Markup. Markup tries to guess if you drew an arrow or a circle and suggests it based on your drawing.


On a similar note, this site can recognize handwritten mathematical equations (not just one symbol at a time):

https://webdemo.myscript.com/views/math.html


As does Microsoft's Ink to Math converter, baked right into Office.


Xournal also does this for simple geometry.


In some cases, yes, it's easier to search, but it does fill a use case. I'm not an illustrator, yet I often need icons. I tend to have a rough idea of what I want but can struggle to find the right keywords. Visual search here is very useful, and allows an element of play in finding the icon. (Hopefully play, not horrible frustration.)


Yeah. What would make it really stand out is if it were compositional. Judging based on the demo animation, it doesn't appear to be so.


I'm not sure what you mean by "compositional", but you can keep drawing after you've matched a shape and it will start suggesting matches for your new shape as well--though unfortunately it doesn't seem to auto-scale the icon you pick to the size of your original drawing, so unless they're already the size you want it's not as helpful as one might hope.


I guess GP means, I start drawing a cat, and it becomes a nicely drawn cat, then I add wings and a horn and a pistol and it becomes a flying laser unicorn cat.

I assumed this was the big idea in TFA, but it seems it's a collection of clip art, with a terrible interface for looking it up.

I am super confused by the existence of this.


> I guess GP means, I start drawing a cat, and it becomes a nicely drawn cat, then I add wings and a horn and a pistol and it becomes a flying laser unicorn cat.

Yes, exactly. And then you start sketching a few rough buildings, a beam, some comets, some explosions, and a helicopter: it becomes a drawing of a giant flying unicorn laser cat from space, attacking Tokyo.

Edit: something like this [0].

[0] https://s-media-cache-ak0.pinimg.com/564x/78/9c/99/789c99430...


Your use of the word compositional is still a little confusing to me. Cool cat though...


OP's example is maybe a little odd, but I admit that the very first thing I tried was "compositional" as well. I drew a mountain, clicked the icon to make it into a mountain, and then drew a bike going up it. However, there was no way to have a drawing with both a mountain and a bike.


It is easy enough to scale them though. And you can flip them horizontally or vertically too.


I could have sworn none of those controls were there an hour ago. At any rate, now that I can scale things here's a random icon I drew to demonstrate the point: https://www.autodraw.com/share/G2IIQMITW0BB


Indeed ... Google docs has had this feature for some time. If you want to insert a special character or symbol, it gives the option to draw it and then shows similar characters.


Quickdraw (same technology) definitely seemed to do more than simple shape searching.


If you mean that the algorithm was non-trivial, that's probably true. But I don't see what else you could do with it besides recognising a hand-drawn shape.


It's not a drawing tool. It's a search engine for clip art. That's very Google.

This may be the answer for how to enter emoji. There are now over 2600 emoji, with more to come. Keyboard selection isn't working and menus are huge.


Android Wear can already do this.

https://techcrunch.com/2015/05/28/draw-emoji/


On iOS I just write the emoji name and it appears in the suggestions. Much, much easier than trying to draw it.


GBoard does that too.


And SwiftKey as well


And helm-unicode!


Back to the good old days of BBCode! :)


In Google Docs, the interface to enter a special character has a 'draw it' mode...


On iOS there's a Google keyboard that has a emoji search built in.


Gboard is on Android too, and the swyping is excellent for those who use that.


I was a cog in the machine [1]. You're welcome for my 1e-23'th-sized contribution.

[1] https://quickdraw.withgoogle.com/


That was fun! I did spot one of the downsides of crowdsourced input: a dick doodle was one of the archetype images for "crab".


I found this quite fun to play with. Especially with the "how did it know" breakdown after. Cool!


That's fun. I actually like that quite a bit more than the program in the linked article.


Heads up: That fails to connect for me.


shrug, works for me.


Yep, it's back up.


This crowd might appreciate the following comic, which I made with autodraw.com (the tool this post is about) --

http://imgur.com/a/WWff9

It took me about an hour, after I got incredibly frustrated that it wouldn't let me draw anything. Can't draw a robot. Can't draw a sad face (only smiley face). Can't even draw a stick figure. Can't draw a speech bubble.

I felt like it was fighting with me for what it wanted to draw, while leaving very basic and fundamental shapes out. There were more things I couldn't draw, I can't even remember everything.


Also a couple of bug report type things:

-> There's an undo button, it works well. But there should be a redo button. (Or the Apple-Y or Ctrl-Y keyboard shortcut for redo ought to work.)

-> See how my smiley face is too big on the right? Well I can't make it smaller: even if I zoom way in (there's a zoom functionality) I can't use the select tool to just select the smiley face (inside the jail) to reduce it in size. I'd have to recreate the parts of this image separately.

-> There is no way to set line thickness on the clip art! This should be one of the easiest things to set - but you can only scale the whole image, not the line width. That makes it hard to work with.

Overall I found the experience frustrating.

I have a challenge for you guys though: for the most common hundred thousand or so words, use a machine learning algorithm on your own Image Search results to come up with canonical ideas of what the objects in question might look like, after sorting them into categories based on similarity of recognized features. Then have the algorithm create an outline using the canonical idea it has derived for each category.

What I mean is that if someone Googles "hand" they might get: left hand, right hand, fist, middle finger, OK sign. There really are only so many ways to hold a hand, or visual meanings/memes for the idea of "hand", and other artists have already introduced a canonical version. (Likewise "stick figure" has a meme around it.)

So for each one of those, the algorithm could learn from every version that it judges as similar to the others -- and then draw its own for each one! (Computer algorithms are good at drawing in a learned style, even one such as Van Gogh's.)

Other simple examples include a "peace sign". If you Google image search "peace sign" you obviously get a very canonical shape. Why can't a machine learning algorithm draw its own?

This idea of deriving free, Creative Commons-licensed images (not subject to trademark, of course) with a machine learning algorithm trained on a huge corpus of image-search data (in a fair-use way), without copying any image in particular, would be huge.

You have most of the interface to do this. It is a nice next challenge for you - and a very serious one. I suggest you do it!
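The "sorting into categories based on similarity" step described above is essentially clustering. As a rough sketch, assuming each image-search result has already been reduced to a feature vector (the toy 2-D vectors below are invented purely for illustration), plain k-means would group them:

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Very small k-means: group feature vectors into k clusters,
    returning (centroids, assignments)."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    assign = [0] * len(vectors)
    for _ in range(iters):
        # Assign each vector to its nearest centroid.
        for i, v in enumerate(vectors):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centroids[c])),
            )
        # Recompute each centroid as the mean of its members.
        for c in range(k):
            members = [v for i, v in enumerate(vectors) if assign[i] == c]
            if members:
                centroids[c] = tuple(
                    sum(dim) / len(members) for dim in zip(*members)
                )
    return centroids, assign

# Toy "hand" images as two well-separated groups of 2-D features
# (say, fist-like vs. open-palm-like).
vecs = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8), (0.95, 0.85)]
cents, labels = kmeans(vecs, k=2)
```

The centroid of each cluster would then be the "canonical idea" the parent is asking for; the hard, unsolved part is the final step of rendering an outline from it.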


What's the process for Google to make this sort of thing? Does some 7-figure exec say "we need to make it easier to draw bikes" and then Google gets their army of 10x engineers to make this happen?


For stuff like this, it usually starts from the bottom. Engineers have ideas, convince others to help them work on their ideas, build prototypes (alone or with others), sometimes get help from product managers to develop a business plan, then pitch it to senior leadership to get some funding.

It takes a lot of skill, tact, and product acumen to get things out the door -- probably the same set of skills as you would need outside Google. (Except that within, you have access to more and better resources, but the bar is much higher.)

Obviously, this doesn't mean that every idea will stick... a lot of them won't -- some don't make money, some don't provide real value, and some are just terrible ideas. But it's a much better process than just top-down alone.


[Edited because I'm dumb and can't count figures] I've mostly seen this kind of thing happen because some engineer(s) wanted to try an idea, not because it was imposed from above.


College grads make 6 figures, 7 figures is one million or higher


LOL. Yup. I'm an idiot.


It looks like Google persistently feels guilty for getting enormous amounts of money without providing much value in return (ads). So they try to compensate by giving back. Most of the stuff they offer is honestly crap, but this one (Autodraw) and stuff like Gmail are very decent.


Gmail is bread and butter for their ad ecosystem.


This would be great for flowcharts and diagrams. Sketch out a rough diagram on a tablet, and then have the shapes and lines "snap" to crisp versions as soon as they are identified. Even better if I could draw it on a whiteboard, take a photo, upload it, and get a response back as soon as it's done being converted.


There are a bunch of apps that do this on iPad and Android... plus Microsoft's note-taking app, Lenovo/IBM's old X-series apps, and I'm sure others. Heck, the Newton did it.

If you're curious, try one of them out. It gets frustrating pretty quickly.


Could you please name some? There seems to be an even bigger bunch of apps that do plain drawing, and it can be hard to find the needle (apps that convert rough sketches to clean line art) in the haystack (many apps for sketching; most just replicating the paper experience on a screen without adding functionality). Thanks.


Paper by FiftyThree - https://www.fiftythree.com/paper


Thanks


I remember seeing that on the Newton way back in the 20th century:

https://youtu.be/VWSKqgHOEy0?t=5m32s


Google needs a better way to lifecycle these things. Clearly this project will be cancelled, so rather than just reinforce its reputation for killing its projects, perhaps they need "experimental" projects that might even get spun out of the company. Or something like that.


From the blog post: "We hope that AutoDraw, our latest A.I. Experiment, will make drawing more accessible and fun for everyone."

The Autodraw website literally says "This is an A.I. Experiment" in big letters


Google keeps coming up with ways to use machine learning to do autofills, suggestions, etc. A month ago Allo [0], then that article in Verge about computational photography [1], then cameras without lenses [2] and now this. There is no question that this is all very powerful and awesome, but it also raises some questions, like who is the creator of a photo / drawing? Is every photo / drawing going to look the same in the future?

Here is an illustration of what I am concerned about:

My wife downloaded google "Allo" (Yet another chat app where you can change font size. Innovative, I know.). It also happens to suggest answers so you don't have to type as much.

Here is how it went:

  She: Hi!
  Me: Hi how r u
  She: Where r u
  She: Where r u now?
  She: At home?
  She: Working?
  She: I missed u
  Me: Working
  Me: Missed u too
  Me: What u doing?
  She: How are u?
  Me: Fine thank u
  Me: What about u?
  Me: What are u doing?
  Me: Can i see u?
  She: Working
  Me: Oh
  She: Yes
  Me: Where r u from?
  Me: Who are u?
And it kept going for a long, long time, neither of us actually saying anything real, but both of us learning a lot about what an average socially awkward American teenager conversation looks like. It had love, beauty, cuteness, gifs; it even made us add some daily love-quote bot to our thread. But we never actually typed anything ourselves, because it was so easy not to. Of course we both knew it and thought it was funny, but I can't shake this weird feeling that something is very wrong with this, and that in the long term we are being brainwashed into a dumber, more superficial version of ourselves.

p.s. I never use "r u", I find it lazy.

  [0] https://allo.google.com/
  [1] https://www.theverge.com/2016/10/18/13315168/google-pixel-camera-software-marc-levoy
  [2] https://www.theverge.com/2017/4/12/15267486/photography-machine-learning-future
Edit: formatting.


I was surprised how poorly it ran on my very modern phone. And then how tiny everything was on my desktop.

When I looked past that and tried to draw a cat, it wasn't all that useful. I mean cool, you saw I was drawing a face and gave me 50 options. But what am I supposed to do with that?

It feels like a rehash of what the Newton would do when you tried to draw stuff. But it does it better. I think if I could skip the "pick what I meant" step, it would be cool for whiteboarding in the office.


>I was surprised how poorly it ran on my very modern phone

What phone and what browser? Ran great in chrome on my mid-range android.


Sony Z3. Chrome.


That's because your very modern phone has a very puny CPU compared to even the average desktop CPU. I'm surprised about how few people know that their "2 GHz multi-core" phone is 5-10x slower than an average 5 year old desktop on common tasks.

(Edit: hehe, as evidenced by this post being downvoted. The HN audience doesn't know any better either?)


Probably downvoted because you're wrong? (Not that I did.) But this is with the caveat that this is comparing a desktop Mac to an iPhone and I haven't the faintest clue about top Android phones, although I have the understanding that the A10 destroys the current Qualcomm SoCs.

Here's an instructive article from last year comparing a 2013 Mac Pro and an iPhone 6s: https://www.mikeash.com/pyblog/friday-qa-2016-04-15-performa...

The relevant quote: >The most remarkable thing about this is how similar it looks to the Mac results above. Looking back at the old tests, the iPhone was orders of magnitude slower. An Objective-C message send, for example, was about 4.9ns on the Mac, but it took an eternity on the iPhone at nearly 200ns. A simple C++ virtual method call took a bit over a nanosecond on the Mac, but 80ns on the iPhone. A small malloc/free at around 50ns on the Mac took about 2 microseconds on the iPhone.

>Comparing the two today, and things have clearly changed a lot in the mobile world. Most of these numbers are just slightly worse than the Mac numbers. Some are actually faster! For example, autorelease pools are substantially faster on the iPhone. I guess ARM64 is better at doing the stuff that the autorelease pool code does.

>Reading and writing small files stands out as an area where the iPhone is substantially slower. The 16MB file tests are comparable to the Mac, but the iPhone takes nearly ten times longer for the 16-byte file tests. It appears that the iPhone's storage has excellent throughput but suffers somewhat in latency compared to the Mac's.


I think it's more because you're making a straw man argument. Nobody disagrees that they're slower. I'm saying it's not enjoyable on my phone despite them saying it's good on phones.


This is awesome!

Reminds me of a little toy project I made 5 years ago: https://www.youtube.com/watch?v=3WswSywx6TI


TL;DR as you sketch on one side of the page, a dynamically updated visual search appears on the right side. Very cool. Tell us more.


Just one of my many fun throwaway projects :P It used to be up at skrch.com, but went down a while ago. Couldn't figure out how to monetize/sell it so I moved on -- I'm still not sure what sector could use something like it. The original idea actually came about in a dream (true story!) and I wondered if I could actually implement it. Took me about a month or two as I had never used OpenCV before.

The search was done with a very simple histogram analysis algorithm and the image database had about 10,000 pictures from Flickr. Results were pretty decent, but sometimes hit and miss[1]. Database costs were pretty high as I don't think there's any database out there that has any way to efficiently hash 2d histograms (so everything was stored in memory). That could be a fun challenge.
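For the curious, a histogram search of this flavor can be sketched in a few lines. This is an illustration of the general idea, not the actual skrch code:

```python
def gray_histogram(pixels, bins=16):
    """Build a normalized intensity histogram from grayscale
    pixel values in 0..255."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [h / total for h in hist]

def histogram_distance(h1, h2):
    """L1 distance between two histograms; 0 means identical."""
    return sum(abs(a - b) for a, b in zip(h1, h2))

def search(query_pixels, database):
    """Return database keys ranked by histogram similarity
    to the query sketch (closest first)."""
    q = gray_histogram(query_pixels)
    return sorted(database, key=lambda name: histogram_distance(q, database[name]))

# Toy database: a mostly-dark and a mostly-light "image".
db = {
    "night": gray_histogram([10, 20, 30, 40] * 25),
    "day": gray_histogram([200, 220, 240, 250] * 25),
}
ranked = search([15, 25, 35, 45] * 25, db)
```

The "2d histograms" mentioned above (e.g. hue x saturation) work the same way, just with a bin per cell of a grid; the indexing problem is that no common database can range-query such histograms efficiently, which is presumably why everything was kept in memory.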

I open sourced it a while ago here: https://github.com/dvx/skrch

[1] https://stackoverflow.com/questions/5332212/feature-blob-cor...


Wow, that's really cool! Do you have it up somewhere?


I open sourced it a while ago here: https://github.com/dvx/skrch

No live website anywhere unfortunately.


This reminds me of Chinese handwriting input methods, which have almost the exact same UI. You draw a character on the screen, and you get a selection of results at the top.


Since nobody has mentioned this yet: I found that the core search functionality is not very good.

I tried drawing a frowny face, a stick figure person, and a puppy face, and it didn't recognize any of them. I'm terrible at drawing, but I feel these are objects that have a universally-understood outline.



That tool saved me many times. It's hard to do obscure symbolic searches in Google :')


Thanks for reminding me! Also found http://shapecatcher.com for Unicode


Fun idea but doesn't really work. Just sorta replaces your random doodle with a random piece of clip art. Any trace of your original drawing is gone. Disappointing.


Isn't that the entire point? Or am I missing something?


Yes, that is the point of this tool, and it probably beats slapping together a bunch of clipart based on google image search, but I was hoping it was something more than that. The tagline is misleading because it's not really helping anyone draw -- it's just a visual search for sketches.

I sketched a really rough palm tree and it suggested a bunch of tree drawings, one of which was a palm. That's helpful, but everyone who wants a palm gets the same palm. Wouldn't it be great if the tool recognized that I was trying to draw a palm and then improved mine by adjusting it according to what it knows about sketches of palms (smoothing the lines, adjusting angles, etc)?


Very fun! Wishful thinking, but I'm hoping they partner with The Noun Project and add SVG downloads.


Everything looks vector-ish, was surprised it didn't download as SVG :(


This would be even more interesting if one could morph the provided object with the provided sketch. Sort of like style transfer.


This turned out to be a lot less interesting than it would be in my head.

Was expecting it to maybe use the data from the other drawing experiment to dream up new creations. Not just search a limited library of glyphs


There are no butts. What type of auto-drawing tool doesn't include butts?



I suspect there are no butts because they are a stodgy corporate entity, not because they haven't thought about butts



I concur.

Drew a face and it proposed ovens and random jitters as closest matches???

http://imgur.com/a/4mjz0


Very cool, but they need an export to SVG option. Right now it looks like it only exports to png.


I thought the same thing. SVG would be killer!


This could be so much more useful than it currently is. If anyone has used Microsoft Visio you know what a pain it is to find symbols while searching through a library, especially if a symbol isn't associated with a common noun. Where are the simple arrows, Greek letters, schematic components? Perhaps they'll integrate this type of technology into an actually useful product in the future, like a Google Drive version of Visio?


Agreed, I feel your exact pain with Visio.

They should have this for all UNICODE characters, too. There are web services that attempt to do this, but they fail pretty miserably.


Being in the middle of a process mapping exercise, what you describe would be a huge time saver.


What I need is betterdraw: you input a bad drawing and it corrects perspective, shapes, etc...


Somebody needs to feed the robot overlords more dirty body parts.

Joke aside, Adobe should not be worried just yet. It seems to be just an image-tagging service or a terrible drawing application. When I draw the face of a cat, it suggests the body of a cat. When I draw a rocket, it suggests a glass of wine, and so on.

The QuickDraw game was fun and a good idea, but basing a drawing application on "topics" from that game seems like a bad choice.


Boring. Really, what's the point? It doesn't even make a connection between more than one "drawing". Try to draw a triangle, select the shape and then draw another triangle to make the two look like a square - you won't see a square option in the suggestions pane, since it doesn't see/remember your first triangle. My kid would probably like it though.


This is awesome tech. And probably useful.

But art, even (especially) hand-drawn scribbles, loses almost everything without the character and idiosyncrasies of the artist. This is little more than a fancy UI for a clip-art library. Clip art sucks, although it does have a purpose, limited as it may be.


For some reason they left some `console.log` calls in, and you can see them if you open the Developer Tools.


My guess is that they trained the model for this with the data they collected from [Quickdraw](https://quickdraw.withgoogle.com/)


Okay. This is great, but I feel like Google's off doing everything but looking after their current products. Google Inbox is in dire need of new features to bring it in line with competing mail products, and Gmail needs a facelift.


Regarding Inbox, I couldn't care less about more features at this point, I just want better desktop web performance.


From the autodraw page:

>Built by Dan Motzenbecker and Kyle Phillips with friends at Google Creative Lab.

Are Dan Motzenbecker and Kyle Phillips responsible for Google Inbox development? If not, it seems a bit silly to criticise a project made by <10 people in a company with 72,053 employees in total.


Wow, it's really annoying that there's no way to type text in and get that shape. I get that it's cool that it will (sometimes) recognize what I draw, but apparently I can't draw the Space Needle for the world! It would be nice to still be able to search their images with text manually instead of having to try to draw everything you want when you know you'll use what they have.


This is pretty incredible. The only really important thing that's missing is the ability to flip the images horizontally or vertically. Otherwise my cow is unable to wear a helmet: https://www.autodraw.com/share/T7HFJ9TVN91J


Awesome. As a board game designer, I could see myself using this to make prototype cards that look decent much easier. Although I'm sure I'll still need Illustrator to take it to the next level. But for a quick and dirty prototype, it should work great.

Much better than searching the web for hours for icons that have a similar-ish art style that have what I need.


Thenounproject.com is really good. Also, Daniel Solis sells packs of really specific board game icons.


I keep forgetting about the noun project. Also, those icons by solis look like they could be useful, thanks for letting me know about that. I was aware of an old card game design series he did, but not his Patreon. I've also picked up a few of his games that are still on my Wall of Board Game Shame and need to get played.


Two minutes use and color me unimpressed. The first thing I drew should have been a slam-dunk and it wasn't recognized. (I challenge anyone to draw a padlock that autodraw can recognize!)

Also "Fill" doesn't fill except with pre-defined shapes. What's with that?


I can do a passable Bart Simpson face and the top three recommendations were for Teddy Bears. I think that's quite cool. I might throw some other stuff at it later when I have a stylus to play with. Love finding out about these things.


This reminds me of the excellent http://shapecatcher.com/, except for an icon library instead of a Unicode font.


"There's nothing to download"

Yeah, that's the web for you. With all the obsession with apps in recent years, I'm glad to hear that being advertised.

Maybe we're hitting an app saturation point.


It's really cute. I wonder if Google is going to use all the classification that people will do of their own sketches to teach its machines... to recognize hand-drawn sketches.


That was my first thought as well. But I do wonder about the quality of submissions given that most people can't draw with a mouse as well as they can with a writing utensil (assuming a drawing tablet or touchscreen is not used either).


> assuming a drawing tablet or touchscreen is not used either

Given the prevalence of mobile devices, some of which use styluses—even ignoring touchscreen laptop and desktop screens—why would you assume that?


One issue I see is that you can only export pngs. It would have been really useful to be able to copy the selected drawing elements and paste them into a Google Drawing


What other people think : “Oh! what a cool drawing app, Google is so awesome!”

What I think : “Oh what an interesting way of crowd sourcing ML data, Google is so smart!”


I can't see the auto suggestions on Google Autodraw. Sort of defeats the purpose... Anyone with the same problem? I have a MacBook Pro.


Only thing that seems to be missing is a way to re-arrange the shapes (z-order). I tried page-up/down and right-clicking but no luck.


Tried to draw a dick, failed to be recognized.


Interesting. It definitely supports the narrative of AI changing our jobs, in this case the designer's.

For less artistic folks like me, this tool is a godsend. How many times have you wanted to illustrate a simple diagram but couldn't draw or use Photoshop?

Having said that, I can see Autodraw still needs more work. It failed to recognize a phallus.


Really interesting idea that isn't yet implemented well enough to be that interesting in practice.

The idea of machine learning refining your drawings as you go, forming a sort of cooperative artistic partnership, is fascinating.

The idea of machine learning somewhat sloppily matching your drawings to pre-existing ones and just replacing them.....well, kinda just feels like image search copy/pasted into microsoft paint.

But it's a start!


If anyone else is confused about it not working, check if privacybadger is blocking inputtools.google.com


With Autodraw, there's nothing to download, nothing to install...

An installable version would be nice, though.


Guns don't exist, but swords do.


Unfortunately I get:

> Over Quota: This application is temporarily over its serving quota. Please try again later.


(I guess Google made it using App Engine.)

This is a little embarrassing...


I can't see the auto-suggestion bar on my MacBook Pro. Anyone with the same problem?


Try getting it to display fire or flame that isn't attached to a candle or birthday cake.


Reminds me a lot of shapecatcher (shapecatcher.com): Unicode character recognition by drawing.


The name AutoDraw sounds like it is created by Autodesk, the creator of AutoCAD and AutoSketch.


Got my hopes up for a really easy drawing program. Instead got an icon finder.


AI experiment? Bet the world pictionary champ is starting to sweat :P


I'm seeing 404 for the images, wonder if it auto-reports that.


OK, I drew a cock and balls and the top matches were saxophones.


Does that mean it works? More importantly, how was the AI trained in genitalia recognition?


This is a really clever data collection technique. ;)


Never suggests anything on Linux Chrome 57.0.2987.133.


I had to allow third party cookies to inputtools.google.com in privacy badger for it to suggest things


This is literally the "Write the kanji" feature which has been present in Google Translate for years, with a different 'character set.' Not exactly impressive.


Wow, the blacklist of classes is really long.


Don't get used to it... If it's an "experiment", and "free", then it will likely get shutdown.


Great, more clip art. /s


Tried drawing some circuit components but they weren't in the library :(


I couldn't get a horse. It was a horrible drawing, though.


It thought my horse was pliers, a camel, or a hand gesture: http://imgur.com/a/JWifP


Someone could make a great version of "Eat poop, you cat" with this.


Oh, hell yes.

1. Make a sketch of your choice.
2. Pick the first AutoDraw suggestion (or randomly one of the first N).
3. Feed that to Google image search.
4a. Google's best guess for this image is the prompt for the next human sketch. Repeat from 2. OR
4b. Pick a sketch-like image from the results of 3 and reproduce it on the AutoDraw canvas. Repeat from 2.

I got an amazing result on my first attempt at 1 - 3:

Autodraw (Cat > Raccoon): http://imgur.com/a/Fy00K

Google Images (Raccoon > "Suppression of Enemy Air Defenses"): http://imgur.com/a/F3z1T

It'd be hilarious to automate this, set it running, and just watch it go.
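Since neither AutoDraw nor Google Images exposes a public API for this, automating it for real would mean driving a browser; the loop itself, though, is trivial. A sketch with canned lookup tables standing in for both services (all the mappings below are made up):

```python
# Hypothetical stand-ins: in a real version these lookups would be
# replaced by browser automation against AutoDraw and Google Images.
AUTODRAW = {"cat": "raccoon", "bear": "pig", "pig": "cow"}
IMAGE_SEARCH = {"raccoon": "bear", "pig": "pig", "cow": "cat"}

def telephone(prompt, rounds):
    """Run the sketch -> suggestion -> image-search loop and
    record how the prompt drifts each round."""
    chain = [prompt]
    for _ in range(rounds):
        suggestion = AUTODRAW.get(chain[-1])
        if suggestion is None:
            break  # nothing recognized; the game ends
        chain.append(IMAGE_SEARCH.get(suggestion, suggestion))
    return chain

# With the tables above, telephone("cat", 3) drifts
# cat -> bear -> pig -> cat.
```

That drift is exactly the "Eat Poop You Cat" dynamic from the Cat > Raccoon > "Suppression of Enemy Air Defenses" example above.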




I didn't realize there were now brand top level domains, such as `.google`. Here is a list of more, https://en.wikipedia.org/wiki/List_of_Internet_top-level_dom...



