Drawing/art programs generally have a line smoothing feature that just smooths your wobbly lines as you draw, using relatively simple algorithms. The description here made me hope for something more "medium-level", half-way between simple smoothing and outright clip-art replacement. It wouldn't just smooth your lines - it would adjust them according to context, based on a corpus of more precise line drawings, and perhaps predict/suggest the next strokes. It might be difficult to pull off, though: implemented naively, it would probably just work against the artist.
I assumed this was the big idea in TFA, but it seems it's a collection of clip art, with a terrible interface for looking it up.
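For reference, the kind of "relatively simple algorithm" I mean is something like Chaikin's corner-cutting - a minimal sketch of my own, nothing from TFA:

```python
def chaikin_smooth(points, iterations=2):
    """Chaikin's corner-cutting: each pass replaces every segment with
    two points at 1/4 and 3/4 of its length, rounding off corners."""
    for _ in range(iterations):
        smoothed = [points[0]]  # keep the first endpoint fixed
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            smoothed.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            smoothed.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        smoothed.append(points[-1])  # keep the last endpoint fixed
        points = smoothed
    return points

# a sharp "L" corner gets progressively rounder with each pass
print(chaikin_smooth([(0, 0), (0, 10), (10, 10)]))
```

The context-aware version would have to go well beyond this, which is exactly why a naive implementation would fight the artist.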
I am super confused by the existence of this.
Yes, exactly. And then you start sketching a few rough buildings, a beam, some comets, some explosions, and a helicopter: it becomes a drawing of a giant flying unicorn laser cat from space, attacking Tokyo.
Edit: something like this.
This may be the answer for how to enter emoji. There are now over 2600 emoji, with more to come. Keyboard selection isn't working and menus are huge.
I lasted about an hour before I got incredibly frustrated that it wouldn't let me draw anything. Can't draw a robot. Can't draw a sad face (only a smiley face). Can't even draw a stick figure. Can't draw a speech bubble.
I felt like it was fighting me over what it wanted to draw, while leaving very basic and fundamental shapes out. There were more things I couldn't draw; I can't even remember them all.
-> There's an undo button, and it works well. But there should be a redo button. (Or the Apple-Y / Ctrl-Y keyboard shortcut for redo ought to work.)
-> See how my smiley face is too big on the right? Well, I can't make it smaller: even if I zoom way in (there's a zoom function), I can't use the select tool to select just the smiley face (inside the jail) and shrink it. I'd have to recreate the parts of this image separately.
-> There is no way to set line thickness on the clip art! This should be one of the easiest things to set - but you can only scale the whole image, not the line width. That makes it hard to work with.
Overall I found the experience frustrating.
I have a challenge for you guys though: for the most common hundred thousand or so words, run a machine learning algorithm over your own Image Search results to come up with canonical ideas of what the objects in question might look like, after sorting them into categories based on similarity of recognized features. Then have the algorithm create an outline from the canonical idea it has derived for each category.
What I mean is that if someone Googles "hand" they might get: left hand, right hand, fist, middle finger, OK sign. There really are only so many ways to hold a hand - so many visual meanings/memes for the idea of "hand" - and other artists have already introduced a canonical version of each. (Likewise, "stick figure" has a meme around it.)
So for each one of those, the algorithm could learn from every version it judges similar to the others -- and then draw its own for each one! (Computer algorithms are good at drawing in a learned style, even Van Gogh's, etc.)
Other simple examples include a "peace sign". If you Google image search "peace sign" you obviously get a very canonical shape. Why can't a machine learning algorithm draw its own?
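To make the sorting half of the challenge concrete, here's a rough, purely hypothetical sketch: cluster the search results by visual features and keep one representative per cluster. `embed` and `images` are placeholders for whatever feature extractor and scraped corpus you'd use; the "draw its own" half is the genuinely hard part.

```python
import numpy as np
from sklearn.cluster import KMeans

def canonical_examples(images, embed, n_meanings=5):
    """Group image-search results for one word into visual "meanings"
    (fist, OK sign, ...) and pick one canonical example per cluster."""
    feats = np.stack([embed(img) for img in images])  # (N, D) feature vectors
    km = KMeans(n_clusters=n_meanings, n_init=10).fit(feats)
    canon = []
    for c in range(n_meanings):
        members = np.flatnonzero(km.labels_ == c)
        # canonical example = the member closest to the cluster centroid
        dists = np.linalg.norm(feats[members] - km.cluster_centers_[c], axis=1)
        canon.append(images[members[np.argmin(dists)]])
    return canon
```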
This idea of deriving free, Creative Commons-licensed images (not subject to trademark, of course) with a machine learning algorithm trained on a huge corpus of image-result data (in a fair-use way), without copying any of them in particular, would be huge.
You have most of the interface to do this. It is a nice next challenge for you - and a very serious one. I suggest you do it!
It takes a lot of skill, tact, and product acumen to get things out the door -- probably the same set of skills you'd need outside Google. (Except that inside, you have access to more and better resources, but the bar is much higher.)
Obviously, this doesn't mean that every idea will stick... a lot of them won't -- some don't make money, some don't provide real value, and some are just terrible ideas. But it's a much better process than just top-down alone.
If you're curious, try one of them out. It gets frustrating pretty quick.
The AutoDraw website literally says "This is an A.I. Experiment" in big letters.
Here is an illustration of what I am concerned about:
My wife downloaded Google Allo (yet another chat app where you can change the font size - innovative, I know). It also happens to suggest replies so you don't have to type as much.
Here is how it went:
Me: Hi how r u
She: Where r u
She: Where r u now?
She: At home?
She: I missed u
Me: Missed u too
Me: What u doing?
She: How are u?
Me: Fine thank u
Me: What about u?
Me: What are u doing?
Me: Can i see u?
Me: Where r u from?
Me: Who are u?
p.s. I never use "r u", I find it lazy.
When I looked past that and tried to draw a cat, it wasn't all that useful. I mean cool, you saw I was drawing a face and gave me 50 options. But what am I supposed to do with that?
It feels like a rehash of what the Newton would do when you tried to draw stuff. But it does it better. I think if I could skip the "pick what I meant" step, it would be cool for whiteboarding in the office.
What phone and what browser? Ran great in Chrome on my mid-range Android.
(Edit: hehe, as evidenced by this post being downvoted. The HN audience doesn't know any better either?)
Here's an instructive article from last year comparing a 2013 Mac Pro and an iPhone 6s: https://www.mikeash.com/pyblog/friday-qa-2016-04-15-performa...
The relevant quote:
>The most remarkable thing about this is how similar it looks to the Mac results above. Looking back at the old tests, the iPhone was orders of magnitude slower. An Objective-C message send, for example, was about 4.9ns on the Mac, but it took an eternity on the iPhone at nearly 200ns. A simple C++ virtual method call took a bit over a nanosecond on the Mac, but 80ns on the iPhone. A small malloc/free at around 50ns on the Mac took about 2 microseconds on the iPhone.
>Comparing the two today, and things have clearly changed a lot in the mobile world. Most of these numbers are just slightly worse than the Mac numbers. Some are actually faster! For example, autorelease pools are substantially faster on the iPhone. I guess ARM64 is better at doing the stuff that the autorelease pool code does.
>Reading and writing small files stands out as an area where the iPhone is substantially slower. The 16MB file tests are comparable to the Mac, but the iPhone takes nearly ten times longer for the 16-byte file tests. It appears that the iPhone's storage has excellent throughput but suffers somewhat in latency compared to the Mac's.
Reminds me of a little toy project I made 5 years ago: https://www.youtube.com/watch?v=3WswSywx6TI
The search was done with a very simple histogram analysis algorithm, and the image database had about 10,000 pictures from Flickr. Results were pretty decent, but sometimes hit-and-miss. Database costs were pretty high, as I don't think there's any database out there that can efficiently hash 2D histograms (so everything was stored in memory). That could be a fun challenge.
I open sourced it a while ago here: https://github.com/dvx/skrch
No live website anywhere unfortunately.
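For the curious, the general technique looks something like this - a rough OpenCV sketch of the idea, not the code in that repo:

```python
import cv2

def hs_histogram(path, h_bins=30, s_bins=32):
    """Compute a normalized 2D hue/saturation histogram for one image."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [h_bins, s_bins], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def similarity(h1, h2):
    # correlation metric: 1.0 means identical histograms
    return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)

# usage: rank precomputed histograms against a query sketch's histogram
# best = max(database.items(), key=lambda kv: similarity(query, kv[1]))
```

With no good way to index the histograms, every query is a linear scan over the whole database, which is why keeping it all in memory was the only practical option.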
I tried drawing a frowny face, a stick figure person, and a puppy face, and it didn't recognize any of them. I'm terrible at drawing, but I feel these are objects that have a universally-understood outline.
I sketched a really rough palm tree and it suggested a bunch of tree drawings, one of which was a palm. That's helpful, but everyone who wants a palm gets the same palm. Wouldn't it be great if the tool recognized that I was trying to draw a palm and then improved mine by adjusting it according to what it knows about sketches of palms (smoothing the lines, adjusting angles, etc)?
Was expecting it to maybe use the data from the other drawing experiment to dream up new creations, not just search a limited library of glyphs.
Drew a face and it proposed ovens and random jitters as the closest matches???
They should have this for all Unicode characters, too. There are web services that attempt to do this, but they fail pretty miserably.
Joke aside, Adobe should not be worried just yet. It seems to be just an image tagging service or a terrible drawing application. When I draw the face of a cat it suggests the body of a cat. When I draw a rocket it suggests a glass of wine, and so on.
The QuickDraw game was fun and a good idea, but basing a drawing application on "topics" from that game seems like a bad choice.
But art, even/especially hand-drawn scribbles, loses almost everything without the character and idiosyncrasies of the artist. This is little more than a fancy UI for a clip-art library. Clip art sucks. Although it does have a purpose, limited as it may be.
>Built by Dan Motzenbecker and Kyle Phillips with friends at Google Creative Lab.
Are Dan Motzenbecker and Kyle Phillips responsible for Google Inbox development? If not, it seems a bit silly to criticise a project made by <10 people in a company with 72,053 employees in total.
Much better than searching the web for hours for icons that have a similar-ish art style that have what I need.
Also "Fill" doesn't fill except with pre-defined shapes. What's with that?
Yeah, that's the web for you. With all the obsession with apps in recent years, I'm glad to hear that being advertised.
Maybe we're hitting an app saturation point.
Given the prevalence of mobile devices, some of which use styluses—even ignoring touchscreen laptop and desktop screens—why would you assume that?
What I think : “Oh what an interesting way of crowd sourcing ML data, Google is so smart!”
For less artistic folks like me, this tool is a godsend. How many times have you wanted to illustrate a simple diagram but couldn't draw or use Photoshop?
Having said that I can see Autodraw still needs more work done. It failed to recognize a phallus.
The idea of machine learning refining your drawings as you go, forming a sort of cooperative artistic partnership, is fascinating.
The idea of machine learning somewhat sloppily matching your drawings to pre-existing ones and just replacing them... well, it kinda just feels like image search copy/pasted into Microsoft Paint.
But it's a start!
An installable one would be nice, though.
This application is temporarily over its serving quota. Please try again later.
This is a little embarrassing...
1. Make a sketch of your choice
2. Pick the first AutoDraw suggestion (or randomly one of the first N)
3. Feed that to google image search
4a. Google's best guess for this image becomes the prompt for the next human sketch. Repeat from 1.
4b. Pick a sketch-like image from the results of 3 and reproduce it on the AutoDraw canvas. Repeat from 2.
I got an amazing result on my first attempt at steps 1-3:
Autodraw (Cat > Raccoon): http://imgur.com/a/Fy00K
Google Images (Raccoon > "Suppression of Enemy Air Defenses"): http://imgur.com/a/F3z1T
It'd be hilarious to automate this, set it running, and just watch it go.
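A sketch of what that automation could look like - every helper here is a hypothetical stand-in for browser automation, since neither AutoDraw nor Google Images exposes a public API for this:

```python
import random

# Stubs: each would be implemented with e.g. Selenium driving
# autodraw.com and images.google.com in a real script.
def sketch(prompt):                  raise NotImplementedError  # step 1
def autodraw_suggestions(drawing):   raise NotImplementedError  # step 2
def image_search_best_guess(term):   raise NotImplementedError  # steps 3-4a

def telephone_game(prompt, rounds=10, top_n=3):
    """Loop steps 1-4a: sketch -> random top-N suggestion -> best guess."""
    trail = [prompt]
    for _ in range(rounds):
        drawing = sketch(prompt)
        pick = random.choice(autodraw_suggestions(drawing)[:top_n])
        prompt = image_search_best_guess(pick)
        trail.append(prompt)
    return trail  # e.g. Cat > Raccoon > "Suppression of Enemy Air Defenses" > ...
```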