Suggestive Drawing Among Humans and AIs (nono.ma)
124 points by swannodette 9 months ago | 25 comments



Humans - for the moment - still lead machines by a wide margin in the realm of creative output.

But I love the author's idea of the machine as a "collaborator"

Tools like the Source Filmmaker are a good analog. (1)

That platform packaged together powerful animation tools along with free assets and a simple UX to empower largely non-technical creators to make awesome CGI clips, movies, and memes.

The function of the software referenced in the OP is simple: it colors your photos, and it helps to extend or add texture and depth.

But you can see the potential for a future where damn near anything you can think of and draw in stick-figure form could come to life as a fully realized image, painting, animation, or experience.

(1) https://en.wikipedia.org/wiki/Source_Filmmaker


> But I love the author's idea of the machine as a "collaborator"

My understanding is that this idea was at the very heart of how/why modern personal computers developed from batch-processing systems.

I remember reading a quote from one of the central figures in this development (I believe it was J.C.R. Licklider; it's definitely from the book "The Dream Machine", which focuses quite a bit on him). It suggested that the ideal role of computers was to leave us in a position where our thoughts are purely creative: all thinking done in support of a direction we've already decided on is handled by a computer instead, and we just make the creative decisions.

That said—this is of course some cool new progress in that direction :)


Neat article, but I'm skeptical of machine learning making art easier.

I spent some time using PaintsChainer, an ML tool to autofill colors based on a few starting choices, and the results were... rough.

The problem with ML art is that it lacks the ability to polish. 90% of the impact of art is in the last 10% of the work, where the artist meticulously refines the piece to turn it from a loosely colored sketch into a cohesive and complete picture.

Many of the tricks and tools used to create this polish are heuristics that aren't quantifiable or teachable via image sets. They come from an understanding of the "gestalt" of a picture, or of what the gestalt should be, and then doing the necessary work to get it there.

The other problem is that you can't teach an ML algorithm about the hidden volumes in art. Much of drawing/painting is about tricking the eye into perceiving volume when none exists. An ML algorithm can perceive volume and identity after color/light have been applied, because those categories carry data. However, an ML algorithm can't infer what color/light a circle should have to give it the correct volume/perspective. A circle can be a doorknob, a pie, a ball, or an eye, each needing different data applied to it - data the ML algorithm doesn't (and won't ever) have.

Personally, I'd welcome a tool to make painting easier. It'd be amazing. However, I don't think we are all that close to a machine creating polished artwork.


I didn't read very far, but the article didn't seem to spend much time considering art theory, which seems odd when I would think codifying those rules would build a better AI. This paper builds a mechanical assistant for the mechanical act of drawing, which, I guess, is good, but like you say, that AI will never be able to get the work to 100%. Which means real artists won't have much to be concerned about.

Also, that Marshall McLuhan quote sounds more appropriate in a bitcoin thread. The world doesn't need a cult of A.I.


Every time someone makes a statement, or series of statements, involving the words "can't" or "not" (or similar) about the capability, adoption, or general progression of technology to do a particular thing or reach a particular goal, my spidey sense starts tingling and I think to myself, "well, that's going to happen." Unless it involves basic physics - then I think, "okay, probably not going to happen, but you never know."


You can't determine if an arbitrary computer program will halt on some input.


Probably falls into the category of physics. ;)


Nope. The halting problem is a purely mathematical problem. You don't need any physics to define it or to prove that it's not solvable.
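To make that concrete, here is a minimal sketch of the standard diagonalization argument in Python. It's illustrative only: `halts` and `paradox` are hypothetical names, and the whole point is that a correct `halts` cannot be written.

    def halts(program, argument):
        """Hypothetical oracle: returns True iff program(argument) terminates."""
        raise NotImplementedError("no total, correct implementation can exist")

    def paradox(program):
        # Do the opposite of whatever the oracle predicts about running
        # `program` on its own source.
        if halts(program, program):
            while True:   # loop forever if the oracle says it halts
                pass
        return            # halt immediately if the oracle says it loops

    # Consider paradox(paradox): if halts(paradox, paradox) returned True,
    # paradox(paradox) would loop forever; if it returned False, it would halt.
    # Either answer contradicts the oracle, so no such `halts` can exist.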


Why do you assume that a neural network would be unable to understand (or classify) the 'gestalt' of a picture? And I would not make those assumptions about perspective, because in the end ML could build a 3D model of the picture and know the exact perspective.


If there's anything I would bet money on to take UX to a revolutionary new level, it's this.


Even in this relatively crude form, I already see a wide range of applications in fashion design. For instance, you could take some of the textured flower designs the author presents and place them on graphic tees that could easily sell at Uniqlo or H&M - and in bulk, these new designs would cost much less than paying multiple designers to create them.

(Though as the author points out, you would still need at least one designer to train the machine.)


This site repeatedly crashes my mobile browser; I can't even remember the last time that happened. (Chrome on Android 8.1)


> You can use the arrow keys to navigate this page, but then you cannot read it anymore.

FTFY


Awesome article.


Very impressed by the author's effort, particularly in making the topic easily digestible.


I cannot read the article unless I turn on Reader View in Firefox. The text is almost white.


The text color is almost black: rgb(34, 34, 34). It uses a webfont called Minion Pro (https://typekit.com/fonts/minion), which isn't especially thin. The fallbacks in the font stack are just serif and sans-serif.


That is not what I see in my Firefox:

https://imgur.com/a/v3Oo1uy

It is probably some weird configuration on my computer, though.


You hit an arrow key. This page has weird scripting that tries to let you navigate with arrow keys, but I noticed it ends up skipping most of the text that way.


I'm guessing they used this document as their 'slides' while presenting. The text is what they'd be saying, while the arrow keys hide the script, and flip between annotated graphics.

That said, the keyboard navigation is pretty much useless outside of that context.


I think it has something to do with the scroll hijacking. The text is almost white if it thinks you haven't "scrolled" to it yet.


I have tried to scroll and move around, but the contrast does not change.


Was hoping for AI-generated crudely drawn penises. Was disappointed.


Oh come on guys, it was at least a little bit funny!


One of the best things about HN is how joke comments aren't usually upvoted, even if they were funny. It makes room for other kinds of conversations.



