World Draw (withgoogle.com)
172 points by valgaze 5 months ago | 120 comments



I'm not really understanding the point of this. I can draw something, then it tries to match it to one of its cookie-cutter things that it knows about, and doesn't seem to use my drawing at all in the final map. So is this just a glorified "thing picker", rather than a big dropdown or something?


Yeah, would have been much more pleasant to just type "fence."

I thought I could place it myself, I was going to be an Internet about it and drop a fence right in the middle of this roundabout with like ten buses going around it, but I'm stuck on this "finding a place in the world," so I don't really get the purpose of this app at all.

I have no way to distinguish if I actually am affecting this "world." For all I know someone is just modeling stuff, and when I draw a "fence" it goes and finds a random fence and says "here's the fence you made, yay for you!"


> I have no way to distinguish if I actually am affecting this "world." For all I know someone is just modeling stuff, and when I draw a "fence" it goes and finds a random fence and says "here's the fence you made, yay for you!"

I have a feeling that's exactly what's going on here because I drew a small bush and it said "Thank you, the world needed 4 of those!".

Also take a look at where most of the trees were created at. Shoreline Amphitheatre. :)


I love how Hollywood gives us futuristic UIs from movies like Iron Man, the Matrix, etc. And then you have Google. They predict a future that looks like... to use another Hollywood reference, The Truman Show.


They are using it to collect labeled drawing data for future ML training.


Quick, trigger the botnet to gaslight this training session ;)


This was my experience.

First I drew a post box that I can (normally) see from my window. It tried to tell me it was a house, and I managed to cancel out by accident in my search for a "not a house" option.

Second I drew a bush with lots of bubbly bits - very mario-esque - but it turned it into a spheroid.

Confusing if this is what was intended; but the UI was pretty.


It's just a thing they made for the last Google I/O to show off their doodle-to-thing AI recognition. So, yeah, the thing picker is the point.


More specifically, this game:

https://quickdraw.withgoogle.com/

It uses an AI to figure out what you meant from an abstract drawing.


Which led to this amazing segment on the local news - https://youtu.be/ZOXKLz4UfRY


At first I was skeptical, but now I'm glad they made this app.


I tried Firefox and Chrome to see if it was a browser dependency. I tried work and home internet to see if it was a firewall issue.

If you bring up the Console you can see the issue: it's unable to connect to the socket it's trying to reach. So, a live bug.


This site makes absolutely no sense. Why is there no explanation to what I'm supposed to do and to what is happening? It says it's the "world" but I can barely navigate a small little neighbourhood. I thought I'd be able to explore the whole planet with drawings correlated to the location they're made in. Why does it tell me where people are drawing things from when it really means nothing?

When I finally got around to drawing something, it didn't work. I drew a relatively simple rectangular building and nothing happened. I realized it must be matching based off of things it already knows so I restarted and drew the simplest 3 line car I could. Still nothing. No error, no instructions. Just a trash can and a circular arrow.

This experiment is a failure.


I think that, similarly to earlier projects like https://quickdraw.withgoogle.com/, it's a way of obtaining tons of user-generated data (scribbles), which are also relatively easily classified.
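As a toy illustration of why scribble data is so cheap to classify, here's a minimal nearest-template sketch in Python. Everything in it (the grid size, the templates, the stroke format) is an assumption made up for the example, not how Google's actual model works:

```python
# Minimal nearest-template scribble classifier (illustrative only).
# A drawing is a list of strokes; a stroke is a list of (x, y) points
# on a 256 x 256 canvas, loosely mimicking QuickDraw-style stroke data.

GRID = 8  # coarse occupancy grid; real models use far richer features

def rasterize(strokes, size=256):
    """Map a drawing onto a GRID x GRID set of occupied cells."""
    cells = set()
    for stroke in strokes:
        for x, y in stroke:
            cells.add((min(GRID - 1, x * GRID // size),
                       min(GRID - 1, y * GRID // size)))
    return cells

def classify(strokes, templates):
    """Pick the label whose template overlaps the drawing most (Jaccard)."""
    cells = rasterize(strokes)
    return max(templates,
               key=lambda label: len(cells & templates[label]) /
                                 len(cells | templates[label]))

# Toy templates: a 'fence' is a horizontal band, a 'tree' a vertical one.
templates = {
    "fence": {(x, 4) for x in range(GRID)},
    "tree":  {(4, y) for y in range(GRID)},
}

# A roughly horizontal scribble lands on 'fence'.
scribble = [[(x, 130) for x in range(0, 256, 16)]]
print(classify(scribble, templates))  # fence
```

The point of the toy: the strokes are reduced to a label and then thrown away, which is exactly what several commenters here are objecting to.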


I don't know, took me a few moments to create a house boat. Experiment is a success?


You didn't create a house boat. Somebody modeled a house boat and gave it two parameters for slightly tweaking it. An ML system was trained to classify scribbles as (among other things) a house boat.

You scribbled something that the ML system classified as a house boat. None of the characteristics in your drawing moved over to the model. The only creative part (your scribble) was destroyed.

You could've just typed "house boat" and picked the model from a list. The result would be the same.
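A hypothetical sketch of that equivalence (the MODELS catalog, classify_scribble, and the file names are all invented for illustration, not the real app's API): whichever path you take, the endpoint is the same catalog lookup.

```python
# Both input paths collapse to the same catalog lookup (illustrative only).
MODELS = {"house boat": "houseboat_v1.glb", "fence": "fence_v1.glb"}

def classify_scribble(strokes):
    # stand-in for the ML classifier; the strokes are discarded either way
    return "house boat"

def via_drawing(strokes):
    return MODELS[classify_scribble(strokes)]

def via_dropdown(choice):
    return MODELS[choice]

assert via_drawing([[(10, 10), (20, 20)]]) == via_dropdown("house boat")
```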


And yet just doodling something raw and having something presentable come out is way more fun than choosing stuff from drop down menus. Everyone is wired differently and this may not work for you but may work for others.

Why even have a GUI? Why not a CLI which parses a config file that says "house boat"?


> And yet just doodling something raw and having something presentable come out is way more fun than choosing stuff from drop down menus.

It's fun for thirty seconds, after which it is just tedious. If you actually want to "get stuff done", like building a toy city, you will want the menu that shows you what's actually available.

> Why even have a GUI? Why not a CLI which parses a config file that says "house boat"?

Because that's obviously a bad interface as well. It's not like scribbly interfaces are the next step of evolution in human-computer interaction. They're gimmicks. You're not going to go to Amazon.com and start scribbling something you want to buy. You'll type it in.


Overall I agree with you. And I'm pretty disappointed with this experiment page. The previous ones were better for sure.

But to your Amazon example, doodles might be the new best way to find "that thing with the stick out the top and the bell looking thing at the bottom. I forget what it's called"

I'm glad Google is playing around with this. This one though just feels flat compared to the others.


No one said this is the "future" of all interfaces. It was a demo and it may have a niche. You're doing all that extrapolating on your own.


Hah, the UI told me I made a houseboat, thanks for raining on my parade.


You still get a trophy for participation.


> The result would be the same.

Not to Google.


You move around by dragging the picture from side to side. Not the most intuitive interface, to be sure.


I let my 7 year old play with this; she was able to scribble simple things, which were recognized pretty well, actually, and it was a fun and useful activity. I think most of HN is not the target audience here, and we are reacting to our assumptions based on the title of the project.


It's nice to see a single reasonable response here. The entire premise seems to fit a young kid contributing to something, but a bunch of jaded 30-year-olds on HN are angry that this tool didn't appeal to them.


Heck this site doesn't even work with noscript. I'm outta here. /s


HN does tend to get very self-absorbed at times.


But has anyone claimed that they could build it themselves in a weekend?


It works completely contrary to what it pretends. There is no creativity and creative collaboration happening here. Everything I draw is turned into a predefined shape. Everything unique is translated into something generic. I think there is a serious misunderstanding about the value of creativity happening here, and I very much hope this is not the future Google is imagining for us.


Woah, slow down a bit. This feels very much like a toy project with a toy budget that had optimistic ideas about how easy it would be to create something that could be dynamic in an interesting new way.

To think that it somewhat represents the future direction of Google seems over the top.

Dynamic creation in the way they are hinting at in the description seems like a very difficult but interesting endeavour that requires a lot of iterating to get anywhere on.


I think you are both right. It's a tech demo which perfectly encapsulates a disturbing direction we could slowly evolve into... unintentionally.

This thing could be placed in an art gallery. Visitors submit an original drawing and watch it transform into a cookie-cutter contribution, stripping away everything that makes them unique. It's actually a pretty played-out concept artistically, the whole "white picket fence" vibe.


I think it's just how people view large tech companies on this site. The view of those large companies is very negative in general here.


I've seen withgoogle.com before. withgoogle.com is supposed to evoke collaboration, but with Google's reputation as a cold, impersonal corporation where everything is automated and there's hardly ever any technical support, I just mentally shrug it off when I see it.

and we also have abc.xyz and domains.google and other domain shenanigans that I forget about.

I just learned that alphabet.com is a different company. I wonder if they've clung to the domain as tightly as nissan.com has clung to theirs, or if Google doesn't want it that much.


Think of withgoogle.com as an off-google.com domain for experiments and ad-hoc programs/campaigns. There are still the same privacy and security standards, but a little more lax branding rules to allow for more ambitious projects. (Disclaimer: I work at Google and have launched some projects on withgoogle.com)


Back when I was working on jobs for Google (they were my employer's client), withgoogle.com was a domain where we hosted things like "über-doodles", all the funky gizmos, etc. It's basically so that they don't run "experimental" / "untrusted" code in the same cookie domain as google.com; the same goes for googleusercontent.com, where they store user-uploadable media.

It was always my impression that coolproject.withgoogle.com is meant to be understood as "company X made a cool project with Google".


Not sure if I understand your comment. Alphabet is Google's parent company, allegedly created to assuage the fear of a giant entity by turning it into multiple quite-big entities.


Alphabet, the owner of alphabet.com, appears to be some sort of fleet (as in company car) management service, unrelated to the Alphabet that owns Google.


Right, I see. Thank you for the clarification.


I agree. It promised to let me "draw the world", and then immediately turned all my creative phalluses into mushrooms and trees. What's the point of spending all that effort making users draw stuff if you're going to replace it with ready-made art? Such a waste of users and traffic.

The things any number of the projects at Indie Hackers or Product Hunt could do with the traffic!


This is probably the most dramatic reply I can think of


Hahaha, thanks. I remembered this site I used to spend time on where people actually collaborated on weird quilt paintings back around 2000; now that was a lot of fun. Sadly it's not functional anymore and survives only as a gallery: http://www.ice.org/tiles/ The paintings might be a bit uglyish, but I prefer them any day to the kind of tech fetishization Google is presenting here.


I actually like this art quite a bit, and don't find it uglyish at all. Sad that it's not functional anymore; how did the collaboration take place? It could not have been real time back then, could it?


It was actually a pretty cool site. You would reserve one of the free square "tiles" and would only see a thin stripe of the neighboring tiles that you'd paint next to. You'd get a png image with something like 10 pixels of the neighboring tiles on the edges that you had to blend into, and you were given 24 hours, I think, to reupload it to the site. Only when all 4 neighboring tiles were completed would a tile be revealed. It was a lot of fun. They also had stuff like imaginary gold coins for X number of painted tiles, not for yourself, but that you could award to a tile you especially liked that somebody else made.

I also like the art in the sense that it allows all kinds of perspectives to flow into the image, and the result has way greater complexity than anything somebody alone could create. It would be worthwhile to recreate the site - maybe I'll do it some day (-:
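A rough Python sketch of the handout mechanics described above, just to make them concrete; the tile size, strip width, and pixel format are guesses, not the real site's:

```python
# Hand a painter a blank tile plus thin strips from finished neighbors
# (illustrative reconstruction of the described mechanics).

TILE = 32   # tile side in pixels (assumed)
STRIP = 10  # strip of neighboring pixels shown at the edges

def blank():
    return [[0] * TILE for _ in range(TILE)]

def handout(finished, row, col):
    """Canvas for tile (row, col): blank, with STRIP-wide edges copied
    from any finished top/left neighbors (a real site would cover all
    four sides and blend the corner overlaps)."""
    canvas = blank()
    top = finished.get((row - 1, col))
    if top:   # bottom edge of the tile above becomes our top strip
        for y in range(STRIP):
            canvas[y] = top[TILE - STRIP + y][:]
    left = finished.get((row, col - 1))
    if left:  # right edge of the tile to the left becomes our left strip
        for y in range(TILE):
            for x in range(STRIP):
                canvas[y][x] = left[y][TILE - STRIP + x]
    return canvas

finished = {(0, 0): [[7] * TILE for _ in range(TILE)]}
canvas = handout(finished, 0, 1)  # first STRIP columns carry the 7s
```

Revealing a tile only once all four neighbors exist would then just be a check over `finished`, as the parent describes.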


That sounds very fun, and I'd like to see things going in the direction of creative "social" platforms/communities/games/experiments made for humans to play with: not dumbed down, and yet simple.

This idea could be turned into a phone app that experiments with real-time aspects of drawing concurrently, plus layers, sounds, and video frames. Imagine stitching together a film edited and directed by 50 people in a similar vein to the tiles site. I think we have amazing possibilities, and the technology for real-time collaboration is here already; we just have to have a different vision than what the big companies want from us.


It's an idea for a new open project, something like GitHub but for creative artistic designs, where people could apply a similar version-control ideology to an art project!


Art doesn't work like that; it needs to be a bit more disorganized, spontaneous, fluid, etc. As soon as it is pinned down it loses its vitality and becomes boring fast. Artists spend a lifetime ending up settling into some very personal ways.


drawball.com is still going I think.


But it's not really pretending to be what you're describing. It's just a web toy they made to show off their doodle-to-thing AI with the added fun of dropping the thing onto a cartoon google-map.


"Draw the world together", "Nice! you made a house". Even when Google make a "web toy", they just can't help but ooze infantilization and saying one thing while doing another.

> the added fun of dropping the thing onto a cartoon google-map.

The added fun of it being dropped, or dropping it? I don't know, since for me on Firefox, it hung at the "finding a place in the world" stage.


They can't help it. Their entire employee base is terminally infantilized.


Exactly.

I was hoping for something like /r/place.

It's just a demonstration of the impressive but ultimately rather useless "drawing -> shape classification" machine learning application.

It would be massively better if I could just choose from the shape classes and place them myself. That would allow at least for some amount of creativity. Of course then they don't get to show off their gimmick AI...


But it's someone else's vision, why does it have to conform to yours? Also, the classification is super useful for technical flowcharts and designs that can be doodled easily. That's at least one use case.


> But it's someone else's vision, why does it have to conform to yours?

I didn't say it does. I'm giving an opinion.

> Also, the classification is super useful for technical flowcharts and designs that can be doodled easily. That's at least one use case

I disagree. Classic user interfaces are superior here as well: instead of convincing some shape-recognition system that whatever you just scribbled is some class of basic shape, you could just pick that shape from a shelf of tools.


I'd say the shape recognition model is the classic one: it was used in Sketchpad (1962) and Grail (1968), at the very early stages of computer GUIs.


By the way, here's a video of Alan Kay presenting Grail: https://www.youtube.com/watch?v=QQhVQ1UG6aM


I find choosing shapes from a toolbox to be a pain. I can focus on my train of thought and links between pieces when I am drawing freely, not having to context-switch to moving and selecting with a cursor. I can also use this interface on a tablet. Many people are wired differently from you; it's worth trying new approaches to old tools.


> I can focus on my train of thought and links between pieces when I am drawing freely, not having to context-switch to moving and selecting with a cursor.

Do you really know that for a fact or do you just assume that? What program with a scribbly interface are you actually using on a regular basis?

> I can also use this interface on a tablet.

You can also use tool shelves (and many other interface elements) on a tablet.

> Many people are wired differently to your preferences, it's worth trying new approaches to old tools.

This is not a new approach, this is a very old approach that has been summarily rejected as a good way of going about things.


Is that the same thing you say when a kid gets LEGOs? It's all predefined shapes anyway, no creativity anywhere.

Why on earth did you write your post? What does your negativity bring to the world?


Lego is a lot more fun when you add hacksaws and p38.


That's how you get a lot of dicks drawn unfortunately.


If you can classify houses, trees and boats, you can classify "problematic" things. That would actually be somewhat useful.

In practice though, if somebody wants to deface something, they'll find a way to do it. You can't sacrifice creativity to prevent that.

/r/place has shown that people (or redditors) are cooperative enough to maintain a reasonably "clean" environment, without any form of external moderation.


I just wish they'd focus on fixing Gmail instead.


I'm sure they shut down the entire Gmail organization in order to work on this tool, rather than have a separate engineering team work on it.

This is akin to complaining about bugs in a game when they release new artwork. Artists aren't working the bug queue.


I only know that Gmail has a degradation of quality they apparently are unable to fix. From the outside, it does appear Google has a shortage of talent, their famed interview process notwithstanding.

This demo site, though, is more responsive and slick than their mainline product. Just like Gmail, it encompasses a web UI, data storage, and a machine learning backend. Maybe all the people who can code have moved to the doodle lab.


Jeez, it's just a cool demo. If you don't like it don't use it. No one is forcing anyone to use this.


Can we all just acknowledge that the "AI" aspect of this is gimmicky and - ignoring that part - Scribblenauts did this way, way better back in like '09?


Acknowledged. I would say even Scribblenauts might actually have been a better game without this gimmick. It wouldn't have attracted the same amount of attention, though.


This AI doesn't seem to recognize my drawing...

https://imgur.com/a/fwmUEP6


That was a good one!


thank


Amazing idea! Poor execution. I tried to draw something, and it basically reduced to an AI doing object recognition on what I had drawn, then suggesting the closest matches from its existing database. I clicked on a match, then was able to change TWO parameters describing the object.

It would have been much more interesting if you could draw something, then have an AI try to guess what it would look like in 3D, then add your model to their database. People could then upvote/downvote models to get rid of crappy results, and the AI could use that information to learn what models it messed up generating.


> This website is optimized for certain browsers and devices.

> Please upgrade your browser.

Emphasis mine - I'm on Firefox 63. I can't believe this is still happening in 2018. :(


I'm on Firefox 63.0.3 and it worked fine


Same on 63.0.1.


Especially because the corporate ESR version 60 is relevant for about another year (support finishes when Firefox 69 comes out).


I am getting this with Chrome 70.0.3538.102 on Mac.


Same thing.

“Works best on chrome” I guess.

:(


Neat idea, but seems to be limited in user choice.

Tried to draw a couple of things, it picked the closest match, then got stuck on "Finding a place in the world" for me (why can't I pick?).

Also can't seem to draw new things, just pick from the AI guesses, so far as I can tell.

I look forward to poking at the next version.


No matter what I draw, I always end up stuck on the "finding a place in the world" step.


I guess the world is "full".


I feel like this essentially exists for the sole purpose of generating a large training set for some sort of drawing analysis.

I wonder if Google has some new use case where they want to be able to quickly tag human doodles with their subject matter?


Don't worry, this is simply made for tracking users: first you create a few drawings, then your unique drawing style is assigned to your profile, and voila! Now you can be recognized anywhere and everywhere by simply drawing a cat. (-:


Kinda disappointing that the "AI" bit of it is just image recognition.

I thought it was going to be a cool demo on changing 2D drawings to 3D objects.


There's probably a project to make here. Take those pixel2pixel models that are able to generate images from drawings, generalize them to 3D, say a pixel2vertex. Would be pretty neat.

The key would be to find a way to constrain the output space so that it's not so vast that it won't be possible to converge to something useful.
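One common way to constrain the output space, sketched under heavy assumptions (the parametric "house" generator and its parameter names are invented for illustration): have the network predict a short parameter vector for a procedural generator rather than free-form geometry, so every output is a valid mesh.

```python
# Procedural generator: a 4-number parameter vector expands into the
# 10 vertices of a simple gabled house (8 box corners + 2 ridge points).
# A model predicting these 4 numbers has a tiny, always-valid output
# space compared with predicting 30 free coordinates.

def house_mesh(width, depth, wall_h, roof_h):
    w, d = width / 2, depth / 2
    corners = [(sx * w, sy * d, z)          # the 8 corners of the walls
               for z in (0.0, wall_h)
               for sx in (-1, 1)
               for sy in (-1, 1)]
    ridge = [(0.0, -d, wall_h + roof_h),    # the two roof-ridge endpoints
             (0.0, d, wall_h + roof_h)]
    return corners + ridge

verts = house_mesh(2.0, 3.0, 1.0, 0.5)
print(len(verts))  # 10
```

The converging-to-something-useful problem then becomes regressing a handful of bounded parameters per object class, which is far more tractable than free 3D generation.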


That would be very interesting; it reminds me of a demo a couple years back of sketching buildings and having them immediately translated into 3D using procedural shape grammars.

http://www-sop.inria.fr/reves/Basilic/2016/NGGBB16/


A lot of cynicism in the comments. Seems like an innocuous experiment. Cool graphics. Average experience. But built on the web, and that's important. Let's not let conspiracy theories get the better of us.


It's telling me my browser is unsupported and to use Chrome, but I'm using the latest version on Mac. Anyone else getting this? Works on Safari though.


Seems like a bad version check. I'm running latest Beta (Version 71.0.3578.53 (Official Build) beta (64-bit)) on macOS and seeing the same thing.


Working on Firefox 63.0.3 running on Linux perfectly.


Same for me. Ironically if you click on the "unsupported" link it wants you to install Chrome.


Working in Chrome on macOS for me.


> World Draw determines what someone is sketching and turns it into a 3D model in WebGL.

This appears to be highly misleading at best. It's a fancy classifier for 2D drawings that matches to a database of predesigned, slightly-customizable 3D models. There's no real connection between the drawing and the model.


The graphics are incredibly charming! I would totally play a Sim City / Cities: Skylines type of game with graphics like this, and I think the drawing controls could be a really nice way of navigating blocks and such, too!


The Draw Things section has no way to finalize the drawing. The only buttons are reset and undo.

Also on the world map there's a bunch of bikes that claim to be cars and pickups.

Firefox on Windows.


If the AI recognizes your drawing it will offer a next step.


I tried it on Chrome, and it started recognizing after just one line. On Firefox it never did.

Also, it's sad that your drawing gets discarded as soon as you pick a category.


An AI Experiment to "draw the world closer together"? While I appreciate this cool technology, it looks like the gushing techno-utopianism of, say, 2010 is still alive and well inside Google, despite social unrest across the world. I don't want to explicitly criticize their optimism, but maybe they should have used a more restrained statement to describe their app?


I guess it says something that my drawing is so bad it doesn't recognize it as anything? The tool seems oddly non-functional.


Well apparently they haven't modeled the Death Star yet... Turns out the closest thing to a Death Star is a bush.


I think it's really weird that you can't draw people or animals. I tried to add a t-rex to the world :(


I tried to draw a church but it didn't recognise it.

This is no more advanced, and waaay less useful than this: http://detexify.kirelabs.org/classify.html (which is at least 10 years old at this point).


Not sure why, but this crawls on Safari (2013 MBP, 16GB RAM) and demands I update either Chrome or Firefox before it will run, and I generally avoid running the newest version of anything unless completely necessary.

Safari gives me about 4FPS, with nothing else open.


> I generally avoid running the newest version of anything unless completely necessary.

Including your web browser!?


Especially my web browser, and iOS/MacOS/Windows.

The security issues are not nearly as important to me as the software actually functioning the way I want.

The best security is just not to use Google products to start, I know, but sometimes it's a necessary evil.

I also avoid upgrading Logic Pro / Final Cut Pro like the plague. Apple has a nasty habit of changing minor UI/UX features that really screw with my workflow, even just the way they moved the 'tempo' field in Logic 10.4.


You know those joke UI exercises of "the most terrible way to implement a volume slider" or whatnot? This is basically that.

In order to pick a tree prefab, please draw a tree.




> This website is optimized for certain browsers and devices. Please upgrade your browser.

... which? Getting this with Chrome 70.0.3538.102 on Mac.


Reading through these comments is actually kind of hilarious, HN is definitely embracing The Grinch just in time for the holidays.


Seems like a great way to get training data for image recognition (or whatever the applicable term is). Kudos, Google.


Surely everyone else just sees this as a scaled way to keep training Google's machine learning models, right?


It kind of sucks if you draw something nice, but it can't tell what it is, so that's that.


I thought this had something to do with the world chess championship, though this is cool too.


It's drawing only cactuses?? I didn't know I could draw cactuses so well


It'd be cool if you could create things like in the Scribblenauts game.


okay be honest...how many of you tried drawing a penis?


I don't get it.


Is it just me or is it so slow?


The drawing screen lags a little for me. Kinda hard to draw precisely unless you move very slowly.


I'm guessing the suggested items are the ones that are hard to differentiate. Analytics and explaining patterns to the machine, if you ask me.

They just keep pushing out stuff for the same reason, on and on.



