When I finally got around to drawing something, it didn't work. I drew a relatively simple rectangular building and nothing happened. I realized it must be matching based on things it already knows, so I restarted and drew the simplest three-line car I could. Still nothing. No error, no instructions. Just a trash can and a circular arrow.
This experiment is a failure.
You scribbled something that the ML system classified as a house boat. None of the characteristics in your drawing moved over to the model. The only creative part (your scribble) was destroyed.
You could've just typed "house boat" and picked the model from a list. The result would be the same.
Why even have a GUI? Why not a CLI that parses a config file that says "house boat"?
It's fun for thirty seconds, after which it is just tedious. If you actually want to "get stuff done", like building a toy city, you will want the menu that shows you what's actually available.
> Why even have a GUI? Why not a CLI that parses a config file that says "house boat"?
Because that's obviously a bad interface as well. It's not like scribbly interfaces are the next step in the evolution of human-computer interaction. They're gimmicks. You're not going to go to Amazon.com and start scribbling something you want to buy. You'll type it in.
But to your Amazon example, doodles might be the best new way to find "that thing with the stick out the top and the bell-looking thing at the bottom. I forget what it's called."
I'm glad Google is playing around with this. This one, though, just feels flat compared to the others.
Not to Google.