
Three bad recipes generated by neural network (2017) - DanBC
http://aiweirdness.com/post/159022733587/three-bad-recipes-generated-by-neural-network
======
YeGoblynQueenne
Actually, it's pretty amazing that it got the structure of the recipes right:
title first (with subtitle), ingredients list next, complete with quantities,
then preparation and even serving suggestions at the very end ( _Serve on
ranged removable pieces_ ; lol).

There's no information on the training (the link on the top first asks me to
accept targeted ads so, no) but it looks like an LSTM trained to predict the
next character (hence the nonsensical names of the recipes and some other
words, clearly invented by the network to approximate real words in its
training data). In a way it's really impressive that this sort of setup can
learn such rigid structure.
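
For what it's worth, that character-by-character setup can be sketched without any deep-learning machinery: below, a plain trigram count model stands in for the LSTM, sampling one character at a time from a toy corpus. Everything here (corpus, names) is a made-up illustration, not the post's actual model.

```python
import random
from collections import defaultdict, Counter

def train_char_model(corpus, order=3):
    """Count which character follows each `order`-length context."""
    model = defaultdict(Counter)
    for i in range(len(corpus) - order):
        context = corpus[i:i + order]
        model[context][corpus[i + order]] += 1
    return model

def generate(model, seed, length=60, order=3):
    """Sample one character at a time from the learned counts."""
    out = seed
    rng = random.Random(0)  # fixed seed so the toy is reproducible
    for _ in range(length):
        counts = model.get(out[-order:])
        if not counts:
            break  # unseen context: a real LSTM would still emit something
        chars, weights = zip(*counts.items())
        out += rng.choices(chars, weights)[0]
    return out

corpus = ("Title: Chocolate Chip Cookies\n"
          "1 cup butter\n2 cups flour\n"
          "Preparation: Mix butter and flour. Serve warm.\n") * 5
model = train_char_model(corpus)
print(generate(model, "Tit"))
```

With enough varied training text, a model like this happily invents plausible-looking non-words, which is exactly the failure mode in the recipes.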

On the other hand, not an expert, but I'm pretty sure there are ways to force
the network to learn that the ingredients list and the preparation section must
be somehow related. I think, something something attention something would
make for more coherent results :)

------
jedberg
If you put whole recipes into the neural network, of course it won't work. You
need to separate the parts. One neural network for ingredients, and a separate
one for "instructions".

Then you feed the output of the first network into the second to get a set of
preparations that actually prepare the chosen ingredients.
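
The two-stage idea can be sketched as a toy pipeline: a random ingredient picker stands in for the first network, and a lookup table stands in for the trained instructions network that conditions on its output. All names and data below are invented for illustration.

```python
import random

# Stage-one "network": emit a set of ingredients.
PANTRY = ["chicken", "butter", "flour", "sugar", "garlic"]

# Stage-two "network": in the real proposal this mapping would be
# learned from training recipes; here it is a hand-written stand-in.
ACTIONS = {
    "chicken": "brown the chicken",
    "butter": "melt the butter",
    "flour": "whisk in the flour",
    "sugar": "dissolve the sugar",
    "garlic": "mince the garlic",
}

def ingredients_model(rng, k=3):
    """Stage one: choose k distinct ingredients."""
    return rng.sample(PANTRY, k)

def instructions_model(ingredients):
    """Stage two: produce steps conditioned on stage one's output."""
    return [ACTIONS[i] for i in ingredients] + ["combine and serve"]

rng = random.Random(1)
chosen = ingredients_model(rng)
for n, step in enumerate(instructions_model(chosen), 1):
    print(f"{n}. {step}")
```

The point of the split is visible even in the toy: every step refers to an ingredient that was actually chosen, which a single end-to-end character model doesn't guarantee.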

~~~
crististm
What about your solution makes it right?

~~~
bostik
I would think it wouldn't get things right. But it should get things less
incorrect.

The ingredients are a set of items. Preparation is a sequence of steps with
the ingredients as input. A neural network trained on lots of recipes should be
able to come up with sequences that actually make sense given the ingredients.

~~~
ricardobeat
The choice of ingredients is also a function of the desired preparation /
serving. This will probably generate more realistic recipes but still nothing
usable.

~~~
jedberg
That would be accounted for though. If, say, the standard prep for chicken
involves butter, all the training recipes would have butter with chicken.

------
crististm
What I get from this is that neural networks really have no idea what they are
talking about (which of course is no news). How could they? Is a feature more
important or relevant than another just because of its frequency?

People look at the output of neural networks and when they work they say - of
course they do. And when they don't they rationalize it with "your model was
wrong from the start".

(This is similar to how AlphaZero "discovered" the optimal chess openings. Of
course it did; it was bound to find them by the rules of chess.)

------
pmx
It failed at making recipes but it excels at comedy!

~~~
mr__y
apparently randomness is an important factor in comedy, just like the somewhat
funny memes created by a neural network
[https://news.ycombinator.com/item?id=17302917](https://news.ycombinator.com/item?id=17302917)
maybe joke generation is the actual future of AI?

~~~
drb91
Perhaps randomness is a shortcut to absurdity, but good comedy is rarely
random. (Is it ever? Honest question.)

------
jwilbs
Related: Awesome visual essay about using algorithms to generate (bad) cookie
recipes
[https://pudding.cool/2018/05/cookies/](https://pudding.cool/2018/05/cookies/)

~~~
DanBC
That's an amazing article about cookies!

------
exikyut
__Highly__, highly related:
[https://gist.github.com/nylki/1efbaa36635956d35bcc](https://gist.github.com/nylki/1efbaa36635956d35bcc)

I tried condensing it a couple of years ago
([https://news.ycombinator.com/item?id=14376816](https://news.ycombinator.com/item?id=14376816)
- I was i336_ then), but I think the original is worth slowly reading
through.

There are some definite gems in there that will cheer you up, at the very
least.

------
Keres
Is it just me that wants to see another neural network which can generate
photos of these wondrous creations?

