
Show HN: A beetle generator made by machine-learning zoological illustrations - belforn
https://www.cunicode.com/works/confusing-coleopterists/#StyleGAN
======
mensetmanusman
It occurred to me how useful this technology could be for obfuscating
information. Imagine it going off and publishing an unlimited number of
images with generated descriptions that are indistinguishable from what a
human would produce. There would be no way to verify what the correct
information is if you are someone just casually searching the Internet. (This
could apply to almost any type of information...)

~~~
belforn
-> [https://thisrentaldoesnotexist.com](https://thisrentaldoesnotexist.com)

All of the dynamic content on each listing is generated via a series of
different machine-learned AI models.

~~~
chaosmachine
"The laundry are converted into a mini central heating for every guest."

~~~
mipmap04
I've found that Markov chains feel more human (but maybe a bit overfit) if I
use an n[0]-level-deep look-ahead in my generators.

[0] where n is normally 2 or 3
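
For concreteness, that "n-level-deep look-ahead" corresponds to an order-n Markov chain: each n-gram of words maps to the words seen to follow it. A minimal sketch in Python (the function names and corpus are illustrative, not from the comment):

```python
import random
from collections import defaultdict

def build_chain(words, n=2):
    """Map each n-gram of words to the list of words observed to follow it."""
    chain = defaultdict(list)
    for i in range(len(words) - n):
        chain[tuple(words[i:i + n])].append(words[i + n])
    return chain

def generate(chain, seed, length=20, rng=None):
    """Walk the chain from a seed n-gram, emitting one word at a time."""
    rng = rng or random.Random(0)
    out = list(seed)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(seed):]))
        if not followers:
            break  # dead end: this n-gram only appeared at the corpus tail
        out.append(rng.choice(followers))
    return " ".join(out)
```

Larger n copies longer verbatim runs from the corpus, which is exactly the overfitting the comment mentions.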

------
todd8
What a clever idea!

By the way, there are over 400,000 naturally occurring species of beetles.
Beetles make up 25% of all animal forms according to Wikipedia.

~~~
mc3
Humans: by mass, a very insignificant species.

~~~
benibela
Not among land mammals: [https://xkcd.com/1338/](https://xkcd.com/1338/)

------
bayesian_horse
I wonder if this could be used as an identification tool for beetles like a
phantom sketch. You could move some sliders to get it closer to the bug you
are thinking of or which you have in front of you.

Of course, this method would have to compete with a related model just trying
to classify a photo.
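
The sliders would amount to moving a point through the generator's latent space along feature directions. A toy sketch of that idea (the generator itself is not shown, and where the direction vectors come from — e.g. PCA over latents — is an assumption):

```python
import numpy as np

def move_slider(z, direction, amount):
    """Nudge a latent vector z along a feature direction, the way a
    slider in an identification UI might. The direction is normalized
    so 'amount' is in consistent units."""
    direction = direction / np.linalg.norm(direction)
    return z + amount * direction
```

Feeding the adjusted `z` back through the generator would redraw the beetle a bit closer to (or further from) the target feature.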

~~~
deanclatworthy
There are already similar apps for a range of things. I guess the most
popular ones are for plants [1]

[1] [https://www.picturethisai.com/](https://www.picturethisai.com/)

------
auton1
Reminds me of Dawkins' biomorphs:
[http://www.emergentmind.com/biomorphs](http://www.emergentmind.com/biomorphs)

~~~
stcredzero
For more info on things like this:

[https://en.wikipedia.org/wiki/L-system](https://en.wikipedia.org/wiki/L-system)

------
jonplackett
I thought it was now basically the law that this must be named
thisbeetledoesnotexist.com

~~~
belforn
Someone already set it up:
[http://thisbeetledoesnotexist.com/](http://thisbeetledoesnotexist.com/)

~~~
jonplackett
haha, about time!

------
ArtWomb
This is a beautiful write-up. And I don't wish to detract from the author's
work. Nature's result side-by-side with that of the Machine. Makes one feel as
though we've taken a step backwards from Alan Turing's "The Chemical Basis of
Morphogenesis".

Consider the "morphogenetic puzzle" of a bi-valved seashell that shuts with
perfect water-tight seal. There is a constraint to this design: survival!

[https://twitter.com/AlainGoriely/status/1207210428344029184](https://twitter.com/AlainGoriely/status/1207210428344029184)

~~~
erikpukinskis
Reminds me of video game art: every game has two sets of art, the “real” art
which is part of the mechanics of the game world, and the “pretty bits” which
dangle off the game objects and try to trick you into believing there’s more
to the game world than there is.

A lot of gameplay involves testing for this boundary... Trying to figure out
whether you can actually do things that are implied by the art.

Are there any modern games where 100% of the art exists inside the game world?

~~~
hayksaakian
Minecraft?

~~~
drdeca
There are e.g. particle effects which don't (afaiu) influence anything else in
the game, and are just visual.

Also, the clouds.

------
cortesi
Excellent. I did something similar a few months ago using a dataset of
zoological silhouettes, resulting in a menagerie of mammals, bugs, spiders and
other mutant wonders.

[https://twitter.com/cortesi/status/1153075801891278848](https://twitter.com/cortesi/status/1153075801891278848)

[https://twitter.com/cortesi/status/1153088629972934656](https://twitter.com/cortesi/status/1153088629972934656)

~~~
belforn
nice results! could be interesting to see another network naming the mutant
creatures :)

------
fiter
Was this model tested for overfitting? I have no sense of whether the
beetles I'm seeing match some source pictures exactly.

I noticed that the transformations seem to move quickly through a transition
and then pause. Is this intentional, or does it have something to do with the
model?

~~~
enchiridion
Is there a good way to test a gan for overfitting?
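
One common heuristic (not from this thread) is a nearest-neighbour memorization check: for each generated sample, find the closest training image and look at the distance. Distances near zero suggest the generator is replaying training data rather than generalizing. A sketch over flattened pixel arrays:

```python
import numpy as np

def nearest_training_distance(generated, training):
    """For each generated image, return the L2 distance to its closest
    training image. Near-zero distances hint at memorization."""
    g = generated.reshape(len(generated), -1).astype(float)
    t = training.reshape(len(training), -1).astype(float)
    # pairwise squared distances via ||a-b||^2 = ||a||^2 - 2ab + ||b||^2
    d2 = (g**2).sum(1)[:, None] - 2 * g @ t.T + (t**2).sum(1)[None, :]
    return np.sqrt(np.maximum(d2, 0).min(axis=1))
```

Pixel-space L2 is a blunt instrument; in practice people often compute the same thing over perceptual features instead.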

------
CSactuary
To the creator: since most (all?) of the beetles are symmetrical, couldn’t you
generate left halves and then reflect them to create the right halves? That
could help prevent asymmetric generations.
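
Mechanically the mirroring step is simple; the hard part (as noted elsewhere in the thread) is that real photos have asymmetric lighting. A sketch of the reflection itself, assuming images as numpy arrays:

```python
import numpy as np

def mirror_left_half(img):
    """Build a bilaterally symmetric image from the left half only:
    keep columns 0..ceil(w/2) and reflect them onto the right side.
    When the width is odd, the centre column is not duplicated."""
    h, w = img.shape[:2]
    left = img[:, : (w + 1) // 2]
    right = left[:, : w // 2][:, ::-1]  # reflected, centre column dropped
    return np.concatenate([left, right], axis=1)
```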

~~~
dan-robertson
Most of the images have lighting which isn’t symmetrical and I think that
makes it pretty obvious when an image is made by mirroring half a beetle.

But maybe there’s some way to deal with that.

~~~
erikpukinskis
Naively, you could train another network to correct the lighting on a mirrored
beetle.

Although I’m sure there’s a smarter way.

------
eskimobloood
Would be interesting to see the result trained on images from this book[1]
about mutation on insects in the chernobyl area.

[1][https://www.amazon.com/Heteroptera-Beautiful-Other-Images-
Mu...](https://www.amazon.com/Heteroptera-Beautiful-Other-Images-
Mutating/dp/3908247314) [2][https://atomicphotographers.com/cornelia-hesse-
honegger/](https://atomicphotographers.com/cornelia-hesse-honegger/)

------
nsxwolf
What I never understand about these things is ... what actually does the
drawing? The AI decides what the beetle looks like, at what level of
abstraction? When/how does it go from beetle idea to pixels? Does this network
"know" what the beetle's "leg" is, or does it just "know" "this pixel here
should be this color"?

~~~
_fullpint
Much closer to the latter. I haven’t read this yet, but it sounds like an
encoder model.

~~~
belforn
Correct.

The machine here doesn’t even know that these are beetles (because nobody told
it); it is “just” arranging pixels in a manner similar to the pixels in the
source images. It does understand that each generated image must have “legs”,
“eyes”, “shells”... and other features that it detected as common in the
original images.
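
To make the "no labels, just pixels" point concrete: a generator is a function from a latent vector to a raw pixel grid, and nothing in its output is tagged "leg" or "eye". A toy stand-in (the real model is a deep StyleGAN, not a single linear map):

```python
import numpy as np

def toy_generator(z, weights):
    """A stand-in for a GAN generator: latent vector in, raw pixel grid
    out. Any 'leg'-like structure lives only implicitly in the learned
    weights, never as an explicit label on the output."""
    flat = np.tanh(weights @ z)          # learned mapping, squashed to (-1, 1)
    side = int(np.sqrt(flat.size))
    return flat.reshape(side, side)      # just a grid of pixel values
```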

------
proc0
Interesting to see how it handled the legs. It looks like it had problems with
the fact that insects have three pairs of symmetrical legs.

------
jamesfisher
Is the source/model/network available to run?

~~~
belforn
yes, it is published. Available via @RunwayML:
[https://open-app.runwayml.com/?model=cunicode/confusing_coleopterists](https://open-app.runwayml.com/?model=cunicode/confusing_coleopterists)

------
slynn12
Very clever. Do you know if a machine does the drawing? Wasn't totally clear
on that bit.

