
On generative algorithms - dave9000
https://inconvergent.net/generative/
======
aw3c2
Warning, this series of fantastically written articles on generative art might
make you feel very small and sad compared to those who are capable of not only
creating but communicating beauty like this.

In other words, I highly recommend you try some generative art yourself and
see that at its core, it is not that hard and YOU can do it too!

~~~
bduerst
Any idea on what they used to generate patterns like this?

[https://img.inconvergent.net/img/plot/43adc57.jpg](https://img.inconvergent.net/img/plot/43adc57.jpg)

~~~
gmiller123456
There seem to be two pieces to generating that particular image. The first
appears to be just a sphere with short lines drawn at random points on its
surface. The second is the "motion" lines that tend to point away from the
sphere's direction of motion.

The first piece seems simple enough: pick a random point on the sphere, and
draw a few lines at random orientations a very small distance from that spot,
with no correction for perspective.

The second piece, the motion lines, seems a bit more complex. They appear to
be done by choosing a random spot on a circle's circumference, then choosing
an angle at random that is heavily biased towards the tangent line at that
point, choosing a magnitude (length) for the motion line, and drawing some
dashed lines along that path. The process then repeats with new lines along
the same path, shifted a random amount perpendicular to the motion line.

[I doubt I explained that second part very well, so don't try to re-read it if
you didn't get it.]
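A rough sketch of that recipe in Python (just my reading of the image; every parameter and distribution here is made up), producing line segments a plotter or canvas could draw:

```python
import math
import random

def sphere_strokes(n=2000, r=1.0):
    """Short strokes at random points on a sphere, projected
    orthographically, i.e. with no perspective correction."""
    segs = []
    for _ in range(n):
        # uniform random point on the sphere
        z = random.uniform(-1, 1)
        t = random.uniform(0, 2 * math.pi)
        s = math.sqrt(1 - z * z)
        x, y = r * s * math.cos(t), r * s * math.sin(t)
        # a tiny line at a random orientation near that spot
        a = random.uniform(0, 2 * math.pi)
        d = random.uniform(0.005, 0.02)
        segs.append(((x, y), (x + d * math.cos(a), y + d * math.sin(a))))
    return segs

def motion_lines(n=60, r=1.0, dash=0.02, gap=0.015):
    """Dashed lines leaving the circle's rim, heavily biased
    towards the tangent at their starting point."""
    segs = []
    for _ in range(n):
        t = random.uniform(0, 2 * math.pi)           # spot on the rim
        px, py = r * math.cos(t), r * math.sin(t)
        a = t + math.pi / 2 + random.gauss(0, 0.25)  # near-tangent angle
        length = random.uniform(0.3, 1.2)            # magnitude of the line
        # repeat the dashed path, shifted perpendicular to it
        for rep in range(random.randint(1, 3)):
            off = rep * random.uniform(-0.03, 0.03)
            ox = px + off * math.cos(a + math.pi / 2)
            oy = py + off * math.sin(a + math.pi / 2)
            pos = 0.0
            while pos < length:
                end = min(pos + dash, length)
                segs.append(((ox + pos * math.cos(a), oy + pos * math.sin(a)),
                             (ox + end * math.cos(a), oy + end * math.sin(a))))
                pos = end + gap
    return segs

segments = sphere_strokes() + motion_lines()
```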

------
akavel
When I was a kid (in the 90s), one of the computer periodicals in Poland at
the time had a column titled "Mikroprofesor", which ran a lot of simple and
beautiful algorithms for generative art like these. I loved the images and
even tried coding some of them (I think they always came with some
pseudocode). I wish there were an online archive of the articles, but
unfortunately I couldn't find one (the magazine was named "Enter"; I can't
recall the name of the column's author, however).

~~~
emptybits
Sounds interesting. FWIW, a search for "Magazyn Enter" (the publisher is
Lupus?) at archive.org turns up a handful of downloadable cover CDs from 1999.
I didn't see any references to Mikroprofesor in the one ISO I checked. Just
FYI, if it helps anyone sleuth.

------
thx4allthestuff
@Anders Hoff, thank you for making your site work just fine with javascript
disabled. I really appreciate that.

------
wiz21c
The father of all of this is William Latham.

[https://www.doc.gold.ac.uk/~mas01whl/](https://www.doc.gold.ac.uk/~mas01whl/)

(or John Conway, but that's another league)

~~~
sampo
Lindenmayer (born 1925) and Prusinkiewicz (born 1952) are older.

[https://en.wikipedia.org/wiki/Aristid_Lindenmayer](https://en.wikipedia.org/wiki/Aristid_Lindenmayer)

[https://en.wikipedia.org/wiki/Przemys%C5%82aw_Prusinkiewicz](https://en.wikipedia.org/wiki/Przemys%C5%82aw_Prusinkiewicz)

[https://en.wikipedia.org/wiki/The_Algorithmic_Beauty_of_Plan...](https://en.wikipedia.org/wiki/The_Algorithmic_Beauty_of_Plants)

------
sarbaz
The Hyphae image (and the trees) have a result that's very similar to
Diffusion Limited Aggregation, but the method of generation is very different.

IMO the Hyphae way is actually less cool than plain DLA. The algorithm has a
lot of degrees of freedom that were selected randomly, while DLA has very few
degrees of freedom and gives a cooler shape.

I guess the point is that sometimes a simpler algorithm can give more
interesting results for this kind of thing.
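A minimal grid-based DLA sketch in Python (a generic version, not Hoff's code; the launch and relaunch radii are arbitrary choices): each walker does a random walk until it lands next to the cluster, then sticks.

```python
import math
import random

def dla(n_particles=200, seed=0):
    """Grid DLA: random walkers stick when adjacent to the cluster."""
    rng = random.Random(seed)
    cluster = {(0, 0)}            # seed particle at the origin
    r_max = 1                     # current cluster radius
    steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
    for _ in range(n_particles):
        # launch each walker on a circle just outside the cluster
        a = rng.uniform(0, 2 * math.pi)
        x = round((r_max + 2) * math.cos(a))
        y = round((r_max + 2) * math.sin(a))
        while True:
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
            r = math.hypot(x, y)
            if r > r_max + 10:    # wandered too far: relaunch
                a = rng.uniform(0, 2 * math.pi)
                x = round((r_max + 2) * math.cos(a))
                y = round((r_max + 2) * math.sin(a))
                continue
            # stick as soon as a cluster cell is a neighbour
            if any((x + ex, y + ey) in cluster for ex, ey in steps):
                cluster.add((x, y))
                r_max = max(r_max, r)
                break
    return cluster
```

Most of the cost is in those random walks, which is exactly why it was "a lot of computation" by 1990 standards.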

~~~
kragen
DLA is super awesome, but if you're starting from a point, it takes a _lot_ of
computation. Or, rather, what we thought of as a lot of computation around
1990. Hoff's version of "Hyphae" is orders of magnitude less computationally
demanding.

In a lot of cases, if you're willing to throw a massive amount of computation
at a morphogenetic problem, you can get a higher coolness-to-algorithmic-
complexity ratio.

------
pareshverma91
About half a decade ago I added random blot generation to my now-unmaintained
site
[https://web.iiit.ac.in/~paresh.verma/](https://web.iiit.ac.in/~paresh.verma/).
I have spent a few hours generating random blots and trying to interpret
them. Adding more structure to it with an algorithm should make it more
interesting and entertaining.

------
khitchdee
There's but a fine line between generative algorithms used for art and
entertainment and those used for programming itself.

Formerly referred to as meta-programming, generative programming is the art
of writing programs that help generate programs. It's a way of extending the
idea of the programming tool beyond its current rather static avatar, which
consists merely of a rather lifeless IDE.

~~~
a1369209993
Honestly, given the unreasonable effectiveness of (de-)compression algorithms,
and the plethora of "I built a CPU in [insert PCG game here]" articles, I
think it's a line of negative width.

~~~
khitchdee
strange

------
olooney
These are very beautiful. Very organic, yet strictly mathematical.

Many of these techniques have practical applications in computer graphics; see
for example
[https://www.youtube.com/user/keeroyz/videos](https://www.youtube.com/user/keeroyz/videos)

------
speps
I was fascinated by Jared Tarbell's work and his website some time ago:
[http://www.complexification.net/gallery/](http://www.complexification.net/gallery/)

------
akshayB
One of the reasons I like to follow generative algorithms is that the field
is extremely unique and wildly diverse at the same time. It is a combination
of geometry, design, art & programming that tries to mimic nature.

------
jimijazz
I wonder if these algorithms have been applied to sound generation...

------
joshu
He does a great job of explaining stuff that seems very hard in a way that is
intuitively very implementable.

I've been a fan of his for years and have ported some of his stuff to my own
robots...

------
jimijazz
Awesome work! Does anyone know why the author closed the shop?

------
shawn
Da Vinci pointed out that a tree's total thickness is constant if you slice
it along a semicircle.
[https://fractalfoundation.org/OFC/leonardotree1.jpg](https://fractalfoundation.org/OFC/leonardotree1.jpg)

That is, if you face a tree and sweep a circle with your arm, and add up all
the branch thicknesses your fingers intersect, the total will always be the
same.

[https://www.insidescience.org/news/uncovering-da-vincis-rule...](https://www.insidescience.org/news/uncovering-da-vincis-rule-trees)
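Da Vinci's rule is usually stated in terms of cross-sectional areas: the children's areas sum to the parent's. A toy check of that invariant in Python (arbitrary split ratio; exponent 2 assumed, which the modern literature says holds only approximately):

```python
import math

def split(diameter, ratio=0.6):
    """Split a branch into two children whose cross-sectional
    areas (proportional to diameter squared) sum to the parent's."""
    return diameter * math.sqrt(ratio), diameter * math.sqrt(1 - ratio)

level = [10.0]                  # trunk diameter
for _ in range(5):              # five rounds of branching
    level = [d for parent in level for d in split(parent)]

# area is proportional to d*d (the pi/4 factor cancels out),
# so the total stays at the trunk's 10.0**2 at every level
total_area = sum(d * d for d in level)
```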

~~~
sampo
In forestry, they teach that as the "pipe model". I am unable to track down
how long the observation has been known in forestry, but I found this 1964
paper that builds on it (already known in 1964) to construct a theory.

[https://www.ncbi.nlm.nih.gov/pubmed/29370362](https://www.ncbi.nlm.nih.gov/pubmed/29370362)

The modern understanding is that the pipe model holds only approximately.

~~~
sampo
It's kinda nice, how you can take a bunch of metal wires to make a tree trunk,
and start separating them and folding them into branches, and separating and
folding into smaller branches, and this can give pretty realistic looking
trees.

[https://diyjoy.com/wp-content/uploads/2016/06/WireTree.jpg](https://diyjoy.com/wp-content/uploads/2016/06/WireTree.jpg)

[https://www.thisiscolossal.com/2014/10/dense-wire-tree-sculp...](https://www.thisiscolossal.com/2014/10/dense-wire-tree-sculptures-by-clive-maddison/)

[https://www.youtube.com/watch?v=g0tj_ZLpv80](https://www.youtube.com/watch?v=g0tj_ZLpv80)

------
make3
Everyone who liked this might also like to read about Generative Adversarial
(Neural) Networks: generative models in which a pair of neural networks is
trained one against the other to learn to generate new images. They have been
put to many great purposes, including artistic ones:
[https://towardsdatascience.com/gangogh-creating-art-with-gan...](https://towardsdatascience.com/gangogh-creating-art-with-gans-8d087d8f74a1)

~~~
weego
No, it's a completely different interest and skill set. Not everything can be
boiled down to "data science can fix it!", especially when all those NNs will
just generate art that's a mashup of the training data and won't have any
real inspiration. Unexpected results are not inspiration.

~~~
yorwba
Using GANs for art is not necessarily limited to training on a collection of
famous artists to replicate their style.

Some artists use them as another tool to process and transform their artwork:
[https://thegradient.pub/playing-a-game-of-ganstruction/](https://thegradient.pub/playing-a-game-of-ganstruction/)

I'm pretty sure that the invention of photography was also greeted with
complaints that it lacked "real inspiration" because a photographer "just" had
to choose what to take a picture of. But the real inspiration is always in
what you choose your art to be, and not whether you create it by applying
paint to a canvas.

