

Enhancing Molecules Using OpenGL ES 2.0 - wallflower
http://www.sunsetlakesoftware.com/2011/05/08/enhancing-molecules-using-opengl-es-20

======
lloeki
I like the writing; it's clear and concise, and goes a long way toward explaining
how shader programming works.

Also, there's one small, easy-to-overlook paragraph that nonetheless caught my
eye, about how GCD was used to improve performance massively and in an
automatically scalable manner.

------
nkassis
Cool article. I'm wondering: instead of using 4 vertices to represent the
sphere, could points have been used to do the same thing? Would it be
simpler?

~~~
kmm
I don't see how that would work. With the 4 vertices, he defines a square quad
that gets "textured" in the fragment shader. If one were to use a point, only
one pixel would be sent to the fragment shader.
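To make the quad approach concrete, here's a rough sketch of the kind of impostor fragment shader the article describes (this is not the article's exact code; `impostorSpaceCoordinate`, `sphereColor`, and `lightDirection` are assumed names):

```glsl
// GLSL ES 2.0 fragment shader sketch: the vertex shader positions the
// four corners of a screen-facing quad, and this shader treats each
// fragment as a point on a virtual sphere, discarding fragments that
// fall outside the circular silhouette.
precision mediump float;

varying vec2 impostorSpaceCoordinate; // -1.0 to 1.0 across the quad
uniform vec3 sphereColor;
uniform vec3 lightDirection;

void main()
{
    float distanceSquared = dot(impostorSpaceCoordinate, impostorSpaceCoordinate);
    if (distanceSquared > 1.0)
    {
        discard; // outside the sphere's silhouette
    }

    // Reconstruct the virtual sphere's surface normal at this fragment
    vec3 normal = vec3(impostorSpaceCoordinate, sqrt(1.0 - distanceSquared));
    float lighting = max(0.0, dot(normal, normalize(lightDirection)));

    gl_FragColor = vec4(sphereColor * lighting, 1.0);
}
```

The key point is that every fragment of the quad runs this shader, so the whole sphere surface gets shaded per pixel, which a single rasterized point wouldn't give you without point-sprite support.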

~~~
palish
In OpenGL 3+ you can use something called a 'geometry shader'.

Execution flow looks like:

1. application renders a list of points

2. those points are transformed by the Vertex Shader

3. the output of [2] is fed into the Geometry Shader.

(and the output of [3] is additional geometry, created "on the fly", which is
then rasterized.)

As a simple example, a geometry shader could turn each point into a square.
For each input point, it outputs the four corners of the square:
[ptX-1, ptY-1], [ptX+1, ptY-1], [ptX+1, ptY+1], [ptX-1, ptY+1].
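A minimal sketch of such a shader in desktop GLSL (geometry shaders entered core in OpenGL 3.2 / GLSL 1.50; `halfSize` and `quadCoord` are made-up names for illustration):

```glsl
#version 150

// Geometry shader sketch: expand each input point into a screen-aligned
// square, emitted as a two-triangle strip.
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform float halfSize; // half the side length of the generated square

out vec2 quadCoord; // per-corner coordinate for the fragment shader

void main()
{
    vec4 center = gl_in[0].gl_Position;

    // Note: a triangle strip wants the corners in zigzag order,
    // not the fan order of the four-corner list above.
    gl_Position = center + vec4(-halfSize, -halfSize, 0.0, 0.0);
    quadCoord = vec2(-1.0, -1.0);
    EmitVertex();

    gl_Position = center + vec4( halfSize, -halfSize, 0.0, 0.0);
    quadCoord = vec2( 1.0, -1.0);
    EmitVertex();

    gl_Position = center + vec4(-halfSize,  halfSize, 0.0, 0.0);
    quadCoord = vec2(-1.0,  1.0);
    EmitVertex();

    gl_Position = center + vec4( halfSize,  halfSize, 0.0, 0.0);
    quadCoord = vec2( 1.0,  1.0);
    EmitVertex();

    EndPrimitive();
}
```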

But yeah that's not coming to GLES anytime soon.

