It depends strongly on the specific application. The actual GL calls themselves, once the data is on the GPU, execute at the same speed as in native applications, because it's the same hardware/drivers/API. But the roundtrips between JS code (and JS datatypes) and the GPU can kill performance even more than CPU<->GPU roundtrips normally do, so it depends on how good the application is at avoiding them.
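To make that concrete, here's a minimal sketch (the object layout and function name are hypothetical) of the usual way to avoid per-object roundtrips: pack everything into one typed array on the JS side, then hand it to WebGL in a single `gl.bufferData` upload instead of making one small call per object.

```javascript
// Hypothetical sketch: pack per-object positions into one Float32Array
// so the GPU upload happens once per frame, not once per object.
function packPositions(objects) {
  // 3 floats (x, y, z) per object in this toy layout
  const out = new Float32Array(objects.length * 3);
  objects.forEach((o, i) => out.set([o.x, o.y, o.z], i * 3));
  return out;
}

const packed = packPositions([
  { x: 1, y: 2, z: 3 },
  { x: 4, y: 5, z: 6 },
]);

// In a real WebGL app you would then do a single upload and draw:
//   gl.bufferData(gl.ARRAY_BUFFER, packed, gl.DYNAMIC_DRAW);
//   gl.drawArrays(gl.POINTS, 0, packed.length / 3);
```

The point isn't this particular layout; it's that the expensive boundary crossings (JS→typed array→driver) are amortized over the whole batch.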
70% faster doing what? On what hardware? The images at the link you posted suggest it's doing a few 2D blits. So if you have any half-decent graphics hardware, your GPU will be sitting idle and the CPU will be waiting on vsync.
So while you may get better perf in a micro-benchmark like this, with decent graphics hardware you should be able to add 2000x the content (vertices, fragments, textures, etc.) to the WebGL app with only a small increase in GPU (and CPU) load.
Comparing anything to WebGL is silly, as WebGL may run on wildly different hardware and software, ranging from a mid-spec smartphone to a hot gaming PC with liquid cooling and its own nuclear power plant for a power supply.