For example: https://en.wikipedia.org/wiki/Monte_Carlo_N-Particle_Transpo...
So much of 3D graphics is "hmm, this random thing I just made slightly resembles an $X". "OK, so let's say I just created a method for modeling $X." :-) Really fun stuff.
Despite the apparent variety, most 4k intros rely on a single effect, maybe two. There isn't enough space for more. The only things that change are the parameters.
Interestingly, the technique used here is signed distance field raymarching, a technique used by maybe 90% of all 4k intros today.
So basically: write the code in GLSL instead of C, add music, play a bit with the parameters, use a good exe packer like Crinkler, and you have a nice 4k intro.
Besides artistic direction, the effects make the difference: materials, blur (bokeh), and the SDF geometries (often just balls and cubes, but perhaps a twister, a nice water surface, or a fractal), just for starters. That's just my outside perspective.
Off-topic: May I suggest that you switch to using imgur instead of the image host you chose? The ads on imgur are much less intrusive.
Edit: Found the image link on my own site:
The image subdomain servers of Imgur, i.imgur.com, look at the referer of your request and will conditionally redirect you to the web app.
Basically, direct links to images will only give you the image directly if the page you came from is on their whitelist.
Reddit is on said whitelist but I don’t think HN is.
To further complicate matters, keep in mind that browser caching might make it look to you as though a direct link you posted somewhere is really direct, but anyone who hasn't already visited the image will be redirected the first time they follow that link.
For example, here is a direct link to an image hosted on Imgur: https://i.imgur.com/6O265V5.jpg
For me it will look like this direct link serves the image only, even when I click my own link in this comment after I've posted it and am visiting it with HN as the referer.
But if HN is not on their whitelist then you and everyone else clicking the link in my comment will be redirected to their web app, provided you didn’t happen to have the image in your cache already.
Edit: Yup, visited the link in this comment from another computer and am indeed redirected as I expected.
Edit 2: Am also redirected even on the device I posted from when following the link in my comment. So even browser caching didn’t stop that in this case.
Fortunately, if you send no referer it seems you get the image directly. This is easy to do with a browser setting.
On your desktop browser yes. On mobile devices not as easy.
And even on desktop you might not want to globally disable sending referer because other sites might break.
And even if you use a browser add-on to only block referers for i.imgur.com, most other people haven’t so everyone else is still being redirected.
What's even worse, on mobile, Imgur heavily downscales and compresses images. That's fine for kitten photos, but it completely destroys their utility for screenshots: you can't read normal-sized text on the phone.
But running an image hosting service that anyone may post to is a lot of work.
Firstly, you have your run-of-the-mill DMCA takedown notices, both the legitimate and the bogus ones, so you need to deal with those. And if you are unfortunate in your choice of hosting provider or registrar, they might not forward DMCA takedown notices to you as they should, but instead just terminate your service.
And DMCA takedown notices aren’t even the worst part. Sooner or later someone might post illegal photos to your server depicting sexual abuse and other atrocities, and you absolutely need to figure out the proper procedures for dealing with that.
On top of that you have trolls abusing whatever form of report functionality you create.
This leads to a kind of Catch-22. You need to act on reports in order to provide service to the honest part of your user base, but at the same time you neither can nor want to look at the worst kind of images that someone could post.
So you need systems that can automatically identify those sorts of images without human interaction. And it will certainly be very difficult to get such a system right so that it has no false negatives and only a limited number of false positives (the latter meaning the system removes images that shouldn't be removed), because once again you neither can nor want to look at the images it should be able to identify and remove.
Obviously, it’s not impossible — otherwise there wouldn’t be any image hosting sites in existence — but like I said, it’s a lot of work.
Additionally, even if you do get all of that right, the utility of an HN-specific image hosting site is very limited. For example, if the HN community were to adopt it, then suddenly the referer check that was there to make the service feasible in terms of bandwidth cost and server load would result in seemingly broken links for anyone on HN who wants to share any of the images outside of HN.
I wonder if they have some kind of "hey, non-members get a limited number of directs" policy.
Start here: https://github.com/ssloy/tinyrenderer/wiki/Lesson-1:-Bresenh...
A little off-topic, but that's one of the reasons I like to use Magit in Emacs: you can see the diff of each commit interactively and intuitively.
Anyhow, here's the same with plain git.

Diff of the most recent commit:

  git show HEAD

Diffs of earlier commits:

  git show HEAD^   # parent of HEAD
  git show HEAD^^  # grandparent of HEAD
  git show HEAD~3  # third commit before HEAD
  git show af4c    # any commit, by (abbreviated) hash

Diff of the currently staged changes:

  git diff --cached

Convenient aliases:

  alias di="git diff --cached"
  alias dp="git diff"
I didn't realize OpenMP was so easy to use. It isn't realtime, but you could bake up some cool effects with this.
AMD FX8320 3.5GHz
$ time ./tinykaboom
This technique is used a lot in demoscene demos, which certainly do run in realtime.
$ time ./tinykaboom
./tinykaboom 78.08s user 0.02s system 369% cpu 21.159 total
$ go build ./tinykaboom.go
$ time ./tinykaboom
./tinykaboom 34.32s user 0.03s system 368% cpu 9.315 total
Here's the corresponding perf report for the Go version:
Samples: 103K of event 'cycles:pp', Event count (approx.): 37252033995665
Overhead Command Shared Object Symbol
32.17% tinykaboom tinykaboom [.] math.sin
28.80% tinykaboom tinykaboom [.] main.hash
11.81% tinykaboom tinykaboom [.] main.rotate
7.76% tinykaboom tinykaboom [.] math.Min
5.18% tinykaboom tinykaboom [.] main.lerpFloat64
4.25% tinykaboom tinykaboom [.] main.noise
2.59% tinykaboom tinykaboom [.] runtime.mallocgc
2.59% tinykaboom tinykaboom [.] main.fractal_brownian_motion
2.58% tinykaboom tinykaboom [.] main.signed_distance
And here's the perf report for the C++ version:

Samples: 234K of event 'cycles:pp', Event count (approx.): 86721459552303
Overhead Command Shared Object Symbol
67.93% tinykaboom libm-2.23.so [.] __sin_avx
30.80% tinykaboom tinykaboom [.] _Z5noiseRK3vecILm3EfE
1.27% tinykaboom libm-2.23.so [.] __floorf_sse41
0.00% tinykaboom tinykaboom [.] _Z23fractal_brownian_motionRK3vecILm3EfE
0.00% tinykaboom tinykaboom [.] floorf@plt
I ran the comparison again on another machine that I have and this time their performances are about the same:
$ time ./tinykaboom
./tinykaboom 46.72s user 0.01s system 364% cpu 12.804 total
$ time ./tinykaboom
./tinykaboom 42.50s user 0.07s system 350% cpu 12.161 total
2x e5-2667 v2:
Seems like it's pretty inefficient with the dual CPU setup.
I wonder how many shaders this would keep busy. There is probably a class of GPUs and above that this could work on rather well alongside an already large workload.
He also has videos on YouTube, some of which show the creation of a scene starting from the basics.
I wanted to say his tutorials are an excellent introduction to graphics. As a sysadmin, I had never drawn more than simple lines, and the whole thing seemed daunting. Now I see how fun it can be, and I'm waiting for next weekend to pick up some more!
I was ready to criticize the line count if it had a bunch of dependencies, but 180 lines seems accurate, and I'll give a pass to the obligatory core C++ include statements.
Way to go!
Still, it's hard for me to not want to create a series of blog posts under the title "All those cool graphics tutorials in <language I like better>".
Especially for graphics-related stuff.
Yet the HN community seems to ignore GPUs.
When I recently published my hobby project that renders much more advanced procedurally-generated stuff, I only got a single upvote: https://news.ycombinator.com/item?id=18921046
Indeed, I don't see why people like this. In the modern world, doing graphics on the CPU is very inefficient.
Have you read the linked article? It says "I want to have a simple stuff applicable to video games."
> without being distracted by getting it running on specific hardware
Just target Windows and use Direct3D; 99% of PC game developers do just that. The last GPU that didn't support D3D feature level 11.0 was Intel Sandy Bridge from 2011. Everything newer than that supports 11.0, and unlike OpenGL with its extensions, the majority of features are mandatory. I've very rarely seen compatibility issues across GPUs in recent years, and when I did, it was a driver bug.
> I do not pursue speed/optimization at all, my goal is to show the underlying principles.
> Just target Windows and use Direct3D, 99% of PC game developers do just that.
Misses the point of the series. From the introduction (linked at the top of the page):
> I do not want to show how to write applications for OpenGL. I want to show how OpenGL works. I am deeply convinced that it is impossible to write efficient applications using 3D libraries without understanding this.
The exact same could be said for Direct3D. He gives his students a class to read/write TGA images and set pixels for that article, which should make it exceedingly clear that the point is to keep the focus on the algorithm, teaching the principles without people getting sidetracked by libraries, platform differences, and the like.
And in any case, I explained to you what the appeal of this to people here is. That you think it could be done differently does not change that the appeal to people is exactly that there are no dependencies like Direct3D or Windows or anything else (I don't have Windows anywhere, so for me that would have made it relatively uninteresting; as it would for a lot of other people here). I don't care about the performance; I care about the concepts.
What he explains is almost irrelevant for efficiency. Other things are relevant: early Z, tiled rendering, and other aspects of GPU architecture, such as resource types, cache hierarchy, fixed-function pipeline steps, warps, and many others.
> I care about the concepts.
I've been programming C++ for a living since 2000, about half of that time on something relevant to 3D graphics, both games and CAD. You no longer need a deep understanding of the rasterizer. A vague understanding of what the hardware does, and of how to control it, is already enough. Only people working at companies like nVidia or Chaos Group need that info, IMO.
Your project seems to be interesting in its own way, but I don't see why you'd juxtapose it with this tutorial, other than that they both involve pixels.
Try searching "GPU" on this site. 100% of the first page of results are about using GPGPU in the cloud. Do you think that matches what people buy GPUs for, or the amount of code developers write for them?
Exactly. People here mostly do general-purpose computations on them, even though the “G” stands for graphics.
Search for “Graphics”, and the majority of top results are for CPU-based stuff: pixel graphics in the terminal, Python, Skia. There are some results for GPU-based graphics, like the graphics studies of MGS and GTA, but they're a minority.
I think graphics is what the majority of users use their hardware for, but it's under-represented here.