I just had it convert Swift code to Kotlin and was surprised at how the comment was translated.
It "knew" the author of the paper and what it was doing!? That is wild.
Swift:
//
// Double Reflection Algorithm from Table I (page 7)
// in Section 4 of https://tinyurl.com/yft2674p
//
for i in 1 ..< N {
let X1 = spine[i]
...
Kotlin:
// Use the Double Reflection Algorithm (from Wang et al.) to compute subsequent frames.
for (i in 1 until N) {
val X1 = Vector3f(spine[i])
...
Wild that it can do that, but also clearly a worse output. The original has a URL, section, page, and table listed. The AI version instead cites the author. Needing to notice and fix unhelpful tweaks like this is one of the burdens of working with LLMs.
Well, of course it knew the author. I'm sure you can ask just about any LLM who the author of the DRA is and it will answer Wang et al. without even having to google or follow the tinyurl link. And certainly it would also know that the algorithm is supposed to compute rotation minimizing frames.
Not sarcastic at all. It just doesn't seem like a big deal if you have played with LLMs and realize just how much they know. The double reflection paper is not particularly obscure. (Incidentally, I just asked Claude a couple of weeks ago about implementing rotation-minimizing frames!)
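For anyone curious, the algorithm itself is only a few lines. Here is a rough Python sketch of the double reflection step from Table I of the Wang et al. paper (my own transcription, not checked against the paper's reference implementation; `points` and `tangents` are assumed to be N x 3 samples along the curve, and degenerate cases like duplicate points are not handled):

```python
import numpy as np

def double_reflection_rmf(points, tangents, r0):
    """Rotation-minimizing frames via the double reflection algorithm
    (Wang et al., "Computation of Rotation Minimizing Frames").
    points, tangents: (N, 3) arrays; r0: initial reference vector
    perpendicular to tangents[0]. Returns (N, 3) reference vectors."""
    frames = [np.asarray(r0, dtype=float)]
    for i in range(len(points) - 1):
        ri, ti = frames[-1], tangents[i]
        # First reflection: through the plane bisecting x_i and x_{i+1}.
        v1 = points[i + 1] - points[i]
        c1 = v1 @ v1  # assumes consecutive points are distinct (c1 > 0)
        rL = ri - (2.0 / c1) * (v1 @ ri) * v1
        tL = ti - (2.0 / c1) * (v1 @ ti) * v1
        # Second reflection: maps the reflected tangent onto t_{i+1}.
        v2 = tangents[i + 1] - tL
        c2 = v2 @ v2
        frames.append(rL - (2.0 / c2) * (v2 @ rL) * v2)
    return np.array(frames)
```

For a planar curve with the reference vector starting out normal to the plane, the frames should stay put, which makes a handy sanity check.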
Someone else has written this exact code on the internet, OpenAI stole it, and now chatgpt is regurgitating it. Just like it can regurgitate whole articles.
You need to stop being wowed by human intelligence masquerading as AI!
I wonder what percentage of 3D meshes used in practice (say in video games, visual simulations, etc..) are topologically equivalent to a sphere. No coffee cups allowed.
A trick used to achieve the morphing is that all of the creatures are topologically spheres ;)
The main trick is that their triangle topologies are all identical. We projected a common mesh onto each hand-modeled mesh so there would be a direct correspondence between each vertex of all of the projected meshes. The common mesh was topologically a sphere, but was pre-shaped to be more like a balloon animal with 6 legs and a tail (the union of all models’ appendages).
With the 1:1 vertex relationship set up, we animated 1 vertex from each of two meshes at a time separately in the vertex shader. Then the shader did a simple linear interpolation of the results. Similarly, we had a fragment shader for each pair of materials that just evaluated both and blended the results.
From there, a bit of special handling of fur and other details and we could morph between creatures with apparently different shapes, skeletal structures, materials and rendering features in real time.
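The per-vertex blend described above is just a lerp. A minimal CPU-side Python sketch (the real thing ran in a vertex shader; I'm assuming both meshes already share vertex count and order, as described):

```python
import numpy as np

def morph_vertices(verts_a, verts_b, t):
    """Linearly interpolate between two vertex arrays with identical
    topology (same vertex count and order), as a vertex shader would
    do per-vertex. t=0 gives mesh A, t=1 gives mesh B."""
    verts_a = np.asarray(verts_a, dtype=float)
    verts_b = np.asarray(verts_b, dtype=float)
    return (1.0 - t) * verts_a + t * verts_b
```

The fragment-side material blend works the same way: evaluate both materials, then lerp the results with the same t.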
There are probably lots of weird shapes used in practice, but the nice thing about game design is that you get to build the mesh directly instead of making your best guess at what kind of surface unconnected points ought to represent. Mapping to a sphere is definitely pretty restrictive.
I'd imagine most are either planar or complex/undefined. Most meshes in gamedev probably aren't even closed with a unique inside/outside; just "pile of triangles".
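For meshes that *are* closed, connected 2-manifolds, "topologically a sphere" is at least cheap to test via the Euler characteristic (V - E + F = 2 exactly when genus is 0). A hypothetical Python sketch:

```python
def is_topological_sphere(triangles):
    """Check V - E + F == 2 for a triangle mesh given as vertex-index
    triples. Assumes the mesh is a connected, closed 2-manifold; for
    such meshes chi = 2 - 2*genus, so chi == 2 means genus 0 (sphere)."""
    vertices = set()
    edges = set()
    for a, b, c in triangles:
        vertices.update((a, b, c))
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))  # undirected edge
    chi = len(vertices) - len(edges) + len(triangles)
    return chi == 2
```

A tetrahedron (V=4, E=6, F=4) passes; an open "pile of triangles" generally won't, which is consistent with the point above.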
If this aids your argument: nearly all video game models that I tried to convert to 3D-printing STL files were non-manifold.
I think this logically follows from your statement, and it made fixing the STL files necessary. Otherwise the printer would switch to imaginary coordinates halfway through the print, and since the printer is mechanically restricted, the print does a good impression of an Escher painting.
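The usual slicer-side check is that every edge is shared by exactly two triangles; anything else means there's no well-defined inside/outside. A small Python sketch of that test (a hypothetical helper, not any particular repair tool's API):

```python
from collections import Counter

def nonmanifold_edges(triangles):
    """Return edges not shared by exactly two triangles. In a closed,
    edge-manifold mesh (what a slicer wants for STL printing) this list
    is empty; boundary or over-shared edges break the inside/outside test."""
    counts = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            counts[(min(u, v), max(u, v))] += 1
    return [edge for edge, n in counts.items() if n != 2]
```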
Great article. That half-pixel offset is crucial for understanding proper rasterization of detailed 3D meshes. No seams, no holes, no overlaps on shared triangle edges all rely on exactly where the pixel centers are. This carries over to image processing and sampling. There are a few of us who lose sleep over these tiny details -- I am happy when I see kindred spirits with the same attention to detail.
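For the "where are the pixel centers" convention (the common one, as in D3D10+ and OpenGL texel addressing, puts pixel i's center at i + 0.5), a tiny Python illustration:

```python
def pixel_center(i, size):
    """Normalized coordinate of pixel i's center: pixel i covers
    [i, i+1), so its center sits at i + 0.5, then divide by size."""
    return (i + 0.5) / size

def sample_index(u, size):
    """Nearest pixel index for normalized coordinate u under the
    same half-pixel convention."""
    return min(int(u * size), size - 1)
```

The nice property is that the two functions round-trip exactly, which is precisely what shared triangle edges and resampling filters depend on.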
There are also functions that have a first derivative but no second derivative. Much of my graduate research involved studying these types of functions.
Many of the original ideas came from the paper "The calculus of fractal interpolation functions"
https://www.sciencedirect.com/science/article/pii/0021904589...
I wrote a paper on how to compute the surface normal (for rendering) of related functions:
https://link.springer.com/article/10.1007/PL00013408
Interestingly enough, while you cannot differentiate the Weierstrass function, you can integrate it -- i.e., you can treat it like a differential equation that has a set of well-defined solutions.
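To make that concrete, here is a Python sketch of partial sums of W(x) = sum a^n cos(b^n pi x). Differentiating term-by-term multiplies the n-th term by b^n pi (divergent when ab >= 1), while integrating divides by it, so the antiderivative series converges even faster than W itself. The parameter choices here are just illustrative:

```python
import math

def weierstrass(x, a=0.5, b=7, terms=40):
    """Partial sum of the Weierstrass function
    W(x) = sum a^n * cos(b^n * pi * x), with 0 < a < 1 and ab > 1
    (the classical non-differentiability regime, b odd)."""
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(terms))

def weierstrass_antiderivative(x, a=0.5, b=7, terms=40):
    """Term-by-term antiderivative: integration multiplies each term
    by 1/(b^n * pi), so this series converges faster than W --
    integration is tame exactly where differentiation fails."""
    return sum(a**n * math.sin(b**n * math.pi * x) / (b**n * math.pi)
               for n in range(terms))
```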
I also think it is worthwhile stepping through working code with a debugger. The actual control flow reveals what is really happening and will tell you how to improve the code. It is also a great way to demystify how others' code runs.
I agree, and have found using a time travel debugger very useful because you can go backwards and forwards to figure out exactly what the code is doing. I made a video of me using our debugger to compare two recordings - one where the program worked and one where a very intermittent bug occurred. This was in code I was completely unfamiliar with, so it would have been hard for me to figure out without this. The video is pretty rubbish to be honest - I could never work in sales - but if you skip the first few minutes it might give you a flavour of what you can do. (I basically started at the end - where it failed - and worked backwards, comparing the good and bad recordings.) https://www.youtube.com/watch?v=GyKrDvQ2DdI
I think that fits nicely under rule 1 ("Understand the system"). The rules aren't about tools and methods, they're about core tasks and the reason behind them.
That is rule #3: quit thinking and look. Use whatever tool you need and look at what is going on. The next few rules (4-6) are what you need to do while you are doing step #3.
Interesting, although it seems unreasonable that you would be able to constantly accelerate for large distances without an essentially unlimited energy source.
Also, this is using special relativity only. Adding general relativity as you approach massive objects would be interesting.
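The standard special-relativity "relativistic rocket" formulas make the constant-proper-acceleration case easy to play with. A Python sketch (roughly 1 g of proper acceleration, no fuel accounting, no general relativity):

```python
import math

C = 299_792_458.0  # speed of light, m/s
G = 9.81           # ~1 g proper acceleration, m/s^2

def relativistic_rocket(tau, a=G):
    """Coordinate distance, coordinate time, and velocity after proper
    time tau (seconds) of constant proper acceleration a -- the
    textbook special-relativity results. Assumes the unlimited energy
    source the parent comment rightly questions."""
    phi = a * tau / C                            # rapidity
    # cosh(phi) - 1 == 2*sinh(phi/2)**2, stable for small phi
    d = (C**2 / a) * 2.0 * math.sinh(phi / 2.0)**2  # distance (m)
    t = (C / a) * math.sinh(phi)                    # Earth-frame time (s)
    v = C * math.tanh(phi)                          # velocity (m/s)
    return d, t, v
```

For small tau this reduces to the Newtonian d = a*tau^2/2 and v = a*tau; for large tau, v saturates below c while Earth-frame time races ahead of shipboard time.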
I find LLMs good for asking certain kinds of Biblical questions. For example, you can ask one to list the occurrences of some event, or something like "list all the Levitical sacrifices," "what sins required a sin offering in the OT," or "where in the Old Testament is God referred to as 'The Name'?" When asking LLMs to provide actual interpretations, you should know that you are on shaky ground.
The LLMs have been fed the critique and analysis and discussion of all manner of biblical passages. The LLMs usually give great interpretations and even contrast different theological takes on such passages.
Yeah -- I ask it interpretive questions all the time, but just like for programming, I realize answers that appear good are often just plain wrong. I do know you can ask it leading questions if you want answers with a certain theological bent, e.g. "Did the judgments of Revelation 8 and 9 occur in the first century?"