In the 1970s, I had the honor of working with Bill Hartmann, Bob Strom, Gerard Kuiper, Clark Chapman, and Ewen Whittaker at Tucson's Lunar & Planetary Labs. They used large Earth-based telescopes to photograph the Moon's surface at many illumination angles and libration angles. The images were captured on glass plates.
They physically projected these images onto a large plaster sphere, then rephotographed them from different angles to remove foreshortening and show the lunar surface as it would appear from directly above a crater.
I'm always really impressed by the clever techniques and methods used to accurately map topographic surfaces (in this case, lunar) before computerization. Really interesting!
(and your comment reminded me to finally place my order for a Klein bottle!)
Looking forward to reading this post but just wanted to say that the work Tyler has done on ray tracing in R is phenomenal. I highly recommend checking out this package website: https://www.rayshader.com
Nice post. R's quirks seem to put some people off but I've found that it's a relative joy for exploratory analysis and visualization like this, especially within RStudio.
Recently I was tasked with grouping a large number of DNA oligonucleotides, and exploring the criteria by which to group them was a lot of fun using various R libraries. In the span of a few days I learned how to use k-means clustering, how to employ an UpSet plot, and how to build a phylogenetic tree.
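If anyone's curious what that kind of exploration looks like, here's a minimal sketch, nothing like my actual pipeline: the oligos and the two features (GC content and sequence length) are invented purely for illustration, and it just uses base R's kmeans().

    # Minimal sketch, not my actual workflow: made-up oligos and features.
    oligos <- c("ATGCGTACGT", "ATGCGTACGATT", "TTTTAAAACG", "TTTAAAACG", "GGGCCCGGGCCC")

    gc_content <- sapply(strsplit(oligos, ""), function(b) mean(b %in% c("G", "C")))
    seq_length <- nchar(oligos)

    features <- scale(cbind(gc_content, seq_length))  # standardize both features
    set.seed(1)
    fit <- kmeans(features, centers = 2)

    split(oligos, fit$cluster)  # oligos grouped by cluster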
R is hands-down the best language for data manipulation, analysis, and visualization: it's a language truly centered around treating data as a first-class citizen. That focus does make some traditional programming workflows more error-prone (helpful interactive data analysis features like vector recycling, flexible automatic type conversion, and non-standard evaluation provide lots of footguns), but the last decade of language improvements (stringsAsFactors = FALSE!) and R packaging ecosystem improvements have made the situation much nicer. The flexibility and lispy expressiveness of the language make it really fun to develop in, once you've gotten over the initial quirks.
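A couple of generic examples of the footguns I mean (these aren't from the post, just standard R behavior):

    # Vector recycling: the shorter vector is silently repeated.
    c(1, 2, 3, 4) + c(10, 20)    # 11 22 13 24, no warning since 4 is a multiple of 2

    # Flexible type conversion: comparisons and c() coerce to a common type.
    "2" == 2                      # TRUE: the number is coerced to character first
    c(1, 2, "3")                  # "1" "2" "3", everything silently becomes character

    # Pre-R 4.0, strings silently became factors when building data frames,
    # hence the old reflex of adding stringsAsFactors = FALSE everywhere.
    df <- data.frame(x = c("a", "b"), stringsAsFactors = FALSE)
    str(df)                       # x stays character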
100% agree, especially on the lispy expressiveness. I love that I can build analysis pipelines in a functional style, which has always clicked with me more than other paradigms.
Tidyverse is a godsend for at least getting initial data transformations sketched out and for gently introducing new users, but I do believe one should gain an understanding of how to do all of these things in plain R.
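As a tiny illustration (not from the post), here's the same summary on the built-in mtcars data, first in dplyr and then in plain R:

    library(dplyr)

    # Tidyverse version: mean mpg per cylinder count, sorted.
    mtcars %>%
      group_by(cyl) %>%
      summarise(mean_mpg = mean(mpg)) %>%
      arrange(desc(mean_mpg))

    # The same thing in plain R.
    agg <- aggregate(mpg ~ cyl, data = mtcars, FUN = mean)
    agg[order(-agg$mpg), ]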
I agree with this. I wonder what it would take to let R spread beyond its niche into a more popular data science language. My worry is that with polars coming along, Python is catching up where it's behind, and staying ahead where it is ahead.
I have. R is far less verbose and maps far better to data analysis. The Wolfram Language is far more expressive and powerful for symbolic computation. So basically: Wolfram for doing math research, R for applied stats.
I have not. I started using R due to its open source codebase and the ability to audit and understand exactly what it's doing under the hood—being able to see how statistical formulae were implemented in code was invaluable in understanding and interpreting a package's analytical output.
R gives you a relatively simple set of tools that you can combine in powerful ways. The Wolfram Language seems to have a specialized function for everything, which is nice sometimes but it takes me longer to get started when doing exploratory data analysis, since I have to remember more nuanced stuff.
I absolutely love R. Once you get your head around data types and the 20 most important functions, you can do amazing things.
My personal favorite resource is "R for Data Science" by Hadley Wickham. It covers lots of nice data manipulation and visualization examples, and provides a good introduction to the tidyverse, which is a particular dialect of R that's well-suited for data analysis. It's available for free online.
For more specialized analytical methods there are lots of textbooks out there that provide a deep dive into packages for a specific field (e.g. survival analysis, machine learning, time series), but for general data manipulation and visualization it's hard to beat R4DS.
An alternative to the Hadley book that also covers some nice statistical methods is Statistical Rethinking by McElreath. It's not available for free, but it's an interesting read.
Absolutely beautiful - both the clear explanation and the idiomatic (tidyverse style) R packages and code walkthrough. The combination of the two allowed me to read through and understand in one go. And I have immediate uses for the packages. Thanks!
I remapped the UV coords based on the spherical projection of the mesh after subdividing, so there should be minimal distortion, especially compared to the UV sphere. There is a slightly higher density of vertices where the edges of the cube used to be, but it's small compared to the UV sphere's extreme convergence at the poles.
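Roughly the idea, heavily simplified and not the actual package code: treat each normalized vertex as a point on the unit sphere and derive equirectangular UVs from its longitude and latitude.

    # Simplified sketch, not the actual package code: equirectangular UVs
    # from vertex positions already normalized onto the unit sphere.
    sphere_uv <- function(verts) {
      u <- 0.5 + atan2(verts[, 3], verts[, 1]) / (2 * pi)  # longitude mapped to [0, 1]
      v <- 0.5 + asin(pmin(pmax(verts[, 2], -1), 1)) / pi  # latitude mapped to [0, 1]
      cbind(u = u, v = v)
    }

    pts <- matrix(rnorm(30), ncol = 3)
    pts <- pts / sqrt(rowSums(pts^2))   # push random points onto the unit sphere
    head(sphere_uv(pts))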
Amazing work. Simple, easy-to-use code. This must have been quite the effort. It's honestly stunning work. Also, good to see R is still alive and well!
I would just directly ray trace it, no subdivision. Then it becomes something like 100 lines of code total, and is probably still faster than the subdiv approach.
BTW I like to call that singularity at the pole god, because I often notice it in env maps as an arsehole in the sky :P
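To make the direct approach concrete, the core is just the analytic ray-sphere intersection; a sketch (not the article's code) might look like:

    # Sketch of the analytic approach: nearest hit distance along a ray,
    # or NA if the ray misses the sphere.
    ray_sphere <- function(origin, dir, center = c(0, 0, 0), radius = 1) {
      oc <- origin - center
      a  <- sum(dir * dir)
      b  <- 2 * sum(oc * dir)
      cc <- sum(oc * oc) - radius^2
      disc <- b^2 - 4 * a * cc
      if (disc < 0) return(NA_real_)          # miss
      t <- (-b - sqrt(disc)) / (2 * a)        # nearer root
      if (t < 0) t <- (-b + sqrt(disc)) / (2 * a)
      if (t < 0) NA_real_ else t
    }

    t   <- ray_sphere(origin = c(0, 0, -3), dir = c(0, 0, 1))
    hit <- c(0, 0, -3) + t * c(0, 0, 1)       # (0, 0, -1): on a unit sphere the hit point is also the normal

The hit point's latitude and longitude give you the texture lookup directly, so no mesh is needed at all.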
OpenSubdiv would definitely be the robust, industry-standard solution. However, R packages distributed on CRAN face additional restrictions on required system libraries, so for portability I went with a bespoke implementation.
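For the curious, the bespoke part boils down to something like this toy sketch (not the package's actual code; shared-edge midpoints aren't de-duplicated, to keep it short):

    # Toy sketch: split each triangle into four and push the new midpoints
    # out to the unit sphere. `verts` is an n x 3 matrix of positions,
    # `faces` an m x 3 matrix of 1-based vertex indices.
    subdivide_once <- function(verts, faces) {
      out_faces <- matrix(0, nrow = 4 * nrow(faces), ncol = 3)
      for (i in seq_len(nrow(faces))) {
        f <- faces[i, ]
        mids <- rbind((verts[f[1], ] + verts[f[2], ]) / 2,
                      (verts[f[2], ] + verts[f[3], ]) / 2,
                      (verts[f[3], ] + verts[f[1], ]) / 2)
        mids <- mids / sqrt(rowSums(mids^2))        # project midpoints onto the sphere
        idx  <- nrow(verts) + 1:3
        verts <- rbind(verts, mids)
        out_faces[4 * (i - 1) + 1:4, ] <- rbind(c(f[1],   idx[1], idx[3]),
                                                c(idx[1], f[2],   idx[2]),
                                                c(idx[3], idx[2], f[3]),
                                                c(idx[1], idx[2], idx[3]))
      }
      list(verts = verts, faces = out_faces)
    }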
One result of this rephotographing work was the Rectified Lunar Atlas, one of the guiding maps of the Apollo missions: https://sic.lpl.arizona.edu/collection/rectified-lunar-atlas