Does anyone know what software or process was used to create these videos?
Here's where I got to, since the README.md is basically empty for the moment.
So far I've got it to do something. It needs at least these Python libraries as dependencies:
- cv2 (this is OpenCV, which is not easily installable inside a virtualenv)
git clone https://github.com/3b1b/manim
mkdir -p ../animation_file/images
>>> import generate_logo
>>> scene = generate_logo.LogoGeneration()
Then I think you can call .construct() on the object.
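Putting the steps together, this is roughly the full sequence I ran (the paths and the .construct() call come from my own poking around, not from anything documented, so treat it as a guess rather than a workflow):

    git clone https://github.com/3b1b/manim
    cd manim
    mkdir -p ../animation_file/images
    python
    >>> import generate_logo
    >>> scene = generate_logo.LogoGeneration()
    >>> scene.construct()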
I imagine this is already feasible. We would need to give them time to pick up the skill by letting them loose in some kind of training situation, so they learn what everything looks like and can distinguish things passing by.
The only thing I'm not certain of is how long the training would take before some basic recognition is possible amidst the noise already present in everyday life. I do know, though, that when you lose a sense your other senses sharpen to compensate, and that might shorten the adaptation period.
Edit: we could also give these to sighted people to help retain their acuity as they age.
This section talks about the end result, but the whole Wikipedia article is an interesting read on how his work developed over time. I didn't see anything specific to what I remember from my college class on it, but this part hints at it: "pen-like machine that could draw shapes of sound waves on smoked glass by tracing their vibrations."
The linked article on Visible Speech may also be of interest; it describes the system Bell's father used to notate sounds phonetically.
It's also available as a web app, and on Raspberry Pi, NVDA, and Windows.
It apparently works: http://www.sciencemag.org/news/2014/03/computer-program-allo...
Apparently there is a similar iOS app called EyeMusic.
Split the image into RGB (or HSV) channels, stretch each of them into a stream using a Hilbert curve mapping, and then try either a lossless compression or a lossy JPEG/FFT-like one. A hunch says this could show some interesting results.
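To make that concrete, here's a rough toy sketch of the lossless variant (my own untested code, not anything from the video): it assumes a square image whose side is a power of two, a hypothetical "input.png", and uses zlib as the stand-in lossless compressor, comparing row-major order against Hilbert order for each channel.

    # Toy sketch of the idea above; assumptions: square power-of-two image,
    # hypothetical "input.png", zlib standing in for "a lossless compression".
    import zlib
    import cv2
    import numpy as np

    def hilbert_d2xy(order, d):
        # Standard distance-to-(x, y) mapping on a 2^order x 2^order Hilbert curve.
        x = y = 0
        s = 1
        while s < (1 << order):
            rx = 1 & (d // 2)
            ry = 1 & (d ^ rx)
            if ry == 0:                      # rotate/flip the quadrant as needed
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            d //= 4
            s *= 2
        return x, y

    def hilbert_stream(channel):
        # Flatten one channel into a 1-D stream by walking the Hilbert curve.
        n = channel.shape[0]
        order = int(np.log2(n))
        coords = (hilbert_d2xy(order, d) for d in range(n * n))
        return np.array([channel[y, x] for x, y in coords], dtype=channel.dtype)

    img = cv2.resize(cv2.imread("input.png"), (256, 256))  # force a power-of-two square
    for i, name in enumerate("BGR"):                       # cv2 loads channels as B, G, R
        ch = img[:, :, i]
        row_major = len(zlib.compress(ch.tobytes()))
        hilbert = len(zlib.compress(hilbert_stream(ch).tobytes()))
        print(name, row_major, hilbert)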
Usually this has applications when dealing with things like file systems and databases. I've seen people argue that using something like Hilbert curves in compression algorithms will yield fewer compression-induced artifacts, because the artifacts will, as you can probably already guess, "snake around" the image and be much harder to spot visually.
The advantage would be that you wouldn't lose drawing time to the horizontal and vertical blanking periods.
The disadvantage would be that the hardware is a lot harder to build, more so if you want to damp out oscillations near the corners well.
Tarek Ouni and Mohamed Abid, "Scan Methods and Their Application in Image Compression"
That might just be my experience and may not extend to others. Still, it should be clear that time is better spent choosing a good learning source up front than trying to make up for a poor one later.
My question on the Hilbert curve: when he's talking about filling infinite space, why does the curve (or the repeating blocks of Hilbert curves) spiral out, instead of continuing in the pattern of the original Hilbert curve? Doesn't the spiral introduce a new pattern?
That same pattern can't be applied "in-to-out", as in starting with a unit square and trying to expand out to all of space.
You might think you could have the pseudo-Hilbert curve pattern fill 4 unit squares, then 16, then 64, and so on. However, no proper limit curve would exist in that case: the point corresponding to any fixed value along the curve keeps moving as the square grows and heads off to infinity, so the sequence of curves never settles down. The spiral construction avoids this because each new stage leaves the already-drawn blocks exactly where they were.
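If you want to see that numerically, here's a quick throwaway script (my own sketch, nothing from the video): it tracks where a fixed fraction t of the way along the curve lands as the order grows, once for the grow-the-square version and once rescaled back into the unit square.

    # Throwaway sketch: follow the point at a fixed fraction t along successive
    # pseudo-Hilbert curves (my own illustration, not from the video).
    def d2xy(order, d):
        # Standard distance-to-(x, y) mapping on a 2^order x 2^order grid.
        x = y = 0
        s = 1
        while s < (1 << order):
            rx = 1 & (d // 2)
            ry = 1 & (d ^ rx)
            if ry == 0:
                if rx == 1:
                    x, y = s - 1 - x, s - 1 - y
                x, y = y, x
            x += s * rx
            y += s * ry
            d //= 4
            s *= 2
        return x, y

    t = 0.3  # any fixed position along the curve
    for order in range(1, 9):
        x, y = d2xy(order, int(t * 4 ** order))
        scale = 2 ** order
        # grow-the-square version: unit cells, side 2^order -> (x, y) grows without bound
        # unit-square version: rescale into [0, 1] x [0, 1] -> the point settles down
        print(order, (x, y), ((x + 0.5) / scale, (y + 0.5) / scale))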
On a side note, the Hilbert curve pattern quite resembles the folds of a brain, which makes me wonder what attributes such a pattern would lend our brains:
1) the ability to hold fixed points in space relative to each other while increasing information density between those points, and
2) our ability to stand on the edge of space (reality), and look / measure inward (1 to zero), without seeing infinity looming behind us.
The best teaching often jumps back and forth between the two.
Mathematics can be thought of as a simulation of properties of the world running inside people's brains, which are what verify the results. Sadly, there's no shared, comprehensive framework outside our brains for visualizing and organizing all of it, since it's so flexible. And thus most people miss out on experiencing the beauty of many results.
I sometimes wonder if LaTeX augmented with context (and visualization) would help, and I wish Mathematica were open source to aid this. At the same time, we are faced with incompleteness results that call into question any formal organization of mathematics.
Sorry for the long rant.
Yet, when discussing it *while learning* with math folks, it's usually met with "You don't have a degree" or "That's not a formal understanding of the definition, come back." It's completely exclusionary, and it killed much of my passion for math until recently.
Rather than wishing for something that will not happen, let's work to make Sage, particularly SageMathCloud (https://cloud.sagemath.com), the lingua franca of CASs.
Touch your right thumb to the small bone on your right pinky finger that is closest to the palm - this is 1. Move it up one bone (phalanx) - this is 2. Move it one more to the tip of your pinky - this is 3. Then you continue with your ring finger - 4, 5, 6 and when you reach the tip of your index finger, you're at 12. Now raise one finger on your left hand for 1 * 12 and continue with your thumb on the first phalanx of your pinky - 13. Once you've raised all 5 fingers on your left hand you're at 5 * 12 = 60 and when you then place your thumb on the tip of your index finger, you're at 72. It's easy and quick after only a bit of training and lets you count practically all numbers you'll ever want to count on your fingers.
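If it helps, here's the same scheme written out as a tiny script (just my own illustration of the comment above): the right hand holds the 1-12 position on the phalanges, the left hand counts completed dozens, topping out at 5 * 12 + 12 = 72.

    # Toy illustration of the finger-counting scheme described above (my own sketch).
    # Right hand: thumb touches one of 12 phalanges (pinky base = 1 ... index tip = 12).
    # Left hand: each raised finger stands for a completed dozen.
    def finger_count(n):
        if not 1 <= n <= 72:
            raise ValueError("this scheme covers 1 through 72")
        dozens, rest = divmod(n - 1, 12)
        return dozens, rest + 1  # (left-hand fingers raised, right-hand phalanx)

    print(finger_count(13))  # (1, 1): one left finger up, thumb back on the pinky base
    print(finger_count(72))  # (5, 12): all left fingers up, thumb on the index fingertip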
Should come with a warning - once you start watching, don't expect to get anything else done for the next 17 minutes :-)