I also recently picked up "Data-Oriented Design: Software Engineering for Limited Resources and Short Schedules" by Richard Fabian, which I've not had a chance to read properly but looks like it covers things in detail.
It also means you naturally end up with small, testable, generic components without having to think too hard about it. You've done the heavy lifting in the data structure, and whatever behavior you want flows naturally from that.
So an alternative to object-oriented design was proposed: data-oriented design - there's a good video about this from 2014 by Mike Acton. But in short, the idea is to go 'back to basics' and focus on this: you're pushing around and transforming bits of data so you can eventually produce an output. So the goal of your design is to make it explicit how and when you do any of this, so you can avoid unnecessary copying, lay out data to fit efficiently in the cache, parallelize as much as possible, and do as little else as you possibly can. The result is the difference between opening Word and Sublime Text. In a web context, MapReduce is a similar way of explicitly expressing transformations in a way that enables parallelization.
A common pattern that many game engines have adopted as a result of this is Entity-Component-System. Basically, most game engines work with a model where each object in the game world is an object with a list of components that give it various attributes - e.g. a physics component to define how it handles collisions, a rendering component, a script component with some code to run every frame, etc. Previously that was also how it would be laid out in memory - an object contains the data from its components. With ECS this is inverted/exploded - the object is split up into an entity (basically just an id), each component's data is stored in an array along with the components of all the other entities that also have that component (and the entity id is used to look up/associate data in that array), and finally a system does transformations on the data in the component arrays. One of the benefits of this is that it lays out e.g. all the physics data contiguously in memory, which makes it much easier to use the cache efficiently - for example, if you load one component's worth of physics data into the cache, it might already include the next 1-3 chunks that just so happen to be exactly what the physics system will work on next. This blog post comes with some nice illustrations to help understand the difference between these two ways of laying out data.
In some ways, this seems ~analogous to the web front-end world's adoption of unidirectional data flow and treating UI as a function of state.
Also, is there anything more useless than a slide deck without the actual presentation? Just have to kinda guess all the blanks eh? I normally wouldn't even comment but I find data oriented design to be the best and love to see other takes/critiques/methods but this is useless as is.
I also find it kind of funny talking about scene graphs and animations when model, mesh and texture data are already examples of data-focused designs, and they're much more fundamental.