My mother told me later how suddenly terrifying the situation became.
There is no power. Your car may not have enough fuel to get home to your kids. We do not know when there will be power again. It may be in an hour, or next week. You are 100 miles from home, a distance that used to be an inconvenience but suddenly might be an impossibility. The phone system may not be working either, or may not be for much longer. You have whatever money you're holding in cash right now to pay for your needs.
We are dependent on our technology in ways we cannot even see.
(Fortunately, they pulled into the driveway running on the last fumes in the tank.)
This kind of thing can happen at any moment, and yet how many people are really prepared?
When your entire profession and skill set are built on the internet and computers, and those things simply cease to exist for all practical purposes, that is a bad feeling.
(sorry about the start of the video, but it shows the type of equipment you'd use for this sort of situation) https://www.youtube.com/watch?v=FiuNVTLHqEc
(The nearest petrol station to me is automated, so there would be nobody to do this anyway!)
Other Operator: “Hey, do you think you could help out the 345 voltage a little?”
Eastlake 5 Operator: “Buddy, I am — yeah, I’ll push it to my max max. You’re only going to get a little bit.”
Other Operator: “That’s okay, that’s all I can ask.”
Eastlake Unit 5 tripped on overload shortly after this, removing reactive power capability from the system and further destabilizing it.
Human error and lack of situational awareness played a big role: Midwest ISO's state estimator, a software program that is supposed to continually evaluate power system conditions and alert operators when they are in or near a region of instability, was offline.
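For a rough sense of what that tool buys operators: a state estimator fits a best-guess system state to redundant, noisy measurements, then flags bad data or proximity to a limit. A toy single-quantity sketch; real estimators solve a weighted least-squares problem over thousands of readings, and every number and threshold here is invented:

```python
def estimate_voltage(readings_kv, tolerance_kv=5.0, low_limit_kv=327.0):
    """Fit a state to redundant meter readings and sanity-check it."""
    est = sum(readings_kv) / len(readings_kv)  # best fit for one quantity: the mean
    if any(abs(r - est) > tolerance_kv for r in readings_kv):
        return est, "BAD DATA: a meter disagrees with the fitted state"
    if est < low_limit_kv:
        return est, "ALERT: voltage approaching the instability region"
    return est, "OK"

print(estimate_voltage([331.0, 330.2, 329.5]))  # -> (330.23..., 'OK')
```

With this offline, operators had no independent check telling them how close the system was to the edge.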
FirstEnergy didn't adequately trim trees in the right-of-way of a 345 kV line. On a hot day, with other outages shifting load onto that line, the conductor heated, expanded, and sagged until it shorted to a tree. Protection equipment removed the line from service, but with that transmission capacity gone, other lines exceeded their own limits and their protection equipment tripped them offline too. Planning criteria should have 1) prevented the first line from carrying as much load as it did, and 2) prevented the loss of that one line from causing cascading failures.
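To make the cascade mechanism concrete, here's a toy sketch of how tripping one overloaded line can take out its neighbors. The line names, megawatt figures, and the even-redistribution rule are all invented; real flows redistribute according to network impedances, not evenly:

```python
def cascade(flows, limits):
    """Trip any overloaded line and spread its flow over the survivors."""
    tripped = []
    while True:
        overloaded = [l for l in flows if l not in tripped and flows[l] > limits[l]]
        if not overloaded:
            return tripped
        line = overloaded[0]
        tripped.append(line)
        survivors = [l for l in flows if l not in tripped]
        if not survivors:
            return tripped  # total blackout: nothing left to carry the load
        share = flows[line] / len(survivors)
        for l in survivors:
            flows[l] += share
        flows[line] = 0.0

# Three parallel 345 kV paths; the first is already running hot.
flows  = {"A": 950.0, "B": 700.0, "C": 650.0}   # MW, hypothetical
limits = {"A": 900.0, "B": 800.0, "C": 800.0}   # emergency ratings, hypothetical
print(cascade(flows, limits))  # -> ['A', 'B', 'C']: one sag into a tree takes all three
```

This is the sense in which planning criteria failed: the network should have been operated so that no single line's loss could push the survivors past their own limits.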
Page 18 is a good place to start if you don't have the patience for the full 238-page report.
The most damning thing is that FirstEnergy didn't realize their system was falling apart. They lost alarm notifications at 2:14 and didn't realize things were bad for another hour and a half. The first transmission line to fail completely wasn't discovered by anyone until after the blackout; when FirstEnergy was called about that failure, they dismissed it as a fluke because they had received no notification of it. It took several people calling to ask about line failures before they realized they weren't seeing their internal notifications, and by that time their system was pretty much one minor thing away from disaster.
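That failure mode, where "no alarms" is indistinguishable from "alarm system dead", has a standard mitigation: make the alarm pipeline prove it's alive. A hypothetical watchdog sketch, with the class name and timeout invented for illustration:

```python
import time

HEARTBEAT_TIMEOUT = 60.0  # seconds of silence before assuming the pipeline died

class AlarmWatchdog:
    """Distinguish 'no alarms to report' from 'the alarm system is dead'."""

    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        # The alarm processor calls this every cycle, even when it has nothing
        # to report, so that silence becomes a detectable failure.
        self.last_heartbeat = time.monotonic()

    def check(self):
        # Run from an independent process/host so it can't die along with the
        # thing it is watching.
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            raise RuntimeError("alarm processor silent: treat the grid as unmonitored")
```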
The only actual recourse they would have had at that point would have been to start disconnecting people from their system. That takes time, and by the time they realized that they were in such an emergency, it was too late. The system was already collapsing and the full blackout was just 20 minutes away.
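Shedding load as a last resort is conceptually simple, even if slow to execute. A toy sketch, with invented feeder names and loads: drop blocks of customers in priority order until demand fits what the weakened system can carry.

```python
def shed_load(feeders, capacity_mw):
    """feeders: list of (name, load_mw), least critical first."""
    total = sum(load for _, load in feeders)
    shed = []
    for name, load in feeders:
        if total <= capacity_mw:
            break
        shed.append(name)
        total -= load
    return shed, total

feeders = [("industrial_park", 120.0), ("retail_corridor", 80.0),
           ("residential_east", 60.0), ("hospital_district", 40.0)]
print(shed_load(feeders, capacity_mw=150.0))
# -> (['industrial_park', 'retail_corridor'], 100.0)
```

The arithmetic takes microseconds; actually switching out each block takes coordination and minutes, which is why 20 minutes of warning wasn't enough.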
The Star-South Canton transmission line crossed company boundaries and failed twice, automatically reconnecting immediately each time, before failing for good a third time. The first failure preceded the Harding-Chamberlain failure, while the other two occurred after two lines were already out for good, so this line's failure is considered the third failure.
In California, there's an incentive problem because the utilities can pass the costs on to consumers. That leaves them with a reduced economic incentive to maintain safe clearance zones around their equipment.
The report seems to repeatedly emphasize that the grid is supposed to be resilient to unpredictable failures at unpredictable times. While the trees were the immediate cause, the real story of the outage was the resulting cascade failure, which absolutely should not have happened.
To take a CS example, when Google/Amazon/etc. suffer a cascade failure, they do not blame the proximal cause (such as a few nodes getting overloaded), but rather their load balancers and failure-recovery systems for allowing the problem to cascade into a full network failure.
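The software-world analogue of a protective relay is the circuit-breaker pattern: fail fast and shed requests rather than let retries pile onto a struggling backend. A minimal sketch, with arbitrary thresholds:

```python
import time

class CircuitBreaker:
    """Fail fast once a backend looks unhealthy, instead of piling on retries."""

    def __init__(self, max_failures=5, cooldown=30.0):
        self.max_failures = max_failures   # consecutive failures before tripping
        self.cooldown = cooldown           # seconds to stay open before retrying
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                # Shed the request instead of letting it cascade downstream.
                raise RuntimeError("circuit open: backend presumed unhealthy")
            # Cooldown elapsed: go half-open and let one trial request through.
            self.opened_at = None
            self.failures = self.max_failures - 1  # one more failure re-trips
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip, like a protective relay
            raise
        self.failures = 0
        return result
```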
And a result of improper vegetation management.
4ish pm (ann arbor area)... boom. no power. pitch black. in a basement. with no sounds. no ambient hum of the hvac. nothing, save a single red 'exit' emergency light. we made our way upstairs and outside, and ... yep - the whole town was shut down. took a while to realize it was a good portion of the northeast!
looking back, I was quite surprised no one was caught in an elevator. there were only 4 floors, and it was 4pm, so things were slow, but... they did get used regularly.