I would add that inertia works at both a personal and an institutional level. For example, Vim, Emacs, and Nano are available on most supercomputers, but Neovim generally is not (I've never seen it there myself). Many people like having highly customized setups (nothing wrong with that in my view), but you can't use what your system doesn't have available. Plain old Vim is still far more universal in many environments.
As far as my favorite political biographies go, I think the ones that gave me the most understanding of politics were Robert A. Caro's "The Power Broker" and "The Years of Lyndon Johnson".
I recently started writing a review of each book I read. I've found it helps me retain the book's contents, and people often give positive feedback about what they learned from a review.
How many hours do you invest in this per book, and per month?
I did this for a few months in 2017, but it was taking like 90 minutes per chapter, and at full Saturday-morning-8am attention! It's like doing math homework. And math homework is not the most important thing I can spend the best 90 minutes of my day on.
I don't know exactly, but I wrote nearly 3,000 words for the last book review I did. I'd say I spend less than 90 minutes per chapter. I'm also sharing the reviews, which I find the most rewarding part.
Not GP, but https://fourminutebooks.com comes to mind. It's also just incredibly useful to get a "4 minute précis" and decide whether it's worth spending more time on that book.
I like the idea of Unix pipelines, but I hate all the sublanguages, awk being one of the biggest. I scratched my itch and built my own shell, marcel: https://github.com/geophile/marcel.
I mention this here specifically because of the CSV point. Marcel handles CSV: e.g., "read --csv foobar.csv" reads the foobar.csv file, parses the input (getting quotes and commas correct), and yields a stream of Python tuples, splitting each line of the CSV into the elements of the output tuples.
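To give a flavor of what the tuple stream buys you, here's a sketch (the file and its columns are hypothetical, and the map stage with its Python lambda is from memory of the README, so treat it as illustrative rather than exact):

  # hypothetical prices.csv with name, qty, price columns; each parsed
  # row arrives as a tuple whose elements bind to the lambda's arguments
  read --csv prices.csv | map (lambda name, qty, price: (name, float(qty) * float(price)))

Since the fields come through as Python strings, any per-row cleanup or conversion is just ordinary Python.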
Marcel also supports JSON input, translating JSON structures into Python equivalents. (The "What's New" section of marcel's README has more information on JSON support, which was just added.)
I usually use this awk function to parse CSV:
# This function takes a line, i.e. $0, treats it as a line of CSV, breaking
# it into individual fields, and stores them in the passed-in field array. It
# returns the number of fields found, 0 if none. It accounts for CSV
# quoting, including commas within quoted fields, but doesn't strip the
# quotes from the parsed fields.
# Use it in code like:
#   number_of_fields = parse_csv_line($0, csv_fields)
#   csv_fields[2]   # get the second parsed field of $0
function parse_csv_line(line, field,    _field_count) {   # _field_count is a local (extra-parameter idiom)
    _field_count = 0
    # Treat each line as a CSV line and break it up into individual fields:
    # match either a quoted field (allowing "" escapes) or an unquoted one,
    # then consume the matched field plus the following comma
    while (match(line, /(\"([^\"]|\"\")+\")|([^,\"\n]+)/)) {
        field[++_field_count] = substr(line, RSTART, RLENGTH)
        line = substr(line, RSTART + RLENGTH + 1)
    }
    return _field_count
}
It's not perfect (empty fields get skipped, for example, which shifts the numbering of the fields after them), but it gets the job done most of the time and works across all awk implementations.
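For completeness, here's a minimal way to drive it (the file name and field number are just for illustration), with the function above pasted into the same script:

  # print the second field of each CSV line:
  #   awk -f parse_csv.awk data.csv
  {
      if (parse_csv_line($0, csv_fields) >= 2)
          print csv_fields[2]
  }

Note that, per the comment above, quoted fields keep their surrounding quotes.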
I'm not a biologist, but I know from other biologists that it is often difficult to cultivate cells in a lab. Kudos to him for mastering such a difficult art.