Hacker News | ahalbert's comments

Inertia, I guess - I'd have to get it set up and all and vim works fine for me now.


I would add that inertia works at both a personal and institutional level. For example, Vim, Emacs, and Nano are available on most supercomputers but Neovim really is not (haven't seen it myself). Many people may like having highly customized setups --- nothing wrong with that in my view --- but you can't use what your system does not have available. Plain old Vim is still much more universal in many areas.


The author thinks that porn literacy is a good thing, but she notes that it's difficult to teach students to read texts you can't show them.


My blog: https://ahalbert.com/

I just got back into public writing; I like to share summaries of the books I read.

My most recent post is on the secret history of Cold War submarine espionage: https://ahalbert.com/reviews/2023/07/01/blind_mans_bluff.htm...

This one was recently featured on Hacker News's front page: https://ahalbert.com/reviews/2023/06/04/the_culture_map.html


Yes. I summarize books I read and share what I learned.


That's what I said in the article.


My apologies, I must have been mixed up.


It's ok. It's quite long and there are a lot of stories.


As far as my favorite political biographies go, the ones that gave me the most understanding of politics were Robert A. Caro's "The Power Broker" and "The Years of Lyndon Johnson".


I recently started writing reviews of each book I read. I found it helps me retain the contents of the book and often people give positive feedback about what they learned from my review.


How many hours do you invest in this per book and per month?

I did this for a few months in 2017, but it was taking something like 90 minutes per chapter, and at full Saturday-morning-8am attention! It's like doing math homework. And math homework is not the most important thing I can spend the best 90 minutes of my day on.


I don't know, but I wrote nearly 3000 words on the last book review I did. I'd say I spend less time than 90 minutes per chapter. I'm also sharing the reviews, which I find the most rewarding part.


Is there any review(er) in particular you used as inspiration here? Curious if you treat this more as personal notes, or a public-facing review?


I do it on my college alum slack channel, but I recently wrote one up that made it to the front page of HN:

https://ahalbert.com/reviews/2023/06/04/the_culture_map.html

I took some inspiration from the book review contests of "Astral Codex Ten"

https://astralcodexten.substack.com/p/your-book-review-publi...


Not GP, but https://fourminutebooks.com comes to mind. It's also just incredibly useful to get a "4 minute precis" and decide whether or not it's worth it to you to spend more time on that book.


I do the same, since the start of the year. I write personal notes, and if a book really gets to me I will write a public review.


I love using Awk; the only thing I miss is that it can't handle complex CSV files. Does anyone know how to handle quoted CSV strings like

> "foo","bar,baz"


I like the idea of Unix pipelines, but I hate all the sublanguages, awk being one of the biggest. I scratched my itch and built my own shell, marcel: https://github.com/geophile/marcel.

I mention this specifically, here, because of the CSV point. Marcel handles CSV, e.g. "read --csv foobar.csv" reads the foobar.csv file, parses the input (getting quotes and commas correct), and yields a stream of Python tuples, splitting each line of the CSV into the elements of the output tuples.

Marcel also supports JSON input, translating JSON structures into Python equivalents. (The "What's New" section of marcel's README has more information on JSON support, which was just added.)


If quoted strings are the only thing you need to handle (i.e. no escaped quotes, embedded newlines, etc.) and if you have GNU awk:

    $ echo '"foo","bar,baz"' | awk -v FPAT='"[^"]*"|[^,]*' '{print $1}'
    "foo"
    $ echo '"foo","bar,baz"' | awk -v FPAT='"[^"]*"|[^,]*' '{print $2}'
    "bar,baz"
For a more robust solution, see https://stackoverflow.com/q/45420535 or use other tools like https://github.com/BurntSushi/xsv


I wanted to ask: why not the simpler form?

    $ echo '"foo","bar,baz","boo"' | awk -F"\",\"" '{print $1}'
    "foo
    $ echo '"foo","bar,baz","boo"' | awk -F"\",\"" '{print $2}'
    bar,baz
    $ echo '"foo","bar,baz","boo"' | awk -F"\",\"" '{print $3}'
    boo"
Realizing that I have to strip the quotes that remain.

Edit: formatting.

Edit, again: from your link, the following is more terse and to my taste (still needs stripping):

    awk -v FPAT='("[^"]*")+'
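One way to strip those leftover quotes without GNU-only features is to delete the leading and trailing quote from the whole record first; this is a sketch that relies on the fact that gsub() on $0 re-splits the fields, and it assumes no escaped quotes inside fields:

```shell
# Remove the outermost quotes from the record, then split on the
# remaining "," separators; works in any POSIX awk (no FPAT needed).
echo '"foo","bar,baz","boo"' | awk -F'","' '{ gsub(/^"|"$/, ""); print $2 }'
```

This prints `bar,baz` with no quote residue, since after gsub() the record is `foo","bar,baz","boo` and the three-character separator `","` does the rest.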


I usually use this awk function to parse CSV in awk:

    # This function takes a line i.e. $0, and treats it as a line of CSV, breaking
    # it into individual fields, and storing them in the passed in field array. It
    # returns the number of fields found, 0 if none found. It takes account of CSV
    # quoting, and also commas within CSV quoted fields, but doesn't remove them
    # from the parsed field.
    # use in code like:
    #   number_of_fields = parse_csv_line($0, csv_fields)
    #   csv_fields[2]  # get second parsed field in $0
    function parse_csv_line(line, field,   _field_count) {
      _field_count = 0
      # Treat each line as a CSV line and break it up into individual fields
      while (match(line, /(\"([^\"]|\"\")+\")|([^,\"\n]+)/)) {
        field[++_field_count] = substr(line, RSTART, RLENGTH)
        line = substr(line, RSTART+RLENGTH+1, length(line))
      }
      return _field_count
    }
It's not perfect but gets the job done most of the time and works across all awk implementations.
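To sanity-check the function, here is a quick run from the shell, repeating the function inline (same logic; the regex quotes are left unescaped, which any POSIX awk accepts):

```shell
# Feed one CSV line with a quoted field containing a comma, then print
# the number of fields found and the second parsed field.
echo '"foo","bar,baz",qux' | awk '
  function parse_csv_line(line, field,   _field_count) {
    _field_count = 0
    while (match(line, /("([^"]|"")+")|([^,"\n]+)/)) {
      field[++_field_count] = substr(line, RSTART, RLENGTH)
      line = substr(line, RSTART + RLENGTH + 1, length(line))
    }
    return _field_count
  }
  { n = parse_csv_line($0, f); print n, f[2] }'
```

This prints `3 "bar,baz"` — three fields found, and the embedded comma kept inside the second field (still wrapped in its quotes, as the comment above notes).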


Convert it with Miller first:

    mlr --icsv --otsv cat examplefile
* https://miller.readthedocs.io/en/latest/10min/


Yes, this is what csvquote does. It does nothing else, just this so that programs like awk, sed, cut, etc. can work properly.

https://github.com/dbro/csvquote


They are planning built-in support for that, see that other comment https://news.ycombinator.com/item?id=36518146


I'm not a biologist, but I know from other biologists that it is often difficult to cultivate cells in a lab. Kudos to him for mastering such a difficult art.


When will the model be available?


It says mid-July.


Also looking forward to Stable Diffusion's new SDXL model in "mid-July": https://stability.ai/blog/sdxl-09-stable-diffusion

I hope both of these deliver on their promises. They're exciting developments.

