Hacker News | ryanmjacobs's comments

FYI, one of my favorite Ruby idioms is to capture command output with `open(...){_1.read}`

  pp open("|ls -lh /usr/bin/ls"){_1.read}
  "-rwxr-xr-x 1 root root 135K Aug 30 04:57 /usr/bin/ls\n"
or to quickly print tabular data

  open("|column -t -s \\t", "w"){_1 << tsv_data}


Am I missing something - it's 2am here, so possibly - or could you not just use backticks for the first one?


Oh that's embarrassing -- you're right, the first example doesn't make sense. Kinda pointless actually haha

I use it for code like this:

  raw = IO.popen(%W[gzip -d -c -- #{path}]){_1.read}
  raw = IO.popen(%W[gzip -d -c], in: pipe){_1.read}
  raw = IO.popen(%W[git status --porcelain -- #{path}]){_1.read}
Useful because it eliminates /bin/sh footguns (e.g. `md5sum hello&world` forking at the `&`, or `~john` expanding), plus you get Kernel#spawn options. `open("|...")` only made sense for the second example.

Anyway, I find "_1" more eye-pleasing than:

  IO.popen(%W[gzip -d -c #{path}]){|io| io.read}
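To make the footgun concrete, here's a small sketch (the `hello&world` filename is hypothetical): with backticks, `/bin/sh` would parse the `&` and background half the command, but the array form of `IO.popen` passes argv directly, so no shell is ever involved.

```ruby
require "tmpdir"

out = nil
Dir.mktmpdir do |dir|
  # A filename a shell would mangle: `cat hello&world` backgrounds `cat hello`.
  path = File.join(dir, "hello&world")
  File.write(path, "data\n")

  # Array form: no /bin/sh, so no globbing, no ~ expansion, no & surprises.
  out = IO.popen(["cat", "--", path]) { _1.read }
end
```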


Oh damn. I have been using `_1`, `_2`, etc. extensively for years... I didn't know about that footgun.

I'd like to know if `it` is merely an alias for `_1`, or if it protects from this "arity issue" too.


I'm pleasantly surprised to see this improvement:

  Tempfile.create(anonymous: true) removes the created temporary file
  immediately. So applications don’t need to remove the file.
  [Feature #20497]
I use a similar pattern a lot:

  file = Tempfile.new.tap(&:unlink)
  file << "... some really large data to pipe"
  system("md5sum", in: file.tap(&:rewind))
It's a neat trick to leverage the filesystem for large amounts of data (e.g. "psql \copy" generation), without littering tempfiles everywhere. Once the last fd disappears, the filesystem releases the data; so you don't have to "trap" signals and add cleanup routines. (Hint: you can also use these unlinked-tempfiles for command output, e.g. huge grep output, etc.)
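The hint at the end deserves its own sketch: the same unlinked-tempfile trick, but for capturing command output (here `printf` stands in for some huge `grep`):

```ruby
require "tempfile"

out = Tempfile.new.tap(&:unlink)           # fd stays open, but no name on disk
system("printf", "hello world", out: out)  # child writes through our fd
out.rewind
data = out.read
# Once `out` (the last fd) is closed, the kernel reclaims the space --
# no signal traps, no cleanup routine, no stray files if the process dies.
```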

On Linux systems, `open(..., O_TMPFILE)` is typically used, which provides additional safety. The created file is initially unnamed (no race condition between `open` and `unlink`).

When I needed safety, my non-portable solution was to use Ruby's syscall feature. (Btw, I love that you can do this.)

  require "fcntl"

  SYS_OPEN   = 2            # open(2) syscall number on x86-64 Linux
  O_TMPFILE  = 0x00410000   # __O_TMPFILE | O_DIRECTORY
  O_RDWR     = Fcntl::O_RDWR

  def tmpfile
    mode = O_RDWR | O_TMPFILE
    fd = syscall(SYS_OPEN, "/dev/shm", mode, 0644)  # unnamed file on tmpfs
    IO.for_fd(fd)
  end
But there was another pleasant surprise in the PR (https://bugs.ruby-lang.org/issues/20497):

  Linux 3.11 has O_TMPFILE to create an unnamed file.
  The current implementation uses it.
Excellent to see :) Makes the PR even better!


This is available in older Ruby versions by using the upstream `tempfile` gem version.
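For reference, that's just a Gemfile entry (assuming, as I understand it, that an installed gem takes precedence over the bundled default gem of the same name):

```ruby
# Gemfile
source "https://rubygems.org"

gem "tempfile"  # standalone gem overrides the stdlib default version
```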


Rest in peace, Bram. I was introduced to Vim by my father when I was a kid, and have been using it for the past 15 years. Your software has touched many lives.


Oh nice. This is a very useful tip. I wish I knew that a month ago.


I had the same exact situation. I used the Apple card almost exclusively in 2022 because of the 1% card + 2% apple pay rewards.

Doing taxes right now and I spent about 10 minutes manually exporting+emailing OFX/QFX files to myself for Quicken. Kind of a pain. Every other card I have supports automatic data pulling.

Probably will move my main card to something that has automatic integration.


You can always integrate with Mint, and then use that to export all your years' worth of data into CSV, etc.


I tried Mint. AFAIK the Apple CC doesn't allow pulling live transactions.


Well written story. Worth the read.



I found out because my CircleCI jobs were failing to `npm install`. Guess the deploy will have to wait...


I've been doing this with several production PostgreSQL instances.

PostgreSQL on ZFS is great.

I have zstd compression enabled and I average 3.50x compression ratios.

(Probably some pretty awful CPU tradeoffs in there, but system load and query times seem fine. My database is 50 GB before zstd compression, so enabling it helps a ton.)

I also have ZFS snapshots taken hourly and shipped offsite. It's awesome that I don't need to pause PostgreSQL or anything to take the snapshot. It's atomic and instant.


With some quick-and-dirty `time`-style tests, zstd has pretty low overhead. IIRC writes came out around off = 100, lz4 = 110, zstd = 115 in CPU utilization on my personal data set, which compressed 1x, 1.7x, and 2.1x respectively. Read overhead was negligible (single-digit percentages) for both lz4 and zstd. For anything on a spindle, that's a pretty good trade-off of CPU time.

