
As an outsider, getting your code merged into a popular open source project involves a political process: convincing the maintainers that the problem is worth fixing, and then convincing them to merge your code.

Writing a fork involves sitting down at your laptop and coding it out.




Plus of course everything needs to be rewritten in Rust /s.


  $ hyperfine -w 100 -m 1000 -L bin jq,jaq "echo '[1,2,3]' | {bin} '.[1]'"

  Summary
    echo '[1,2,3]' | jaq '.[1]' ran
      1.57 ± 0.15 times faster than echo '[1,2,3]' | jq '.[1]'
Bring on the competition!


As the benchmarks show, jaq is pretty significantly faster than jq.

I've commented before that I expect Rust to be a language that is generally faster than even C or C++ in a way that's hard to capture in small benchmarks, because the borrow checker permits code to be written safely that does less copying than other languages have to do for safety. Given the nature of what jq/jaq does, I wouldn't be surprised if that is some of the effect here. It would be interesting to instrument them with tools that can track the amount of memory traffic each benchmark does for comparison (that is, not memory used but total traffic in and out of RAM); I bet the Rust code shows a lot less.


FWIW, I see no difference. (hyperfine 1.17.0, jq 1.7, jaq 1.2.0)

  $ hyperfine -N -w 100 -m 1000 -L bin jq,jaq "echo '[1,2,3]' | {bin} '.[1]'"
  Benchmark 1: echo '[1,2,3]' | jq '.[1]'
    Time (mean ± σ):       3.4 ms ±   1.7 ms    [User: 0.6 ms, System: 2.6 ms]
    Range (min … max):     0.7 ms …   5.8 ms    1000 runs
 
  Benchmark 2: echo '[1,2,3]' | jaq '.[1]'
    Time (mean ± σ):       3.4 ms ±   1.7 ms    [User: 0.5 ms, System: 2.7 ms]
    Range (min … max):     0.7 ms …   5.8 ms    1000 runs
 
  Summary
    echo '[1,2,3]' | jq '.[1]' ran
      1.00 ± 0.71 times faster than echo '[1,2,3]' | jaq '.[1]'


That would still be a microbenchmark. Given that the benchmarks in the post take on the order of seconds to run, I am assuming they are not microbenchmarks, or at least are much less "micro". I would hope a standard JSON-querying benchmark suite would include some substantial JSON samples, hundreds of kilobytes or more.
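
For anyone who wants something less "micro" at home, here's a sketch (assumes jq and jaq are on PATH; the generated file is synthetic and several megabytes, so it's obviously no substitute for a real suite):

  # generate a large synthetic JSON array, then compare a non-trivial query
  $ jq -n '[range(200000) | {id: ., name: "item \(.)", tags: [range(5)]}]' > big.json
  $ hyperfine -w 10 -L bin jq,jaq "{bin} '[.[] | select(.id % 7 == 0) | .name] | length' big.json"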


I'm pretty sure you could do this using hardware performance counters, but I've never actually tried it, so I might be wrong.
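
Something like perf stat might get close. An untested sketch (LLC misses only approximate DRAM traffic, and the exact event names vary by CPU and perf version):

  # each LLC miss moves one 64-byte cache line to/from DRAM,
  # so the miss counts * 64 give a rough bytes-moved estimate
  $ perf stat -e LLC-load-misses,LLC-store-misses jq  '.[1]' <<< '[1,2,3]'
  $ perf stat -e LLC-load-misses,LLC-store-misses jaq '.[1]' <<< '[1,2,3]'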


not going to disagree



