Hacker News | ronjouch's comments

Interesting, will read Gert Tinggaard Svendsen's research.

French passersby: for a similar study of why France in particular lacks trust, see CEPREMAP / Yann Algan & Pierre Cahuc (2007), La société de défiance - Comment le modèle social français s'autodétruit ("The society of distrust: how the French social model is self-destructing"). It's been too long since I last read it for me to be able to summarize it in English, so I won't. Here's the free PDF, in French: https://www.cepremap.fr/depot/opus/OPUS09.pdf .

My image optimization trio as of now:

- GIF: gifski (squeezes the best possible quality out of the ancient format)

- PNG: pngquant (lossy, but respectful of what many PNGs are: lots of flat surfaces, a limited color palette, transparency)

- JPG: jpegli (writing this post mostly to share the news about this one: it's new, built on libjxl learnings, approximately as good qualitatively as Guetzli but roughly 1000x faster (an actual measurement, not an exaggeration). It's amazing! https://github.com/libjxl/libjxl/tree/main/lib/jpegli )

Below is my lil' `image-opt` helper, which makes for a nice workflow to optimize any gif/jpg/png file(s) :)

  #!/usr/bin/env bash
  set -euo pipefail
  dbg() { if [ "${DEBUG:=false}" = 'true' ]; then echo "$@"; fi }
  die() { echo "Error: $*" 1>&2; zenity --error --text="$*"; exit 1; }
  if ! command -v pngquant > /dev/null; then die 'You must install pngquant'; fi
  if ! command -v cjpegli > /dev/null; then die 'You must install cjpegli'; fi
  if ! command -v gifsicle > /dev/null; then die 'You must install gifsicle'; fi
  for filename in "${@:1}"; do
    if [ -d "$filename" ]; then
      echo && echo "ℹ File '$filename' is a directory. Skipping."
      continue
    fi
    if [ ! -f "$filename" ]; then
      echo && echo "ℹ File '$filename' does not exist. Skipping."
      continue
    fi
    extension_raw="${filename##*.}"
    extension="${extension_raw,,}"  # Convert to lowercase
    if [[ "$extension" != 'jpg' && "$extension" != 'png' && "$extension" != 'gif' ]]; then
      echo && echo "ℹ Skipping file $filename due to unsupported extension: .$extension"
      continue
    fi
    filename_noext="${filename%.*}"
    filename_original="${filename_noext}_original.${extension}"   # backup of the input
    filename_final="${filename_noext}_optimized.${extension}"     # output, for single-output formats
    dbg "filename($filename), filename_noext($filename_noext), extension($extension), filename_original($filename_original)"
    echo && echo " Compressing $filename ..."
    mv "$filename" "$filename_original"
    if [ "$extension" == 'jpg' ]; then
      # Emit several qualities; keep the best size/quality trade-off, delete the rest.
      cjpegli "$filename_original" "${filename_noext}_q50.${extension}" -q 50
      cjpegli "$filename_original" "${filename_noext}_q60.${extension}" -q 60
      cjpegli "$filename_original" "${filename_noext}_q70.${extension}" -q 70
      cjpegli "$filename_original" "${filename_noext}_q80.${extension}" -q 80
      cjpegli "$filename_original" "${filename_noext}_q90.${extension}" -q 90
    elif [ "$extension" == 'png' ]; then
      pngquant --strip --force --verbose --output "$filename_final" "$filename_original"
    elif [ "$extension" == 'gif' ]; then
      # https://kornel.ski/lossygif , set --lossy to 20 for light compression, 200 for heavy
      gifsicle -O3 --lossy=50 --verbose --output "$filename_final" "$filename_original"
    fi
    gio trash "$filename_original"
  done

I feel dense for advocating jpegli to everyone and then totally forgetting to replace it in my own export chain. The tools you quote should be the only ones used in 2024. Everything else should be scorched.

That's my experience, yes. Just tested it on a 750kB 1080p image with detailed areas and areas with gradients. Highly unscientific, N=1 results:

- Guetzli at q=84 (the minimum allowed by Guetzli) takes 47s and produces a 403kB image.

- Jpegli at q=84 takes 73ms (MILLIseconds) and produces a mostly-indistinguishable 418kB image. "Mostly" because:

A. It's possible to find areas with subtle color gradients where Guetzli does a better job of keeping them smooth over a large area.

B. "Little specks" show a bit more of the typical JPG color-mush artifacting around the speck with jpegli than with Guetzli, which stays remarkably close to the original.
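For reference, the (unscientific) timing comparison above can be reproduced with a sketch like this. The input file name is hypothetical, and both commands are skipped when the tools or the input are absent:

```shell
#!/usr/bin/env bash
in=photo.png  # hypothetical ~750kB 1080p test image

# Guetzli: q=84 is the minimum quality it allows; expect tens of seconds.
if [ -f "$in" ] && command -v guetzli > /dev/null; then
  time guetzli --quality 84 "$in" out-guetzli.jpg
fi

# jpegli encoder (cjpegli): same quality target; expect milliseconds.
if [ -f "$in" ] && command -v cjpegli > /dev/null; then
  time cjpegli "$in" out-jpegli.jpg -q 84
fi
```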

Also, compared to the encoder I'm used to (e.g. the one in GIMP; libjpeg, maybe?), jpegli seems to degrade pretty gracefully at lower qualities (q=80, q=70, q=60). Guetzli doesn't even allow qualities lower than q=84 (unless you do a custom build).

I'm immediately switching my "smallify jpg" Nautilus script from Guetzli to jpegli. Guetzli's dog-slowness used to be tolerable when there was no close contender, but now it feels unjustified compared to jpegli's instant, darn-excellent results.

Thank you for such a well-informed report!

With Guetzli I manually added overprovisioning for slow, smooth gradients. If you have an example where Guetzli handles gradients better, you could post an issue with a sample image. That would help us potentially fix it for jpegli, too.

Hey, I didn't realize I was replying to a contributor!

So, I started creating an issue in the repo, and as I was putting together a side-by-side-by-side comparison of A=orig, B=guetzli, C=jpegli ... I realized that, wait a minute, jpegli is actually doing a better job of preserving the original image :D

The B/guetzli version is actually over-smoothed, erasing a couple of gradient anomalies observable in A/orig. Conversely, C/jpegli better preserves these imperfections by not smoothing the broader area into a gradient that is "smoother" but loses some detail.

So, I'm not creating an issue :D. If you wish to see the image and do the A/B/C comparison yourself, it's screenshot 14 of the video game [1], direct link [2]. The area where I noticed gradient differences is the top / top-right area with black arches and blue fog.

Thanks for Jpegli and Guetzli.

[1] https://store.steampowered.com/app/1671480/ABRISS__build_to_...

[2] https://cdn.cloudflare.steamstatic.com/steam/apps/1671480/ss...

Thank you! This is comforting to know!

In Arch it's packaged in libjxl as /usr/bin/{c,d}jpegli . See https://archlinux.org/packages/extra/x86_64/libjxl/ , section "Package Contents" at the bottom.

There's a talk I enjoy a lot about this "multiple places for settings" problem: about how lots of software ends up resembling

1. The maker's org chart (Conway's law) ...

2. ...multiplied by time

Casey Muratori: The Only Unbreakable Law - https://www.youtube.com/watch?v=5IUj1EZwpJY

His running example is precisely the Windows settings :) , more specifically the various audio/mixer UIs.

CouchDB also tries to do something similar (natively hosting webapps close to a DB good at replication between replicas). https://guide.couchdb.org/draft/standalone.html

That's very interesting, thanks for the link.

Mandatory mention of the excellent ad-blocking plugins, for pi-hole-like DNS-based blocking of { ads, tracking, malicious sites, analytics, etc. }, but built into your router: https://openwrt.org/docs/guide-user/services/ad-blocking .

Featureful (or simple if you want simple and go for the simple plugin), plenty of blocklists to pick from, auto-updates, all the things! OpenWrt is awesome.

> "Unfortunately these websites change their basic layout so often that it felt like these fixes would work for 1 month max then I'd have to configure again."

You're exaggerating. My userContent.css is 60kB, and although breakages do indeed happen, they're occasional and nowhere near "redo everything every month".

What I will concede is a pain: machine-mangled CSS class names (e.g. generated by bundlers for React and other frameworks). They're kinda stable until they're not, and at any rate their inscrutability makes maintenance more difficult (because .user-profile-picture is human-readable, while .cD5aZf is not :-/ ).
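For illustration, a userContent.css rule can sidestep mangled classes entirely by targeting stable attributes instead. A sketch (the domain, attribute, and path are hypothetical):

```css
/* userContent.css: hide a promo box via a stable data attribute or href
   fragment, rather than a mangled, bundler-generated class name. */
@-moz-document domain("example.com") {
  [data-testid="promo-banner"],
  a[href*="/download-our-app"] {
    display: none !important;
  }
}
```

Attribute selectors like these tend to survive re-bundles, since data attributes and URL paths change far less often than generated class names.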

Judging by some of the other examples here, I'm guessing this is a case of writing fragile rules that e.g. count the n-th and m-th items in a tree and test unnecessarily for incidental classes. If all you do is accumulate the often-senseless output of the element picker, I'd expect the gains to be short-lived.

Indeed, 100%. I learned never to do such things, and would rather have no rule than a brittle one. That explains why my experience differs from OP's.

There are better ways to do it. For example, you can match elements containing specific text, such as the text that introduced the annoyance. On NewEgg you might match the text “Download our app” and use it to remove that whole box.
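In uBlock Origin's procedural cosmetic filter syntax, that idea might look like the following sketch (the element selector is hypothetical; `:has-text()` is the real procedural operator):

```
newegg.com##div.promo-box:has-text(Download our app)
```

This hides any matching container whose text includes the phrase, so it keeps working even if the surrounding class names get reshuffled.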

Yup yup.

- Standard CSS (for userContent.css): https://developer.mozilla.org/en-US/docs/Web/CSS/Attribute_s...

- uBlock Origin: https://github.com/gorhill/uBlock/wiki/Static-filter-syntax & https://github.com/gorhill/uBlock/wiki/Procedural-cosmetic-f...

Still, sometimes it's difficult/impossible to make a reliable filter, and in such cases I'd rather not have it than have a brittle one.

Yes, I feel the same. Even though I still have some brittle rules, I avoid most, and prefer filtering for specific text or classes/ids and finding the correct element from there. Before, I didn't use uBlock to filter elements that much, because I mostly relied on the filter lists.

I wonder if your filter rules are open source or available somewhere. Please share!

I didn't make the effort to share them, sorry. That being said,

1. My rules are not any different from yours. I doubt you'll learn much from them given what you're already doing in your repo.

2. I feel that the "what to hide" / "not to hide" choice is too personal to be reusable by anyone else. I'm sure some of the stuff I hide will be considered excessive, and some will be considered missing. What I enjoy in this HN thread is that we share the { practice, tools, docs, tips }, then to each their own :)

3. I'm not interested in maintaining a public repo of that kind of stuff, and/or replying to Issues. So, would rather not make it public.

Sorry/notsorry ¯\_(ツ)_/¯ . At any rate, glad we're sharing tips around the practice.

The repository I shared is very opinionated, and I will decide what goes in or not (a benevolent dictatorship). I want it to be easy for me to maintain, and for others to use/fork if they want.

You could share them publicly without taking on the burden of replying to issues/PRs, etc. But I get you.
