
This is crazy to read. I live in what you consider a highly authoritarian and non-free society and can't imagine something like this happening here. Lower-paid jobs are even more privileged in some ways: for example, in many companies you can just not show up for work for a couple of days if you feel like it, and the worst thing you can expect is a small pay cut at the end of the month.


America is a strange place, that’s for sure. It doesn’t look any saner from here in Europe.




Turns out that happiness is not all about air conditioning or sitting in a car.


It really varies. Most places I've worked, even in lower-paid jobs in my late teens, have been pretty lenient and understanding with personal/sick time.


I have found, over the span from $8/hour to $88,000/year, that the amount of tracking (and also the amount of "if you have time to lean, you have time to clean"-type busywork) is very close to inversely proportional to income in the US.


Don't; there are many smaller email providers that will take that load off your shoulders for a small fee. I've been using purelymail and have had a good experience with it, and I've heard good things about migadu and fastmail. The latter two are better known and better staffed, but also more expensive.

I've been using similar aliases for years (paypal@domain.tld, ebay@domain.tld, etc.), but make sure you have a contingency plan for when you're no more. I've received lots of account info from previous owners of my domain just by setting up a catch-all mailbox. You obviously won't care at that point, but whoever takes over your domain, and with it your accounts, might use them to do harm to others (spam, fraud, or whatever else).


> ChatGPT consumes 25 times more energy than Google

> ChatGPT consumes a lot of energy in the process, up to 25 times more than a Google search


I've also used phones that haven't received any updates for years without any obvious problems, just maintaining basic digital hygiene like you do. In theory, one could use a zero-day in a web browser (like the recent libwebp vulnerability), then exploit one of the numerous CVEs in a system library or the kernel, and own the phone that way without you doing anything worse than visiting a random website. For example, that's how one of the first methods of jailbreaking the PlayStation 4 operated.

An average Joe six-pack like myself probably shouldn't worry much about it, though; it seems more likely to be used against really high-value targets.

You might want to try out another web browser that has aggressive ad blocking (Firefox, Brave, or Vivaldi should do it) since ads are one of the major methods of spreading malware.


>You might want to try out another web browser that has aggressive ad blocking (Firefox, Brave, or Vivaldi should do it) since ads are one of the major methods of spreading malware.

Underrated advice. Too bad said Joe six-pack doesn't follow it, because he thinks other browsers "have viruses".



I wish we would get rid of Dockerfile altogether in favor of what something like buildah does:

https://github.com/containers/buildah/blob/main/examples/lig...

Since Dockerfile is a rather limited and (IMHO) poorly executed re-implementation of a shell script, why not use shell directly? Not even bash with coreutils is necessary: even POSIX sh with BusyBox can do much more than Dockerfile, and you can use something else (like Python) and take it very far indeed.
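
Roughly, the buildah style is just an ordinary shell script driving a working container. A minimal sketch, where the base image, package, and entrypoint are placeholders rather than the linked example verbatim:

  #!/bin/sh
  set -e
  ctr=$(buildah from docker.io/library/alpine:latest)    # placeholder base image
  buildah run "$ctr" -- apk add --no-cache lighttpd      # placeholder package
  buildah config --port 80 --entrypoint '["/usr/sbin/lighttpd", "-D"]' "$ctr"
  buildah commit "$ctr" localhost/my-lighttpd
  buildah rm "$ctr"

And because it's plain shell, the run/config steps can be pulled out into functions and reused across images.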


That's like saying "why do we bother with makefiles when we can just write a shell script that invokes the toolchain as needed based on positional arguments?". Well, we certainly could do that, but it's overcomplicated compared to the existing solution and would represent a shift away from what most Docker devs have grown to use efficiently. What's so bad about Dockerfile anyway?


> What's so bad about Dockerfile anyway?

Things I've run into:

* Cannot compose together. Suppose I have three packages, A/B/C. I would like to build each package in an image, and also build an image with all three packages installed. I cannot extract functionality into a subroutine. Instead, I need to make a separate build script, add it to the image, and run it in the build.

* Easy to have unintentional image bloat. The obvious way to install a package in a debian-based container is with `RUN apt-get update` followed by `RUN apt-get install FOO`. However, this causes the `/var/lib/apt/lists` directory to be included in the downloaded images, even if `RUN rm -rf /var/lib/apt/lists/` is included in the Dockerfile. In order to avoid bloating the image, all three steps of update/install/rm must be in a single RUN command (see the sketch after this list).

* Cannot mark commands as order-independent. If I am installing N different packages, the installs could happen in any order, but Docker treats every step as strictly sequential, so changing any one of them invalidates the cache for everything after it.

* Cannot do a dry run. There is no command that will tell you if an image is up-to-date with the current Dockerfile, and what stages must be rebuilt to bring it up to date.

* Must be sequestered away in a subdirectory. Anything that is in the directory of the dockerfile is treated as part of the build context, and is copied to the docker server. Having a Dockerfile in a top-level source directory will cause all docker commands to grind to a halt. (Gee, if only there were an explicit ADD command indicating which files are actually needed.)

* Must NOT be sequestered away in a subdirectory. The dockerfile may only add files to the image if they are contained in the dockerfile's directory.

* No support for symlinks. Symlinks are the obvious way to avoid the contradiction in the previous two bullet points, but are not allowed. Instead, you must re-structure your entire project based on whether docker requires a file. (The documented reason for this is that the target of a symlink can change. If applied consistently, I might accept this reasoning, but the ADD command can download from a URL. Pretending that symlinks are somehow less consistent than a remote resource is ridiculous.)

* Requires periodic cleanup. A failed build command results in a container left in an exited state. This occurs even if the build occurred in a command that explicitly tries to avoid leaving containers running. (e.g. "docker run --rm", where the image must be built before running.)
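
To make the apt-get bullet concrete, here is a sketch of the shell that has to live in one single RUN instruction so the package lists never end up in a layer (the package name is only an example):

  apt-get update \
      && apt-get install -y --no-install-recommends curl \
      && rm -rf /var/lib/apt/lists/*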


> Must be sequestered away in a subdirectory ... Must NOT be sequestered away in a subdirectory

In case you were curious/interested, docker has the ability to load context from a tar stream, which I find infinitely helpful for "Dockerfile-only" builds since there's no reason to copy the current directory when there is no ADD or COPY that will use it. Or, if only a simple file or two is needed, it can still be faster:

  tar -cf - Dockerfile requirements.txt | docker build -t mything -
  # or, to address your other point
  tar -cf - docker/Dockerfile go.mod go.sum | docker build -t mything -f docker/Dockerfile -


Thank you, and that's probably a cleaner solution than what I've been doing. I've been making a temp directory, hard-linking each file to the appropriate location within that temp directory, then running docker from within that location.
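
For reference, the hard-link workaround amounts to something like this (file names are just examples):

  # Create the temp dir next to the sources so the hard links stay on one filesystem.
  tmpdir=$(mktemp -d ./ctx.XXXXXX)
  ln Dockerfile requirements.txt "$tmpdir/"
  (cd "$tmpdir" && docker build -t mything .)
  rm -rf "$tmpdir"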

Though, either approach does have the tremendous downside of maintaining two entirely separate lists of the same set of files: one in the Dockerfile, and one in the arguments provided to tar.


> The documented reason for this is that the target of a symlink can change

The actual reason is that build contexts are implemented as tarballing the current directory, and tarballs don't support symlinks.


> and tarballs don't support symlinks.

Err, don't they? If I make a tarball of a directory that contains a symlink, then the tarball can be unpacked to reproduce the same symlink. If I want the archive to contain the pointed-to file, I can use the -h (or --dereference) flag.
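
A quick sketch of what I mean (the file names are made up):

  mkdir -p demo && echo hello > demo/real.txt
  ln -s real.txt demo/link.txt
  tar -cf with-link.tar demo       # stores link.txt as a symlink
  tar -chf dereferenced.tar demo   # -h/--dereference stores the file it points to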

There are valid arguments that symlinks allow recursive structures, or that symlinks may point to a different location when resolved by the docker server, and that would make it difficult to reproduce the build context after transferring it. I tend to see that as a sign that the docker client really, really should be parsing the dockerfile in order to only provide the files that are actually used. (Or better yet, shouldn't be tarballing up the build context just to send it to a server process on the same machine.)


That looks quite the same as running a container in docker and then committing it into an image. But this does not seem to allow setting an entrypoint or other image configuration values.



ECC is fully supported by consumer AMD processors (at least Ryzen 7000, and I think earlier ones too). You need to pick a matching motherboard; most boards from ASRock will do. And you need to find unbuffered ECC RAM, which is more difficult than the previous two and is why I had to give up on the whole idea.

Related post:

https://sunshowers.io/posts/am5-ryzen-7000-ecc-ram


What was the difference between CPU and video card vendors (if you can talk about that at all)?


aerc sometimes breaks on non-compliant email because the author of the header parser refuses to introduce kludges to handle broken email. When it happens, the mail in question simply doesn't show up in the list. I fully understand that position, but as a user who can't simply refuse to deal with broken crap, it's not really ideal. So after using it for a couple of months I reverted to neomutt.


Which mail software sends broken mail? As far as I know, this has never happened to me.


Another discussion a couple of days ago:

https://news.ycombinator.com/item?id=40231332

