
> Why? Because every line shared becomes collective momentum that accelerates the journey.

Truly admirable on their part and a great paradigm for others. The reasons for this don't really matter to me, but I can't help but wonder if they were somehow obliged or otherwise indebted to follow this route.


Sources for this? It deserves its own post if there's data to back it up.


It's been widely known for years. Search for EcoHealth Alliance and Peter Daszak if you want to learn more.

Briefly, under Obama gain of function research was banned for a while. During this time Fauci signed off on illegal grants to fund GoF research, funnelled through a British NGO in order to evade detection and the ban. The NGO didn't do the research themselves, they then sent the money to Wuhan to fund experiments done there. Thus Fauci was using US money to do banned research, which he then lied about under oath to Congress. It's for this sort of reason that he's now been pardoned by Biden, as otherwise he would likely have been prosecuted. Whether you can actually retroactively pardon someone for any/all crimes without actually specifying what for and without that person actually having been found guilty is a bit unclear, though.


Seriously, wouldn't it have been appropriate to do at least a minimal Wikipedia search before contributing to this thread?

https://en.wikipedia.org/wiki/Wuhan_Institute_of_Virology

This lab was an international cooperation, from back in the days when there were such things.


> This is completely coherent with their privacy-first strategy

You mean .. with their stated privacy-first strategy


I bet there's plenty of other non-AI tracks you wouldn't pay to listen to


.. Sundar is announcing that DeepMind and the Brain team from Google Research will be joining forces as a single, focused unit called Google DeepMind

This would have been enough as an announcement; the rest of it is just sugar-coating.


What? Can you be more specific? I'm using Debian with Firefox as my daily desktop and I've never had issues with anything, especially not the web browser. I'm also staying away from fancy new things like snap. I've always managed to get everything I wanted using either apt or dpkg.

Can you please give an example of an application you needed available only as a snap?


What version of Firefox do you have installed? The up-to-date version is 107, released almost half a month ago. If you're on 106 or earlier, you're not running an up-to-date browser.

Which might be fine! If that's the kind of system that works for you.


  ~$ firefox -v
  Mozilla Firefox 102.5.0esr
.. which is from November 15, 2022


Why do I need Docker for such a simple task? From their blog:

> The proxy is extremely lightweight. An inexpensive and tiny VPS can easily handle hundreds of concurrent users. Here’s how to make it work:

    SSH into the server.
    Install Docker, Docker Compose, and git:
I'm sorry, but installing Docker on a tiny VPS, last time I checked, wasn't light at all.


It's a simple way of running something quickly and without touching the rest of your system (if you already have Docker installed)

Anyway, the proxy is just an nginx with a custom config file. You can check that file and just add it yourself to an nginx you manage, probably with few changes.

https://github.com/signalapp/Signal-TLS-Proxy/blob/main/data...


It's a bit odd to use a custom docker image, rather than the one maintained by nginx Inc though:

https://github.com/signalapp/Signal-TLS-Proxy/blob/main/ngin...

Vs

https://github.com/nginxinc/docker-nginx

For one, this is 5 versions behind (1.18 vs 1.23).

In general it seems caddy or haproxy might be a better fit, but nginx is a perfectly fine choice, I suppose.
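If you did want to stay on the upstream image, one hypothetical approach is to mount the repo's config into the official container. A sketch, not the project's documented setup (the host config path is an assumption; check the repo layout first):

```shell
# Run the upstream nginx image with Signal's proxy config mounted read-only.
# "data/nginx.conf" is an assumed path; adjust to match the actual repo layout.
docker run -d --name signal-tls-proxy \
  -p 80:80 -p 443:443 \
  -v "$PWD/data/nginx.conf:/etc/nginx/nginx.conf:ro" \
  nginx:1.23-alpine
```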


> It's a simple way of running something quickly and without touching the rest of your system

Providing a statically linked binary is even simpler, without all that extra complexity that comes with docker.


This project wraps existing software (e.g. nginx) to function. It's not as simple as providing a binary.


I'm also confused about the Docker hate here. The daemon itself is lightweight and the Docker-ized process(es), once running, have negligible overhead compared to running them natively.

I didn't look at the image size but you might be paying a ~100 MB storage penalty to bundle dependencies.


For my Fedora people, I just want to remind them that whenever anyone says Docker, you can safely use podman (or at least that is the goal).

It won't be rootless in this case, as far as I know, because you'll need the privileged ports 80 and 443, but it's a good habit in general.


You can allow unprivileged apps to use privileged ports; it's just a simple sysctl edit.
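For reference, the knob in question on modern Linux (4.11+) is `ip_unprivileged_port_start`; a sketch:

```shell
# Let unprivileged processes bind ports >= 80 (despite the name, this affects
# TCP and UDP, IPv4 and IPv6).
sudo sysctl net.ipv4.ip_unprivileged_port_start=80

# Persist the setting across reboots (the file name is arbitrary).
echo 'net.ipv4.ip_unprivileged_port_start=80' | \
  sudo tee /etc/sysctl.d/90-unprivileged-ports.conf
```

An alternative is granting a specific binary `cap_net_bind_service` via `setcap`, which avoids opening the range system-wide.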


It's actually more than negligible: Docker containerization tends to impose limits, tracking, and network overhead on processes, all of which carry some performance penalty.

On beefcake supreme machines it's just usually not significant enough to worry about, because the perceived benefits outweigh the downsides.


Docker images are just tarballs, no? There's almost no overhead at runtime. Of course you could fork it


There is some performance overhead from the configuration Docker uses for the containers, as well as some of the historical behaviour (not sure if they still apply)

- If you use Docker NAT, it roughly doubles connection time; if you only have extremely short connections, this can be quite visible.

- If you need FS access, this can come at a high cost depending on your usage pattern; Docker's layered FS is not cheap.

- Finally, Docker enables features which don't come for free and which you may not be enabling separately, e.g. seccomp (this can mean a 15%+ performance hit in the worst case)
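Both the NAT and seccomp costs above can be opted out of per container, trading isolation for speed. A sketch with real Docker flags (measure before relying on this in production):

```shell
# --network host bypasses docker-proxy/NAT entirely; seccomp=unconfined removes
# the default syscall filter. Both reduce isolation; use only where acceptable.
docker run -d --network host --security-opt seccomp=unconfined nginx:alpine
```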


I've put Docker onto small VPSes. It's no hassle. The heavy part is Nginx. Adding the container on top won't be making much difference to the size.


pretty sure you can run docker on a $5 vps with plenty of headroom left

could it be done leaner? sure

is it worth it if it raises the barrier of entry of getting people to run the proxy? doubtful


A single statically linked binary would not raise the barrier of entry. Quite the opposite.


What if you're running on an ARM VPS? Now there are 2 binaries. What if you're running e.g. Alpine? Now you need 4. Which init system do you provide startup scripts for? You need an install script too. And what if you just want to try it on your Windows/Mac computer? Need to manually set up a VM.

Meanwhile, you can just install Docker, which you might already have if you do self-hosting often, and run one command. The overhead of containers is tiny, so you really won't notice it. Bonus points for using Podman, which doesn't even have a daemon.


But you also need to provide a systemd service for it. And statically linked against glibc or musl?


you need to provide a systemd service for a docker image as well; the restart policies leave a lot to be desired for a host that can itself restart.

glibc doesn't support static linking, so it's probably going to be musl. Running a musl binary on an otherwise glibc system isn't an issue.
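As a sketch of what producing such a binary looks like (the output name `proxy` is hypothetical; `file` should report "statically linked" if it worked):

```shell
# Go: disabling cgo yields a static binary on Linux.
CGO_ENABLED=0 go build -o proxy .

# Rust: build against the musl target instead of glibc.
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl
file target/x86_64-unknown-linux-musl/release/proxy
```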


> Why do I need Docker for such a simple task?

Containers are more consistent and have fewer side effects than packages.

> I'm sorry but installing Docker on a tiny VPS last time I checked wasn't any light at all.

There's very little overhead and it takes a one liner[1] to install it.

[1]: curl -sSL https://get.docker.com/ | sh


Whenever I see a docker compose based install, it's clear that the installation wasn't thought through very well. Inevitably, these installs are more complicated and less reliable than a finished product.


Do you have any data to back up your claim about the overhead of using Docker?


just installed it in lxc without docker... works like a charm.


time > compute resources. Docker up/compose and on with your day.


stood up a Signal proxy on a VM with the following specs:

- Single core 1 GHz CPU

- 640 MB RAM

- 10 GB storage (default size)

I'd say docker is pretty light.


Maybe I'm too romantic, but I'd like to see an American GDPR (not saying that the EU name or the bill itself is better), and then an Asian one, and so on, until we have one global GDPR protecting all consumer data.

</daydream>


GDPR is a horrible, horrible solution that only helps the big corporations who can afford all the extra work of ensuring that users who actually end up agreeing to the terms are locked in.

It helps no one besides politicians, who have now created more work for themselves, and it's an abomination just like the cookie policy.


I can confirm sdf.org doesn't resolve using Firefox but shows up fine with Chromium.

On my Debian PC. Probably a Firefox bug? Oh, wait .. the second time I tried, sdf.org showed up correctly, so I'm guessing it's a temporary DNS resolution failure on the first attempt .. Doesn't make sense!


Most commenters read this as an honest/dishonest situation on the agency's behalf. A different approach (and the one I'm reading it as) is that 'Isaac' operates as a seller who likes to make 100k per customer or project at any given time.

Not committing to each project, or actually listening: when the customer complains, Isaac makes a few adjustments and continues business as usual.

He only cares about that 100k per project over a few months' time. So now that he got only half of it, he probably thinks he did the client a favor.

EDIT: the product is a life-saver for servers, kudos to the developer(s)

