
Technically, yes. But we're pretty accustomed to this level of risk, and for something like this the pros far outweigh the cons. Yeah, it could fail. The maintenance overhead is absolutely minimal: it took a handful of hours to test and get into production.

Also worth noting that this isn't really a single point of failure system-wide; it'd only be a single point of failure on that one node. So if haproxy decided to explode, only that machine would have a problem, and only momentarily, while our process manager restarted the process.

The worst-case scenario is human error: we ship a bad config and break everything.




Not really true. If you, for example, mistune maxconn, haproxy will stop accepting new connections, and that's likely to happen cluster-wide.
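To illustrate (a hypothetical snippet, not anyone's actual config): a global maxconn set too low caps the whole process, and a config shipped cluster-wide applies that cap on every node at once.

  global
    # mistuned: far too low for real traffic. past 50 concurrent
    # connections, haproxy stops accepting new ones and they sit in
    # the kernel backlog until something times out
    maxconn 50

  defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s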


This is equivalent to shipping bad application code that takes everything down, except the config is only a handful of lines and will very likely never change again. Also, we don't blindly roll out changes like this cluster-wide without testing explicitly on staging or test nodes.


Shipping it with the app, you lose the cluster-wide cached objects. One SPOF is the resolver: it's Google, but it's still a SPOF. Is the failover to S3 automatic, or does it take a code change? What kind of latency does that add?


Haproxy is on localhost. The caching nginx is nearby but not local, so the cache is shared.

Haproxy sends to the caching nginx if it's available, else directly to S3.
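Roughly, the shape of it (a minimal sketch with made-up hostnames, not the actual config): the nginx cache is the primary server and S3 is marked backup, so haproxy only routes there when the cache's health check fails.

  resolvers dns
    nameserver google 8.8.8.8:53   # the Google resolver mentioned above

  frontend assets
    bind 127.0.0.1:8080            # haproxy sits on localhost on each node
    mode http
    default_backend asset_cache

  backend asset_cache
    mode http
    # nearby nginx cache is primary; "backup" means s3 only gets
    # traffic when the cache fails its check
    server nginx_cache cache.internal:80 check
    # verify skipped only to keep the sketch short
    server s3 mybucket.s3.amazonaws.com:443 ssl verify none backup check resolvers dns

With that layout, a dead nginx degrades a node to slower, uncached S3 fetches rather than outright failures.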


Thanks, got it. I don't know enough about the app, but I would have it serve static assets directly from CF to users instead of hitting this environment. Good job and good conversation.



