
Time zones are offsets from UTC, which change over time (DST, political changes, etc.).

UTC is an offset from TAI, which changes over time (leap seconds).

The time zone files already keep track of historical changes[1].

Conceptually, they're pretty similar; the only difference is that leap seconds have a special clock value (23:59:60 instead of showing you 23:59:59 twice).

  1. https://en.wikipedia.org/wiki/Tz_database#Example_zone_and_rule_lines
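To make the 23:59:60 case concrete, here's a rough Python sketch. It is not a real TAI/tzdata library; it hardcodes just the leap second inserted at the end of 2016-12-31, when TAI-UTC went from 36 s to 37 s:

  # Rough sketch only: hardcodes the 2016-12-31 leap second (TAI-UTC: 36 s -> 37 s).
  from datetime import datetime, timedelta

  LEAP_BOUNDARY = datetime(2017, 1, 1, 0, 0, 37)  # TAI instant when the 37 s offset takes effect
  OFFSET_BEFORE, OFFSET_AFTER = 36, 37

  def tai_to_utc_label(tai):
      """Render a TAI instant as a UTC clock reading, emitting :60 during the leap second."""
      if tai >= LEAP_BOUNDARY:
          return (tai - timedelta(seconds=OFFSET_AFTER)).strftime("%Y-%m-%d %H:%M:%S")
      utc = tai - timedelta(seconds=OFFSET_BEFORE)
      if LEAP_BOUNDARY - tai <= timedelta(seconds=1):
          # The inserted second belongs to the *previous* UTC day, as 23:59:60.
          return (utc - timedelta(seconds=1)).strftime("%Y-%m-%d 23:59:60")
      return utc.strftime("%Y-%m-%d %H:%M:%S")

  for s in (35, 36, 37):  # the three TAI seconds straddling the leap second
      print(tai_to_utc_label(datetime(2017, 1, 1, 0, 0, s)))
  # 2016-12-31 23:59:59
  # 2016-12-31 23:59:60
  # 2017-01-01 00:00:00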

I would imagine that some hardware RTCs would not support the :60 leap second value. In fact, I can't recall a single RTC I've dealt with that understands leap seconds.

DNS is surprisingly tricky as a service discovery tool. A lot of clients are poorly behaved, and will cache values for too long (or forever). It'd also be another dependency critical for site functionality.

-----


We're using this for internal load balancing, so we control both ends of the connection. If this started misbehaving, we'd see timeouts, dropped connections, or errors, which would show up in logs.

-----


I see. Any reason not to patch HAProxy to reload its config?

-----


Patching HAProxy to reload its config is really hard. There have been ideas, patches, and discussions on the HAProxy mailing list for a few years now trying to get zero-downtime reloads natively supported in HAProxy, but the reality is that it's just not as easy as it might seem.

For more details check out the mailing list: http://marc.info/?l=haproxy

-----


Not sure; I'm not the author of the post (a colleague is), but I'd guess it would be a fairly complicated patch. I know the author has made some changes to the HAProxy codebase[1], so I'm sure he considered it.

[1] http://comments.gmane.org/gmane.comp.web.haproxy/21025

-----


The HAProxies in question are dealing with at least one port per service, with dozens of services. If you were to remap ports, you'd end up with a lot of port mapping rules.

We also considered using different loopback IPs per HAProxy, so ports could stay consistent, but decided against it:

- We have other things (like scribe) listening on the same loopback IP address, so we'd either have to move those to different IPs or exclude them from iptables rules.

- We thought it could be confusing/misleading to see HAProxy listening on one IP/port, but connections being made to a different IP/port.

-----


You can try using DNS with dynamic zones as a simple service discovery mechanism (sharing one master with all your environments), but you'll soon find out that:

- healthchecking really is a good idea in service discovery

- clients are awful about refreshing state from DNS

- single-master systems are a bad idea in a large environment

- DNS replication is finicky; DNS caching is slow
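By "dynamic zones" I mean RFC 2136 dynamic updates, where each service instance pushes its own records to the single master. A hypothetical registration with dnspython might look roughly like this (the zone, record names, and nameserver IP are made up for illustration):

  # Hypothetical registration step against a single master, using dnspython.
  import dns.query
  import dns.update

  update = dns.update.Update("services.example.com")
  # A record for the instance, plus an SRV record so clients can find the port.
  update.replace("api-01", 30, "A", "10.1.2.3")
  update.replace("_api._tcp", 30, "SRV", "0 0 8080 api-01.services.example.com.")
  dns.query.tcp(update, "10.0.0.53", timeout=5)

Clients then still have to re-resolve and honor TTLs, which is exactly where the caching problems above bite.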

Puppet with puppetdb can sorta fill this gap, too, as long as you don't need fast convergence (or fast puppet runs, if your puppetdb is more than a few milliseconds away from any of your nodes).

Consul may be new, but it's built on really solid ideas and technologies. You can read papers[1][2] about the underlying technologies to get a sense for how Consul will fail. I'd like to think that counteracts some of the problems you get with newness.

[1] https://ramcloud.stanford.edu/raft.pdf

[2] https://www.cs.cornell.edu/~asdas/research/dsn02-swim.pdf

-----


I believe TeX is no longer adding new functionality, only fixing bugs.

Linux keeps getting more features all the time, so an asymptotic numbering scheme (like TeX's version number converging toward π, gaining a digit with each bug-fix release) doesn't make as much sense.

-----


Consul is pretty neat, and in my experiments with it, it's really easy to spin up and modify a cluster. It seems like they really thought through the procedures.

Another mature option for service discovery that this post doesn't mention is Airbnb's SmartStack: https://github.com/airbnb/synapse and https://github.com/airbnb/nerve. Here's a talk from DockerCon about using SmartStack and Docker: http://blog.docker.com/2014/07/dockercon-video-building-a-sm...

-----


You could probably get a similar effect with HAProxy's "balance first" algorithm, which sends each connection to the first server that still has a free connection slot (as defined by its maxconn). If you did this, you'd want to set maxconn pretty conservatively.
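For a rough idea, a hypothetical backend might look like this (server names, addresses, and the maxconn values are made up):

  # Hypothetical backend: "first" fills srv1 up to its maxconn before
  # sending anything to srv2, and so on down the list.
  backend app
      balance first
      server srv1 10.0.0.1:8080 check maxconn 20
      server srv2 10.0.0.2:8080 check maxconn 20
      server srv3 10.0.0.3:8080 check maxconn 20

With "first", each server's maxconn is effectively the threshold at which new connections spill over to the next server, which is why you'd keep it conservative.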

-----


Did you contact Synology to report the vulnerability?

-----


Someone already had; I didn't find the vulnerability on my own. I merely played with it to see how bad it was.

-----


How could they possibly not be aware?

-----


Probably because nobody has contacted Synology to report the vulnerability.

-----


If they read their own customer forums, they're aware.

If they don't, they're almost criminally negligent, so you wouldn't want to buy from them anyway.

-----


In the PDF that nitinics linked, https://apps.fcc.gov/edocs_public/attachmatch/FCC-02-139A1.p..., check out the bottom of page 13.

  Because repeaters utilize two channels at once (input and
  output) and extend the operating range of a single user,
  their use would limit the number of users able to share
  these frequencies at the same time.
  ...
  some commenters are concerned that MURS frequencies will be
  congested and that repeater use will only aggravate this
  problem. We agree.
Basically, they want to maximize the number of users that are able to reasonably use this band. At the time (1998-2002), two-way radios were much more popular than they are today (cell phones have largely made them obsolete), so I can understand the concern.

For whatever reason, MURS never became as popular as GMRS or FRS.

-----
