In the meantime Miroslav's chrony is a good alternative: http://chrony.tuxfamily.org/
If you are looking for the fixed version you can grab 4.2.8 from archive.ntp.org which is still responding to requests: http://archive.ntp.org/ntp4/ntp-4.2/ntp-4.2.8.tar.gz
> A new "systemd-timesyncd" daemon has been added for synchronizing
> the system clock across the network. It implements an SNTP client.
NTP measures clock drift over a server group. It discovers enough about your topology to assign a statistical factor to each peer, so that a rogue or broken server cannot bring down the whole group.
Known good time (which is what the stratum value measures: the distance to a reference clock) is then sprinkled in from several sources to drift the clock in the direction of true time.
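For reference, the arithmetic underneath each individual measurement is small; the standard on-wire calculation from RFC 5905 estimates offset and round-trip delay from the four timestamps of one request/response exchange:

```python
# Standard NTP on-wire calculation (RFC 5905).
def ntp_offset_delay(t1, t2, t3, t4):
    # t1: client send, t2: server receive, t3: server send, t4: client receive
    offset = ((t2 - t1) + (t3 - t4)) / 2   # how far the client clock is off
    delay = (t4 - t1) - (t3 - t2)          # network round-trip time
    return offset, delay

# Example: client clock 0.5 s behind the server, 80 ms round trip.
off, d = ntp_offset_delay(10.000, 10.540, 10.541, 10.081)
```

The statistics the parent comment describes are layered on top of many such samples from many peers, discarding outliers and weighting survivors.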
Just as you should have a number of secondary DNS servers in different ASes, you should use several different time sources from different organizations. If you are bigger than a closet shop, you might as well put your own GPS receiver in there too (SparkFun sells them for $40) and enable authentication on it.
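A minimal ntp.conf sketch of that advice (hostnames marked illustrative are made up; pick sources near you):

```
# Several independent organizations, not just one pool:
server 0.pool.ntp.org iburst
server time.nist.gov iburst
server ntp1.example-isp.net iburst   # illustrative hostname

# Local GPS refclock (ntpd's NMEA driver is number 20; unit 0):
server 127.127.20.0
fudge  127.127.20.0 refid GPS

# Symmetric-key authentication for a known source:
keys /etc/ntp.keys
trustedkey 1
server tock.example.org key 1        # illustrative hostname
```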
Many years ago OpenBSD threw up their hands and just decided to roll their own, named OpenNTPD. It's not nearly as full featured as the reference implementation, but it works fine for most people.
Edit: forgot to mention that OpenNTPD does privilege separation (I don't know if the reference implementation has added that yet), which means that "executed with the privilege level of the ntpd process" isn't nearly as scary as when the process is running as root.
My reasoning comes from the vast majority of problems I notice being from the edge cases in managing memory, and so if I understand correctly there is a whole suite of languages that mostly remove these problems.
Anyone who says you'll just have a new set of similar issues is full of crap. You'll likely have issues, but ruling out memory corruption from the start is an indisputable win.
I agree that projects should have a way to verify with 100% accuracy the memory safety of their code. One way to do that is to rewrite in another language.
Another way to do that is to use a static analyzer that lists out all unsafe operations in your codebase. Here's one: http://goto.ucsd.edu/csolve/. It adds an extra build step / pre-commit hook, but IMO the cost/benefit is much better than rewriting your project. After all, we can't just rewrite our code every time a new safe language comes out.
Languages can make some things easier, but they're not a magical fix-all. If it weren't this problem, it would be something else, something that even "the great mythical Rust" can't prevent.
Rust, as an example, prevents exactly these kinds of problems.
Don't get me wrong, using a more restrictive language will help but it's not a fully-baked solution. We need more third-party tools to help us automatically verify the correctness of our programs. It shouldn't stop at compilers.
#1 Weak default key in config_auth()
If you're only doing local timekeeping and not using authentication (you'd know if you were), this doesn't apply. Basically, the automatically generated key used for authentication (if you didn't specify one) was only 31 bits long and easily guessable.
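To put 31 bits in perspective, a quick back-of-the-envelope calculation (the guess rate is an assumption for illustration):

```python
# A 31-bit key has ~2.1 billion possible values.
keyspace = 2 ** 31

# At an assumed 10 million guesses per second, exhausting the
# entire keyspace takes only a few minutes:
seconds = keyspace / 10_000_000
minutes = seconds / 60   # roughly 3.6 minutes
```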
#2 non-cryptographic random number generator with weak seed used by ntp-keygen to generate symmetric keys
Same as the above. If you're not using keyed sessions with remote hosts, this doesn't apply to you. Even if you are, the worst case is that someone could potentially mess with your clock.
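The underlying flaw is easy to demonstrate in any language: a non-cryptographic PRNG seeded from a guessable value produces "keys" an attacker can simply regenerate. A minimal sketch (the seed value and key size here are made up for illustration, not ntp-keygen's actual parameters):

```python
import random

def make_key(seed):
    # ntp-keygen's flaw in miniature: a deterministic PRNG seeded
    # from something guessable yields a fully predictable key.
    rng = random.Random(seed)
    return bytes(rng.randrange(256) for _ in range(16))

guessable_seed = 1419033600                 # e.g. a timestamp the attacker can narrow down
victim_key = make_key(guessable_seed)
attacker_key = make_key(guessable_seed)     # attacker tries the same seed: identical key
```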
#3 Buffer overflow in crypto_recv()
If you are using crypto (i.e. your ntp.conf file contains a line starting with "crypto pw"), you are potentially remotely exploitable to remote code execution. You probably do not have that configuration line set unless you know you put it there.
#4 Buffer overflow in ctl_putdata()
From the sound of the post on ntp.org, this is the scary one. "A remote attacker can send a carefully crafted packet that can overflow a stack buffer and potentially allow malicious code to be executed with the privilege level of the ntpd process." This makes it sound like everyone is exploitable. However, Red Hat says "the ctl_putdata() flaw, by default, can only be exploited via local attackers". This makes me believe that if you have your ntp.conf locked down using 'restrict' lines you might not be vulnerable.
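For context, this is the sort of locked-down 'restrict' section being alluded to (a sketch of a common distribution default at the time):

```
# Serve time, but refuse mode-6/7 queries and runtime reconfiguration:
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery

# Allow localhost full access:
restrict 127.0.0.1
restrict -6 ::1
```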
#5 Buffer overflow in configure()
This is the same as #4: ntp.org's advisory is vague enough that it sounds like everyone is vulnerable. Red Hat says "the configure() flaw requires additional authentication to exploit." I do not know what this means.
#6 receive(): missing return on error
From their description, it's technically possible (though they haven't demonstrated it) to get ntpd into a weird state that is unlikely to be exploitable.
TL;DR: You're possibly vulnerable to #4 and #5 on a stock configuration. Red Hat says no; ntp.org's advisory is vague enough that I'm not sure.
Does it affect any ntpd asking for time?
Does it affect an ntpd asking a compromised server for time?
Does it only affect ntpds answering time requests?
I wonder because this would also affect what arbitrary code could be run as the ntp user.
$ ps u 561
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
ntp 561 0.0 0.0 5856 780 ? Ss Jul14 22:37 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 103:107
$ /sbin/getpcaps 561
Capabilities for `561': = cap_net_bind_service,cap_sys_time+ep
Generally, at any time, it's safer to assume there's at least one active local root exploit in any system.
E.g. here's my OpenBSD firewall, which syncs to a number of NTP servers on the net (including pool.ntp.org and time.apple.com):
$ rdate -npv clock.isc.org
Fri Dec 19 18:23:11 PST 2014
rdate: adjust local clock by 0.012466 seconds
$ ntpdate -qu clock.isc.org
server 220.127.116.11, stratum 1, offset 0.011798, delay 0.05516
19 Dec 18:24:43 ntpdate: adjust time server 18.104.22.168 offset 0.011798 sec
$ ntpq -p
remote refid st t when poll reach delay offset jitter
*xxxxx.yyyyyyyy. 22.214.171.124 4 u 307 512 377 0.223 5.384 2.611
The TLDR from the OpenBSD side of things:
Hey, if ultra-precision time is an issue,
go buy an atomic clock, I'm sure if your needs
are that precise, you can probably afford it.
openntpd is intended to be SMALL, SIMPLE and SECURE.
Not HUGE, COMPLEX and "Hope for the Best".
I hear something like that a lot, yet all my systems run openntpd and all of them keep proper time without issue. What exactly is not proper about it, and which algorithm exactly does it need to be "proper"?
Having to try to debug intricate problems and not knowing if you can trust timestamps on the logs for the actual order of events can drive you nuts.
It reaches a reasonable accuracy; we are not after the last microsecond.
Whereas NTP aims for maximum accuracy.
Also, the last release of non-portable openntpd is from 2009.
Vector clocks? Sure. UTC clocks? Not a chance in hell I'd ever trust them for a reconciliation protocol. I've spent way too much time around the internals of hypervisors, and seeing how various OSes (and versions of them) do and don't keep time well in virtualized environments gives me zero confidence. The fact that any two given cloud VMs agree on the time within 500ms is a testament to the sheer bullheaded determination of their administrators.
Simply, you cannot trust timestamps between two machines to determine ordering. If you do, you will have a bad time sooner or later (so to speak).
Monotonic sequence numbers loosely based on real time are common, since you can correlate the sequence number back to an actual time to look at logs, etc. Time skew wouldn't break the protocol, just make it annoying to correlate after the fact.
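A minimal sketch of such a scheme (the field widths here are arbitrary choices for illustration): pack a millisecond timestamp into the high bits and a counter into the low bits, clamping so the sequence never goes backwards even when the wall clock does:

```python
class SeqGen:
    """Monotonic sequence numbers loosely based on real time.

    High bits: milliseconds since the epoch, so a sequence number can
    be correlated back to an approximate wall-clock time for logs.
    Low 20 bits: a counter, so the sequence keeps increasing even if
    the clock steps backwards or many IDs land in one millisecond.
    """
    def __init__(self):
        self.last = 0

    def next(self, now_ms):
        candidate = now_ms << 20
        # If the clock went backwards, just keep counting upwards;
        # ordering survives, only the embedded timestamp gets stale.
        self.last = max(candidate, self.last + 1)
        return self.last

g = SeqGen()
a = g.next(1_000)
b = g.next(999)     # clock stepped backwards...
c = g.next(1_001)   # ...yet a < b < c still holds
```

This is the property the comment describes: skew doesn't break ordering, it only makes the embedded timestamps annoying to correlate after the fact.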
And of course any leasing system uses bounded clock skew rate between hosts, but NOT bounded absolute skew.
When I did my distributed DB course way back when I was at uni it was all Lamport or vector clocks...
I'm just trying to estimate the likelihood that anyone was hacked through this vulnerability. Even with stack protection the vulnerability could be used to crash ntp, so upgrading is a very good idea still.
Also, in many cases applications are distributed, or have Makefiles that build, without stack canaries enabled, and sometimes even without DEP and ASLR.
Bypassing a stack canary generally requires a separate memory-disclosure vulnerability, if I remember right. (Or an interesting local variable in the same function as the buffer, like a function pointer, but I think compilers are now smart enough to arrange buffers to be the last thing on the stack before the canary.) It wouldn't be surprising if one existed, but I don't think any have been publicly disclosed yet.
I guess what I'm asking is: 1) Are there any public exploits that get remote code execution against NTP on modern systems (Should we expect that the average MITM is popping shells with this already on most systems?), and 2) do all modern distributions (like say Debian) containing NTP have the standard default mitigations (DEP, stack canaries, etc) enabled on NTP, or why not?
Better just replace it with tlsdate.
Why would anyone need encrypted time sync? I do not understand the privacy implications; UTC is not a secret.
For example: http://www.nist.gov/pml/div688/grp40/auth-ntp.cfm or https://www.nrc-cnrc.gc.ca/eng/solutions/advisory/calibratio...
My Sure GPS puck cost $40. It works with the antenna sitting on the window ledge inside my house.
I do not understand the need for confidentiality.
FYI, Stephen Röttger is also a co-author of the two IETF proposals for the successor to autokey: Network Time Security and Crypto Message Syntax for NTS.
Bitcoin relies on more than NTP, so for it this is not a huge vulnerability, but NTP's vulnerability is a headache for many security developers.
Or can you circumvent certificate revocations this way?
You could also cause expired certificates to be accepted, which limits the utility of short-lived certs as a means of protecting against key compromise.
And as hannob said, you can circumvent HSTS by setting the time in the future, causing all HSTS entries to be expired.
NTP does not sync time. NTP measures time drift across groups of servers, and sprinkles in known time. I'm not saying authentication is useless, you should turn it on for your known time sources, but it's not as simple as you make it out to be.
Unless you use SNTP on your servers. Don't do that. Ever.
Being able to control the time could theoretically let you control any PRNGs that rely on it.
There is no problem with using low-resolution time signatures as a cryptographic seed. Using time as an entropy source is only a problem if you sample at a lower resolution than your clock's error rate.
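That trade-off is just arithmetic: the search space an attacker faces is their uncertainty window about your clock divided by your sampling resolution (the numbers below are illustrative):

```python
import math

def time_entropy_bits(attacker_window_s, sample_resolution_s):
    # Number of distinct timestamps the attacker must try, in bits.
    return math.log2(attacker_window_s / sample_resolution_s)

# Suppose the attacker can pin your clock to within +/- 1 second (a 2 s window):
coarse = time_entropy_bits(2.0, 1.0)    # seeded with whole seconds: 1 bit, i.e. nothing
fine = time_entropy_bits(2.0, 1e-6)     # seeded with microseconds: ~21 bits
```

Either way, a bare time sample is far too little entropy for key material on its own, which is the point under debate here.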
Ummm, no. NTP normally runs on machines that have a local clock/battery, but which need an established network clock anyway.
> Critical initialization code should probably compare uptime with current epoch time if it needs a random seed for a long-use token.
Using time as a random seed is probably a mistake in the first place. You could perhaps try to add entropy from a clock, but you'd want another source of entropy. Generally crypto code needs network clocks for other things (think of Kerberos ticket expiration).
Are you familiar with something other than NTP as a time source for devices without CMOS? I have a project that desperately needs crypto without a clock.
Yes, there is a bit of a bootstrapping problem, but you can address that with a bootstrapped handshake that sets a clock baseline.
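One minimal sketch of that baseline idea (all names here are hypothetical, not a real protocol): persist the newest timestamp the device has ever trusted, and never let the clock move behind it:

```python
class ClockBaseline:
    """Hypothetical clock bootstrap for a device with no battery-backed RTC.

    The device persists the newest timestamp it has ever trusted
    (e.g. to flash). After a reboot the clock may be wrong, but it
    can never be rolled back behind the baseline, which defeats
    simple set-the-clock-into-the-past attacks.
    """
    def __init__(self, build_time, persisted_time=0):
        # The firmware build time is a floor: real time is never before it.
        self.baseline = max(build_time, persisted_time)

    def accept_network_time(self, t):
        if t < self.baseline:
            return False      # reject: earlier than anything we've trusted
        self.baseline = t     # would be persisted as the new floor
        return True

clk = ClockBaseline(build_time=1_400_000_000, persisted_time=1_418_000_000)
```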
Alternatively, you could just hardwire a radio receiver (like say... a GPS receiver).
Plus ça change, plus c'est la même chose... (the more things change, the more they stay the same)
Geez, in a time sync program? Nothing is safe anymore.
It doesn't just "query a remote server and steer your clock", and "steer your clock" is a lot more complicated than it sounds when you're trying to achieve such precision.
Most of that source code is there for different and wrong reasons.
It doesn't take much source code to steer your clock precisely; we are talking less than 100 lines.
What takes up space is all the "extras" and "nice to haves".
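To make the "less than 100 lines" claim concrete, here is a toy illustration (this is the generic feedback-loop idea, not ntimed's actual algorithm): a clock with a frequency error is disciplined by a small PI loop that trims its rate, so the offset is steered toward zero instead of being stepped:

```python
def discipline(freq_error_ppm=50.0, steps=200, kp=0.1, ki=0.01):
    """Toy PI clock-discipline loop (illustrative only).

    The local clock drifts freq_error_ppm away from true time; once
    per simulated second we 'measure' the offset (perfectly, in this
    toy) and adjust a frequency correction with proportional and
    integral terms.
    """
    rate = freq_error_ppm * 1e-6   # uncorrected drift, seconds per second
    offset = 0.0                   # local clock minus true time
    integral = 0.0
    correction = 0.0               # applied frequency correction
    history = []
    for _ in range(steps):
        offset += rate - correction            # one second of residual drift
        integral += offset
        correction = kp * offset + ki * integral
        history.append(offset)
    return history

h = discipline()
# The offset peaks early, then the loop steers it back toward zero.
```

Real daemons spend their bulk elsewhere: filtering noisy network measurements, peer selection, and defending against bad servers.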
Why don't you list some of those "extras" and "nice to haves"? Some of them are probably absolutely critical to my services. All of them are probably critical to somebody's.
I don't think you're understanding that I'm not in any way disputing that we need something orders of magnitude smaller for the most common case of a simple desktop or server keeping an approximately accurate clock.
If your response to me talks about just steering a clock, you are not responding to me.
I've already written about most of this in my little "time-blog" related to this project: http://phk.freebsd.dk/time/index.html
Feel free to ask any questions not already answered there.
Right now ntimed is less than 4KLOC and while it isn't complete yet, it does contain precision timekeeping, a data collector to record simulation input, a simulator to chew on those tracefiles and the real-time clock-steering code.
It really isn't rocket science.
OpenBSD's NTP implementation is a similar size, but they sort of threw their hands up on smart clock-steering, opting instead for "KISS" from a security point of view.
The main reason I didn't start from OpenBSD's NTP implementation is that it is not aimed at being part of a larger family of time-keeping programs, like I intend to deliver, so it wouldn't save me any time in the end.
(I'll be proud if OpenBSD adopts my ntimed, but I'll fully understand and accept if they do not.)
Adding support for being an NTP server is mostly a matter of writing code to defend against packet storms (see: http://en.wikipedia.org/wiki/NTP_server_misuse_and_abuse#D-L...)
Adding refclocks should never happen in the "main" process, but rather they should be separate processes, suitably sandboxed and interfaced via a good simple API. (I have radical plans for this, which will allow refclocks to be implemented in PL/1, REXX or INTERCAL if that's how you roll :-)
But right here, right now, my focus is to get the vast majority of all machines away from running NTPD, purely as a matter of security and robustness, while improving their timekeeping at the same time.
If ntimed-client ever grows above 10KLOC, I've failed that goal.
First preview release of the source code will happen this weekend.
The only point I made was that 100,000 lines is not staggering for all of the functionality ntpd encompasses, which goes way beyond just "query and steer".
I didn't say a typical NTP client should be 100,000 lines. 10,000 lines sounds far, far too large to me, in fact, for the 80-90% case.
Second, the 100,000 is my guesstimate of what is necessary for "core functionality", and I think that is still a pretty staggering size for what that "core functionality" is.
There is a lot of ancient infrastructure which complicates otherwise very simple tasks; some design choices, perfectly valid in 1980, should have been revisited no later than 2000; and there is far too much duplicated functionality, because people didn't know about or couldn't get the other copy working, etc.
My initial plan was to do a serious rebuild of the existing code, in deference to Prof. Dave Mills and NTPD's long legacy (probably one of the longest-running open source projects!), but in the end I had to admit that such a strategy would be inefficient on any relevant metric.
I think I could write a bare-bones NTP client in well under 1000 lines, but it wouldn't be very user-friendly, net-friendly or clock-friendly.
Usability in particular soaks up space, but we're not in 1980 any more, so there is no excuse for stuff like this:
#define DEFPROPDELAY 0x00624dd3 /* 0.0015 seconds, 1.5 ms */
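(For the curious: that magic number is the fractional part of a fixed-point timestamp, a 32-bit numerator over 2^32, which is exactly the kind of thing a reader should not have to decode by hand. A quick check:)

```python
# Decode the magic constant: a binary fraction of a second,
# i.e. a 32-bit numerator over 2**32.
DEFPROPDELAY = 0x00624dd3
seconds = DEFPROPDELAY / 2**32   # ~0.0015 s, matching the comment
```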
critter phk> ./ntimed-client -p pll_std_capture_time
Capture time before stiffning PLL.
After this many seconds, the PLL will start to stiffen the P
and I terms to gain noise immunity. Decreasing risks that
initial frequency capture is not finished, which will increase
the offset-excursion. Increasing just delays this stiffning.
Failure: Stopping after parameter query.
... and that will still be less than 3% of the current NTPD.
And he certainly knows more about the ntpd family than I do.
> aimed at being part of a larger family of time-keeping programs, like I intend to deliver
I am guessing that ntimed-client is intended for the 99.99% use case for most offices, homes and internet providers (which is client-server, not peer-to-peer), and that another program in the family will provide for your use case, which sounds very advanced to me.