
Consul 1.0 Released - schmichael
https://www.hashicorp.com/blog/hashicorp-consul-1-0/
======
tombert
I use Consul at work, and I do like it, but I have to ask a kind of obvious
question; why would I use Consul over something that's more industry-standard
like Zookeeper?

I'm not asking this passive aggressively, I mostly want to know what features
it adds over a vanilla Zookeeper setup.

~~~
jake-low
I came across this article [0] a couple of weeks ago and found it informative.
Basically Consul has service discovery built in, while Zookeeper et al. expect
you to build service discovery on top of the primitives they provide (assuming
you need it).

[0]:
[https://www.consul.io/intro/vs/zookeeper.html](https://www.consul.io/intro/vs/zookeeper.html)

~~~
takeda
This article is biased, because it was written by the Consul authors.

I'm a big fan of Zookeeper, and I agree with the parent that it is not only the
industry standard, but also very robust and battle-tested.

Having said that, this is not what Zookeeper was meant for. ZK is a
coordination service: if you have a distributed service that you need to
coordinate and you don't want any single point of failure, ZK is the right
choice.

ZK can be used for service discovery, but that's not its proper use; you could
equally set up a single database node or a MongoDB cluster to accomplish
essentially the same thing.

So a specialized service is better, but I also don't like how Consul solves
service discovery. If you think about it, SD doesn't require strong
consistency; it is not a big deal if all nodes won't learn about each other at
the same time, arguably that's actually preferred.

By removing strong consistency (the only reason ZK is even compared to
Consul) we could have a better SD that's also more scalable.

~~~
scarface74
_If you think about it, SD doesn't require strong consistency, it is not a
big deal if all nodes won't learn about each other at the same time, arguably
that's actually preferred._

Can you go into more detail about this? Each computer has its own Consul agent
that monitors the health of services running on it. You also get configuration
from the agent on the individual computer. All of the agents communicate peer
to peer and with the server don’t they? Isn’t that eventually consistent? I’m
just starting to use Consul for configuration and haven’t started working on
the service discovery part yet.

~~~
takeda
Yes, but that's only to detect whether a node is up or down; the health checks
are still strongly consistent and go through the servers.

Because of this design, they don't get the benefits of either approach.

Side note: the KV is actually strongly consistent, which is what is desired;
the thing is that since the KV comes with Consul, it encourages people to use
it.

One just needs to keep in mind that Raft and Paxos expect all nodes to be
updated on every change (e.g. if you have a 5-node server cluster, all 5 nodes
need to "sign off on it" before it is accepted). This is counter-intuitive,
since adding nodes to the cluster actually makes it slower (the goal is
correctness and resilience, not performance). Basically you just need to
remember not to use the KV for write-heavy operations. Serving configuration
files is generally fine.

~~~
cube2222
In both Paxos and Raft you only need a quorum to sign off on it. In the case
of Raft/Consul it's a majority, which means you need 3 of 5 nodes to sign off
on it.

Also, key value store _reads_ are usually done locally, from the agent, which
goes over gossip and is eventually consistent.

Health checks are _also_ eventually consistent. They are done by the local
agent, and through gossip are sent to the masters group.

~~~
takeda
I'm aware that Paxos and Raft require an odd number of nodes. My point was
that the more nodes you have, the slower the cluster is, because increasing
the number of nodes doesn't increase performance, only resilience. With 3
nodes you can afford to lose at most 1 node, with 5 -> 2, with 7 -> 3, etc.
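The tolerance numbers above follow directly from majority quorum: with n servers, quorum is floor(n/2) + 1, and the cluster survives losing n minus quorum servers. A quick sketch:

```python
def quorum(n):
    # Majority quorum: more than half of the servers must agree.
    return n // 2 + 1

def tolerated_failures(n):
    # Servers you can lose while a majority remains reachable.
    return n - quorum(n)

for n in (3, 5, 7):
    print(f"{n} servers: quorum={quorum(n)}, can lose {tolerated_failures(n)}")
# 3 servers: quorum=2, can lose 1
# 5 servers: quorum=3, can lose 2
# 7 servers: quorum=4, can lose 3
```

This is also why even numbers of servers don't help: 4 servers need a quorum of 3, so they tolerate the same single failure as 3 servers do.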

> Also, key value store reads are usually done locally, from the agent, which
> goes over gossip and is eventually consistent.

That's not what the documentation says. The KV uses Raft; you can relax some
guarantees for reads by having clients use consistency=stale, but you're
still communicating with the servers, except that in stale mode you only talk
to one instead of all of them.
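For anyone following along, Consul's HTTP API exposes these read-consistency modes as bare query flags on KV reads. A minimal sketch of the URLs involved, assuming the default local agent address (the key name is made up):

```python
AGENT = "http://127.0.0.1:8500"  # default local agent address (assumption)

def kv_read_url(key, consistency="default"):
    # Consul's three read-consistency modes, as query flags:
    #   default     - the leader serves the read
    #   ?consistent - the leader additionally verifies it still holds quorum
    #   ?stale      - any server may answer, no leader round-trip
    url = f"{AGENT}/v1/kv/{key}"
    if consistency in ("consistent", "stale"):
        url += f"?{consistency}"
    return url

print(kv_read_url("config/app", "stale"))
# http://127.0.0.1:8500/v1/kv/config/app?stale
```

Even with `?stale`, the request still goes through the local agent, which forwards it to a server; the flag just allows any server (not only the leader) to answer.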

> Health checks are also eventually consistent. They are done by the local
> agent, and through gossip are sent to the masters group.

That's also not what the documentation says. Gossip is used for
adding/removing nodes, and the overall health of a node (whether the host is
reachable) is also determined through gossip, but the actual health checks
(whether a service is running) go through the consensus protocol.

I still want to emphasize that using a consensus protocol for SD is not a
good idea. Let's say you're running in AWS and follow the best practice of
spreading your servers over multiple AZs, and that you use us-east-1, which
has 6 AZs.

AWS starts having network issues (like a few months ago) and 2 AZs can't
communicate with the rest. The machines are still running, they just can't
communicate. It just so happens that those 2 AZs run 2 of your 3 Consul
servers. Even though all the machines are healthy and running, your Consul
cluster is down, even for the remaining 4 AZs that don't have any problems.

~~~
scarface74
_That's not what the documentation says. The KV is using raft, you can relax
some guarantees for reading by having clients use consistency=stale, but
you're still communicating with the server, except in stale you only
communicate with one instead of all of them._

How did you come to the conclusion that you're communicating with the server?
When I'm doing any type of communication with Consul via the API, I'm always
talking to 127.0.0.1/....

~~~
hamandcheese
Local agents forward to your central consul servers.

------
aprdm
Big thanks to the Consul team! We have been using it for a year with basically
0 maintenance.

* Decent REST API

* K/V store

* Service discovery

* Single Go Binary

* Geographical failover

* Load balancing

A real Swiss Army knife.

Haven't tried different things (e.g: ZooKeeper) but don't feel very compelled
to since everything just works.

------
Chyzwar
HCL - why invent another config/markup language? There's TOML, YAML, and a
dozen others.

~~~
zenlikethat
It's really quite a nice config language -- I much prefer HCL to any of the
alternatives. Has comments (unlike JSON), is properly nestable (unlike TOML),
is extremely human readable and not too verbose, has a built-in pretty
printer, etc.

I wasn't ever really a huge YAML fan. It's way too ambiguous and sensitive to
whitespace. One erroneous space is way more likely to mess up your YAML file
than your HCL file.

It's not perfect, but I actually would like to see MORE systems using HCL, not
fewer.
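For a concrete feel, here is a small made-up fragment in the style of a Consul service definition (all names and values are hypothetical), showing the comments and nesting HCL supports:

```hcl
# HCL allows comments and nested blocks; JSON allows neither comments
# nor trailing flexibility, and TOML nests awkwardly.
service {
  name = "web"
  port = 8080

  check {
    http     = "http://localhost:8080/health"
    interval = "10s"
  }
}
```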

~~~
willtim
HCL is not used in the REST API though. So we have to integrate our apps using
_both_ HCL and JSON. It's also schemaless, which means we have to guess at the
API using various "samples".

~~~
zenlikethat
Which REST API?

~~~
willtim
[https://www.consul.io/api/catalog.html](https://www.consul.io/api/catalog.html)

------
iofiiiiiiiii
How many people are actively working on it and maintaining it? From my
experiences with Vagrant and Packer, I get the feeling that there is not much
development effort (by Hashicorp employees) allocated to the free Hashicorp
products so I hesitate to even consider using Consul.

Vagrant and Packer feel like early alphas and while there is little
competition, I do not feel comfortable using them. Because of this bad
experience in the past, I have not even tried out Consul and Vault and am
unlikely to do so in the future.

~~~
jnsaff2
I don't know the answer to your question, but while Vagrant and Packer have
both been pretty bad experiences for me, Consul, Nomad, Vault and Terraform
are very different products and are all pretty actively worked on. I am using
them all at the moment and am very happy.

Consul seems like a very mature product and development has maybe slowed down,
but I think this is mostly because it is mature and looks like HashiCorp does
not feel the need for feature bloat.

------
anubhavmishra
Consul is a dream for most Operations Engineers. At work we have used it in
production for almost 3 years with basically no maintenance except dropping a
new binary for the old one when we update the versions. We use it for service
discovery, feature flags, consul lock for deployments and use the KV as a
config store. It is also our backend for Vault and stores all the encrypted
data. Thank you HashiCorp and the Consul team for an amazing achievement.

------
erikb
We used it heavily, but we are switching away since it's not that stable
inside Kubernetes. Also, some of its features are unnecessary if you already
have k8s, which gives you a k/v store and service discovery out of the box.

I wasn't part of that discussion though, so I'm curious: does anybody have a
different point of view on Consul in k8s?

~~~
slackpad
Did you try 0.9.3 under k8s? We did some work specifically targeting k8s by
enabling pods with servers to get restarted with new IPs, even all of the
servers at once. That required Consul to be configured with -raft-protocol=3
to benefit in 0.9.3. That's the default now in 1.0, so both of those versions
should be much more stable under k8s.

I'd be curious if you ran into other kinds of issues on k8s and if you tried
0.9.3 with the required raft protocol version set before you gave up.

------
neduma
Changelog -
[https://github.com/hashicorp/consul/blob/v1.0.0/CHANGELOG.md](https://github.com/hashicorp/consul/blob/v1.0.0/CHANGELOG.md)

------
xatnys
This is only tangentially related but is anyone using Spring Cloud + Consul?
Is the integration of the two in a good spot now?

~~~
t1o5
We are a Java shop and use Consul (as a Docker container) for our service
discovery. Spring Cloud Consul plays well with Consul. The REST APIs are what
make Consul stand out. We have an in-house monitoring service that queries
the Consul APIs for service removals and healthy service counts, and sends an
alert if there are no more healthy services, so that our Docker setup can
spin one up just in time.
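For anyone curious, this kind of monitoring can lean on Consul's health endpoint, which filters to passing instances. A sketch (the service name, agent address, and sample response are assumptions, not from the original post):

```python
import json

AGENT = "http://127.0.0.1:8500"  # default local agent address (assumption)

def healthy_service_url(name):
    # /v1/health/service/<name>?passing returns only instances
    # whose health checks are all passing.
    return f"{AGENT}/v1/health/service/{name}?passing"

# A trimmed, hypothetical response body: one entry per healthy instance.
sample_response = '[{"Service": {"ID": "web-1", "Port": 8080}}]'

def healthy_count(body):
    return len(json.loads(body))

if healthy_count(sample_response) == 0:
    print("ALERT: no healthy instances")
```

An alerting loop would issue the GET against the local agent, count entries, and fire when the count drops to zero.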

Another feature that we use is dynamic reloading of KV configuration: all
your yml configurations live in Consul and can be updated and propagated to
all of your services on the fly.

Now the problem we face is that the Spring Cloud Consul project is maintained
by very few people, so if there is an issue, we have to work around it rather
than wait for a fix.

------
oweiler
For Service Discovery, why should I use Consul over Eureka?

~~~
slackpad
Consul's documentation has a comparison at
[https://www.consul.io/intro/vs/eureka.html](https://www.consul.io/intro/vs/eureka.html).

------
mrmrcoleman
Congrats.

