For orchestration, offhand the most active projects seem to be Kubernetes, Swarm, Deis and Mesos. Kubernetes is built primarily by Google, Swarm by Docker and Deis by Engine Yard, with each team having experience in different areas (orchestration, containers and full-tier solutions, respectively).
Kubernetes, Swarm and Mesos handle the orchestration portions only, while Deis is a more feature-complete solution that handles the CI and registry portions as well.
Delivering application updates through these solutions with zero downtime is also still very early. Kubernetes has a rolling-update mechanism, but it can still (occasionally) result in downtime if not set up correctly. Deis handles updates via git push and ensures that new containers are in place before the old ones are taken out of service. As for Swarm, my personal knowledge of its rolling updates is limited, so I'll leave that for someone else to fill in.
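As a concrete sketch of the Kubernetes side, a rolling update is driven by a single kubectl invocation; the controller and image names below are made up for illustration, and the command is echoed rather than executed since it requires a live cluster:

```shell
# Rolling-update sketch: kubectl replaces pods one at a time, so pods need
# a correct readiness check for this to actually be zero-downtime.
# Controller and image names are placeholders.
CONTROLLER="frontend"
NEW_IMAGE="quay.io/example/frontend:v2"
UPDATE_CMD="kubectl rolling-update ${CONTROLLER} --image=${NEW_IMAGE}"
echo "${UPDATE_CMD}"  # echoed, not run: needs a live cluster
```

The "if not set up correctly" caveat above is mostly about that readiness check: without it, traffic can be routed to a pod that isn't ready to serve yet.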
For building and delivering images there are, likewise, multiple solutions. The common approach is to use a Docker-compatible registry such as Quay (disclaimer: I'm a lead engineer on the Quay team) or the Docker Hub. In addition to supporting simple image pushes, both registries also support building images in response to pushes on GitHub or Bitbucket, so they can be used as an integrated CI of sorts. Both services are paid for private repositories. Docker also has an open source registry which can be run on your own hardware or at a cloud provider.
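The push workflow against any of these registries is just tag-and-push; the org and repo names here are placeholders, and the docker commands are echoed rather than run since they need a daemon and a registry account:

```shell
# Build-and-push sketch for a Docker-compatible registry.
# Image name (registry/org/repo:tag) is a placeholder.
IMAGE="quay.io/example-org/myapp:v1.2.3"
BUILD_CMD="docker build -t ${IMAGE} ."
PUSH_CMD="docker push ${IMAGE}"
echo "${BUILD_CMD}"
echo "${PUSH_CMD}"
```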
Registries are secured by running under HTTPS at all times (unless explicitly overridden in Docker via an env flag) and by requiring user credentials for pushing and (if necessary) pulling images. Registries typically offer organization and team support as well, to allow for finer-grained permissions. Finally, some registries (such as Quay) offer robot credentials or named tokens for pulls that occur on production machines, as an alternative to using a password.
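On a production machine, pulling with a robot credential looks just like a normal login with the robot's name and token; everything below is a placeholder (the `org+name` form follows Quay's convention), and the commands are echoed rather than executed:

```shell
# Robot-credential pull sketch. In practice the token comes from the
# registry's UI or a secrets store, never hard-coded.
ROBOT_USER="example-org+prod_puller"
LOGIN_CMD="docker login -u ${ROBOT_USER} quay.io"
PULL_CMD="docker pull quay.io/example-org/myapp:latest"
echo "${LOGIN_CMD}"
echo "${PULL_CMD}"
```

The advantage is that the robot can be scoped to read-only access and revoked without touching any human's password.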
In terms of how servers know when updates are available, it all depends on which orchestration system is being used. For Kubernetes, we at CoreOS have been experimenting with a small service called krud, which reacts to a Quay (or Docker Hub) image-push webhook and automatically calls Kubernetes to perform a rolling update. Other orchestration systems have their own means of pushing or pulling the notification that the image to deploy has changed.
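In spirit, a krud-style hook can be sketched in a few lines: take the repository name from the image-push webhook, map it to a controller, and kick off a rolling update. The payload field and the naive name mapping below are my own assumptions for illustration, not krud's actual interface:

```shell
# Webhook-to-rolling-update sketch. PUSHED_REPO stands in for the
# repository field of the registry's webhook payload.
PUSHED_REPO="quay.io/example-org/frontend"
CONTROLLER="${PUSHED_REPO##*/}"  # naive mapping: repo name == controller name
ROLL_CMD="kubectl rolling-update ${CONTROLLER} --image=${PUSHED_REPO}:latest"
echo "${ROLL_CMD}"
```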
Hope this information helps! (and if I forgot anything, I apologize)
It's hard to know whether to wait for Docker to provide a solution or to use something that already has momentum. Take networking, for example: solutions have been bandied about for the last year or so, and only now do we have something that's production-ready. Do I rip out what I already have for something that's Docker-native, or do I continue with the community-based solution?
Storage (data locality) follows a similar path. Kubernetes provides a way to make network-based storage devices available to your containers. But now, with the announcement of Docker v1.9, do I go with its native solution or with something that has been around for ~6 months longer?
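For context on the Kubernetes approach mentioned above, network storage is exposed to containers as a volume in the pod spec. A minimal sketch with an NFS volume follows; the server, export path, and names are placeholders:

```yaml
# Pod mounting network storage via a Kubernetes NFS volume (sketch).
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    nfs:
      server: nfs.example.internal   # placeholder NFS server
      path: /exports/web             # placeholder export path
```

Because the volume lives on the network rather than on any one host, the pod can be rescheduled to another node without losing its data.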
I've been working with these technologies for the past year, and it has not been easy building something that is stable with a reasonable amount of future-proofing baked in.
Here are a couple of guides that walk you through your first Docker cloud deployment:
This gives you a private build and registry service, secured in your own VPC and accessible only through authenticated API calls.
The software that sets this all up is open source and free, but you do pay for your AWS usage (EC2, ELB and S3).
Servers know how to fetch the new version by issuing one `release` command that triggers a zero-downtime rollout on the EC2 Container Service (ECS).
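Under the hood, a zero-downtime rollout on ECS comes down to registering a new task definition revision and pointing the service at it; ECS then starts new tasks and drains the old ones behind the load balancer. The cluster, service, and revision names below are placeholders, and the command is echoed rather than run since it needs AWS credentials:

```shell
# ECS service-update sketch: the deployment machinery swaps task revisions
# while the ELB keeps routing to healthy tasks.
ECS_CMD="aws ecs update-service --cluster example-cluster --service myapp --task-definition myapp:42"
echo "${ECS_CMD}"  # echoed, not run: needs AWS credentials
```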