Usually you can easily transfer your local Docker images by calling "docker save", copying the tarball with scp, then running "docker load" on the remote host. The downside is that you transfer every layer of the entire image every time (possibly gigabytes).
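As a sketch, the save/scp/load approach looks roughly like this (the image name and remote host are placeholders; adjust to your setup):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative values -- substitute your own image and host.
IMAGE="myapp:latest"
REMOTE="user@example.com"

transfer_image() {
  # Serialize the image -- every layer, every time -- to a compressed tarball,
  # stream it over SSH, and load it on the remote host in one pipeline.
  docker save "$IMAGE" | gzip | ssh "$REMOTE" 'gunzip | docker load'
}
```

Calling transfer_image re-sends the whole image even if only one layer changed, which is exactly the inefficiency described above.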
An alternative is to set up your own private registry or use a public one like Docker Hub. This can be undesirable or cumbersome for a number of reasons, especially for code you prefer to keep private.
So this tool essentially:

1. starts a private registry on your local host, bound only to localhost (no outside access),
2. establishes an SSH tunnel from the remote host back to that private registry,
3. pushes only the layers which don't yet exist on the remote host, and
4. cleans up after itself (shutting down the registry, closing the SSH tunnel, etc.).
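The steps above can be sketched manually as follows. This is an illustrative approximation, not the tool's actual implementation; the port, image name, and host are assumptions:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative values -- substitute your own image and host.
IMAGE="myapp:latest"
REMOTE="user@example.com"

push_via_tunnel() {
  # 1. Start a throwaway registry bound to localhost only (no outside access).
  docker run -d --name tmp-registry -p 127.0.0.1:5000:5000 registry:2

  # 2. Push to the local registry; only layers it does not already
  #    hold are uploaded.
  docker tag "$IMAGE" "localhost:5000/$IMAGE"
  docker push "localhost:5000/$IMAGE"

  # 3. Reverse-tunnel the registry port to the remote host, and pull the
  #    image through the tunnel on the remote side.
  ssh -R 5000:localhost:5000 "$REMOTE" \
    "docker pull localhost:5000/$IMAGE && docker tag localhost:5000/$IMAGE $IMAGE"

  # 4. Clean up: the tunnel closed when ssh exited; remove the registry.
  docker rm -f tmp-registry
}
```

Note that the remote pulls from "localhost:5000", which Docker treats as an insecure-registry exception by default, so no TLS setup is needed for this sketch.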
Alternatively, you could copy the source and "docker build" the image on the remote host. The downside there is that you transport the entire build context rather than just the missing layers, which in most cases will be less efficient.
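For completeness, a remote build can be done by streaming the build context over SSH; "docker build -" accepts a tar archive of the context on stdin. The image tag and host here are placeholders:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Illustrative values -- substitute your own host and tag.
REMOTE="user@example.com"

build_remote() {
  # Ship the whole build context (not just changed layers) and build there.
  tar cz . | ssh "$REMOTE" 'docker build -t myapp:latest -'
}
```

Every invocation re-uploads the full context, even for a one-line code change, which is why this is usually the less efficient option.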