* Software installation/configuration knowledge remains solely in existing Chef/Puppet/etc. code bases. Dockerfiles add yet another format in which software gets "installed" (or "packaged," if you prefer). Packer + Docker lets you use your existing expertise, CI process, etc. to create these containers.
* Common "image" configuration format: again, Dockerfiles are a Docker-specific way of building images. That is all well and good, but it is still very common to have to produce multiple types of images (AMIs, Docker images, VirtualBox images, etc.). In a world where Docker isn't used everywhere, it is a burden to maintain multiple methods of building images. Packer provides a single way to do it that is flexible across platforms. And even if an org decides to transition completely to Docker, Packer helps get them there. Perhaps they want to switch to Dockerfiles after that, but point #1 above still applies.
* Portability: Packer represents a low-risk way to adopt Docker containers. Dockerfiles are somewhat of an "all-in" approach: if you don't like Docker, or Docker isn't good for a specific use case (yet, or ever, it doesn't matter), then your Dockerfiles have to be translated over to another format. As I'm sure you know, big IT is all about minimizing risk when adopting new technologies (in fact, risk is a top reason cited for NOT adopting new technologies, and one we have to fight!). Packer lets you say: "yes, Docker is new, but Packer provides a pretty low-risk way to get into it. Let's first build vSphere images, like you're used to, and see how those transition to Docker containers. If you don't like it, we still built automation to build vSphere VMs!"
* Extensibility: Packer is very plugin-friendly. You can hook into almost anything, which allows some nice plugins to exist that augment the image-building process, whether the output is a container or not. If Dockerfiles don't support a command to do something, a Packer plugin can very easily do it for you. Maybe a certain feature doesn't make sense as a core feature of either Dockerfiles or Packer. Either way, it doesn't matter, because an org can just build a plugin for themselves and use it internally. No harm done.
* Process friendliness: in addition to the portability above, centralizing on Packer for image creation collapses N separate image-building processes into one. Docker has one process for building containers; Aminator has another. Every new process means a new special-snowflake CI handler to run it, new education for employees, and new maintenance. By using Packer, you can use the same CI runners/parsers/steps (Bamboo, Jenkins, etc.) to build any sort of image.
And to answer your question on "what does a Packer config file look like" here is a basic, but fairly typical config file:
```json
"scripts": ["base.sh", "nginx.sh"]
```
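For context, that `"scripts"` line lives inside a provisioner block. A minimal but complete template, sketched here with the Docker builder (the base image name is an assumption for illustration), looks roughly like:

```json
{
  "builders": [
    {
      "type": "docker",
      "image": "ubuntu:14.04",
      "commit": true
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "scripts": ["base.sh", "nginx.sh"]
    }
  ]
}
```

Swap the `builders` entry for `amazon-ebs`, `virtualbox-iso`, etc., and the same shell scripts produce an AMI or a VirtualBox image instead; that's the point above about one process for many image types.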
I hope that clears things up. Packer has been helping with the adoption of Docker for many people I've worked with! I think it's clear from my work on Vagrant and Packer (and some future stuff) that the one thing I try to avoid is lock-in of any sort. I focus on human process rather than technical specifics. You can argue that Packer itself is a lock-in format, but the point is that it's a single, consistent format across many platforms. Its agenda is to be as helpful as possible to as many projects as possible that need images, and not to discriminate in any way.
And to address the grandparent (I'll comment directly on that too): with regards to speed, we're working on what we're calling "snapshotting" functionality in Packer now. With it, Packer will snapshot containers at various points in the build process, just like `docker build` does. So when you run `packer build`, it'll start only from the point it needs to, rather than from scratch. A cool thing is that this feature will extend to all the other builders too, so if you're building a VirtualBox VM from an ISO, for example, it won't reinstall the OS if it doesn't have to. Cool stuff.
My sense was Docker was the future, but AMIs are the present. Perhaps that is wrong?