DNS data gets cached at many layers, and caching DNS servers often won't honor extremely short TTLs; they'll default to caching records for at least 4-24 hours. So even once your DynDNS info is updated, you can't count on the rest of the world seeing the change that quickly.
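One way to see this for yourself: a quick sketch using the dnspython library (assumed here; the hostname is a placeholder for your DynDNS name) to check what TTL your local resolver is actually reporting.

```python
# Sketch: inspect the TTL your resolver reports for a record.
# Assumes the dnspython package (pip install dnspython); the hostname
# below is a placeholder for your own DynDNS name.
import dns.resolver

answer = dns.resolver.resolve("myhost.example.com", "A")
for record in answer:
    print(record.address)

# answer.rrset.ttl is the remaining TTL as seen by your resolver;
# if it's much larger than the TTL you published, some cache along
# the way is ignoring your short TTL.
print("TTL seen by this resolver:", answer.rrset.ttl)
```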
A better solution would be to break down and actually host a machine someplace where you can count on it having a static IP. Use Apache, haproxy, pound, squid, or a proxy of your choice to manage the dynamic IPs of your EC2 machines. Connections come into this static machine and are dispatched to the appropriate EC2 machine.
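To make the dispatch idea concrete, here's a minimal sketch of the static-IP front end as a dumb TCP forwarder in Python. The backend address is a placeholder you'd update whenever the EC2 instance (and its IP) changes; in practice you'd run haproxy or pound rather than hand-rolling this.

```python
# Sketch: the static-IP machine accepts connections on a stable
# address and relays them to whatever EC2 instance is current.
import socket
import threading

BACKEND = ("203.0.113.12", 80)  # hypothetical current EC2 IP; update on change
LISTEN = ("0.0.0.0", 8080)      # real deployments would listen on 80/443

def pipe(src, dst):
    # Copy bytes one direction until the connection closes.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass  # the other direction closed first
    finally:
        src.close()
        dst.close()

def handle(client):
    # Open a connection to the current EC2 backend and relay both ways.
    backend = socket.create_connection(BACKEND)
    threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
    threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LISTEN)
server.listen(64)
while True:
    conn, _ = server.accept()
    handle(conn)
```

The point is that clients only ever see the static machine; when the EC2 instance is replaced, you update one address in one place instead of waiting for DNS caches to expire.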
There is really no reliable way to have a 24/7, "5 nines" sort of presence relying solely on EC2.
Another solution is to have them share an S3 key: 'client' nodes write a file like <s3>/private_dir/client.192.168.1.1, and a controller node recognises them and issues work.
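A rough sketch of that rendezvous pattern using boto3 (a modern S3 client, assumed here; the bucket name is a placeholder for whatever bucket both sides can access):

```python
# Sketch: clients announce themselves by writing an empty key named
# after their IP; the controller lists the prefix to discover them.
import boto3

s3 = boto3.client("s3")
BUCKET = "my-shared-bucket"     # hypothetical shared bucket
PREFIX = "private_dir/client."  # matches the <s3>/private_dir/client.<ip> scheme

def announce(my_ip):
    # Client side: write private_dir/client.<ip> so the controller sees us.
    s3.put_object(Bucket=BUCKET, Key=PREFIX + my_ip, Body=b"")

def discover():
    # Controller side: list announced clients and recover their IPs.
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=PREFIX)
    return [obj["Key"][len(PREFIX):] for obj in resp.get("Contents", [])]

announce("192.168.1.1")
print(discover())  # -> ['192.168.1.1', ...]; controller issues work to each
```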
True, but I think they want it this way for a reason. Amazon is really cherry-picking the best part of the colo business with EC2. They get to charge people a decent rate for a moderately powerful server with a public IP address, yet they really don't have to provide much beyond "best effort" availability. 99% uptime is easy; it's those nines after the decimal point that get costly.