Google has awesome data center technology, and my guess is they have at least as many servers as AWS (even if most of those servers are for internal use). If they are willing to invest for the long term, they can be a credible player.
Microsoft appears to be deeply committed to Azure. As in: we cannot lose, we will spend whatever it takes to make this fly. They will also leverage their somewhat captive enterprise customers with Azure/AD integration.
Not clear to me how the rest of the pack will fare.
Last year (2014) we announced petabyte-scale offsite filesystems that are price-competitive with S3/Glacier.
In fact, we have for many years solved a pain point for (some) customers who run their infrastructure on S3: "my infrastructure is on S3, and my backups are on ... S3 ?" ... and our support of s3tools in our environment makes that very simple.
 UNIX-based, runs on our ZFS platform with snapshots:

    ssh firstname.lastname@example.org s3cmd get s3://rsync/mscdex.exe
In terms of performance, it is pretty good for the price (free).
My only issue has been the downtime; it's actually been down more than my previous Linux host. I hope MS can figure out how to keep it online a bit more often :)
I will probably spin up an account and see what it's like for a cookie-cutter user. I'm sure the experience is vastly different for a power user.
The term 'cloud' covers a LOT of different technology-based offerings/markets, which is why it is going to be so big. Compute is inevitable.
If the cloud is so great, how come it's so much cheaper to rent a dedicated server at scale?
Plus even for a single server with a cloud provider you still get advantages like one-click snapshots, a monitoring dashboard, and so on.
For production/serious servers, looking at price alone is not the best idea; what about other factors like reliability (uptime), support, and so on?
If you know all you need is one server, dedicated is great. If you need flexibility, less so.
Scalability is largely a design exercise, not, as much as AWS sales engineers want your CTO/CFO to believe, an infrastructure exercise. At the point where infrastructure becomes an issue, you're building your own AWS.
I'm happy to admit that AWS might make some of this easier, but it's almost certainly going to be more expensive, and it's often at the cost of flexibility (lock-in, AWS-specific knowledge).
How are dedicated servers, or even colocated servers, possibly less flexible?
There are exceptions: S3 and Route53 stand out as, at the very least, being cost-competitive to a greater extent than other AWS/cloud offerings.
One of the things I feel AWS has succeeded at is putting control of infrastructure in the technology team's hands. In many places dedi or colo requires contract negotiation with the provider and involves some sort of purchasing dept. At some places I've worked, getting a dedi/colo could take weeks or months of paperwork across different teams. With AWS the tech team can spin up 100 servers with no outside involvement needed after the initial work of getting an AWS account with a $x-xxx k/mo limit.
It's hard to beat the operational flexibility AWS provides, but I can see a few scenarios where creating your own mini private cloud out of dedi/colo servers could be more cost effective.
You are looking in the wrong places. Dedicated server providers offer dedicated servers. What you want is a VPS. There are dozens of VPS providers with a multitude of products and billing by the minute or by the month and everything in between.
> In many places dedi or colo requires contract negotiation with the provider and involves some sort of purchasing dept. At some places I've worked, getting a dedi/colo could take weeks or months of paperwork across different teams.
This is merely a failure on the part of your employer.
> It's hard to beat the operational flexibility AWS provides, but I can see a few scenarios where creating your own mini private cloud out of dedi/colo servers could be more cost effective.
It's more like the other way around. To refute my point, please give some examples where AWS is cheaper AT SCALE.
AWS might make some things easier or more convenient, but in no way does it come cheaper at scale as their PR flak claims.
I'm late to the AWS party, myself, but have been working on a project recently that leverages some of this. It is an eye opener--when you have virtual servers that are controllable by API, you really open up new frontiers of designing applications.
I think about the health of my applications and design them to fail gracefully when failures happen. You can do this with hardware but then you've got to go to the DC or send someone there to fix it. It's not just an API call away from being fixed.
I define my server fleet as an AutoScalingGroup. If any machine fails, a replacement is brought online and begins taking traffic automatically - taking only a few minutes. No operator intervention needed.
As a person who once had to recover failed machines manually and individually, it's a beautiful feeling. Not to mention the enormous flexibility of being able to launch additional machines at any time, or automate the scale-up process.
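The self-healing loop an AutoScalingGroup gives you can be sketched in a few lines. This is a toy model for illustration (node ids and counts are made up; a real ASG runs this control loop server-side, with health checks and launch configurations):

```python
class Fleet:
    """Toy model of an auto-scaling group: keeps `desired` healthy
    nodes, replacing failed ones automatically."""

    def __init__(self, desired):
        self.desired = desired
        self.next_id = 0
        self.nodes = {self._launch() for _ in range(desired)}

    def _launch(self):
        # Stand-in for launching an instance; ids are hypothetical.
        self.next_id += 1
        return f"i-{self.next_id:04d}"

    def mark_failed(self, node):
        # A health check would do this for you.
        self.nodes.discard(node)

    def reconcile(self):
        # The control loop: launch replacements until the healthy
        # count matches the desired capacity.
        while len(self.nodes) < self.desired:
            self.nodes.add(self._launch())

fleet = Fleet(desired=3)
dead = next(iter(fleet.nodes))
fleet.mark_failed(dead)
fleet.reconcile()
print(len(fleet.nodes))  # back to 3, no operator involved
```

The key design point is that capacity is declared, not managed by hand: you state "I want 3", and the loop converges back to that after any failure.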
The reality is that AWS is likely not any less secure than legacy IT infrastructure. It also provides great primitives for writing more secure applications.
What does this mean for personal privacy when most of the services you use are backed by one platform?
So yes, much online infrastructure _does_ have AWS dependencies. Sometimes (by way of secondary services) in ways that the operators of the service itself may not be directly aware of.
Curious as to what that means or how it is measured.
What did Amazon run on in 2004? A warehouse full of E1xk's or had they started moving to x86 by then?
Are they adding a single cabinet of dense x86 servers each day (which I'm sure is as powerful as a datacenter full of Sun gear from 10 years ago)?
The future of the cloud is not AWS. It's not in Amazon's datacenter or some other company's data center. It's not even necessarily in a server.
The servers are going to mainly go away as we transition slowly from server-based networking to content-based networking.
That means that the fundamental protocols are completely unconcerned with what server they are running on or where.
The future is things like Named-Data Networking, Ethereum, distributed apps.
As a stepping stone we might see public clouds that allow you to deploy to ANY city anywhere in the world, enabled by distributed secure data storage and other technologies like Docker and OpenStack.
There is absolutely no reason everyone should run their applications on AWS.
We will also eventually move away from vendor-specific REST APIs to systems built on open semantic interface/data definitions.
If you're going to claim that's the future, you ought to understand why distributed content-addressable P2P networks like Chord (created by YC's own Robert T. Morris), Kademlia, Alpine, and JavaSpaces all failed, and P2P sharing networks like Napster, Gnutella, Audiogalaxy, and Kazaa were unable to break out of their illegal-music-sharing niche. And then explain why it's different this time. If anything, the forces that made distributed hash tables unworkable in 2001 are stronger now, as Ethernet bandwidth, file size, and disk space have increased much faster than consumer Internet bandwidth.
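For context on how those systems addressed content: the core idea behind Chord and Kademlia is consistent hashing onto a ring. A minimal sketch (node names are made up; real DHTs add finger tables, replication, and churn handling on top of this):

```python
import hashlib
from bisect import bisect_right

def h(key: str) -> int:
    """Map a key or node id onto a fixed hash ring."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class Ring:
    """Toy DHT-style ring: content is addressed by hash, and the
    node whose position follows the key's hash position owns it."""

    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)

    def owner(self, key: str) -> str:
        positions = [p for p, _ in self.points]
        # First node clockwise from the key's position (wraps around).
        i = bisect_right(positions, h(key)) % len(self.points)
        return self.points[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
print(ring.owner("some-content-hash"))
```

The attraction was that any peer can locate any piece of content in O(log n) hops with no central server; the objections above (NAT, churn, asymmetric consumer bandwidth) are about everything this sketch leaves out.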
For example, I have no idea why the future you are describing is going to happen. There are many other alternatives.
What if the future is all about cloud computing provided by gargantuan companies? What if only a few hosting companies remain and personally owning a computing device becomes economically inefficient?
The next step is formalizing interactions for code that runs on the same node (central or edge), but originates from competitive businesses.
There are a huge number of use cases where AWS / data center / servers are the best fit. That will remain true for the foreseeable future.
Yeah, so let's host everything in the world on the servers of a single company. What could possibly go wrong?
There was an AWS re:Invent 2014 presentation about NASDAQ OMX. OMX is the holding company of NASDAQ that develops and grows the technology that runs stock exchanges in many countries.
OMX is using Redshift to build a cloud solution (FinQloud - Regulatory Records Retention) for their 20+ exchange customers (worldwide stock exchanges). To protect their data, they use HSMs (actually, a cluster of HSMs) to encrypt the data. NASDAQ OMX has a direct, leased connection to the AWS Data Centers. The data is stored on S3 in encrypted form and only decrypted at the time of Redshift building the reports by getting the decryption key from the HSM (over the leased connection). They have multiple alarms and monitoring around Redshift access in their offshore ops center (e.g. the postgres audit table).
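The pattern described above is essentially envelope encryption: a per-record data key encrypts the data, and the HSM-held master key only ever encrypts the data key. A toy sketch of the flow, assuming nothing beyond the stdlib (the XOR-keystream "cipher" and the in-memory "HSM" key are stand-ins for illustration; real systems use AES-GCM and hardware key storage):

```python
import secrets
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR with a SHAKE-256 keystream.
    # Illustrative only -- not real cryptography.
    stream = hashlib.shake_256(key).digest(len(data))
    return bytes(a ^ b for a, b in zip(data, stream))

# Stand-in for the HSM: holds the master key, which never leaves it.
master_key = secrets.token_bytes(32)

# Envelope encryption: a fresh data key encrypts the record; only
# the wrapped (encrypted) data key and ciphertext are stored on S3.
data_key = secrets.token_bytes(32)
record = b"example trade record"
ciphertext = keystream_xor(data_key, record)
wrapped_key = keystream_xor(master_key, data_key)

# At report time, Redshift would unwrap the key via the HSM over the
# leased connection, then decrypt the record.
unwrapped = keystream_xor(master_key, wrapped_key)
print(keystream_xor(unwrapped, ciphertext) == record)  # True
```

The point of the design is that S3 only ever sees ciphertext and wrapped keys; compromising the storage layer alone yields nothing without the HSM.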
True data privacy and protection is near impossible, but Amazon makes it easier to achieve high levels of data privacy and security.
If you can find an example of NASDAQ OMX using EC2 machines to host the data, maybe I'll believe you. But for now, your post is pretty bland hyperbole.
I'm sure the cloud is secure enough for a lot of businesses. But I think the links you posted reek of "marketing" rather than offering a practical example.
I'm working on this: http://utter.io/
What an audacious call by Amazon