Wow, I wish I had found HN when it launched. I can't begin to describe how much it has shaped my life, my outlook, and my career in the few years I've been (mostly) lurking here.
It always worries me when I install a well-known or large package from npm and it ends up downloading dozens of dependencies maintained by disparate and unaccountable GitHub users.
Adding more context; sorry for leaving it out in the first place. I mostly work in the big-data space. Google Cloud's big-data stack is built for streaming / storing / processing / querying / machine learning on internet-scale data (Pub/Sub, Bigtable, Dataflow, BigQuery, Cloud ML). AWS scales to terabyte-level loads, but beyond that it gets hard and very costly. Google's services autoscale smoothly to petabyte levels and millions of users (for example, BigQuery and the load balancers). On AWS, the same thing requires pre-warming and allocating capacity beforehand, and that costs a lot of money. At companies working at that scale, the usual saying is "to keep scaling, keep throwing cash at AWS". This is not a problem with Google.
Quoting from the article: "This accomplishment would not have been possible for our three-person team of engineers to achieve without the tools and abstractions provided by Google and App Engine."
Taking the use case from the article: they release the puzzle at 10 and need the infrastructure ready to serve all the requests. On AWS, you need to pre-warm load balancers, increase your DynamoDB quota, scale up instances so they can withstand the wall of traffic, ... and then scale back down after the traffic passes. All of this takes time, people, and money. Add the other things the author mentioned (monitoring/alerting, local development, combined access and app logging, ...) and the focus shifts from developing great apps to building out the infrastructure for them.
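To make the DynamoDB part of that pre-warming dance concrete, here is a hedged sketch (not from the article; the table name and traffic numbers are hypothetical). The capacity-unit math is the documented DynamoDB model: one write unit covers a 1 KB write per second, one read unit a strongly consistent 4 KB read per second.

```python
# Hedged sketch: estimating and provisioning DynamoDB capacity ahead of
# a known traffic spike. Table name and traffic numbers are hypothetical.
import math

def required_capacity(requests_per_sec, item_size_kb, write=False):
    """Capacity units for a steady request rate: one write unit covers a
    1 KB write/sec; one read unit covers a strongly consistent
    4 KB read/sec."""
    unit_kb = 1 if write else 4
    return requests_per_sec * math.ceil(item_size_kb / unit_kb)

reads = required_capacity(5000, 6)              # 5,000 reads/sec of 6 KB items
writes = required_capacity(500, 2, write=True)  # 500 writes/sec of 2 KB items

# The actual pre-warm call would look roughly like this with boto3
# (commented out so the sketch runs without AWS credentials):
# import boto3
# boto3.client("dynamodb").update_table(
#     TableName="puzzle-table",  # hypothetical table
#     ProvisionedThroughput={
#         "ReadCapacityUnits": reads,
#         "WriteCapacityUnits": writes,
#     },
# )
```

And you have to remember to dial it back down afterwards, which is exactly the "time, people, and money" overhead being described.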
Currently, I am working on projects that use both Amazon and Google clouds.
In my experience, AWS requires more planning and administration to handle the full workflow: uploading data; organisation in S3; partitioning data sets; compute loads (EMR-bound vs. Redshift-bound vs. Spark SQL-bound); establishing and monitoring quotas; cost attribution to different internal profit centres; etc.
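As a minimal sketch of what the "organisation in S3 / partitioning" step means in practice: Hive-style date partitions so EMR, Spark, and friends can prune data. The bucket, table, and cost-centre names here are invented for illustration.

```python
# Hedged sketch of Hive-style S3 partition layout; bucket, table, and
# cost-centre names are hypothetical.
from datetime import date

def partition_key(bucket, table, day, cost_centre):
    """Build a partitioned S3 prefix; the cc= tag mirrors the
    cost-attribution concern mentioned above."""
    return (f"s3://{bucket}/{table}/"
            f"dt={day.isoformat()}/cc={cost_centre}/")

print(partition_key("analytics-data", "events", date(2017, 6, 30), "retail"))
# s3://analytics-data/events/dt=2017-06-30/cc=retail/
```

None of this is hard, but it's a decision you have to make, enforce, and monitor yourself on AWS.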
GCP is - in a few small ways - less fussy to deal with.
Also, the GCP console, while not great itself, is much easier to use and operate than the AWS console.
Could you please post the URL for the resource and the number of hits it receives? I'm interested in high-load websites, and I have a hard time picturing how this could lead to petabytes.
The impression I'm getting is not that GCP scales better, but that it scales with less fuss; the anecdotes here all suggest that with AWS, once you hit any meaningful load (i.e. gigabytes), you need to start fiddling with stuff.
I don't know if this is actually true, I've never done any serious work in AWS.
Hold on, please do not say Google Cloud scales well. Yes, they have services that make a ton of claims, but unlike AWS, things don't work as promised, which is magnified by the fact that their support is far worse.
Additionally, BigQuery is far more expensive than Athena, and you have to pay a huge premium on storage.
The biggest difference is that Amazon provides you infrastructure, whereas Google provides you a platform. While App Engine is certainly easier to use than Elastic Beanstalk, you have very little control over what happens in the background once you let Google do its thing.
GCP support can sometimes be bad, but these other claims don't add up. What isn't working as promised? BigQuery can do a lot more than Athena, and its storage pricing is the same as or cheaper than S3's.
We've used 5 different providers for a global system, and GCP has won on both performance and price. We still use Azure and AWS for some missing functionality, but the core services are much more solid and easier to deal with on GCP, which is also far more than just App Engine.
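For what it's worth, a back-of-the-envelope comparison on the storage claim. All prices below are assumptions based on published list prices around the time of this thread; check the current pricing pages before relying on them.

```python
# Assumed list prices (may be out of date); both services billed
# queries at the same per-TB-scanned rate at the time.
BQ_STORAGE_PER_GB = 0.02   # USD per GB-month, BigQuery (assumed)
S3_STORAGE_PER_GB = 0.023  # USD per GB-month, S3 Standard (assumed)
QUERY_PER_TB = 5.0         # USD per TB scanned, BigQuery and Athena (assumed)

def monthly_cost(gb_stored, tb_scanned, storage_rate):
    return gb_stored * storage_rate + tb_scanned * QUERY_PER_TB

# Hypothetical workload: 10 TB stored, 50 TB scanned per month.
bq = monthly_cost(10_000, 50, BQ_STORAGE_PER_GB)
athena = monthly_cost(10_000, 50, S3_STORAGE_PER_GB)
print(round(bq, 2), round(athena, 2))  # roughly 450 vs 480
```

Under these assumptions the "huge premium on storage" doesn't materialise; if anything the storage side tilts slightly the other way.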
It seems very strange to paint AWS with such a broad brush, considering that AWS has tons of services at various levels of abstraction (including high-level abstractions like Elastic Beanstalk and AWS Lambda).
Sorry for not adding context. I was referring to the use case the author of the article was talking about: running a website. On AWS you need to stitch together ELB / EC2 / a database / caching / service splitting / auth / scaling / ..., whereas on Google Cloud, App Engine covers most of those points.
AWS needs to release a more affordable and simpler feature for inter-region connectivity. Even MS Azure has a VNet-to-VNet connectivity option, in which traffic flows over the Azure backbone instead of the public internet, and it doesn't cost much.
That VNet-to-VNet connectivity is unreliable once you start using it at scale.
We had issues as soon as we started launching instances (after connecting the VNets), and Azure support's response was to ask for the instance IDs so they could manually add them to the routing between the VNets.
Also, BGP routing was impossible to get working beyond their tutorial-level setup.
I'm based in the Middle East and was really looking forward to this since the announcement. We use the Singapore region and currently get 105 ms pings to our instances. Yet how come I get 135 ms pings to Mumbai despite it being MUCH closer (~1,930 km vs 5,840 km to Singapore)?
Lol. I am in South India, and when gaming I frequently get better ping to Singapore than to other parts of India. Performance also varies widely depending on my IP and the time of day.
The routing in local ISPs is atrocious. I thought the situation would be different for commercial connections, but your experience seems to suggest otherwise.
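For reference, a quick sanity check on those numbers: the distances quoted above imply hard physical floors on the RTT, since light travels through fibre at roughly two-thirds of c (about 200 km per millisecond one way).

```python
# Physical floor on RTT over fibre: straight-line distance, no routing
# detours. Distances are the ones quoted above.
C_FIBRE_KM_PER_MS = 200  # approx. speed of light in glass, km per ms

def min_rtt_ms(distance_km):
    return 2 * distance_km / C_FIBRE_KM_PER_MS

print(round(min_rtt_ms(1930), 1))  # ~19.3 ms floor to Mumbai
print(round(min_rtt_ms(5840), 1))  # ~58.4 ms floor to Singapore
```

Even allowing two to three times the floor for real cable paths and router hops, Mumbai should come in well under the observed 135 ms; the extra latency is routing, not distance.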
Traceroute to Mumbai:
6 39 ms 39 ms 39 ms ix-xe-9-0-1-0.tcore2.MLV-Mumbai.as6453.net [180.87.39.57]
7 40 ms 39 ms 39 ms if-ae-2-2.tcore1.MLV-Mumbai.as6453.net [180.87.38.1]
8 131 ms 123 ms 124 ms 180.87.38.6
9 141 ms 135 ms 134 ms 115.114.89.118.static-Mumbai.vsnl.net.in [115.114.89.118]
10 143 ms 136 ms 183 ms 52.95.66.176
11 139 ms 136 ms 136 ms 52.95.66.197
12 124 ms 124 ms 123 ms 52.95.67.208
13 * * * Request timed out.
14 * * * Request timed out.
15 * * * Request timed out.
16 136 ms 136 ms 136 ms <Instance IP>
Whereas for Singapore:
6 145 ms 154 ms 146 ms 38895.sgw.equinix.com [27.111.228.215]
7 94 ms 102 ms 92 ms 52.93.8.10
8 94 ms 92 ms 92 ms 52.93.8.29
9 106 ms 104 ms 104 ms 203.83.223.31
10 107 ms 104 ms 104 ms <Instance IP>
UK-AL is right, today is the day to be awesome. My personal answer is that the people working at Microsoft are awesome. We built this product in less than 90 days based on existing platforms; we were given the freedom and resources to do that. Awesome things are going to happen when we have that kind of environment.
I got asked a lot of questions about how we're built today and so I'm adding this comment for historical purposes.
We're built around the WebJobs SDK and App Service, so much of the core work was done already, and I had a plan in the works for a few months to bring Node support (I'm a Node.js nerd) to WebJobs. WebJobs has been, and is increasingly, popular, so it was a no-brainer for our team to go expand and promote it. It was still a lot of work to deliver a WebJobs-SDK-as-a-service, but we knew we had a solid core and a great PaaS platform before we even got started.
I wish there was a simpler answer, but a good way (which I've seen posted several times here) to get remote jobs is via referrals or word of mouth.
My first job was building a simple website for a small business a couple of years ago, and now I get calls for things like SharePoint deployments and e-commerce sites. At the moment I have more than I can handle and have to turn down gigs.
You've got to build a reputation with your first project and sell yourself hard.
Thank you Paul and everyone else involved.
Thank you fellow HNers <3