Having worked at both places for ~4 years each, I would say Amazon is much more of a product company, and a platform is really a collection of compelling products.
Amazon really puts customers first. Their platform and organization are made up of small teams that own services with well-defined interfaces, accountable for customer metrics. All profits are reinvested, so resources and perks are scarce, efficiency matters, and management is tight. The platform emerged because internal teams thought of their infrastructure services as products with customers.
Google really puts ideas (or technology) first; it aims to hire the smartest people and rewards them for launching new things and solving complex problems rather than optimizing UX or making customers happy. Resources are ample and management is loose, so individual contributors can try new things with greater leisure. It's been compared to grad school. But simplifying customer experience is less of a priority, so the internal infrastructure was notoriously complex and hard to use. They're now learning to prioritize customers, but it's hard to change culture.
Of course, both companies are huge and diverse and evolving, so you'll find plenty of variance.
App Engine wasn't evidence of Google being a product company, nor does it exemplify the company's strategy. It was a grassroots project that for years didn't receive much leadership support, but was still allowed to launch and grow.
App Engine is the biggest enigma. On paper it is a near-perfect cloud experience. It has a huge range of services covering the needs of most web applications: task queues, caching, RDBMS, storage, monitoring, logging, a fantastic dashboard; the list goes on, and the scope of GCP and GAE in general is truly massive. At first glance the documentation is exhaustive, support quick, and you get the feeling that the product has the full backing of Google, with many hundreds of engineers continually improving and iterating.
Yet then you get into the trenches of it and (IMO) you realize the sum of its parts is much less than the value of the individual pieces. You feel the pain of the documentation writers who had to transcribe examples and helper libraries into ten languages, the "beta" features that have been out for years, the "examples coming soon" in READMEs that are two years old.
Want to use python3? That's cool, use the flexible environment. But it doesn't support taskqueues or many other features.
Need websockets? That's cool, we kinda have this socket API and similar for some languages and environments. It doesn't really work in the flexible environment though, sorry :X.
All our python examples are in framework X, that's sufficient for everyone using framework Y, right?
Don't get me wrong, my company uses GAE and its benefits outweigh the costs for us. But there is a very real "Googliness" to the failings of the platform. Given the sheer breadth of the platform and its requirements, "fixing" and iterating on GAE must not be a very fun project to work on.
> All our python examples are in framework X, that's sufficient for everyone using framework Y, right?
I work on GCP on the Python samples.
We generally pick Flask, since our thinking is that for many API calls, it is pretty much identical code in other frameworks, and Flask has minimal boilerplate.
We have quick starts for Django for all our platforms. I think Flask + Django covers a huge chunk of Python frameworks people are using.
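To make the "minimal boilerplate" point concrete, here's a sketch of the shape our samples aim for; the route and bucket name are hypothetical rather than taken from an actual sample, and the client-library calls in the middle would look the same wrapped in Django or any other framework:

```python
# Sketch of a Flask handler in the style of the Python samples.
# The route and bucket name are hypothetical placeholders.
from flask import Flask, jsonify
from google.cloud import storage

app = Flask(__name__)


@app.route('/buckets/<bucket_name>/blobs')
def list_blobs(bucket_name):
    # The Cloud Storage calls are what the sample is really about;
    # the framework around them is just plumbing.
    client = storage.Client()
    blobs = client.bucket(bucket_name).list_blobs()
    return jsonify(blobs=[blob.name for blob in blobs])


if __name__ == '__main__':
    app.run(host='127.0.0.1', port=8080)
```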
If you think we are missing important Python samples, you can file an issue here:
Thanks for your comment and I want to say, though my comment was rather pointy, I respect the work all of you are doing and I do see a lot of improvement in the platform.
I keep a local branch of the python-docs-samples repo and just took a gander to refresh my memory. Specifically, I see quite a few examples using class-based views from the webapp2 package. I don't think it's unreasonable to have this as a major reference point, but it does require some extra documentation reading when converting to, say, function-based views in Flask.
Our personal use case is Python 3 in the flexible environment, and I'd like to raise two points while I have you here (if it's appropriate):
1) Are task queues coming to the GAE flexible environment for Python? (And moreover, is feature parity coming between the google.appengine and google.cloud packages?)
2) It's undocumented that the Python flexible environment requires a specific configuration variable to be set in order to make a connection to Cloud SQL. I raised a support ticket for it a few weeks ago and the documentation hasn't been changed. It took me a few hours to debug personally, and I'd like to save others the effort; can users make a pull request on the docs directly? For reference, the variable is "beta_settings: cloud_sql_instances:" in app.yaml (it's present in the python-docs-samples but has no comment explaining its significance/requirement); see the sketch below.
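For anyone else who hits this, the shape it takes is roughly the following; this is a sketch from my own working config rather than official docs, and the connection name is a placeholder:

```yaml
# app.yaml (Python flexible environment) -- values are placeholders
runtime: python
env: flex

# Required (though undocumented, per the above) for the app to reach Cloud SQL;
# use your instance connection name from the Cloud SQL console.
beta_settings:
  cloud_sql_instances: my-project:us-central1:my-sql-instance
```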
EDIT: I can no longer edit my original comment, but it seems the GAE flexible environment for Python does support websockets, though I would question the effectiveness of stateful servers in GAE. Of course that's an implementation problem and not one with GAE.
You're right, my original comment was misleading; most of the GAE Standard samples are webapp2, but that's because it comes built in to the platform and can be specified in app.yaml, so webapp2 doesn't require people to `pip -t` vendor Flask into the project. It might be worth revisiting whether some of those samples should be in Flask, or in both.
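For context, the vendoring dance that using Flask on Standard implies looks roughly like this (a sketch of the usual first-generation Python 2.7 pattern; the lib/ directory name is just a convention):

```python
# First: pull Flask into a local directory that deploys with the app
#   pip install -t lib/ Flask
#
# appengine_config.py -- add lib/ to sys.path so the vendored packages import.
from google.appengine.ext import vendor

vendor.add('lib')
```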
1) Guessing you already know you can use Pub/Sub for background tasks, examples here in our Bookshelf app:
I think Product knows that the developer experience for tasks could be better and closer to Standard, but we haven't announced any public roadmap for task queues on Flexible.
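As a rough sketch of what that looks like today (project and topic names are hypothetical, and this is plain Pub/Sub, not a drop-in Task Queues replacement):

```python
# Sketch: Pub/Sub standing in for a task queue on the flexible environment.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path('my-project', 'background-tasks')


def enqueue(payload):
    # Messages are bytes; a separate worker subscribed to the topic
    # (push or pull) does the actual processing.
    future = publisher.publish(topic_path, data=payload.encode('utf-8'))
    return future.result()  # message ID once the service has accepted it
```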
2) I see references to that variable in our docs so I'm not sure where you're saying it's missing. Unfortunately you can't submit PRs for our docs, I wish you could.
Thanks again for the feedback, getting pretty off-topic so maybe good to take any further conversation to #python in GCP Slack?
>>> Want to use python3? That's cool, use the flexible environment. But it doesn't support taskqueues or many other features.
As far as I know there is no planned support for Python 3 in the standard env. I've been using GAE since 2010, but I'm a little uncomfortable continuing to write new apps in Python 2.7 when it has a clear end-of-life date set now.
Given it might take you guys a year or two to support 3 after you decide to do it, then a year or two for me to port my apps over to python 3, my apps might be running for a while past the end of life date for Python 2.7.
Besides, I can't just keep writing 2.7 apps forever, so either you guys have to update the SE to 3, or I need to start evaluating and comparing the flexible environment to what everybody else offers.
We are indeed working on creating a standalone Task Queues service that will work across all hosting platforms. You can sign up for the alpha here: https://goo.gl/Ya0AZd
(I work on the Python developer experience for Google Cloud Platform)
This is why I come to HN. Where else can you read an article about a high profile company, then find comments from people who work on what the article is about? Unless you guys are a brigade, it's pretty cool you happen to be browsing where the rest of us browse.
"Their platform and organization are made up of small teams that own services with well-defined interfaces, accountable for customer metrics."
You are using the terminology of products and platforms with the exact opposite meaning of the article's author.
Those "services with well-defined interfaces" become a platform others can use to build their own products. Similarly with the e-commerce and fulfillment infrastructure third party sellers can use. And maybe the transportation infrastructure in the future.
I also think you vastly underestimate the quality of Google's UX. Type anything you can think of into one simple box, and you get surprisingly useful auto-completions, relevant spelling corrections, and almost always find the answer to the question you had when you started typing. There are any number of complex back-end services working in parallel to resolve your query, with results gathered, ranked, merged, and rendered in a fraction of a second. Pretty hard to beat that user experience.
This is how the author defines product focus: an emphasis on combining many components internally to provide an outstanding end-user experience, without necessarily making the components available for others to use to create their own user experience for their customers.
Products and platforms overlap quite a bit; e.g. many enterprise products can be thought of as platforms, as can AdSense and Android. That's why I think it's less useful to contrast the two companies on a product vs. platform axis, and why I find the cultural differences more interesting.
Google does make great products, especially when it's a matter of presenting a simple elegant interface to a complex internal system. I'm a big fan of Google Search and Maps UX, and Google Now.
But the UX of a product isn't just its immediate interface; it's also all the interactions you have with support and documentation, how the product changes over time, and trust. The cultural differences are more evident there, though some teams at Google are getting pretty good at these things as well.
Amazon gives each team independence. Therefore it is virtually impossible to insist on consistency between what different teams do. Each team makes sense on its own, but the whole can be very, very confusing.
Google has a process that results in much greater internal consistency. It may not be a great UX, but it is consistent. Inside and out.
For small systems, Amazon is going to give a better UX. But for a complex system, I prefer what Google will produce.
As a long-time Google user, I find it pretty ironic to use consistency and Google in the same sentence. If consistency exists internally at Google, none of it made it into their products, unfortunately.
> But for a complex system, I prefer what Google will produce
Having worked there on teams close to their tablets, I really think Amazon would have a hard time producing software of the complexity of Android or Chrome.
I have a question. Do you think this Amazon culture is the cause or the result of service-oriented architecture at Amazon? Or maybe am I completely off the mark here.
I found this quote from SEC filings. Jeff Bezos says:
> Service-oriented architecture -- or SOA -- is the fundamental building abstraction for Amazon technologies.
You mean the way Google piggy-backed on Apple's work, which in turn piggy-backed on KDE's work? Amazon did an extension to optimise rendering on small devices, which, complexity-wise, is not too far off from what Apple & Google contributed to the rendering engines; in the end that is the tricky bit in a browser, not the chrome.
Really don't understand the downvotes. Whether they gave back to the community or not is another story, but they did piggy-back on previous code; even though they added a lot themselves and gave back a lot, it was still built on massive existing work.
Apparently Amazon also puts developers into the customer support rotation. If you want to improve customer experience, this is a great idea. I've worked small-medium business support before, and you're generally treated like chaff by the devs, who never have to wear their own cut corners and bad decisions.
> App Engine wasn't evidence of Google being a product company, nor does it exemplify the company's strategy. It was a grassroots project that for years didn't receive much leadership support, but was still allowed to launch and grow.
App Engine was a precursor that came 5 years too early.
We jumped on App Engine very early, and have never regretted it. It's remarkably stable, and not having to worry about the security issue du jour, scaling, or any other sysadmin stuff means we can concentrate on what we do best - building apps.
I've been surprised how few people understand the value proposition, and how little competition there is. When I first heard about Azure I expected it to be PaaS, but it turned out to be Windows-first AWS.
Not as I understand it. Both AWS and Azure force you to use machines; you may put them behind load balancers, and they may start up automatically when required, but you still need to design the system and do provisioning and plumbing to make your application work.
With App Engine, you don't know anything about the hardware or software your application is running on; you simply upload your app.
I haven't used Azure, so please correct me if I'm wrong. I believe AWS Lambda is the beginnings of a "serverless" environment - maybe Azure has an equivalent.
Azure has a ton of serverless components. Functions is their Lambda equivalent. Azure has many services that let you create apps and capabilities without standing up a server. Like anything else it has some pros and cons.
Was it really just "too early"? I always thought App Engine was a fantastic idea, and I wondered why it never seemed to catch on.
Choice of languages -- initially just Python and Java? Fiddly APIs, different from competing platforms but not actually super-simple for simple tasks? Lack of a straightforward way of running background tasks (still a bit of a mess)? Lack of management support? Maybe just underpowered at first for large sites, and insufficiently compelling for small sites to build a loyal fanbase?
I just built a new, very small project in App Engine (Python, standard environment). It works fine but the tools are quite fiddly. There are plenty of docs but they're a bit of a trainwreck, in the classic Google "the old way is deprecated, but the new way is still in beta" way (e.g. standard versus flexible environments).
App Engine seems to me like the classic Google product: some cool ideas but the initial experience was clunky and the pricing model scared people, especially since you were locking yourself into a proprietary architecture. Google engineers could point to various things it did to help with future scaling needs and took the advantages of things like the NoSQL data model for granted but everyone I knew who didn't work for Google was generally asking questions like “How long would it take to migrate if they cancel the service?” or “What's my coping strategy if they have another major outage?”.
I think a little time invested on customer service and user experience would have gone a long way.
I just realised there was another problem: no internal customers. If there had been an important team inside Google keen on using App Engine, that might have helped them figure out the right feature set. But instead they were just guessing at what users outside Google might want. (Compare to Gmail, which was and is very heavily used inside Google.)
That doesn't help though, as Googlers learn to write apps the Google way (massively horizontally scalable, managed NoSQL data service), which looks very much like App Engine. Outside, people still wanted to run their relational databases and large VMs, and App Engine didn't let them do that. That's why we came out with Compute Engine.
I'm sure there are other reasons why App Engine didn't catch on, but the deal breaker for me was support.
They had an issue where outgoing emails just disappeared, without errors, and no way to debug what was happening. Support denied the issue for quite some time. Wasn't fixed for 2 or 3 weeks.
Great point on simplifying the UI being a priority. This takes a lot of coordination and willingness to say "We will drop features A, B and C to make the overall experience easier." This is ultimately both an organizational and cultural issue. Some companies optimize on centralization (historically Apple) while others have a more decentralized "empower the individual" ethos. This article from the same source highlights how Apple's centralized org is core to its view of product integration. https://stratechery.com/2016/apples-organizational-crossroad...
When dealing with a platform, this user experience coordination becomes much more important. (Think the old Apple ecosystem versus all the crap that came pre-installed on the Wintel systems)
AWS has fallen flat in the past with cross-service initiatives. Tagging and IAM keys come to mind. I haven't personally used Google much, but from what I hear their stuff is a bit more cohesive.
This new Org stuff might be a recent exception though; I haven't read through it all and tested it to know. Hopefully.
Haha, I have never read Salesforce docs... I must say, though, that I'm pretty happy with their docs. Sure, they are dense. However, they have most of what you need if you spend the time to read them. I MUCH prefer comprehensive docs to those that are lacking.
This is changing - I got a call from Google Cloud sales/support today. I am just using GCP for prototyping, so I have spent maybe $50 on it so far. Looks like they're stepping up their game and really trying to be helpful and willing to listen to customers.
I think Ben (who I generally think is right on) in this post misapprehends the effectiveness of generalized data for machine learning services and thus the effectiveness of this approach in Google’s strategy here. Perhaps the slide makers in Mountain View have the same misapprehension.
1 - Prediction API - you provide your own data there, so no data advantage from Google there.
2 - Cloud Natural Language API - the effectiveness really depends on what type of text you want to understand. If Google's training data includes information about my type of text application, great, but if it doesn't, then what? How do I know that?
3 - Cloud Vision API - likewise. Can I subset the training set? Provide my own examples? If they do, can I inspect the examples?
4 - Translation API seems like the exception here, mainly because the odds are that customers of translation service are unlikely to have collected language pairs and this collection is more highly specialized. But it’s unclear that this one API would be the deciding factor for many companies choosing which cloud vendor to use.
ML services as a differentiator have yet to be proven out. I am highly skeptical. Yes, some big general data sets will be better on some applications than others, but an enterprise's own data about its problem will always be better than a huge, general data set. And if you're using your own data anyway, you're going to care about all the platformy things Amazon has already been winning with.
Barring proprietary breakthroughs in unsupervised learning, I don’t believe that this strategy as outlined will work in practice.
I don't think Google is marketing their Cloud NLP/Vision APIs to big enterprise customers that have very specific needs. Those APIs are meant for people that have common needs (= want to identify which items or people are in a photo, understand queries in commonly used languages, etc.)
If you have specific needs, then you can use TensorFlow running on App Engine (as they will soon be providing hosted and GPU-accelerated instances), which at worst makes it equal to Amazon's offering... but something tells me the vast majority of Google Cloud customers will be satisfied with pre-trained models that can be applied to a very large swath of problems.
They definitely are for most customers: it is extremely expensive to gather and label enough data for a deep learning model to work correctly. It's very unlikely that you'll manage to configure and train your own models, and generate training data that Google lacks, well enough to do much better than what Google already provides with their "generalist" API.
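For concreteness, the "generalist" path is just a call against the pre-trained model; a sketch with the current Python client and a hypothetical photo:

```python
# Sketch: label detection against Google's pre-trained Vision model.
# The file name is hypothetical; no training data of your own is involved.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open('photo.jpg', 'rb') as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, label.score)
```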
Say you are an insurance company and you want to build a model that uses damage photos and metadata about the car as a backstop to make sure that your repair shops aren't ripping you off.
In this case you already have a bunch of historical labeled data, and a pre-trained model is useless for your application. It doesn't help you that the pre-trained model can recognize 10 different types of cats; you need a model trained on photos of damaged cars. Obviously the insurance company's own photo data will be more useful here because it's data about the application domain.
Google has collected a ton of photos for the purpose of image search and consumer photo organizing, and that model's utility has been tuned to those application areas.
The key question is what the overlap is between all applications of image models and the photos Google has collected.
There will be overlap for some, but my guess is that the mission-critical, "I can only get this performance from Google Cloud" cases are few and far between.
I'm not saying there aren't any. Ben's article suggests that Google's data is somehow going to be a mission-critical asset for all application areas, which I think is a terribly naive idea when it comes to ML.
I think you've pointed out a very important flaw: as developers and startups, we don't have terabytes of training data.
However, I'm not sure who's on the leading edge here; Microsoft presents itself as the leader. But in the end, machine learning and deep learning will be commoditized, with enough training already baked in.
e.g. Need your web app translated flawlessly into Farsi? Just drop farsi.js in before your </head>, etc.
Information embedded in a pre-trained model (Google isn't giving out that data, just access to an artifact) is helpful if your application lines up with the goals of the model. I think it's an open question how broad that alignment might be. It totally depends on the application.
You can control the app, but you don't have control over the model.
It will be useful to some apps but I don't think it is going to be the secret weapon in Google's fight against AWS as Ben's article suggests. It's a neat argument but I think it ignores the reality of applications and machine learning.
> Amazon’s AWS strategy sprang from the same approach that made the company successful in the first place
I'd argue that Amazon isn't a successful company, they are a popular company with a few large successes surrounded by decaying and decrepit failures that won't die. But then again I'm biased.
As far as Amazon's AWS strategy, I can't comment (I worked in the retail business side). But I can comment on a relatively small aspect of management that I witnessed. At one point in time I had a very strong need for PostGIS, and I lamented on an internal email list about AWS not having a Postgres version of RDS. I received an email directly from Raju Gulabani, VP of databases in AWS. He scheduled an appointment with me, him, and two product managers. He asked me pointed questions about why I wanted Postgres over the other options, how I would be using it, what extensions I wanted, and what features were important to me. He thanked me for my time, and less than a year later it was released to the public.
In the retail business side, I never had more than 2 minutes at a time with someone at the director level, and not once had I spoken to someone at the VP level. Literally zero communication from the bottom up, everything was top down. Whether AWS had already been working on it or not I don't know, but they definitely took the time to hear my case, and when it was released it was almost perfectly as I had asked for. And that, IMO is waaaay more important than anything regarding the size of a team or whatever the fluff pieces have attributed.
> I'd argue that Amazon isn't a successful company, they are a popular company with a few large successes surrounded by decaying and decrepit failures that won't die. But then again I'm biased.
Couldn't you say the same thing about Google, or Microsoft? I think the point of 'success' is that your big wins outweigh your failures as determined by your revenue. Is it not?
Creating the largest retailer in the world and the largest cloud services platform in the world seem like two pretty big wins. Either one on its own would be an extremely successful company IMO.
>Couldn't you say the same thing about Google, or Microsoft? I think the point of 'success' is that your big wins outweigh your failures as determined by your revenue. Is it not?
Of course it is. And by that definition, Amazon isn't successful, and nowhere near the same league as Google or Microsoft. Not yet, at least. They've managed to break even more or less, but it is still yet to be determined whether they can become the wildly profitable company that their stock price suggests they can become. My opinion after working there is that AWS is to Alibaba as Amazon is to Yahoo.
Their gross profit in 2015 was 35 billion. Google's was 46. The difference in final profit comes down to how much they reinvest in R&D and future growth. I'd call that the same league.
You are sorely mistaken as to the difference between gross and net profit. R&D is in there, sure...along with a billion other things that also don't get accounted for in cost of goods sold.
Ya, but Amazon doesn't break it down any further than that. I'm aware that there are other things in there, but it is well known that the primary contributor to that figure is R&D.
Not a single programmer, manager, or any other central office employee gets paid out of COGS. There are at least 50,000 of those. Same goes for real estate costs, legal costs, etc. Servers and their operations costs might go into COGS on the AWS side, but definitely not on the retail side.
Breaking out investment vs administrative cost is actually very hard to do. Is a programmer working on a new feature an R&D cost, or an administrative cost? What if it's a new service? What if it's a new product? What if it's a bug fix? What if it's a critical vulnerability? What if your programmer does all of the above at different times of the year? It's pretty much impossible to separate administrative overhead from research and development in tech companies, which is why they tend to not do it unless they are forced to. It's up to their shareholders or the SEC to force them to do it if it happens, which hasn't been the case for Amazon yet.
It is assumed that they would be turning a profit if they decided to just keep the lights on and not invest in the future. That's what they tell us, and that's what we see (new product and service announcements tell us as much). What we don't know from public information is whether they would be 10% more profitable or 10,000% more profitable. And that's before we know if their investments will pay off or if they become another perpetually subsidized program like Amazon Fresh. That's why Amazon stock is considered to be a speculative investment, whereas Google and Microsoft are more in the blue chip camp. My experience and hunch tells me that Amazon stock prices are at least 50% undeserved hype.
There's always the simpler explanation to why Google is getting traction:
- Price/performance is better in some/many cases for VMs
- It's easy(ier?) to use
- Clear technical advantage with some of their other services e.g. Load balancers
- Customers prefer when there are multiple companies competing for their business
I get that there are long-term strategies that involve the likes of container services. But just the fact that they are better in some areas will help them get traction. Plus they have a fantastic brand name.
Frankly I still like the Heroku model the best. Do one thing and do it well. Have third-party plugins handle the other things. It fits the "cloud" vision better than consolidating all your functionality with one provider; that seems like a regression. I just wish Heroku was cheaper at scale. I don't understand why it's not. It seems like they could reduce prices and still remain profitable, while increasing their visibility greatly.
The article mentioned Kubernetes multiple times, and I think it's providers like Heroku that actually stand to gain the most there. With Kubernetes, in theory, they should no longer have to make you choose between providers, and could run on spot instances of whichever provider is cheapest at the time. You just choose your max latency for certain geo areas and a budget to balance against, some AI helps you determine the tradeoffs, and it does the rest. If Heroku isn't working toward that, there has to be someone doing so soon. If not, it seems like there's a pretty big opportunity there.
Cloud Foundry is closest to this vision, in my entirely biased opinion. We're already able to mount standalone installations on AWS, GCP, Azure, OpenStack, vSphere and others.
Distributing apps across multiple clouds is easy to say, hard to do. Each IaaS has peculiarities and wrinkles, different tradeoffs in performance and cost and so on. It takes a moderately tricky scheduling problem and turns it into a much gnarlier one.
What's easier is using high-level tools like Terraform and BOSH to manage installations on different IaaSes, and pushing apps to whichever one you like as you like. I can easily imagine setting up round-robin deploys.
That said: data has inertia. Any sensible architecture has to bear that in mind; typically apps will wind up living close to their datastores.
Disclosure: I work for Pivotal, we're the majority donor of engineering to Cloud Foundry.
>> Microsoft did the same with its Win32 API. Yes, this meant that Windows was by design a worse platform in terms of the end user experience than, say, Mac OS ...
I want to try out Google, but they need to make it easier to try it out. I have petabytes of data in S3 that I would need to move first (at least some of it).
`Transfer data to your Cloud Storage buckets from Amazon Simple Storage Service (S3), HTTP/HTTPS servers or other buckets. You can schedule once-off or daily transfers, and you can filter files based on name prefix and when they were changed.`
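If you'd rather drive the copy yourself, gsutil can also read s3:// URLs once your AWS keys are in its boto config, so a parallel bucket-to-bucket copy is a one-liner (bucket names are placeholders):

```
# ~/.boto needs aws_access_key_id / aws_secret_access_key for the s3:// side
gsutil -m rsync -r s3://my-source-bucket gs://my-destination-bucket
```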
It would be nice if they managed the transfer themselves via AWS Snowball. Sure, they would have upfront costs, but based on what I spend on AWS monthly, it's probably worth it to them.
Yup, and I work for Google. We routinely copy petabytes between AWS and GCP. It's not currently more efficient to ship petabytes if you include the time to copy to the device and then recover it.
I've worked on high-performance network file transfers. My experience is that most people who move data get very low utilization compared to the actual throughput of the network. People typically use one TCP connection and one process; high-performance data transfers use thousands of TCP connections and thousands of processes.
Many other people underestimate the time/labor effort of dealing with a snowball.
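A sketch of the shape I mean; copy_one() is a placeholder for whatever per-object copy you actually do (e.g. stream an S3 GET into a GCS upload):

```python
# Fan the object listing out across many processes so aggregate throughput
# isn't limited to a single TCP stream.
from concurrent.futures import ProcessPoolExecutor


def copy_one(object_name):
    # Placeholder: each worker opens its own connections to the source and
    # destination and streams the object across.
    return object_name


def copy_all(object_names, workers=64):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(copy_one, object_names, chunksize=100))
```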
We used a box of disks to get about 20 terabytes out of Amazon to CMU. It ended up being about 50% cheaper (from memory - may be off a bit) because we did not account for any employee costs. Startup, running on fumes, none of us drawing a salary, etc.
Technically, that's a logical fallacy: A&B -> true does not mean !B -> false.
But, really, I'm not trying to prove or disprove your point. Just noting that there was a situation for us where disk made sense, and we were satisfied with the outcome. Spending 4 hours of person time to save a thousand dollars was reasonable for us in a way it probably wouldn't be for many real companies, because we had comparatively little money and were willing to work for peanuts.
(Note that I actually share your bias in this one. I both use GCP for my personal stuff and I'm writing this from a Google cafe. :-)
Many customers want to dual-host their data to not be beholden to a single cloud provider. Or to have redundancy across providers. Or to put their data closer to the compute.
There's likely a significant difference in cost and capacity between copying a PB of data from a corporate datacenter over that corporation's connectivity to S3 and copying from S3 to Google over Amazon's and Google's connectivity.
Hah. What about the egress cost? Getting 100 PB out of AWS is not cheap. (Maybe it is cheap in actual cost, but not in what the end user has to pay to AWS.)
I didn't realize that they have direct connections. The AWS data center down the street (Virginia) is directly connected to some google cloud datacenter? Sorry - I'm generally ignorant of datacenter technology.
So a back-of-envelope calculation says that it would take on the order of 10,000 days (roughly 28 years) to transfer 100 PiB over Gigabit Ethernet. When you say immense, do you mean faster than that?
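(The envelope itself, assuming a single fully saturated link and ignoring protocol overhead:)

```python
# Back-of-envelope transfer time for a given size and link speed.
def transfer_days(size_bytes, bits_per_sec):
    return size_bytes * 8 / bits_per_sec / 86400


PIB = 2 ** 50
print(transfer_days(100 * PIB, 1e9))    # ~10,400 days (~28 years) at 1 Gbit/s
print(transfer_days(100 * PIB, 100e9))  # ~104 days at 100 Gbit/s
```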
Hey ap22213. Never got your email. Just following up to see if you've resolved your issues with GCP, if not feel free to contact me at bookman@google.com
> Yes, this meant that Windows was by design a worse platform in terms of the end user experience than, say, Mac OS, but it was far more powerful and extensible, an approach that paid off with millions of line of business apps that even today keep Windows at the center of business.
Is there any validity to this? I don't do much OS-level programming, but is the Win32 API really that much more powerful and extensible?
To give one example: Windows Explorer has been extensible since Windows 95. That was 21 years ago. Dropbox has to pull nasty hacks to integrate with the macOS Finder [1]. That's now.
That's not really an OS-level thing, though; it's an application-level thing. Finder was designed not to be extensible, and Explorer was. There are Finder replacements that are.
> Is there any validity to this? I don't do much OS-level programming, but is the Win32 API really that much more powerful and extensible?
I certainly don't think so (although there are some nice things about the Windows kernel, if one comes from a VMS background), and it's certainly not the reason for the success of Windows. Remember that MS-DOS beat Mac OS; Windows 1 beat Mac OS; Windows 3.1 beat Mac OS; Windows 95 beat Mac OS. None of those was technically any good whatsoever, and only one of those had a UI that was worth shaking a stick at.
The reasons for the success of Windows are non-technical: Bill Gates's mother served on a charity board with the chairman of IBM; business bought IBM computers because no-one ever got fired for buying IBM; IBM clones were cheaper than Macs; IBM clones were more extensible and hackable than Macs; Unix workstation vendors thought they could keep milking their cash cows; Microsoft engaged in anticompetitive behaviour; random chance.
I would like to try out google cloud due to its lower costs, but don't trust them with my data--partially perception I know. Also Amazon has never failed me over the last 15 years I've been doing business with them, so I'll probably continue despite the clunkiness of their products.
I'm not sure if Google's commercial products are as bad as their consumer-facing offerings, but if we can't get someone on the phone when a support issue arises then we'd never consider using them for any cloud services.
Depending on how you use the services the AWS free tier is actually worth less than GCP's $300 free-trial credit: https://cloud.google.com/free-trial/
I will give Amazon that theirs extends out to a year, but summing up the costs of everything included in the free tier if you use all of it, I think it may still come out to less than $300 (or the equivalents on GCP would, anyway). For example, running an f1-micro for a year will cost you a bit under $60. If you add in another for Google Cloud SQL you're up to a total of about $150 over the course of a year. What they offer you in S3 is basically free (<$2 over the course of a year).
It's possible the other services are a better deal than compute and storage, if you have a use for them, but the GCP free trial lets you allocate that $300 however you like. You can scale up more in those 60 days than you can within the AWS free tier. Personally, this strikes me as more valuable if I'm trying to sketch out a new product: I'd rather not try to figure out how to fit inside the AWS free tier resource envelope, and instead understand how my costs are scaling with the resources I'm using while not being on the hook for those costs (up to the $300 size of the credit, obviously) for the first couple months. Especially to absorb things like shaking out automation -- go ahead, spin up a GKE cluster, scale it out to 5x the size you currently need, run some quick load tests, and then scale it back down 30 minutes later (and only pay for that 30 minutes).
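Rough numbers behind that comparison, with the monthly rates as assumptions from memory rather than current list prices:

```python
# Back-of-envelope: a year of the small stack described above vs. the $300 credit.
# All rates below are assumptions, not quoted pricing.
F1_MICRO_PER_MONTH = 4.09          # assumed GCE f1-micro at full sustained use
CLOUD_SQL_MICRO_PER_MONTH = 7.67   # assumed smallest Cloud SQL instance
STORAGE_PER_GB_MONTH = 0.026       # assumed object-storage rate
STORAGE_GB = 5                     # roughly the S3 allowance in the AWS free tier

year_total = 12 * (F1_MICRO_PER_MONTH + CLOUD_SQL_MICRO_PER_MONTH
                   + STORAGE_GB * STORAGE_PER_GB_MONTH)
print(round(year_total, 2))  # ~142.68: under the $300 credit, but spread over
                             # 12 months instead of the trial's 60 days
```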
Full ack. I don't understand why GCE doesn't have the same offer there. Their free trial is too short to really test it. If you're a developer experimenting with cloud offers on side projects, AWS is often free. That way a lot of people get some experience using AWS. I'm sure the free tier pays off well for Amazon.
But it's good if you just test a bit as a developer. Not to check if a project is valid, but to gain some experience. That way, if you hire as a company, developers more likely have AWS experience than GCE/Azure experience.
I don't agree on the free trial. I think it lets you play around a lot more than the AWS free tier -- as I said in my comment up the thread, go ahead and move the slider to the right and scale up a bit (a little -- the free trial is still quota limited to prevent abuse), see how things perform, slide it back to the left and have only used a tiny fraction of your credit.
I think you get a lot more "kick the tires" flexibility with a moderately large up-front credit.
I concur. In fact, I find the podcast better than the articles. I'm not sure why, though. I think the conversational tone of the podcast is better. They attack the problem from different areas and then home in on the main point.
If Google released an IDE with tight integration to Google Cloud like Azure + Visual Studio, that's a potential killer app that lowers the perceived switching cost.
If you told me to use Azure two years ago I would've laughed you out of the room. But here I am in 2016, using Azure, using ASP.net + IIS on Visual Studio. that's some powerful shit and currently AWS has cost leadership and perceived switching cost as their edge.
By introducing a layer of learning curve, you lock in your customers but eventually the other guys will race to lower that curve.
Those Azure gains only exist within a small space. Once you step outside the C# ASP.net sphere, the picture is nowhere near as rosy. Much of their offerings are minimal products for checkbox-comparison's sake.
When the answer to your tech problem is not ASP.net/SQL Server, you are going to find their services much more difficult to put up with compared to the competition.
Before someone jumps in with the ".NET is huge in the X space" thing: there has been close to a decade of double-digit growth for platforms that don't run .NET. That ecosphere was a giant. Now it isn't.
And I will also add this - as a software developer today you are probably thinking about learning stuff like Solr, Hadoop, CoreNLP, NLTK, Spark etc., as you try to learn more about data science and related stuff.
As you do this, I personally think that you are much better off getting the JetBrains all-in-one subscription (if you can afford it) over continuing to use Visual Studio. There are a lot of things happening on the data science front which are just that much harder to do within the MS stack.
Actually I really love the C# language, and wish there were .NET ports for Solr, Hadoop and NLP libraries etc. But it just makes more sense to get into the native ecosystems for these libraries (Java, Python etc.) with the most convenient IDEs for those languages (e.g. IntelliJ, PyCharm) and not struggle with trying to get all your NuGet ducks in a row.
Doesn't Google already have a web based, but internal only, IDE? I don't know if that'd be easy to make external, but my understanding is that they've got a lot of internal users on it.
Yeah, and Cider was getting surprisingly good at the point I left Google (I was initially a skeptic). But so much of what made it good came from its tight integration with other internal tooling. I'd be surprised if it's ever externalized in a form that captures most of that value.
I haven't used Azure, so I'm a bit confused. Do you mean something with different capabilities than Google Cloud Tools for Visual Studio? https://cloud.google.com/visual-studio/
Nice find! I didn't know about the GCT plugin, but yes, that's more or less along the lines of what I expect from AWS.
However, Google Cloud really hasn't entered my mind as much as Azure has this year; AWS has always been there. I'm not sure why this is. Maybe I have built up a perception that GCE is more expensive, and my documentation experience with Google wasn't any smoother than with AWS.
One thing for sure, Build 2016 earlier this year was one of the key drivers for my conversion, plus Nadella's leadership.
Google Cloud has always been obscured by AWS, and now Azure, in my mind, and so far they've yet to really jump out at me the way AWS regularly does on HN, which Azure is now catching up to.
What are the killer features of that tight integration for you? I use VS but run on other platforms with a fairly simple git push. Does VS have some other features beyond deployment that integrate directly with Azure?
Yes, deploying on Azure is right-clicking the VS project and hitting deploy; it walks me through all the setup steps in one spot without leaving the IDE.
This takes a huge cognitive load off the developer, who won't have to do context switching between portal.azure.com and VS.
But portal.azure.com is a much better, smooth, streamlined & intuitive interface than AWS, and I find myself wanting to work with Azure more and more.
There is still a stickiness to AWS, and stuff like Cognito seems much less intrusive from a brand point of view (MS AD redirects you to onmicrosoftonline.com when logging in a user).
> But portal.azure.com is a much better, smooth, streamlined & intuitive interface than AWS, and I find myself wanting to work with Azure more and more.
Ah, the interface that makes you scroll in all 4 directions on a small screen. I get slightly dizzy using it.
Google Cloud UI is vastly superior to AWS's. It's clear to me AWS didn't put a lot of effort into their interface; the Google console is nice for quickly experimenting with the platform, too. On the other hand, it seems to me that AWS is still cheaper than GCloud right now.
I definitely agree on your critique of the bandwidth claims, but the gist I got from that article was that custom instances outcompete AWS's reserved instances on price - i.e. per CPU and per GB memory, GCloud is cheaper.
Actually I do think that GCE offers cheaper instances, but the 50% overall price reduction claim is unsubstantiated. The author compared completely different instance types. Monthly discounts do make GCE cheaper, but many instance types without that discount are actually more expensive on GCE.
Don't forget, GCE VMs also support per-minute pricing while EC2 is still per-hour (don't think that changed in the past few months, happy to be corrected). Many users spin up a VM, use it for 20 minutes, then kill it (all the data remains in persistent disk).
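To make the difference concrete (the hourly rate is a placeholder; the ratio is what matters):

```python
import math

HOURLY_RATE = 0.10  # hypothetical $/hour for some instance type


def per_hour_cost(minutes):
    # EC2-style: every started hour is billed in full.
    return math.ceil(minutes / 60) * HOURLY_RATE


def per_minute_cost(minutes, minimum_minutes=10):
    # GCE-style at the time: per-minute billing with a 10-minute minimum.
    return max(minutes, minimum_minutes) / 60 * HOURLY_RATE


print(per_hour_cost(20))    # 0.10  -- the 20-minute VM pays for a full hour
print(per_minute_cost(20))  # ~0.033 -- the same VM pays for just 20 minutes
```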
@ranman: You could start by disclosing that you work for AWS ;)
I'll stand by my word. I can present the pricing comparison at re:Invent 2017 if you'd like.
1) The equivalent instances are cheaper on Google.
2) The automatic discount is significantly superior and more customer-friendly than reserved instances.
3) Google is faster or more flexible, usually both. That means that when you have specifications to achieve (IOPS/bandwidth/SSD), AWS has to get severely over-provisioned compared to Google cloud.
I completely agree with this. But what I found more impressive about Google Cloud is that it requires far fewer people to get things done compared to AWS. AWS believes in providing fundamental building blocks rather than frameworks, and it takes a lot of time, money, skills, expertise and people to make them work. Google Cloud provides frameworks which are easy to use, secure by default, don't require tuning or turning knobs, perform well, scale automatically (or automagically) and are ready to use. It is an impressive feat that Snapchat, a 25 billion dollar company, runs on Google Cloud with 2 part-time DevOps engineers, and that Pokemon Go was recently able to scale to Facebook-level user engagement in a mere month with 4 backend engineers (and of course with a lot of help from Google). Things like these are impossible to achieve with AWS. Bottom line, if you want to get something done, Google Cloud will get you there in a fraction of the time compared to AWS and can scale way better than AWS can, at half of AWS's price.
>Pokemon Go was recently able to scale to Facebook-level user engagement in a mere month with 4 backend engineers (and of course with a lot of help from Google).
I mean... did they? I was trying to play for weeks and I couldn't even login. I loved the game when I was able to play but I don't think they scaled seamlessly. Maybe that had nothing to do with the cloud provider and had more to do with the application itself (I don't know) -- but I wouldn't personally use Pokemon Go as an example of successful scaling.
>Things like these are impossible to achieve with AWS.
Twilio, Slack, AirBnB, lyft, duolingo, FINRA, yelp, pinterest, foursquare, adroll, shazam, supercell, etc. etc. etc.
I'm curious why you think these things aren't possible on AWS? They really are... and I can think of hundreds of examples.
Regardless, it's important to recognize that the cloud provider is only ONE piece in your ability to scale. I have examples of failures on GCE and on AWS. Your application architecture is far more important than the cloud provider you choose when it comes to scaling quickly like this. Sometimes it's not worth the dev effort to be prepared for these things.
>"Fraction of the time", "half of the AWS's price"
Nope. As outlined in my linked post above, the comparisons in the article are not accurate.
I work on GCE — when I read that post I also thought the 50% was a bit generous, but your other comment made me curious. I ran some numbers and posted a reply over there: https://news.ycombinator.com/item?id=13078157
tl;dr: Even matching instance types more closely, for on-demand pricing the 50% number held up remarkably well, although I'm worried that I got something wrong with my math on provisioned IOPS pricing, as a 10x difference seems unbelievably high.
The way it often works out is that first you choose your provider because the interface is nicer, then your needs slowly grow, and at some point you're blowing $1,000/mo on a given provider just because it had a nicer interface a few years before.
Should we be looking at Slack as an example here? Their interface is necessary for people to use it; if your company uses Slack, you have to use their UI/UX. Whereas for AWS, your engineers are more likely to make the call on UI/UX, not the executives.
I disagree with this, in my opinion slack has a lot of features on top of IRC (offline messages, webhooks, etc) that make it much more usable for its target audience.
Actually, in larger enterprises (or maybe this should be "non-startup stage companies") I find it's actually often the exact opposite.
If we're profitable, we're going to invest in a better working experience so that our employees spend minimal time 'dealing with' a tool's 'issues' - because we can.
If we're a smaller pre-profit/cash-strapped/VC-beholden company, everyone is going to be expected to tolerate working with 'rough' tools if it can save a few bucks.
The closest it comes are the f1-micro and g1-small instance types, at $4 and $13 per month, respectively, with a full month's usage. These seem most comparable to AWS's t2.nano and t2.small types though it's not an exact comparison. And you still have to attach a network drive, which may go against your definition of "plain ole server."
I disagree with the central thesis of this article that Google is a product company rather than a platform company. I think that's wrong because throughout its history Google has asked itself "what if we had this?" first, and built the products around that later. Essentially the company believes that products will naturally emerge if you hire tens of thousands of engineers and deploy an unholy number of computers. I said this before on this site: Google's core product is dirt-cheap computing. Everything else follows from that.
Is a company's business model based on what they use to create their Widgets, or on what Widgets they sell (and how)? I agree it's hard to paint Google solely as a products company, but it's also more than the sum of dirt-cheap computing.
Most modern companies don't need cheap computing. They have a lot of users generating revenue, and they usually need relatively few computing resources.
Google has always worked by giving everything away free to people (Google, Gmail, Maps, YouTube, Android, Chrome), and they have extremely infrastructure-intensive applications to run (e.g. just gotta copy the entire internet to index it + serve years of video per second :D).
For every paid click/page a user sees, they will go through hundreds of pages paying nothing, and Google will have to do thousands of pre-computations to be able to serve it in the first place.
That's the world Google lives in. They had to be hyper efficient since day 1 or they couldn't survive.
(Also note that they started > 15 years ago; the available hardware was about 2^5 times less capable at the time.)
Google is probably the one company most dependent on its computing efficiency.
Google's most prominent products are possible only because they could deliver them with unprecedented computing efficiency, which is not to be found anywhere else: search, Gmail, and YouTube, for example.
Before AWS, Amazon cared about efficiency. But fundamentally, Amazon does not depend on computing efficiency to support the company's bottom line, or at least only to a degree that is far less important than it is for Google.
They depend on cheap computing in the sense that they only pay for what they use and don't have upfront capex costs, but I don't think that's what GP means. Google initially grew by filling data centers with commodity consumer computers instead of "enterprise-grade" servers, and now they have cheap computing in the form of ASICs and highly customized/integrated general-purpose servers.
A "platform company" would provide a platform to its users. Unless I misunderstand your comment you say that Google has an internal platform that it uses to build products for customers, which is something different.