So back when I worked there, Google had no clue what it cost to run its infrastructure at a fine-grained level. Sure, they knew the aggregate cost (that was easy), but per-application cost accounting didn't exist. This was a problem: as more and more things shared the machines, how did you "bill" a department for its machine usage? That really crystallized when the bottom fell out in 2008 and suddenly there were going to be no new machines or data centers for a while and everyone had to 'make do.'
They mobilized an effort to figure this out (it's not like it isn't knowable), and, ever the data-driven company, the first signs of light were appearing just as I was leaving. It should not be a surprise, but they discovered many things they had not previously believed were true, and I don't doubt it has driven a lot of change going forward. One of the more interesting outcomes was that projects/products were actually getting cancelled if they cost more to run than they could generate in revenue (I'm looking at you, Goog-411).
So this knowledge is being applied to GAE, which is great; it's also another way to back-compute some of their operational efficiencies.
But that it costs money to run stuff? Well, that isn't really news, is it? That it costs that much? Well, there is the whole 'if it doesn't make money it will get cancelled' threat.
And the kicker is pricing out the scarce resource. It looks (and I've been gone over a year and a half, so I am speculating based on this move on their part) like their 'scarce' resource is web server front ends (the ones labeled "Frontend Instance"). Traditionally they've been, like most multi-tier web properties, split between front-end machines which host the web-facing stuff, and back-end machines that do the heavy lifting and storing. And from this change one can reason that residency on the 'front end' is more valuable than crunching in the 'back end.'
I'm guessing PlusFeed gets a lot of web traffic. So they spend a lot of time 'actively' on the front side, and from their numbers they do practically nothing on the back side. This fits well with the sudden massive price increase.
This gives you an insight into Google's business dynamics as well: page views are the limiting resource, and computation is not. When you look at it that way, you can see that most of their 'revenue' has to be delivered through their front-end services, and so consuming that resource (potentially) reduces their income. Hence the lopsided charges.
Now contrast that with Billy-Bob's Web Farm (a fictitious service), where every machine in the data center can be a web server and front-end serving is trivial; it's all about the bandwidth. Their pricing would probably be based more on gigabytes transferred.
I would not be surprised at all if it is impractical to run such 'translation' services (basically all web traffic, very little compute) on a hosted environment like Google's.
Let's suppose that Google doesn't have enough machines. Do they:
a) Massively increase the price to reduce customers?
b) Massively increase the number of machines?
It is entirely possible that they discover they can't make a business out of selling 'engines' like this; it wouldn't be the first time they decided they were leaving too much money on the table.
They chose "c" - scale the price so that AppEngine is revenue positive for the company using these constants for the various moving parts.
Clearly some customers will find that it no longer makes sense for them to use AppEngine. It doesn't say anything about whether the market reach will be sufficient to sustain that business.
Of necessity, most machines that Google runs have 'private' (as in not directly addressable) addresses. This seems to be how everyone runs things once they get above a certain number of servers.
Thirdly, because one can assume that 'attacks' (whether they depend on XSS, overflows, or whatnot) are coming from the Interwebz, it's prudent to present the smallest possible attack surface on a machine which is being fed Internet stuff. As you can imagine, those security constraints make for uncomfortable restrictions on what you can do in the 'front end,' so a machine that is 'front end' has a different set of security constraints than a machine that is 'back end' (there are still constraints there, of course, but the traffic has already passed through the filter of the front-end machine, which greatly attenuates the possible exploit vectors).
I believe it is this armored 'front end' resource which is the scarce resource, but again, I've been gone over a year and things change quickly inside the 'plex.
Without going into too much detail, it shouldn't surprise anyone that physical machines are interchangeable via cluster-management software, so application front-end and back-end software all end up running on the same machines. They aren't addressed through DNS but by another mechanism. There is no special system administration for application-specific front ends vs. back ends. Spinning up more instances of a particular server is a one-line change. Reverse proxies (i.e., a non-app-specific front end) are also involved, but they're not a bottleneck AFAIK (this problem is well "commoditized" in open source, whereas other parts of Google's stack are not).
I would be curious to see a comparison with competitors like Heroku, which I've heard is pretty expensive too. Theoretically Google should be cheaper, because Heroku runs on AWS and thus is paying for Amazon's profits (I have no idea how they actually compare).
Yes, those offline processing instances could be suspended while dealing with a traffic peak, but peak traffic is still often a challenge, especially now that Google is pushing more and more stuff onto the web servers (Instant).
That might be what he meant.
Disclaimer - I don't work for Google, it's just a guess.
I, and many others, spent a lot of time figuring out how to write apps that do it the "app engine way":
* Fast completions (30-second timeout)
* Offloading to task queues when you can't (see the sketch after this list)
* Blobstore two-phase upload URLs
* Mail eccentricities
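A minimal sketch of that task-queue offload pattern on the 2011-era Python runtime; the /tasks/report URL and build_report helper are hypothetical:

    from google.appengine.api import taskqueue
    from google.appengine.ext import webapp

    class ReportWorker(webapp.RequestHandler):
        def post(self):
            # Task handlers get a much longer deadline (10 minutes at the
            # time) instead of the 30-second request window.
            build_report(self.request.get('report_id'))  # hypothetical helper

    def start_report(report_id):
        # Return to the user immediately; do the slow work offline.
        taskqueue.add(url='/tasks/report', params={'report_id': report_id})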
We believed them, because it seemed reasonable. We laughed at those who complained that Django would hit the 30-second limit: "It's not general hosting! Figure out the App Engine way!" And we educated people on how to do it right, and many were happy.
Well, it turns out that it is general purpose hosting, with all of the costs, and yet also with all of the (once rational, now bullshit) idiosyncrasies.
But that's not the biggest complaint. The biggest complaint is that when my friends and peers objected to App Engine, its strange requirements, and its potential lock-in, they were right and I am a fucking naive idiot. And I really don't like being proven a naive idiot. I put my faith in Google's engineers and they have utterly destroyed my credibility. THIS, more than anything, is the cost to me.
But mostly, GAE doesn't make sense for larger apps. You can't buy your way out of trouble by putting your DB on a dedicated server with fast drives and tonnes of RAM. You can't really use relational data without performance and reliability issues.
It's not just about the "app engine way". It's not like learning C or Haskell, and having to find a new way to write the code. You fundamentally cannot do big ad-hoc database operations.
And consider this: it was July last year that they introduced the MapperAPI. Before then, I don't think you could do map-reduce without manually re-implementing it yourself (on top of the cantankerous App Engine datastore). Just think about that for a minute: how were you meant to do stuff the App Engine way without map-reduce?
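For reference, the MapperAPI shape looked roughly like this: a map function registered in mapreduce.yaml and iterated over a datastore kind (the entity kind and field name here are hypothetical):

    # mapreduce.yaml registers this handler with
    # mapreduce.input_readers.DatastoreInputReader over a given entity kind.
    from mapreduce import operation as op

    def touch(entity):
        # Runs once per entity; mutations are yielded, not applied directly.
        entity.migrated = True       # hypothetical field
        yield op.db.Put(entity)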
Anyway, I don't think your credibility was "utterly destroyed". It was really hard to know whether or not the learning curve was worth climbing until you had tried. You just had to judge the book by its cover, and the "Google" brand is pretty compelling to an engineer. It's not the first time someone has been fooled into buying something because the provider has a good reputation.
In fact, if you look at the recent comments of certain GAE engineers, they seem to believe that GAE is precisely for scaling, and that's why it now costs so much: it's only for the big boys.
The problem is that I can never become one of the "big boys" on their system, because pretty much as soon as I get any traction, I have to move to EC2 or Heroku or go broke. Their new-found belief in the scalability of their system is just arrogance. Anyone can claim to handle lots of traffic when you require that your customers run 20 times as many frontends as they should reasonably need.
Good luck to them in the Royal Wedding market.
EDIT: on the nature of "relational"
Care to elaborate?
There are no longer, in my view, any situations where a SQL DB is the best idea. You either want a giant NoSQL database, or you want a massive in-memory object graph using pointers. Or you want something for $20M from Oracle or IBM.
Not saying shit can't happen. Now look me in the eye and tell me you never had some noob drop a constraint and forget to put it back.
If Michael Arrington changes his job from "editor in chief" to "founder, former editor, occasional contributor, and CEO of Arrington Investments", and his old posts aren't all updated, it's not the end of the world.
It really depends on the problem domain. You wouldn't run a bank's ledger off MongoDB. On the other hand, a bank's ledger should be radically simple, with little need for normalization.
That's obviously an example of something that will practically never happen, which is why it doesn't work all that well as a justification for ditching SQL databases altogether.
I've never used NoSQL for anything, so there must be a lot that I'm missing, and that's why I asked. But it seems to me like you'd be digging up necessary information through quite a few steps if everything is "flat".
Digging kills you. I assert that SQL does the digging automatically, and that's exactly why it doesn't scale.
Your app will most likely have some kind of "entities", and then records to represent them. How much information can and should you cram into records of various "types"?
How much information do you typically end up duplicating across all those "entity records", and is it not a problem?
That included learning NoSQL. At least that part was not a waste. There are no right answers to your questions; there are only right actions, starting with stepping outside the SQL box and writing an app using NoSQL. I started by thinking of a simple app that would be useful to me personally. I knew Java servlets, I knew SQL, I knew all sorts of things, but after several iterations my app is architected like no app/server I've ever written before. Almost every iteration involved starting out doing it the way I knew how, running into either roadblocks or major cognitive dissonance, and then rewriting it to fit these new-fangled constraints. It's been a huge learning experience. You might like to try it.
Could you just give me a brief description of how you arrange things (like "entities") with NoSQL?
> That's obviously an example of something that will practically never happen
Women changing their name when they get married? A tiny assumption like that can make your software brittle. Now every model that caches the old name needs updating, and you need to make sure there aren't any overlapping saves in any of those models that'll overwrite items in your bulk update. If a single linked model has the wrong old name cached, your data-update process is buggy.
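As a concrete sketch of that failure mode on the App Engine datastore, using the old db API (model and field names hypothetical):

    from google.appengine.ext import db

    class Comment(db.Model):
        author_id   = db.StringProperty()  # stable key for the author
        author_name = db.StringProperty()  # denormalized copy for display

    def rename_author(author_id, new_name):
        # Fan-out update: every cached copy of the old name must be rewritten.
        comments = Comment.all().filter('author_id =', author_id).fetch(1000)
        for comment in comments:
            comment.author_name = new_name
        db.put(comments)
        # Any concurrent save that slips in between fetch and put can
        # resurrect the old name, which is exactly the bug described above.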
Well, that sounds like the kind of stuff I'd like the other guy to talk about. How does he avoid the bad sides of having all your data in a key - value store?
How would/do you?
Come to a Microsoft event some time and let a Silverlight developer buy you a sympathy beer.
From what else I've read, it sounds like engineers who didn't also wear green eyeshades (or good enough ones, or who didn't possess or use good enough crystal balls) set up this debacle. And it was people wearing green eyeshades (who we can sincerely hope are also engineers) who aligned it with reality, causing way too many people way too much pain.
Object lesson: if you're going to sell a service for cash money to others, paying close attention to your costs from the very beginning is not optional.
You might be thinking that in the original measure they did something insane, like measuring only user time of a process, or only time when it's executing a request, not booting or whatever (or, fuck, I don't know, because honestly there is no reasonable explanation). That is to say, that the 31 CPU-hours is a misread, and if the fellow in the article ran his code on EC2, he really would need 879 EC2 instances that day.
But this is not my experience. An extreme example: my app that served 14 pages was rated as taking 0.02 CPU-hours, or 72 CPU-seconds. This is entirely reasonable, if not excessive (looking at the app, it only took about 200 wallclock seconds including warmups). Under the new system, it is claimed that these 14 pages will require 2.8 instance-hours.
0.02 => 2.8
31 => 879
The app in the article serves 1.5 GB/day and takes 879 instance-hours. What kind of server would you need to do that on EC2? Maybe 1 Mbit/s of bandwidth? The hourly cost on GAE is $1.46. Can I do that on a $0.085 EC2 instance? Yeah, I think so.
EDIT: My figures were wrong, as I was comparing a $16 (wrong) figure to a $0.80 EC2 figure. The actual figure is $1.46, not $16. So I looked at the bandwidth/CPU numbers to see if a $0.80 EC2 instance is what is required, and I don't believe that it is. I think a $0.085 instance would be enough. YMMV.
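A quick back-of-envelope check of that bandwidth claim:

    # Average throughput needed to push 1.5 GB/day (the figure from the article).
    bytes_per_sec = 1.5e9 / 86400
    print('%.0f KB/s, %.2f Mbit/s average' % (bytes_per_sec / 1e3,
                                              bytes_per_sec * 8 / 1e6))
    # ~17 KB/s, ~0.14 Mbit/s: even with a large peak factor, this fits
    # comfortably within a single small EC2 instance's bandwidth.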
This. We always knew GAE was inefficient. There's no doubt about that. Serving 30 or 40 requests per second would spawn quite a few instances and start producing request errors.
This is a load a 4 year old machine could handle with ease.
Why did we put up with this? Because Google didn't make us pay for the crappiness; the pricing made sense. You don't pay Ferrari prices for a slow car... and during a surge it scales up gracefully. Go from 30 rps to 1,000 rps and it'll just work. An old machine co-located someplace won't do that.
Now, under the new price gouge, Google is making us pay for their inefficiencies. All appearances are that this is what it really costs (plus some reasonable markup)... well, that's pretty piss-poor, because we're essentially paying to haul cargo in a Ferrari, and it's dumb, dumb, dumb.
I'm on the App Engine team, and I just wanted to clarify one thing: the main difference between CPU-hours and instance-hours is that CPU-hours are charged based on CPU usage, while instance-hours are based on wallclock time. The high ratio between the two that you can see with PlusFeed is because it's spending a lot of time serving each request, most of which is spent doing nothing, likely because it's making outgoing HTTP requests.
Previously, we had no way to account for apps like this, that take a lot of wallclock time but very little CPU time, and as a result we couldn't scale them well. Under the new model, the charges reflect the real cost here - memory pressure. Every second an instance sits around waiting is a second that the memory occupied by that instance can't be used to serve other requests.
As others have pointed out, we're in the process of launching Python 2.7 support - it's currently in Trusted Tester phase - which will support multiple concurrent requests, and services like PlusFeed are likely to be able to take great advantage of that, reducing their instance hours by a large factor. Likewise, doing asynchronous URLFetches (where that's practical) can cut a huge amount off instance time.
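A minimal sketch of that asynchronous URLFetch pattern on the Python runtime (the URL list is hypothetical):

    from google.appengine.api import urlfetch

    def fetch_all(urls):
        # Start every outgoing fetch at once instead of blocking the
        # instance serially on each one.
        rpcs = []
        for url in urls:
            rpc = urlfetch.create_rpc(deadline=10)
            urlfetch.make_fetch_call(rpc, url)
            rpcs.append(rpc)
        # The instance now waits only once, for the slowest fetch.
        return [rpc.get_result() for rpc in rpcs]

On the Python 2.7 runtime, setting threadsafe: true in app.yaml additionally lets a single instance serve multiple requests concurrently.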
If memory pressure is the issue, how are the trusted testers finding their memory pressure when they have a whole pile of in-flight requests? If the PlusFeed fellow got 2.7 working, we'd expect to see 100/3.5% = 28 in-flight requests. Do you have data on how big the base-memory vs. per-thread memory requirements of these apps are? Python isn't famous for freeing up memory.
Which is to say, do you have any solid numbers telling us that when we switch to 2.7, you won't have exactly the same memory pressure and either have to up the instance-hour cost, or start charging for RAM, or just limit in-flight requests to 3 or 4, so that our costs are only 5 times as much instead of 20?
Bottom line: what we all woke up to is the fact that, as of right now:
* you set the price of an instance, and
* you get to decide how many instances I'm going to pay you for
Some of us are thinking that while that was a great idea when engineers were in charge, it's not such a great idea now that the bean-counters have taken over.
The issue with charging by CPU hour was that you could occupy memory-seconds as much as you wanted without charge; that's no longer the case - by charging for instances, we're implicitly charging for the memory they use.
As far as determining how many instances you run - you can do this to a large degree, both by setting budget limits, and by setting scheduler parameters.
GAE has always had two main disadvantages. First, there is vendor lock-in because you code specifically to the data store, worker API, and so on (though arguably there are alternative platforms that implement the GAE API). Second, you cannot run custom code (custom C in some virtual machine) or have a custom architecture (if, say, Redis might be useful to have around). These disadvantages probably aren't changing and are probably necessary for auto-scaling, security of Google's infrastructure, and so on.
However, there are lots of little things that GAE has been getting wrong for a while that are totally unnecessary. Lack of hosted SQL support. Lack of SSL for custom domains. Just little things that are probably annoying to implement and boring, but totally necessary for real websites or websites just gaining traction. (I know these are in varying stages of early support at the moment.)
But now, the GAE team almost seems to want to actively disappoint users. With hosted SQL being a request for years, Guido appears to have spent a bunch of time re-architecting the API for the datastore instead. With this pricing increase, they're pushing the many developers who came to their platform based on price (due to the very interesting scaling properties of the Google front-end) off the platform.
Overall, I'm very confused.
But that does mean you have to figure out what your revenue-producing tenants are going to look like, just as you would in physical real estate. Yes, it looks like the high-traffic commodity-product shop (think Halloween store) doesn't make a good tenant. But that doesn't mean that a jewelry store or office (low throughput, high value per square foot) wouldn't be a good tenant.
> The Idle Instances slider allows you to control the number of idle instances available to your application at any given time. Idle Instances are pre-loaded with your application code, so when a new Instance is needed, it can serve traffic immediately. You will not be charged for instances over the specified maximum. A smaller number of idle Instances means your application costs less to run, but may encounter more startup latency during load spikes.
There is another setting for latency:
> The Pending Latency slider controls how long requests spend in the pending queue before being served by an Instance. If the minimum pending latency is high App Engine will allow requests to wait rather than start new Instances to process them. This can reduce the number of instance hours your application uses, but can result in more user-visible latency.
So if you are fine with a little higher latency for your app, you can reduce your bill by a great deal. If you want all that GAE can offer, with the maximum instances available and the lowest latency, you've got to pay - as you would when running n instances at another cloud provider.
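At the time these were Admin Console sliders; on later App Engine runtimes the equivalent knobs are exposed in app.yaml. A sketch, with illustrative values only:

    # app.yaml (Python 2.7 runtime; values are illustrative, not recommendations)
    runtime: python27
    threadsafe: true

    automatic_scaling:
      max_idle_instances: 1       # cap the warm spares you are billed for
      min_pending_latency: 500ms  # let requests queue before new instances spawn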
(Just out of curiosity, I'm not a GAE customer)
Also (sorry for the armchair quarterbacking here, I can't resist...) it was exactly what Google can do better than anyone - best server infrastructure + Guido van Rossums - while stuff like Google+ is exactly what Google hasn't a clue how to do.
And by the time the pricing takes effect the updated python runtime should bring costs down even more.
The instance costs are comparable to Heroku, AND you get a high-availability datastore, AND the ability to store really huge amounts of data in the blobstore, AND a CDN for serving images from said blobstore. Not to mention background processes, the task queue, XMPP, memcache, multitenancy, and multiple versions of apps so you can easily roll things back or test out updates painlessly.
Try to replicate that setup on Heroku or AWS for anywhere near the cost and time it takes to get there with App Engine.
While you're fighting with AWS and playing sysadmin, or trying to think of ways to bring down the costs of Heroku's database services by using RDS instead, or being nickel-and-dimed by add-on fees, I'll be shipping code. Code that actually takes advantage of the platform's strengths.
The blobstore is basically S3, and the CDN for the blobstore is basically CloudFront. Both S3 and CF seem to me to have really simple APIs, plugins to Rails and other common frameworks, and are almost certainly battle tested by an order of magnitude more companies than the blobstore. What am I missing?
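For comparison, uploading and publishing a file to S3 with boto really is just a few lines (bucket and key names hypothetical):

    import boto

    conn = boto.connect_s3()                     # credentials from env/config
    bucket = conn.get_bucket('my-bucket')        # hypothetical bucket
    key = bucket.new_key('avatars/user42.png')
    key.set_contents_from_filename('user42.png')
    key.make_public()                            # now servable via S3/CloudFront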
As far as I can tell, of your list, Heroku has all of the things you describe, with the possible exception of XMPP. Task queues (workers, SimpleWorker), many flavors of high availability data stores, memcache (or even better, Redis as a service), multitenancy, multiple versions of apps (git, releases).
Is there something bad about RDS that I don't know about? I had always assumed you just spun it up and could optionally replicate a MySQL instance a few times in different zones.
More broadly, it sounds like you had a really bad experience with Heroku. If so, what was it? I'm a bit curious because we are rather dependent on it.
I've worked with both platforms, and I'm not saying Heroku is horrible. In fact it might be the easiest of all the PaaS providers to get started with, but it's expensive (they all are) and does not provide the same level of features out of the box that GAE does. Saying that I can eventually get all the features I need by cobbling together a bunch of random gems and bringing AWS into the mix is quite different from having a unified API out of the box.
The point I'm making is that there is an additional real cost to this kind of integration work (just as there are real costs in learning Google's APIs and accepting a certain amount of lock-in), and I don't think these costs are being fairly represented in the current discussion.
Bare bones with add-ons: Heroku handles deployment (via git, releases), app servers (Thin), reverse proxies (Varnish), and caches (Varnish). (That's "bamboo" rather than "cedar," for what it's worth.) Heroku provides optional data storage via Postgres and background jobs via worker queues. For everything else, a combination of add-on providers (MongoHQ, RedisToGo, SimpleWorker, ...) and your own custom EC2/S3 code is used.
Integrated: GAE handles deployment, app servers, reverse proxies, and caches. It also has channels, XMPP, workers, a datastore, and a blobstore built in. All of the documentation is in one place and from one provider. All of the billing is unified.
Personally, I have always assumed that the integrated approach is by necessity. GAE can't rely on other providers to provide functionality like a blob store because they don't want other providers running in their data centers. By contrast, since Heroku is in US-East, basically any SaaS can pop up to offer functionality with low latency to Heroku apps. If there's no SaaS, in the worst case, the web app author can write their own backend services within US-East to meet their own needs.
I think you're right that the inconsistency of pricing, documentation, and quality of many providers can be a problem for the bare bones approach. However, I've found in general that almost everything I want is (a) S3, which has a simple API and a well known gem, (b) Redis, which has a simple API, a well known gem, and basically one (somewhat overpriced) provider, and (c) very rarely EC2 instances if I need to run some really weird, open source Java or backend code. I'm also much more comfortable knowing that there are basically no problems that cannot be solved, at some difficulty, by the EC2 backend solution.
I could see the appeal of the integrated solution. For example, if I had a class of students, I could just point them to one documentation source with GAE, or monitor one billing page with GAE.
Do you think that web developers generally prefer (a) integrated solutions with shared documentation and the possibility that some functionality may be impossible or (b) piecemeal solutions where they may get varying quality from a set of services they select, but where basically all functionality is possible?
This depends a lot on what you're doing. GAE is now pretty terrible if you're using it for high-bandwidth applications (say, a data proxy or similar). Especially when you compare it to other providers for whom bandwidth prices have been dropping (AWS, Linode, etc now offer incoming bandwidth totally free and the rates on outgoing are very low as well). If you're using it as a number-crunching backend with relatively small in/out datasets, the new prices aren't too bad at all.
A c1.xlarge on EC2 is $1.16/hour. How many of the $0.08 GAE units does it take to match one of those ($1.16 / $0.08 = 14.5 frontend instance-hours)? How many of those GAE units can run optimized SSE4/AVX code? (Answer: none of them.) I've run 250 c1.xlarge units at a time. GAE wasn't even possible, let alone affordable.
"your app can be slashdotted or tweeted by demi moore -- http://adtmag.com/blogs/watersworks/2010/10/mobile-app-creat... -- or perhaps you may need to build/host something on the scale of both the royal wedding blog and event livestream with traffic numbers that are mindblowing -- http://googleappengine.blogspot.com/2011/05/royal-wedding-be... ... these are the reasons for using App Engine. it was not meant as free/cheap generic app-hosting but to provide a premium service that's difficult to get elsewhere in the market. if you're just after the former, there are plenty of options for you."
My take-away is that GAE is hard to justify unless your usage pattern is unpredictable and spike-y. I'm taking the long weekend to give dotcloud a serious test-drive.
If you're expecting insane traffic to begin with, it makes more sense to build your own systems on a more standard stack and not tie yourself to GAE. If you're just some small-to-medium-sized site that gets "slashdotted or tweeted by Demi Moore," I guess it is nice that your site will automatically scale to serve the millions of unexpected incoming users. But when you're filing for personal bankruptcy because of the unexpected, infinitely scaling GAE bill that comes along with that, how much consolation is the fact that your server stayed up during the rush?
Personally I'll never touch GAE with a 10' pole simply because of the support issue and perpetual-beta-culture uncertainties.
My question is why the heck 31.42 CPU hours turned into 880 frontend instance hours, and whether this is something a lot of other GAE apps are seeing as well.
But measuring process-instance-hours is misleading. On a typical web box, spinning up an extra process takes very few resources, since most process memory is shared with the parent process, and most web apps are I/O-bound (waiting for network/file/DB), idling while using little CPU. Many processes can be crammed into one server box (like a shared hosting box). But Google counts those duplicate processes as separate instances and charges for the idle time those processes accumulate. That's why you see the outrageous bill.
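To see how one-request-per-instance inflates the bill, a back-of-envelope sketch with assumed numbers:

    # Little's law: concurrent instances needed ~= arrival_rate * mean_latency.
    arrival_rate = 10.0   # requests per second (assumed)
    mean_latency = 3.0    # seconds each, mostly blocked on outgoing I/O (assumed)
    print(arrival_rate * mean_latency)
    # ~30 instances, billed by wallclock even though nearly all of them
    # are just waiting on the network.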
What's more insane is that Google charges for a process-instance-hour like a machine-instance-hour: GAE charges $0.08/hr per process instance, while AWS charges $0.02 to $0.08/hr for a whole machine.
Someone needs to rethink their architecture.
The problem is that Google counts each process of the Python runtime as a separate instance, and counts them even when they run only partial hours (Amazon counts this way too, but you can run many, many Python programs at the same time even on a micro instance).
I'm guessing a small minority of apps were doing things in a way that was eating up tons more resources than they were paying for. I bet for many apps, this could end up no worse or better.
I picked GAE mainly because it was fast to setup, but I could have equally chosen Heroku over app engine when I first built it. In hindsight, I wish I had.
With Google the only thing you'll get for $69/month is a sore ass.
and the choices available for fun projects:
What are some good services layered on top of EC2 to provide more management and automation?
Does anyone have any experience with whether DotCloud is priced reasonably?
The huge price jump is a serious problem, but the weird metrics switch makes it feel like such a bait-and-switch. I'm going to be highly surprised if this isn't challenged in court.
Anyway, this is all theoretical (in the worst sense of the word) - the app is not even public.
I initially fell in love with the "no sysadmin" aspect of App Engine and started building apps around it. Eventually I realized that (for me, anyway) the upside isn't really worth the trouble of having to contort my apps to work in Google's sandbox: can't run SQL, have to deal with datastore timeouts, CPU timeouts, etc. When you're done coding workarounds for all of these things, are you really coming out ahead?
Also, the way Google is handling this confirms my fear of proprietary lock-in, leaving a really bad taste in my mouth. I mean, I've spent lots of time developing in their framework and working around their quirks and limitations. They have been advocating that developers optimize for CPU time, and that's what we did. Now they've found out they don't make enough money from CPU optimization and have changed the whole billing scheme, wiping out all the effort developers put in. What's to stop them from changing again tomorrow if they find instance-hours aren't making enough money for them?
Ultimately, after a lengthy debate with a friend, I concluded that Heroku was a sufficiently better option that it was worth learning Ruby (which I've been writing more or less as an uglier Python anyway).
How many users were using it?
I have an app that I want to deploy and control my costs. I would like to pay for 1 FE instance to always be running and limit temporary FE instances to a maximum of 1 (free?) instance when needed. I expect my app to have a relatively small number of users, but they will be active.
I would like to prevent the scheduler from ever spawning more than these 2 FE instances. Occasional unavailability when the site is busy is OK. Except for bandwidth and storage, I would like to know roughly what my costs will be.
Just to be clear, how do I set this up?
i.e. was GAE ridiculously cheap before compared to other options, and now comparable? Or was it somewhat cheaper than competitors before, and now somewhat more expensive?
I realise it's never that simple, but nearly everyone's complaints seem to be (understandably) given in relative terms of before vs. after. I'd be interested to know how it stacks up before vs. after vs. if-we'd-taken-another-route.
GAE is a half-baked AWS. Google+ is a half-baked Facebook. Google Docs is a half-baked MS Office. They have no blood in these projects and don't really care whether they succeed or not, which coincidentally means they probably won't.
Google is getting absolutely clobbered in every category other than search advertising dollars. Their products reek of ambivalence and neglect, and I'm surprised anyone expected GAE to be a good platform.
Except for the largest business on the Internet... well, and search, video delivery, mobile phones, email, mapping, news, RSS, web analytics, and web browsers. Getting clobbered in almost everything else, though.
Android is a junky iOS-knockoff operating system they pawn off for free because it can't be sold. This company can't produce a single product worth paying for other than search ads, and even that is going down the tube. Sure, maybe their shareholders have been fooled, but the paying consumer hasn't. Whether they admit it or not, Google is in severe trouble if search advertising decreases even a few percent.
Everyone seems to have a crush on Google, but they are going to get fucked pretty badly long term unless they find some alternative, major, multi-billion dollar sources of revenue of which they've found zero so far. Self-driving cars maybe?
No, Google makes 97% of their revenue from advertising, not just search advertising. 31% of the revenue comes from "Google Network web sites" (AdSense).
> Android is a junky iOS-knockoff operating system they pawn off for free because it can't be sold.
It's the most popular smartphone OS... Is Linux free as well because it can't be sold? Your blind hatred of Google is bizarre.
Would it be if they charged for it? Android has massive issues across development, the market, the user experience and poor vendor practices that are routinely ignored by the same people that jump on every mistake Apple makes.
Don't get me wrong, Android has a lot of promise, but it's popular because it's on so many phones. That's not exactly a "winner" if it takes dozens of phones to compete against one phone. The same goes for tablets.
The more Android devices you make, the more iOS devices stand out among them.
You're not contributing to this conversation at all, you're just spewing opinionated venom with no support for your claims.
The SSL problem is a limitation in some browsers (no SNI support) that causes the type of certificates GAE would need for CNAME-based, rather than IP-based, routing to display huge warnings.
Man, that's some nasty code. Horrible. Don't open-source that kind of "code"... it's bad, even for a 5-minute project, it's really bad.
Open-source software is not a dumpster for your bad code.
Global variables galore, tabs instead of spaces, no documentation, comparisons against empty not just once but multiple times, Pokemon-style exception handling (you gotta catch 'em all!), 'is not None' (lol), type checking instead of duck typing... just to name a few.
14 people forked it already; I hope those are GitHub bots.
Also note that checking for "x is not None" is the recommended, idiomatic Python style (see http://www.python.org/dev/peps/pep-0008/ for reference).
More importantly, the script works, and was apparently used by quite a few people. Rewriting something that is already known to work, just because you don't like its style, is generally a bad idea.