Update on Our Planned Move from Azure to Google Cloud Platform (gitlab.com)
103 points by octosphere 6 months ago | 91 comments



Just as a sidenote: Gitlab may no longer be accessible from Crimea, Cuba, Iran, North Korea, Sudan, and Syria. From the article:

    NOTE to users in Crimea, Cuba, Iran, North Korea, Sudan, and Syria
    GitLab.com may not be accessible after the migration to Google.
    Google has informed us that there are legal restrictions that are
    imposed for those countries. See this U.S. Department of the
    Treasury link for more details. At this time, we can only
    recommend that you download your code or export relevant
    projects as a backup.


I'm more curious as to why their existing Microsoft hosting doesn't abide by the sanctions.


This is very sad. We are using GitLab.com because the terms of use say:

21. Governing law [1]

This Agreement shall be governed by and interpreted in accordance with the laws of the Netherlands.

Is there a code hosting service in Europe, or anywhere else in the world, that Cubans can use?

1. https://about.gitlab.com/terms/#gitlab_com


I believe that GCP is accessible from all of those places. They simply don't guarantee it will remain available, or have any kind of SLA.


Do we know if there are people using gitlab.com in North Korea?


Sometimes open source projects tend to link to their git repos.

Hypothetically something to bypass censorship might just point you to a README on a repo for installation details.


Right, I get that it can be useful; I was just interested in whether there were any indications of GitLab being used in NK. Considering Red Star OS (the North Korean Linux distro), it wouldn't be unimaginable if there were a GitLab fork or a few gitlab.com repos used by NK.


Why Crimea though? If Russia annexed it, why not block entire Russia?


Such a ban is pointless unless you also block every business that does business with Russia, and that is not possible. Eastern European countries, ex-Soviet satellites, etc. would have to pick sides, and forcing that kind of final decision isn't in our best interests or theirs.

If you only block doing business with Russia directly, middlemen in other countries will simply act as go-betweens, leaving you with a politically inflammatory action (blocking a world power) that isn't actually effective beyond being offensive.

Blocking Crimea is a symbolic but also mostly ineffective practice done to make a statement. Making that statement about the entire country wouldn't be prudent.


Restricting US companies from doing any business with Russia hurts the US economy too much.


Based on my napkin math using 2017 data, Russia makes up 0.45% of US exports, and 0.43% of imports. So I think there must be more to it than:

> Restricting US companies from doing any business with Russia hurts the US economy too much.

sources:

https://www.census.gov/foreign-trade/balance/c4621.html

https://www.statista.com/statistics/218255/total-value-of-us...
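The napkin math above is easy to reproduce. A quick sketch (the dollar figures below are rough 2017 goods-trade approximations for illustration; see the cited sources for exact numbers):

```python
# Rough 2017 US goods-trade figures, in billions of USD (approximate,
# illustrative values -- see the Census and Statista links above).
us_exports_to_russia = 7.0     # US goods exports to Russia
us_total_exports = 1_550.0     # total US goods exports

export_share = us_exports_to_russia / us_total_exports * 100
print(f"Russia's share of US exports: {export_share:.2f}%")  # ~0.45%
```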


No it doesn’t. Non-energy related trade is basically nil.


Sanctions on specific areas of conflict are neither unique nor new, and they are not limited to online services: you can't sell anything to Crimea, whether it's physical or not.


I would assume anyone who uses Gitlab in one of these countries has already been using proxies for a long time, and that this is not going to pose the slightest problem?


Could they refer customers to proxy services on neutral soil?

If not, can/should someone post here?


If they could access Gitlab when on Azure, this is a very shitty regression.


Kinda sad that GitLab doesn't point to VPN providers that those users could use to route around this legal issue.


That opens a can of worms from a legal perspective, and they are a US business.

EDIT: It's one thing to accidentally be violating an embargo, it's another thing entirely to suggest ways to bypass it on your company site.


GitLab doing so would arguably be a deliberate violation of sanctions. While GitLab is not responsible if users deceive them about origin by using a VPN, they absolutely are if they choose a compliance mechanism that can be tricked by a VPN and then point people from sanctioned countries to a VPN provider that will trick their compliance mechanism.

The law can be stupid, but it's rarely as stupid as the people suggesting simple hacks around it think it is.


Why is it that Microsoft are ok with serving traffic from countries under embargo but Google are not? As far as I know Amazon don't follow the embargo like this with AWS traffic either.


This is a very very weird world of encryption, ITAR, licenses, etc.

The truth is there could be a lot of reasons.

Another weird example: code.google.com was not available in 5 countries due to US state department controls. Github was.

Github was basically ignoring explicit guidance (this is not a statement of intentionality. I'm pretty sure when they started they didn't know and it became a pain to be compliant).

Google went and tried to get licenses. (We eventually succeeded for most of them. At the same time, everyone ganged up together to try to convince State/etc. to loosen up.)

In the meantime, people complained that github was available but code.google.com was not, when the reason was "one was following the law" :)


This might have something to do with how data on GCP will be encrypted at rest. US law treats encryption effectively as a weapon and restricts where products that include encryption can be shared.


Do you have any source for this?


Aren't many of the Azure data centers actually contracted out to other independent companies in those respective countries?


That's to deal with local restrictions, not to deal with US trade embargoes.


I believe it's just China and Germany.


I guess Microsoft does not have the same level of ties with the US military unlike Google

https://futurism.com/google-jk-project-maven-work-military-s...

I am really interested in the real reason too.


I highly doubt they were.


Amazon does not actually operate any data centres in China. They contract it out, or license their brand name, I am not sure exactly which. I believe it's to avoid legal squiggliness between Chinese law and laws elsewhere.

This causes a lot of annoyances -- lots of features are not available in Chinese "AWS" regions, or they require special treatment. When I've gotten reports about slow, incomplete or stalled downloads from CloudFront it is almost always someone who's inside the Great Firewall.

I guess GCP decided not to bother with that approach for now.


China isn't under any embargo that would apply to AWS. Amazon do this to comply with China, not the US.


I didn't say they were under embargo.

If they were, AWS would not be present in any form.


I don't know what part of my comment you were trying to address then.


On 2nd reading, it doesn't. Overactive pattern matching.


Get hosting in Hong Kong, ideally with a company connected to China Telecom, China Unicom, PCCW, or Pacnet.

Use GeoIP to serve your consumers in mainland from HK.
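A minimal sketch of that GeoIP routing idea (the hostnames are made up; in a real setup the country code would come from a GeoIP lookup such as MaxMind's geoip2 database):

```python
# Route mainland-China visitors to a Hong Kong origin, everyone else to
# the default origin. The country code would come from a GeoIP lookup,
# e.g. geoip2.database.Reader("GeoLite2-Country.mmdb").country(ip).
HK_ORIGIN = "hk.example.com"          # hypothetical HK-hosted origin
DEFAULT_ORIGIN = "global.example.com"  # hypothetical default origin

def pick_origin(country_code: str) -> str:
    """Return the origin hostname to serve a client from."""
    return HK_ORIGIN if country_code == "CN" else DEFAULT_ORIGIN

print(pick_origin("CN"))  # hk.example.com
print(pick_origin("US"))  # global.example.com
```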


I'd say it's a positioning move by Gitlab to be a better acquisition target for Google. They might have hoped for Microsoft buying them, but since Microsoft beat Google in getting a hold of Github, they now are realigning. Makes sense.


This is the feeling I get as well. Tighter integration with Kubernetes gets more people interested in using GCP and GKE.


For sure this is a political move to get closer to Google and antagonize GitHub, which has gone to MS.

Weird for me as I love gitlab and azure equally.


This has been planned since before the GitHub acquisition.


And this is sad either way since people moved to GitLab likely specifically to seek a still-independent code hosting site. I wish more businesses didn't seem like they were built to be acquired.

I hope you're wrong. :|


Kinda neat how careful they were to not say Azure performance was terrible.


Was it? Are there any numbers?


Assuming a buy, does GitLab become part of the GCP offering in the near future? :D


GitLab is VC-backed, so they have to be looking for an exit/liquidity event; it's inescapable. Google, Atlassian (rolling it into Bitbucket), or Oracle has to be one of them.


I honestly don't know: what is VC-backed, and how does that turn its destiny into an exit?


Venture capital is based on distinct funds which are raised one at a time. Each pool of money is distinct from the others that a VC might control. They need to report on them individually and they need to pay out of them individually.

The key is that each fund has a fixed lifetime, often something like 7-10 years. At the end of the lifetime the final value of the fund is returned to the investors.

It's hard to return cash you don't have -- and worse to return a reduction in value to investors (since you'll find it harder to raise next time). So as a practical necessity, at some point, there needs to be a big liquidity event for one or more investments made out of the fund. That could be an IPO or a purchase by another company.

Because of the highly speculative nature of VC, almost all of the companies supported out of a fund will fail. These will essentially be ignored. Some few of them will be bought at a price that covers their cost. A very few will do well enough that they are genuine prospects for a big liquidity event.

Those companies will be under the most pressure to cash out.

Gitlab has taken several rounds of funding now, some very large. These do not appear to have been made on the basis of current assets or current earnings. It seems reasonable to conclude that VCs predict a large exit and wish to ensure that Gitlab will make it to what is, for them, the finish line.

Disclaimer: I have never worked in finance and my views are cobbled together from books and blogposts.


They have taken money from venture capitalists who will expect a return on their investment via a sale. That is how the entire venture capital system works. This is all a matter of public record tho’ it makes uncomfortable reading for some. Those who fled Github to Gitlab when Microsoft bought the former might now find a critical piece of their workflow owned by Oracle!


You can run a Gitlab installation locally, unlike Github.

I wonder if setting up Gitlab on premises is already a (small) business.


> They have taken money from venture capitalists who will expect a return on their investment via a sale.

Or, you know, an IPO. Which is precisely the exit they claim to want:

  We want to IPO in 2020, specifically on Wednesday November 18.
https://about.gitlab.com/strategy/


Is there any word on how big their losses are, assuming they don't make a profit? GitHub wasn't making a profit anymore either, but they also don't provide as much for free as GitLab (not that GitLab is doing it purely to be nice; it's a key differentiator for them). I'd imagine GitLab's financials are weaker than GitHub's. An IPO would probably be a much weaker play than an acquisition. But I'm just guessing.


That would be a pretty awesome development. But Google buying a Rails platform would be kind of a surprise move.


I'd bet "being on the Rails stack" would carry little weight if they were deciding to acquire them.


Software stack is actually a pretty big decider.

PHP was pretty much a dealbreaker for a Google acquisition years ago. It pretty much means Google might be happy to buy shares in the company as an investment, but won't buy the whole company and plan to integrate it into their own offering.


Github was a rails platform and google was interested in buying them.


Why? I can't imagine a corporation the size of Google would decide to not buy a company, purely based on what language/framework it uses, assuming that buying the company otherwise makes business sense.


Google bought Crashlytics/Fabric from Twitter and that is (was?) a product based on the Rails stack


That would be a great way to force people to stop using gitlab.


It really seems like Google is becoming the new Micro$oft in people's minds, while Microsoft is actually branching into two different "entities", Microsoft Cloud and Ol'Microsoft, and the emergence of the cloud faction is changing (some) people's perception of the overall brand name...


Has anyone yet built a tool which will deploy an environment to all three major cloud providers and benchmark them against each other?


There are a few "cloud portability" companies, but the fundamental problem with this approach is that cloud providers don't offer the same APIs and pricing models. If all you want is virtual machines and block storage, you can do it, but the answer is that your site is going to be more expensive on any of the providers than if you built it for an architecture that fits your cloud.

If you design for the strengths of, say, Amazon Lambda, you can't use Google Cloud Functions / Azure Functions directly. Most likely you'll just run it on top of self-managed VMs on the other clouds, making Amazon the "clear" winner. If you design for GKE, you'll find that Amazon and Azure's Kubernetes support isn't as good, but maybe there was a better design for your site that wasn't based around GKE. And so forth.

And if what you actually want is a fixed amount of compute capacity and block storage and you'll just manage OSes yourself (either because that's actually better for your needs, or because you're operating something that predates containers/serverless/object storage/etc. and the development work to get it there is expensive), just get some dedicated servers in colo, it'll be cheaper than any cloud for that model.


Exactly my point for the most part.

The fact that you basically had to provide an ELI5 here on HN for this shows what a dismal job we, as an industry, have done of educating the larger world as to what the best practice of cloud selection should be.

We, on HN, for the most part, can spout off this tribal-domain knowledge because we live it most days.

But what would be great is to have these types of nuggets of information, empirically delineated, available without effort to those who would like to know.

Specifically, in a format where you don't need to have the lexicon to be able to formulate the proper question.

Most people, even IT people who may work for a tiny office that has very little need for anything "at scale", may not be able to ask the questions.

An analogy could be (?): "everyone knows if they need a bike, motorcycle, car, truck or bus - and everyone should be able to assess their basic computing needs as well. It should be fairly clear if you need a single machine in an office, a colo or a cloud - and figuring out a path to doing so and selecting a vendor should be a straightforward task."


Selecting a car is not straightforward at all.


Yes, it really is. Anything you buy new will get you around and be (relatively) safe. If you're buying used, you have to know enough to check the right things or you have to hire a mechanic, but assuming that, for anything a decade old or less it will be the same. If you're talking about value, acceleration, fuel efficiency, etc. that's a matter of taste and your statement is equivalent to, "selecting a wine is not straightforward at all."


I'm talking about a bike|motorcycle|car|bus comparison more than a Mercedes|BMW|Yugo type comparison...


Spot on! I like to compare it to building mobile apps. You can create an HTML based app that works on Android and iOS but it won’t take advantage of many of the native platform features. Similarly, VMs and block storage will run in any cloud but won’t be optimized for any vendor. It’s portability vs. optimized for the provider.


it won’t take advantage of many of the native platform features

Kinda like how, back in the day, you could use Java, and AWT just provided either the lowest-common-denominator widgets or rendered the widgets itself to be equally unsatisfactory everywhere. Those who forget history...


So, I would actually say that you can do it fairly easily with open source software these days.

If you need apps to be hosted, something like Flynn can host the apps.

If you need Lambda, you can rack your functions natively into OpenFaaS.

The issue is that these systems aren't really 'multi-cloud capable'.

I really tried to put together a 6 node Flynn cluster with 2 in AWS, 2 in GCP and 2 in Azure but the response time in cross-talk was terrible!


You certainly can implement it - you just won't implement it in a way that's either price-competitive or as easy to operate as the native approach. (Or put another way, there's a reason AWS customers use Lambda instead of OpenFaaS on EC2.) If your goal is to make it work, sure. If your goal is to have a meaningful comparison of price, performance, reliability, and human ops overhead, you won't get that with DIY on top of cloud VMs.

The DIY approach is fantastic when you're interested in operating your own cluster on your own hardware, though - if you're mostly in a place where colo / datacenters makes sense to you, and you need a bit of something like Lambda, don't do a cloud migration just to get there.


There is often way too much customization needed for each platform. Terraform is a tool that can help you provision on multiple platforms with a common provisioner, but because they're all so different, you typically end up writing an entirely different config for each platform anyway.
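To make the "different config per platform" point concrete, here is a toy Python sketch: even something as basic as "a small VM" maps to different provider-specific names, which is exactly the divergence a per-platform Terraform config ends up encoding. (The instance type names are real, but the mapping is illustrative.)

```python
# The same abstract request ("small VM") needs provider-specific values;
# Terraform can drive all three providers, but each needs its own config.
INSTANCE_TYPES = {
    "aws":   {"small": "t3.small",     "medium": "t3.medium"},
    "gcp":   {"small": "e2-small",     "medium": "e2-medium"},
    "azure": {"small": "Standard_B1s", "medium": "Standard_B2s"},
}

def machine_type(provider: str, size: str) -> str:
    """Translate an abstract size into a provider-specific instance type."""
    return INSTANCE_TYPES[provider][size]

print(machine_type("gcp", "medium"))   # e2-medium
print(machine_type("azure", "small"))  # Standard_B1s
```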


http://googlecloudplatform.github.io/PerfKitBenchmarker/ can run a ton of different benchmarks on a lot of cloud providers.


I think there's a bunch of tools that simplify deploying to multiple cloud hosts. But I don't know about benchmarking support, sorry.


Seems silly to move just because MS bought github. Are there other compelling reasons?


They had started this move months before Microsoft purchased GitHub. If you read the linked article, one of the biggest reasons is that with GCP's implementation of Kubernetes and better integration with their CDN, they'll be getting a significant performance boost. It's taken this long because they've had to make infrastructure changes, along with some code changes, to better support the deployment there.


What meaningful CDN integrations are in Google Cloud?



not saying it's the reason but their funding comes from Google Ventures

http://uk.businessinsider.com/gitlab-raises-20-million-from-...


Yes, I also read they would do this to prepare for a Google buy.


That makes little sense for Google. The corporate cultures are very different, GitLab isn't a major money maker and it does nothing for them internally since they use a mono-repo. Cloud platforms can change after acquisitions. If Google wanted them they would have already bought.


> The corporate cultures are very different

Generally, companies are not bought for culture. Folks say a lot about their most valuable resource being their people, but typically a purchaser wants to buy assets. In software that will be less about cash or inventory or plants and more about customer contracts, partnerships, brands, trademarks, patents, copyrights, existing sales organisations and so on.

> GitLab isn't a major money maker ... If Google wanted them they would have already bought.

They might buy simply to have a hedge against Github. Gitlab comes with a lot of independent brand awareness that Google has not, so far, been able to gain against Github in this area.

The cloud providers are involved in a no-prisoners effort to lock in customers while the market emerges for the first time. This is golden age of rail stuff, there are massive path dependencies that will be very difficult to unwind.

Each will grow aggressively on every edge of their current surfaces, whether by implementation, partnership or purchase. Or all three, in varying orders, as they think works best. They will rapidly copy each other to try and prevent competitors gaining an unassailable lead in some segment.

Disclosure: I work for Pivotal. We have partnerships with Google and Microsoft.


Gitlab is more than just git hosting. It provides many pieces of the development workflow (like CI, staging deployments, etc). Moreover, Firebase wasn't much of a money maker either, and took a smaller amount of funding than Gitlab.


It would let GCP compete with some of the code hosting services Amazon offers.


Free Gitlab membership with a paid GCP account sounds attractive.


> it does nothing for them internally since they use a mono-repo

Completely irrelevant; Google does not version-control their projects substantially with Git. They used Perforce for a long time and now I understand they use some internal, proprietary VCS.


This is the kind of thing that happens after an acquisition.


A lot of companies have been migrating to GCP for pragmatic "it's cheaper" reasons. GitLab in particular had a big booth at Google Next.


When they were talking about moving off cloud altogether, they mentioned that the reason they were on Azure in the first place was that they were given a pile of free credits.

So I suppose once those ran out it was a good idea to choose the best provider from first principles, rather than letting the inertia marketing succeed.


From my experience, Google cloud is much easier to use than Azure.


Sounds like a rather good reason to me.

As a matter of fact, they should avoid any kind of cloud vendor lock-in and spread the deployment across the big 3.


kubernetes


AKS is solid, but agree that GKE is the better experience.


It was before AKS was released. ACS was Azure's Kubernetes entrant and it was hot garbage. AKS is a solid system built by Brendan Burns and his team at Microsoft.


I never got around to trying ACS and at this point have no reason to.

My experience with AKS is mostly with the Azure REST API (building security and compliance tools around AKS). Unfortunately AKS isn't within scope of MSFTs BAA yet, so my focus has mostly been on other services like Cosmos and App Service.



