Microsoft and Red Hat partner (microsoft.com)
301 points by kenrick95 on Nov 4, 2015 | 167 comments



I'm not fooling myself into seeing this as anything but a "you scratch my back, I'll scratch yours" setup. RedHat is getting a ton out of this, and so is Microsoft.

Who's the main target for Azure? Enterprise companies who trust Microsoft implicitly. When an exec comes to the head of IT and says "we need to be on the cloud! I read about it!", Azure eases the transition by letting you go to a vendor you've already been using for a dozen years.

RedHat's core audience is enterprise as well. RHEL is the de facto standard for that level of infrastructure because of the support contracts you can get, versus distributions that are equally good but lack them.

So, they're helping each other out and that's good in my humble opinion.

Microsoft's new direction under their new CEO is one surprise after another. I've only tinkered with Azure so far, but it makes me want to pay more attention to MS than I would have a couple of years ago.


> Microsoft's new direction under their new CEO is one surprise after another.

I don't know about that, it all seems fairly logical. Don't forget that Microsoft is primarily a "business business" rather than a software business (ie. optimizing for business longevity above all else).

Microsoft appears to have practically perfected the game of "maintaining vendor lock-in in an otherwise open ecosystem". That game requires "giving in" at times, when it is no longer viable (in the long term) to compete with other options.

A good example of this is the open-sourcing of .NET. I don't believe for a second that this was a change of heart by a developer or even a team - it is far more likely that Microsoft is recognizing the increasing shift away from Windows and .NET, and towards more 'open' platforms (whether Python, Node.js, Ruby, Golang, or whatever else).

It is in their best interest to make .NET open-source, as it allows them to maintain their foothold in the application development community - it still has direct integration with the rest of their products (and thus incentivizes picking MS as a vendor), but can now compete on openness.

You can see something similar for Windows 10, and it being given away for free. Both OS X and Linux are increasingly eating away at Windows' marketshare. By offering it for 'free' to existing Windows users (ie. nearly everybody), they can attempt to win users back, as it is now offered at the same price (in the eyes of the consumer).

Throughout the existence of Microsoft, they have consistently pushed the boundary of vendor lock-in and marketshare, trying to keep it as closed as possible but as open as necessary. The more recent decisions from Microsoft are not surprising to me at all - they are simply the result of a rapidly changing computing landscape. Microsoft hasn't changed, their environment has.


> I don't know about that, it all seems fairly logical.

For people dealing with Microsoft for many years... this is the surprising part.


The open-sourcing of the .NET CLR and compilers was done for more than just competing with other open-source languages. Didn't Microsoft offer to buy out Unity, which was based on Mono, among other things?

Microsoft open-sources their languages and then buys out the company that does the best job of making them cross-platform with as many platforms as possible.

When Microsoft can't do something, they usually buy out a company that can. Porting Visual Studio to other platforms is near impossible for them, so they open-source the CLR and compilers and see who can do a better job of porting them.

Microsoft makes a lot of money selling Workstation and Server versions of Windows to businesses, as well as Visual Studio licenses and BackOffice products (SQL Server, Exchange Server, ISA Server, SharePoint, etc.) for Windows servers to build custom apps on. Not to mention Office licenses and other business software.

The new CEO got Microsoft into the cloud and cloud services, so it only makes sense to strike a deal with Red Hat on Azure to sell to more businesses that want a GNU/Linux solution with paid support.


Azure has been mostly Linux applications for a while now, and whenever enterprise wants a contract they almost exclusively think Red Hat, so it makes sense for both of them to team up. I also don't see any fallout in the community because of this.

With Ubuntu ruling cloud instances elsewhere, this also makes sense for Red Hat.


Ubuntu rules cloud instances elsewhere? I almost always see the choices as CentOS or RedHat. But that might be my myopic view, because I'm always just looking for those right off the bat.


Ubuntu is usually the first to be offered by most cloud services (especially on free tiers), because it's a good compromise between a liberal license, commercial support and light requirements. This gave it good momentum in some quarters, although I agree that it doesn't really look like "domination".

It would be cool to get numbers across all providers, in terms of deployed instances of this or that OS. There's a good story to write, there.


Ubuntu was made the default instance on EC2 (even though EC2 is built on RHEL/CentOS/XenServer).

That's where Canonical began to gain traction regarding installs, yet they still can't find a way to get users into support contracts en masse like Red Hat has been able to.


> Ubuntu was made the default instance on EC2

Citation needed. If Amazon Linux is still the default, you're wrong. It is based on CentOS, uses RPMs and yum.


I thought that was common knowledge? Amazon Linux is now the default instance, but that was not always the case. Ubuntu enjoyed a huge uptick in "cloud" usage because of EC2.


Amazon Linux is not based on Ubuntu.

(And I edited my comment since the wording was unclear... sorry if that caused confusion)


I think they mean Ubuntu was the default before Amazon Linux was even a thing. As I recall, that's correct, but I don't know where to find a citation on that. Amazon Linux only rolled out to production in 11/2011, so less than half of EC2's life: https://en.wikipedia.org/wiki/Amazon_Machine_Image#Amazon_Li....


Ah, you might be right. I can't find anything in a quick search of archive.org and old press releases, except that Canonical did offer an official Ubuntu AMI pretty far back in the day. Still can't find out what the default EC2 Linux was based on, though.



It depends on what you are going to use it for.

Ubuntu has gotten some solid traction in the "web" end of things, while RH is more "infrastructure".


CentOS / RHEL are non-starters for cutting-edge work in machine learning due to their ancient mathematics and scientific libraries.


Almost everything else is a non-starter in commercial settings because of their lack of credible support contracts.


Sorry, I don't see how. RHEL 7 is quite modern; in any case, you can always get bleeding-edge versions of almost anything with Software Collections [1], of which Red Hat has a blessed set of packages named the Developer Toolset [2].

[1] https://www.softwarecollections.org/en/

[2] http://developerblog.redhat.com/tag/developer-toolset/
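
For a concrete feel, here's a minimal sketch of pulling a newer toolchain from a collection; the collection and package names below are examples and depend on which release is current:

    # assumes the Software Collections / Developer Toolset repos are enabled;
    # "devtoolset-3" is an example collection name
    sudo yum install devtoolset-3-gcc devtoolset-3-gcc-c++
    scl enable devtoolset-3 bash   # shell with the newer toolchain first in PATH
    gcc --version                  # now reports the devtoolset GCC, not the system one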


Msft to buy Red Hat?


Azure is the OS. Red Hat's stuff is the application.


Yes.

And different Linux flavors were already on Azure as "applications" prior to this deal.

But now RedHat is getting the official stamp.

If Azure the OS only had Windows the App, that would be a great failing. MS is smart to position Azure as more than just a promoter of other MS products. Expanding it as an ecosystem for various OSes is a Good Thing.


Yes, besides any sufficiently lucrative enterprise customer is bound to have a few Lunix boxes for $ENTERPRISE_APP they can't live without.

I say this as someone who works for an ISV that is single-handedly responsible for the only Linux servers most of our customers have.


>RHEL is the de facto standard for that level of infrastructure because of the support contracts you can get, versus distributions that are equally good but lack them.

This was the biz model for LinuxCare.... but they failed.


Yet it seems to be working well for RedHat.


Yup - it's good for both - which is what is fantastic about Microsoft's new direction.


> I'm not fooling myself into seeing this as anything but a "you scratch my back, I'll scratch yours" setup. RedHat is getting a ton out of this, and so is Microsoft.

That's the point of a partnership.


Almost like it was planned.


It's not a trust thing. Microsoft is wheeling and dealing with Azure. So you won't get a deal on O365 or Dynamics anymore, but you can buy Azure credits at a discount.

They are using Azure as a loss leader to get you hooked, then the price goes up. This new SaaS world is kind of like buying cars.


Given the pricing compared to AWS and others, I don't think they're losing money on it... It was my first choice when I stopped self-hosting a Windows server for some personal projects/sites. I'm on about the smallest option, which runs me around $10-12 a month, and I'm using CloudFlare in front of the 5 websites on that box.

That said, nothing big, and planning to migrate things to linux solutions but time and motivation don't always align for personal stuff at the end of the work day.

They seem to be one of the better options if you need hosted Windows anything as part of your cloud strategy. And having played around with Azure's other services [1], it seems to work pretty well all around. I did find the default interfaces for Azure's storage a bit difficult to work with, so I created a wrapper with, imho, a nicer interface.

[1] https://www.npmjs.com/package/azure-storage-simple


Our Dynamics reseller was adamant that we run it in-house: "You could run it on Azure, but we don't recommend it. It's just not there yet."


I wonder if that is really true or:

1) The reseller doesn't know how it runs on Azure (we found we had this with vendors sticking to in-house SharePoint vs. SharePoint Online, mostly from a "we don't know how" angle)

or

2) Does the reseller also make money from in-house vs on Azure? Like, their hardware? Anything like that? Just curious.

I mean, he could also honestly think that Azure isn't there yet either. I could just be curmudgeonly and doubting the reseller.


Sometimes with heavily DB-bound apps, it's a performance/scalability issue. Doubly so if the app's DB relies on aggressive locking for most/all of its transactions. You can do it with clouds, but you often pay through the nose for adequate storage (or cheat with 512GB of RAM these days.)

I've seen folks spend 3-4x more money (almost all of the excess going to the SAN vendor) to get a VMware setup (because snapshots and VMotions) with the same reliable I/O a commodity Dell server with a 4-disk SAS RAID-10 and a decent RAID controller can deliver, not even talking SSDs here.


So much this.

We always say to use "the right tool for the right job", don't we? There are a lot of apps for which virtualized infrastructure is just not a good fit from a technical standpoint. When these end up in the cloud because suits decided OpEx is sweeter than CapEx, they suffer horribly; the customer ends up paying more (because prolonged I/O spikes in the cloud can be hugely expensive), the application runs slower than it did 5 years ago on commodity hardware, and nobody is happy.

Still, suits decided cloud it must be, so cloud it will be. Sometimes the IT sector is depressing.


I think I confused the point. When you say "Microsoft, I'm buying X more licenses for Dynamics, give me a discount." they reply, "How about no discount, but some cheap Azure instead?"


In the last 5 years we've experimented with Linux on top of hyper-v a few times.

The basic I/O, compile & runtime performance was significantly inferior to Xen & KVM (we didn't bench against VMware, as we're phasing it out due to cost); it wasn't worth the effort to even deploy apps for testing.

Therefore I don't see Linux on Hyper-V being a compelling option for the cost-conscious technical officer or lead engineer.


With all due respect: I don't buy it. That isn't how the technology works. Compilation in particular would never even hit the hypervisor layer regardless of which one you were utilising. It is CPU/memory bound.

The difference between hypervisors for IO is tiny. They're all slower than native, but vary very little between one another. That's why the manufacturers have all almost given up trying to use performance for comparison, and instead argue value adds/management/automation/cost.

If you really found one specific hypervisor slower than another (regardless of which two), I'd want to look at your exact test conditions since that has never been the case for me (everything else being equal). And even the manufacturers don't argue that point much.

There are plenty of reasons to e.g. pick AWS over Azure, but Hyper-V performance is not one of them. That's just FUD.


You're missing something very significant: Compilation jobs involve a lot of forking. Forking, memory accesses, and TLB differences across various VM architectures can have an enormous impact on performance.

For example, we found an order of magnitude performance difference between PV and HVM on EC2 for fork-heavy CI jobs. A difference between 3 and 30 minutes. This difference is trivially reproducible by measuring the time taken to complete a fork() syscall. As the memory size of the parent process grows, the difference becomes more pronounced.

Here's actual data I collected from c3.large instances, using a contrived Ruby one-liner to benchmark fork():

PV:

    [ec2-user@PV ~]$ ruby -e'$ref=[]; (24..29).each {|n| $ref << "X" * (2 ** n); start = Time.now.to_f; fork.nil? and exit!; puts Time.now.to_f - start; sleep 1}'
    0.00550532341003418
    0.011992692947387695
    0.02542281150817871
    0.05213499069213867
    0.10562920570373535
    0.21283364295959473

HVM:

    [ec2-user@HVM ~]$ ruby -e'$ref=[]; (24..29).each {|n| $ref << "X" * (2 ** n); start = Time.now.to_f; fork.nil? and exit!; puts Time.now.to_f - start; sleep 1}'
    0.0005323886871337891
    0.0007219314575195312
    0.0012335777282714844
    0.002183198928833008
    0.004210233688354492
    0.008102178573608398

As you can see, the difference is considerable. This is all on Amazon ec2, the only difference between these VMs is PV versus HVM.



That wasn't always the case; I remember times when PV could be faster on lots of forks. However, you can't compare PV/HVM vs. KVM vs. Hyper-V, since the difference between PV and HVM is caused by a totally different aspect than what separates HVM, KVM and Hyper-V. Also note Hyper-V got way better in Windows Server 2012.

Also note that fork() involves "lots of" I/O computation, which could also be a driver problem, and drivers have evolved in newer kernels too. And fork() can behave differently on hosts with ballooning and hosts without, especially on KVM.

Edit: as of today we've switched to HVM instances (however, we are on many, many micros, so that performance boost is likewise zero). On our in-house hardware we vary between KVM and VMware, though I dislike VMware more and more: vCenter is just unusable (Flash) and consumes a shitload of memory, so mostly we use VMware API calls, which makes vCenter somewhat unnecessary. We also sell our software as an appliance with a XenServer host; we've tested our software on all of these ends, and since we are mostly disk-I/O driven, performance barely changes between them. There are some aspects that could improve performance, but that depends more on how you've configured your virtual disk.


>That wasn't always the case; I remember times when PV could be faster on lots of forks

32-bit, or 64-bit before the virtualization hardware extensions (VT-x/d, EPT, AMD equivalents), are really the only cases where this should have been true. Each time you make a fork() syscall on 64-bit PV, you're making an extra context switch and a TLB flush that an HVM instance does not need.


I don't buy it either, and it is not my experience. My bad experience with Hyper-V was running desktop Linux, but that was expected because other factors like 3D/2D acceleration are not supported.

I would expect a full description of the benchmarks and setup as a follow-up.


I have a developer who can show a marked difference in performance between VMware Fusion (paid) and VirtualBox (free) running Windows 7 on OS X. Same Windows ISO used to install both, and both running on the same hardware and OS, with exact VM configs (as close as you can get between the two). I'm talking a night-and-day difference - VMware Fusion is usable, but VirtualBox is not.

After reading your comment, and admittedly not knowing much about hypervisors (yet), I am genuinely curious as to what else could be causing this drastic difference in performance, if not for the hypervisor itself. Any ideas?


I was more talking about actual hypervisors that run on hardware. Client hypervisors (those that run on an underlying OS) are an entirely different thing.

Real hypervisors utilise a lot of hardware features to accomplish jailing. Client hypervisors may or may not be doing so, depending on how they're implemented and/or how they're integrated into the underlying OS (e.g. specialised drivers, just simulating hardware features, etc).


That's not a very good example since both of those are desktop applications providing virtualization and are expected to play nice with other desktop apps. You can virtualize in a web browser too, but that doesn't make it a good high performance idea.

You should be looking at dedicated virtualization hosts to do an apples to apples comparison, like vmware on bare server metal vs hyperv or any of the free tools also on metal. Not on a cpu throttled desktop.


There's a big performance impact if you're running a graphical user interface instead of a console-only server. VMware put more effort into the UI than other vendors.

I would always separate benchmarks for servers and desktops.


I'd be interested in the same test with the latest VirtualBox 5 - you can use Hyper-V paravirtualization by switching a setting in the VM control panel.


It might be worth your time to run that same benchmark without any hypervisor to establish a baseline.


I'm getting approximately the same CPU & storage performance out of Azure's A0 machine w/ Debian 8 as from an AWS t2.micro instance with spinning-rust storage and Debian 8 (both are the cheapest offers available, and pricing is quite similar between them).

Network on Azure though is a totally wrecked thing:

- A0 instance is capped to a miserable 5 megabit/s uplink (downlink is fine);

- If you want TCP/UDP ports open, you need to open them one-by-one, going through a 2-page wizard with 8 fields in total to fill in for each port you are opening, and then waiting 15-20 seconds for each "change" to be applied. Good luck opening a 1024-port range for your SIP server;

- You can't open ICMP/GRE/IPSEC or any other custom protocol;

- There is a ton of really weirdly configured network gear between your VM and the Internet - packet drops, private IPs in traceroute, etc...

- On the plus side, you can enable so-called "direct server return", and you will get incoming packets with your public IP as the destination IP; a really nice feature sorely missing on AWS.


>A0 instance is capped to a miserable 5 megabit/s uplink (downlink is fine);

I thought it was 10Mb/s?

>If you want TCP/UDP ports open, you need to open them one-by-one, going through a 2-page wizard with 8 fields in total to fill in for each port you are opening, and then waiting 15-20 seconds for each "change" to be applied. Good luck opening a 1024-port range for your SIP server;

...or use the powershell cmdlets and do it in one line


> I thought it was 10Mb/s?

Alas... Also, in my experience, the throttling is really slow to react, i.e. you get a chunk of data out at 10-20Mb/s, then everything freezes for a few seconds until the average drops below 5Mbps, then another speed-up/slow-down cycle begins. The long-term average is exactly 5Mbps, as per their spec sheet [1]; see also [2].

[1] http://download.microsoft.com/download/4/1/1/411621F0-D0BF-4...

[2] https://www.oaklight.us/2014/06/azure-network-speed-quick-te...
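
For what it's worth, this burst/average behaviour is easy to observe with a longer iperf run against a server you control (the address below is just a placeholder):

    # 2-minute run with 10-second interval reports; per the description above,
    # expect 10-20Mbit/s bursts, then stalls, with a long-term average near 5Mbit/s
    iperf -c 203.0.113.10 -t 120 -i 10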


I don't see why private IPs in a traceroute are necessarily wrong...

Many large providers number their hops with private IP space and route public space across it, because they don't want to waste /30s of public space on each point-to-point link.

Yes, /31 is an option, but there were a lot of edge cases and network gear that didn't support it.


Yes, you are absolutely right that IPv4 addresses are scarce.

Which brings up a related question - what's up with none of AWS/Azure/GCE supporting IPv6?


Unfortunately, IPv6 in cloud deployments is still very much lacking. OpenStack is one example: IPv6 support has only recently started being built in.


Hi aexaey,

There are some things wrong in your comment; please let me address them:

I don't think A0s are capped at all; I don't know where you got that:

    user@A0VM:~$ speedtest-cli
    Retrieving speedtest.net configuration...
    Retrieving speedtest.net server list...
    Testing from Microsoft Corporation (40.113.XX.XX)...
    Selecting best server based on latency...
    Hosted by KsFiberNet (Wichita, KS) [45.26 km]: 139.703 ms
    Testing download speed........................................
    Download: 44.48 Mbit/s
    Testing upload speed..................................................
    Upload: 22.54 Mbit/s

I just had to run one single test to get these results. Multi-threading will probably pump this up (say, a multi-threaded iperf test between two VMs in two different regions, through their public IP addresses).

(Funnily enough this VM is in Dublin, hence the latency to Wichita).

You can get access to all your TCP/UDP ports by adding an instance-level IP: https://azure.microsoft.com/en-gb/documentation/articles/vir... (you can later protect it with NSGs --> https://azure.microsoft.com/en-us/documentation/articles/vir... ). What you've seen is when you create endpoints on the load balancer that point to the VMs behind it. Now in Azure Resource Manager, spinning up a VM from the new portal will actually get you an ILPIP and no load balancer by default.

About that statement about a ton of weirdly configured network gear: well, for starters you're doing network virtualization here :) But seriously now, the platform doesn't forward ICMP to/from the Internet, so I'm curious about those private IP addresses in a traceroute (and of course the packet drops). If you're talking about hops between you and the VIP (the load-balanced public IP in front of your VM), that's probably your provider, other providers, and finally Microsoft's network. As for the rest of the path, you've said that ICMP doesn't go through, so traceroute isn't going to work (tcptraceroute also needs ICMP to work :)).


ILPIP look promising. I'll try it, thanks!

Speed test though... hm... 22Mbit/s is not exactly unrestricted, and I'm not sure how long a burst speedtest-cli generates. In my experience A0's uplink is burstable (but unusably slow when recovering from a burst), with the long-term average being bang-on 5Mbps.

I do of course see exactly one of "my" private IPs in the traceroute, but I'm not talking about that. I meant there are Microsoft's private IPs (#12-#15) and dropped packets (#16 and on).

And of course, "platform doesn't forward ICMPs" is an annoyance in itself - I can't even ping my own VM.

    # target IP anonymized, but still have enough granularity to reproduce the traceroute
    traceroute to xxxxx.cloudapp.net (191.235.128.128), 30 hops max, 60 byte packets
     1  192.168.1.1 (192.168.1.1)  0.093 ms  0.117 ms  0.063 ms
     2  193.47.232.12 (193.47.232.12)  0.785 ms  0.785 ms  0.776 ms
     3  microsoft.mix-it.net (217.29.66.112)  1.219 ms  1.233 ms  1.225 ms
     4  xe-0-1-1-0.fra-96cbe-1a.ntwk.msn.net (207.46.42.12)  8.700 ms  8.902 ms  8.430 ms
     5  * * *
     6  * * *
     7  104.44.9.142 (104.44.9.142)  42.112 ms  39.194 ms  38.987 ms
     8  * * *
     9  be-6-0.ibr01.dub30.network.microsoft.com (104.44.4.142)  39.738 ms  38.266 ms  38.929 ms
    10  * * *
    11  ae2-0.db3-96c-3b.ntwk.msn.net (204.152.141.81)  36.986 ms  36.984 ms  36.925 ms
    12  25.149.64.247 (25.149.64.247)  36.951 ms  36.775 ms  37.152 ms
    13  10.10.132.81 (10.10.132.81)  36.917 ms  37.477 ms  37.287 ms
    14  10.60.31.77 (10.60.31.77)  37.317 ms  37.699 ms  37.899 ms
    15  10.60.31.69 (10.60.31.69)  37.653 ms  37.326 ms  37.441 ms
    16  * * *
    17  * * *
    18  * * *
    19  * * *
    20  * * *
    21  * * *
    22  * * *
    23  * * *
    24  * * *
    25  * * *
    26  * * *
    27  * * *
    28  * * *
    29  * * *
    30  * * *
P.S. #12 - lol!


> P.S. #12 - lol!

In view of today's news, this is not funny at all.

		netname:        UK-MOD-19850128
		descr:          UK Ministry of Defence
		country:        GB
		org:            ORG-DMoD1-RIPE
		admin-c:        MN1891-RIPE
		tech-c:         MN1891-RIPE
		status:         LEGACY
		mnt-by:         UK-MOD-MNT
		mnt-domains:    UK-MOD-MNT
		mnt-routes:     UK-MOD-MNT
		mnt-by:         RIPE-NCC-LEGACY-MNT
		created:        2005-08-23T10:27:23Z
		last-modified:  2015-07-24T14:31:16Z
		source:         RIPE # Filtered


Based on iperf tests, it appears the uplink on an A0 is being capped around 5Mbps. Since I work with Azure Networking, I will find more details on this internally.

I generally work with the Azure CLI (https://github.com/azure/azure-xplat-cli) and ARM templates (https://github.com/Azure/azure-quickstart-templates), which help with simpler configuration and consistent results across deployments.

Though we are aggressively working on adding more features and improving documentation, evidently there are gaps in certain areas. Please feel free to reach out to us through Twitter (@AzureSupport) or Stack Overflow (azure tag) for anything we can help with in your deployments on Azure.


Yeah, the ICMP stuff could be a bit annoying :)

My comment about the traceroute was because you previously said that you've found those weird IP addresses between the VM and Internet. That's not how I actually see it, but I get your point, too.

The ones you identify as packet loss are just the hops where you didn't get an ICMP reply back. I don't know anything about your background, but in my experience with networking I long ago stopped considering traceroute remotely useful beyond the first hop, especially in environments outside my control (i.e. the Internet). There are things out there like ICMP throttling, and the fact that I could send you the "TTL Expired" ICMP identifying myself with any IP address. Remember the "Star Wars traceroute"? http://www.theregister.co.uk/2013/02/15/star_wars_traceroute... (doesn't seem to work for me now).

(About #12, WTF? Weird!)


Actually, on the public Internet traceroutes are reasonably reliable. It's corporate networks where you would expect weird stuff like that.

> (About #12, WTF? Weird!)

There is a fairly boring explanation for that. Similar to public IPv4 exhaustion, RFC1918-style private IPv4 addresses can easily be exhausted in a big private network as well. As you may have noticed, MS has actually run out of RFC1918 and uses the RFC6598 (a.k.a. 100.64.0.0/10) network in Azure for their "classic" segment. 100.64.0.0/10 was not supposed to be used as RFC1918-style general-purpose private IP space, but rather as the local side of a CGN pool. But then again, if you squint a bit, well... it's still private, so it will never be advertised into the Internet, so it's fair game to use it if you run out of RFC1918, right?

Some people choose to squint even further and declare anything that is not advertised into public BGP fair game. I've seen a big ISP "borrowing" the 44.0.0.0/8 (amateur radio AX.25) subnet to deploy the video segment of their huge "triple-play" network. 33/8 (US DoD) is also popular as "borrowed" private space, and 1/8 was fairly popular until it actually became real public address space handed to APNIC (oops!). Now we have an example of using 25/8 like that as well.
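
For reference, the address ranges in play here:

    RFC1918 (private):  10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
    RFC6598 (CGN):      100.64.0.0/10
    "Borrowed" space:   25.0.0.0/8 (UK MoD), 33.0.0.0/8 (US DoD), 44.0.0.0/8 (amateur radio)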


This is why it is unfortunate that more companies are not using IPv6 or moving towards it.

Currently working at a large MSP, and all of our stuff is still IPv4-only. We happen to have a ton of extra IPv4 space, so we aren't as concerned, but there is no movement towards IPv6 at all.


Wouldn't this work for configuring lots of open ports: https://msdn.microsoft.com/library/azure/dn495300.aspx?

Never used the cmdlets for Azure and it's been a while since I dealt with it, but that sounds like a task you'd only do via the web interface if you're insane.


Wouldn't work for me, as the {Get,Add}-Azure* scripts are Windows/PowerShell-only. But then again, the existence of CLI tools means there's an API. So, with a bit of luck, we might one day see a portable CLI tool (maybe boto3 [1] ported to the Azure API... one can dream! :-)

Thanks for pointing those out!

[1] https://github.com/boto/boto3


There is a cross-platform CLI (based on Node.js), which runs on OS X and Linux as well as Windows - I don't specifically know if it has commands for opening ports though. There's information and links at https://azure.microsoft.com/en-us/documentation/articles/xpl... if you want to check it out.


Try their xplat cli [1], it's built on node/npm. I'm on Windows but still prefer this one over the PowerShell one.

[1] https://www.npmjs.com/package/azure-cli
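
Endpoint creation is scriptable with it, too. A rough sketch for the port-range complaint upthread, assuming classic (service-management) mode; the VM name and port range are placeholders, and exact option spellings vary between CLI versions:

    # create a contiguous range of endpoints, one per port
    for port in $(seq 5060 5160); do
        azure vm endpoint create my-sip-vm "$port" "$port"
    done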


But the game isn't the performance on a machine, or even a few machines.

The game is tools like Kubernetes and Apache Mesos. I have X thousand VMs spread throughout the world, with a few hundred per datacenter for geo-stability.

If a network link fails, no biggie. Still runs. If a rack dies, still no biggie. Redundancy makes it work. Earthquake/tsunami/natural disaster? Your stuff still runs, with a bit higher latency and slower throughput.


I find this interesting. I see many conflicting and biased studies comparing cloud/hypervisor performance.

There are a great number of variables that come into play when configuring each hypervisor. Also, given continual updates, a solid analysis from 6 months ago may be stale. Not to mention most of the studies come from the vendors, not from independent parties.

Question: does anyone know of an ongoing/reliable performance review of the major hypervisors?


Main problem with Hyper-V: getting into the admin console with anything but IE. Yes, really.


Vmconnect (1) is a local exe. The best experience for a console to your Hyper-V VM is remote desktop, but that requires a Windows guest and virtual networking. If you are troubleshooting networking or boot, or are using Linux guests, vmconnect is very easy to use (no network required) and works great.

I'm not sure of any way to get a console through IE. Can you explain?

(1) https://technet.microsoft.com/en-us/library/dn282274.aspx


This isn't really new, it's just Red Hat being late to the party. From Jun 6, 2012:

"The Linux services will go live on Azure at 4 a.m. EDT on Thursday. At that time, the Azure portal will offer a number of Linux distributions, including Suse Linux Enterprise Server 11 SP2, OpenSuse 12.01, CentOS 6.2 and Canonical Ubuntu 12.04. Azure users will be able to choose and deploy a Linux distribution from the Microsoft Windows Azure Image Gallery."

http://www.pcworld.com/article/257073/microsoft_to_run_linux...

Or you could always load your own distro of choice in Hyper-V.


Or rather Microsoft desperately clinging to Red Hat to make its cloud proposition look commercially legit. I mean, look at that list: the first two items are SuSE, a distribution that is hardly popular these days, then you have "knockoff RedHat" CentOS and "hobbyist's choice" Ubuntu. I'm surprised they left off Debian; I guess that's still a bit too hippy. In any case, not the stuff of dreams, from a commercial standpoint.

So you can spin it both ways, really.


Was simply pointing out the fact that Linux has been on Azure for years, because apparently some people don't know that. Indeed, according to Microsoft: "More than 20 percent of Azure virtual machines run on Linux."

http://news.microsoft.com/cloud/index.html

You're welcome to spin anything you like.


Yeah, but "Red Hat being late" is not a fact, it's placing the burden on Red Hat for getting their stuff running on Microsoft's systems for Microsoft's benefit. "Microsoft was late getting Red Hat on board" is the exact same fact with the opposite spin.

No spin would have been "both companies ironed out an agreement allowing their products to be commercially supported when working together, three years after Azure launched support for Linux systems". I know, not sexy.


Red Hat could have been one of the first Linuxes on Azure, if it had wanted. But back in 2013, it said: "Red Hat CEO: We don't need Microsoft to succeed" http://www.infoworld.com/article/2614357/linux/red-hat-ceo--...

I don't see why you're quibbling about "late". It's a fact that it's more than three years later than a bunch of other versions. But hey, you can have your own spin.


Note how RedHat didn't say that: if you read the actual article (despite being from InfoWorld):

> InfoWorld: Microsoft has a close business relationship with Suse Linux. That seems to be Microsoft's Linux of choice, and the company doesn't seem interested in having the same kind of partnership with Red Hat. Is that a problem for Red Hat?

> Jim Whitehurst: We'd be happy to work on interoperability with Microsoft or anyone else.

So yeah, Microsoft made a specific choice to partner with SuSE. RH said "whatever". Three years later, an agreement was finally struck between the two.

Did RH "come around"? Or did MS finally recognize SuSE is a losing proposition? You don't know and I don't know, but implying one side took action without having any proof for it is, well, spin.


I know Microsoft pretty well, and it would have wanted to support as many versions as possible, within the available time-frame. It obviously didn't just support SuSE, so your implied either/or is just yet more of your spin.

I don't know Red Hat that well (it's a few years since we've talked), but it has a strong focus on its own cloud business (1). It might have seen Azure as a rival to Red Hat cloud services, but that's just my speculation.

(1) https://www.redhat.com/en/technologies/cloud-computing

BELOW

> I have no horse in this, no investment, nothing; I just don't like unsupported bias. Can you say the same?

If you really don't like unsupported bias, perhaps you shouldn't post comments that reveal so much of it ;-)

> Yes, it is "just your speculation".

You could also cut out the cheap tricks. I referred specifically to the comment on Red Hat's motives, not to anything else.


> I know Microsoft pretty well, and it would have wanted to support as many versions as possible

That's nice to know and I'm sure everyone always means well, but it doesn't change anything in factual terms. I don't know Red Hat but I'm sure they'd like to support as many cloud services as possible.

> It obviously didn't just support SuSE, so your implied either/or is just yet more of your spin.

Dude, honestly, I'm only reading what you linked, with InfoWorld saying MS had a preferential agreement with SuSE. I didn't link that, you did; if it doesn't agree with your view, why did you link it?

Your first list had SuSE (twice), CentOS and Ubuntu, and again I took it at face value, so I don't think I implied anything.

I have no horse in this, no investment, nothing; I just don't like unsupported bias. Can you say the same?

> It might have seen Azure as a rival to Red Hat cloud services

Sure, exactly like Microsoft might have seen Red Hat as a rival to Windows Server in the cloud.

> but that's just my speculation.

Yes, it is "just your speculation". That's what I said, and why I responded to your initial comment. Maybe because of insider knowledge you might have, you're interpreting facts in a somewhat biased way. That's fine, but you cannot assume everyone shares this particular view of the facts and spin it as an absolute truth.

IMHO we've said everything that needed to be said so we might as well close it here.


SUSE partnered with Microsoft, that's true, before SUSE was bought by Microfocus:

https://www.suse.com/company/press/2015/suse-is-now-part-of-...


For me, this piece was the most interesting:

"Collaboration on .NET for a new generation of application development capabilities, providing access to .NET technologies across Red Hat offerings, including OpenShift and Red Hat Enterprise Linux, which will be available within the next few weeks."

Further development of .NET as cross-platform, not just Windows-based? That could bode well for the stack.

I doubt I'll ever write .NET code again, but this seems like a sensible decision to me.


I think this is just ticking a box for Microsoft since I believe you can get RH on AWS. But let's be honest - how many people will perceive any value in stacking a Microsoft technology (.NET) on top of UNIX? Sure if you already have a .NET app, hosting it in UNIX may give you some benefit. But is anyone really going to write an app from scratch with this in mind? I'm skeptical.


With their release of CoreCLR [1], combined with the class libraries, I think there is a lot of reason to look at .NET on UNIX. The implementation includes the same JIT and GC that the .NET Framework uses; it just cuts out a bunch of bits that likely wouldn't be relevant to people developing server backends (WPF, AppDomains, etc.).

For server stuff, .NET is now close to the level of openness and compatibility that the OpenJDK enjoys. If you like C# or F# better than Java, there is plenty of reason to switch.

[1]: https://github.com/dotnet/coreclr


I think this is the same problem Google had with Motorola. Sure, Google was playing nice, but the OEMs were afraid: how long would Google keep playing nice before deciding to favor its own phone-making division? The same applies here: how long is MS going to play nice before they decide to favor their own platform? Needless to say, MS's past history isn't going to help much either.


There is some risk that they'll take it back to closed source, since it's MIT-licensed (with patent grant) and their CLA [1] gives them a broad copyright license. However, I think such a high-quality open-source implementation would be hard to put back in the bottle, especially since Mono is already integrating much of the code into their runtime [2]. On the other hand, external contributions to the Mono runtime are MIT-licensed, so that project could jump back to closed source if Xamarin so desires.

It's important to note that all of this applies to the OpenJDK too[3]. So you are at least no worse off if you make a leap from the JVM.

[1]: https://cla2.dotnetfoundation.org/cladoc/net-foundation-cont...

[2]: http://www.mono-project.com/docs/about-mono/dotnet-integrati...

[3]: http://www.oracle.com/technetwork/oca-405177.pdf

edit: word choice


You may be right, I don't know. I suppose the acid test is when YC companies start posting for front-end developers with C#, .NET and UNIX experience versus what's required today, i.e. either node.js, django, ruby, etc.


Why YC companies? There are lots of other startups out there (some of which even using .NET).


Ah, great question. I sort of view the YC guys as cutting edge, i.e. willing and able to adopt new and sometimes still-changing technology. In contrast, a Fortune 500 shop is, in my humble opinion, unlikely to adopt a wholesale change to a new technology.


No, the acid test is if the big companies that have invested heavily into java start to adopt .net in meaningful ways.


Well, if they've got large existing Java codebases, I suspect the chance of the enterprise crowd investing in any new language is roughly zero.


The default RHEL on AWS is loaded with a bunch of crap - supposedly Red Hat officially supplies the distro, but one really has to wonder if a single week of engineering time has ever gone into it. I mean, seriously, WiFi drivers?! [1]

[1] https://aws.amazon.com/marketplace/review/product-reviews/re...
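
This is easy enough to check on any RHEL-family image; everything below is stock rpm/grep, nothing AWS-specific:

    # list installed packages that look like wireless/firmware cruft
    rpm -qa | grep -i -e firmware -e wireless -e wpa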


RHAT is a great business in terms of its stock price but I am not the fan I used to be.

I started w/ Slackware and moved to Red Hat and then Fedora but eventually RH and Fedora became "the Linux that couldn't" when I was installing on either an old or new machine. On the same machine Ubuntu would install just fine.

I run a Windows desktop now and mostly use Ubuntu Linux in AWS, sometimes inside a VM on my local machine. I used to run the RH-derived AWS Linux, but eventually I learned how to do things easily in Ubuntu, such as installing and maintaining an Oracle JVM, that are hard in Red Hat. Definitely an enthusiast could find a solution with RHEL, but having a server OS that "just works" is what you need if you are the guy who splits his time between devops and bizdev.


What would be the benefit of slowing the release process of RHEL on AWS by pruning it down to the bare essentials? Is redundant stuff like WiFi-drivers & firmware really taking that much space that it matters?


Maybe, maybe not. What that tells me, though, is that they didn't sit down and say: "Here's how we optimize this to work for the cloud". I can't say other distros do either, but it seems like something they should do.


I don't think they should. This way lies madness. You want e.g. the kernel & packages to be pruned down to only have the things needed for each cloud host?

That means that for N cloud hosts Red Hat would have to potentially build N different patches for any update or security fix.

Now you can't certify software to run on "RHEL 7" anymore, it has to be "RHEL 7, assuming it has XYZ, which some cloud hosts don't".

It can be done, but it's a huge hassle, it's much easier to just ship the same distro to everyone. Space is cheap.


> That means that for N cloud hosts Red Hat would have to potentially build N different patches for any update or security fix.

Why would they have to do that? AFAIK, patches apply to packages, not the entire distro. If $package_x isn't installed, it doesn't get the patch. If it is installed, it does get the patch.

And for me, the concern isn't space so much as running processes and unneeded files littered about.


If you're concerned about either of those, RHEL is not the distro for you. It has a bunch of running processes, some of them even written in Python! Talk about unoptimal. Similarly, it's not the smallest Linux distro by far.

I think this whole line of argument is just misplaced nostalgia for distro micro-optimization. It's not the mid-90s anymore. Space and CPU are cheap; you're not going to gain anything significant by trimming down your distro.

If those things were actually important you wouldn't be using Azure in the first place.


I would; .NET and the latest C# are awesome and superior to other frameworks, so yes, the fact that they can run on Linux is very attractive.


C# is a decent language if you like imperative programming. .NET MVC, though, is far from impressive. The routing is crap, Razor is designed to encourage you to dump C# straight into your views, and in general it feels like a "let's do Rails in C#", which I really don't see as the future of web development.


C# and .NET the framework are objectively 'good'.

ASP.Net MVC, is very subjective. It definitely is very 'weird' from the perspective of being on a static language platform but embracing 'magic' over standard type safety.

I will give you routing though. I don't really think it 'sucks', but I have led development teams using MVC for years... whenever a new developer who isn't familiar comes on board, the most significant 'learning curve' is the routing... I am not really a fan myself, but I understand what the value proposition is...


But because it is .NET, you have all sorts of libraries which bring you type safety if you want it. Things like T4MVC and Typelite combined with Entity Framework give me compile-time visibility of the impact of a change anywhere in my stack, from the DB downwards. If you're writing projects measured in person-years of effort, then these tools are vital; if you're writing something smaller, then the "magic" is quite nice.


It's not a problem of learning curve, it's that when you come from something like Jersey, it feels very limited.


Jersey is a framework for REST services, the equivalent would be MS Web API, not ASP.NET MVC.


That's really a weird way of thinking about it. As something like Dropwizard demonstrates, it's a very good base for a web framework. Take Spring's routing, if you prefer, even that is better.


Why is it weird? Do you expect Spring MVC to give you a full RESTful experience? It can give you a good REST experience, but that's not what it's for. It's the same with ASP.NET MVC: it's not expected to be a premier example of RESTful services; that is why Web API exists. And as someone who has used both Java and .NET, I can tell you that Web API is simpler and just as powerful.


It's weird because there is no good reason to force you to use a different routing system in order to use REST routing. It's especially surprising considering how Rails-inspired it is otherwise.


Obviously you are misinformed, because ASP.NET MVC and Web API share the same route handler; heck, even I could make my own framework and use the same route handler, because it is a decoupled module. So I invite you to learn more about the topic before making such ignorant statements.


I invite you to be more civil. As for .NET MVC, the fact remains that the default routing system is fairly mediocre, for no particular reason, while many other frameworks offer more flexibility with one system. Now, if it's good for you, by all means, keep using it.


>I invite you to be more civil

I can be civil up to a certain point, but how can I be more civil when you have no idea what you are talking about and, instead of admitting it, you just keep making more ignorant statements? Just say "I have no clue" and that's it; nobody is going to judge you.

>the fact remains that the default routing system is fairly mediocre, for no particular reason

You keep repeating the same thing like a broken record, yet you have presented zero evidence. There is no route I have implemented in Java that I wasn't able to replicate the same way, or even more easily, in .NET, be it MVC or Web API. So please, present some evidence or go home.


>C# is a decent language

Just decent? I disagree; it is the best language for my taste.

>NET MVC, though, is far from impressive

Tell me what is impressive then, because to me it is well established, easy to use and learn, with static typing; there is nothing better.

>The routing is crap

The routing is highly configurable: by default it mimics Ruby on Rails routing, and if you don't like it you can use your own. You have no idea what you are talking about.

>Razor is designed to encourage you to dump C# straight into your views

Just like in JSP pages you can write Java code, in ERB pages you can write Ruby code, and in EJS pages you can write JavaScript code; and if you don't like it, there are plenty of options.

>which I really don't see as the future of web development

Your personal views are just that, personal; at least inform yourself well before making ignorant statements.


> Your personal views are just that, personal

As is yours, so why bicker?

C#/.NET is also my favourite, but it's not the be-all and end-all of web development. It might be the best for all of your projects, but not for everyone else.

It has its flaws, like everything else, and I'm happy for you that you haven't used it (or anything else) for long enough to discover them.


Anyone is free to use the tool/framework of their preference, but not to make ignorant statements about other frameworks w/o even knowing them.


The points you are refuting are all subjective evaluations, not objective facts. If he were wrong on objective facts, then it would be reasonable, albeit impolite, to call him ignorant. But he isn't.


You don't need to use ASP.NET MVC (or even ASP.NET at all) when writing something for the web.

And if you like functional programming, use F#. It's pretty great.


This. I've been writing more Java lately and I miss C# quite a bit.


With .NET being open-sourced, they're probably hoping for a wider ecosystem to develop around it.


> how many people will perceive any value in stacking a Microsoft technology (.NET) on top of UNIX?

That's not the point of this. MS is positioning Azure as the cloud for enterprise. Azure can run a bunch of hosted services you used to have to administer yourself (Active Directory, Office 365, etc.), it can run any Windows server apps you had, and now it can run enterprise Linux apps on the most popular enterprise Linux distro. So if you're a company that has Office, Active Directory and an Oracle database running on RHEL, you can move everything into Azure.


The idea is not to run .NET on Red Hat. It's to run anything on top of Red Hat on Azure. Azure is far bigger than just Microsoft's stack.


I remember when Microsoft teamed up with IBM on OS/2, so I'd predict that Microsoft comes out with its own brand of Linux within 5 years. That will give them time to learn what they need to include/exclude and support.


In the late 1980s, Microsoft's version of Unix, Xenix, was installed on more machines than any other version of Unix.

Radio Shack sold Tandy HD machines running Xenix to business customers. In fact, Radio Shack stores themselves used a POS system running Xenix, up until the mid 1990s at least.

Microsoft sold Xenix to SCO in 1987.


This is the most important comment in the whole thread


Now that is a headline that would have raised a few eyebrows 15 years ago.


In 2006 we had a similar headline.

Microsoft partnered with Novell (SuSE Linux), and there was no happy end for one of them (Novell is no more; SuSE Linux, once a major distribution next to Red Hat, is a shadow of its former self). https://en.wikipedia.org/wiki/Novell#Agreement_with_Microsof...


I remember that one. Many people were pissed off because that partnership included a patent agreement. In the eyes of many, this amounted to an acknowledgement from Novell that Microsoft was right to claim patent infringement in Linux, and SuSE Linux was then advertised as the "safe" distribution. Talk about a sure way to piss off your community.


Not just an agreement, but also that 'Novell agreed to pay royalties to Microsoft based on Novell's open source sales'.

http://www.infoworld.com/article/2654097/linux/the-microsoft...


Our clients seem to still use it a lot; I usually see it being deployed together with SAP when I get project reports.


Worked out well for Xamarin. Also, it seems to me that SUSE is still in good shape, but it didn't continue to grow like Red Hat.


> Worked out well for Xamarin

Only for them. Novell bet big on pivoting to Linux and then crashed.


Eric Schmidt on Novell, recently:

"I went to Novell under the mistaken goal of being a CEO. I didn’t do the due diligence, and if I had, I wouldn’t have gone. Our basic goal was to get out with our professional reputations intact and not end up in jail. The books were cooked, and people were frauds. But it turns out you can overcome that, and the skills I developed helped at Google."

https://medium.com/cs183c-blitzscaling-class-collection/cs18...


Novell didn't crash because of Linux and SUSE; SUSE was the healthy core of the company and was growing in recent years. The original Novell business, however, died, and that had nothing to do with Linux.


The interesting part was that from what I recall there were a couple of quarters where Microsoft's sales force sold more Novell licenses than their own sales force.


I’m definitely picturing Ballmer quietly fuming, pacing a bit, staring at a Clippers logo and thinking “eye on the ball, Steve, eye on the ball”


Microsoft started offering multiple versions of Linux on Azure while Ballmer was in charge.

Microsoft also started writing dozens of apps for Android and iOS while Ballmer was in charge.


Forgetting the cloud stuff for now, this is much more interesting:

"Collaboration on .NET for a new generation of application development capabilities, providing access to .NET technologies across Red Hat offerings, including OpenShift and Red Hat Enterprise Linux, which will be available within the next few weeks."

Xplat .net is coming to RC1 in a couple of weeks (per roadmap: https://github.com/aspnet/Home/wiki/Roadmap) and it's exciting to see that RHEL will support it. It makes sense for Microsoft, traditionally an enterprise company on the backend, to partner with a *nix company with, primarily, enterprise clients on the backend.

If nothing else, the toolset Microsoft brings to the table will raise all boats on the *nix side, IMO.


"Unthinkable" This was title of Redhat's comment on Microsoft-Suse partnership.


"Linux is a cancer" Microsoft's CEO, 2001 (Steve Ballmer)

Source: http://slashdot.org/story/01/06/01/1658258/ballmer-calls-lin...


Who uses Red Hat? (I'm not being sarcastic)


Regulatory compliance in various fields stipulates that you have defined responsibilities and support accountability. The net value here is that a vendor like Red Hat is already deployed at companies who comply to the same regulations, so we can all share and leverage best practice documents to satisfy those controls. Even if you had a team of 10 smart Linux engineers, you save time and money by leveraging Red Hat.

In addition, you may also need patent indemnification and reliable security updates (we leaned on Red Hat heavily during ShellShock and others). For example, before I release a Linux image I have a checklist of 450+ items, including things like NIST certification. Red Hat streamlines this process as it has already been certified across the most strenuous of regulatory and compliance environments, and we can reuse much of that work.

This isn't important for, say, a clothing-website startup, but for aircraft, CT scanners, anything ISO-compliant, or finance, it is paramount to what is being delivered.


Generally speaking, entities whose professional administration teams need a *nix product with a strong reputation for dependability and the fall-back option of a support contract for when things go pear-shaped. This is vaguely referred to as "The Enterprise," because they tend to be large, low-risk organizations with high user counts.

RedHat's userbase is basically the opposite side of the spectrum from Ubuntu users flying by the seat of their pants with that popular free server thingie they heard about.


Non-startup world uses Redhat.

Amazon Linux is based on Redhat. CentOS is a Redhat recompile, so you can say anyone using that is also using "Redhat". US govt almost exclusively uses RHEL when they mean "Linux". Also banks, healthcare, etc.


Red Hat is by far the most successful open source company and will be the first to reach $2bn in annual revenues (it's on $1.9bn ttm). It's also very profitable -- it made $180m in its last financial year -- though presumably some of that comes from JBoss.

It seems to be by far the most popular version used by governments and large corporations.

"We're an enterprise software company. You're either consumer or enterprise. We're enterprise."


Shops that actually want the support that Red Hat provides; otherwise they would probably use Debian or CentOS.

I would assume Redhat-required shops would be running some pretty heavy, sophisticated workloads.


Enterprise. They don't buy into anything unless they can get "real" support (not just a few neckbeards telling them they can admin Linux).


Almost the entire finance industry and banking industry runs on RHEL or SLES (mostly RHEL in the US).


It is huge in embedded platforms. Many manufacturers have migrated from Embedded Windows to "hardened Linux" which is usually RHEL.


"Enterprise". E.g. you won't find a bank that doesn't run RedHat. RedHat is Linux for enterprise people.


I work at a space science / tech research lab at the University of Colorado. RHEL is our distro of choice for Linux machines.


People who have and need substantial infrastructure. The unglamorous servers that you expect to just chug along doing their jobs -- internal file servers, database servers, etc. Because RHEL has a support cycle that's a decade long, you can ensure that those servers don't need to go through the expensive and time-consuming process of re-testing a stack on a new platform nearly as often.


Anyone who wants to run Oracle on a supported version of Linux that isn't 'Unbreakable'.

If you want support from Oracle and you're not running on RHEL or Oracle's own offering, the answer usually is, "Get back to us when you can reproduce it on a supported flavor of Linux."


The US government is a big one.


There is also a pretty big Fedora community


Microsoft did a partnership like this before with Apple (Microsoft Office) but it didn't turn out so well for Apple. Wondering what the outcome would be like with Satya in the driver's seat.


A better comparison would be a situation where Microsoft partnered with a company over an operating system... Like IBM with OS/2 which turned out great, right? haha

Or let's look at when Microsoft partnered with Novell:

http://www.cnet.com/news/microsoft-makes-linux-pact-with-nov...

The byline from that article is classic! "Former software foes pledge to work together to help Windows world and Linux world interoperate."

Turned out great for Novell, right? Hah! Does anyone use SuSE anymore? Its market is so tiny it's hard to find statistics for it.

Other fun Microsoft partnerships: Nokia, Barnes & Noble, Best Buy, Yahoo, Nortel, Sendo, and probably dozens of smaller companies that came & went or are but a fraction of their former selves.

A more interesting thing to track: How is the partnership with Microsoft working out for Docker? I'm really curious what the heck Microsoft is going to do in the next version of Windows Server to actually support realistic containerization.


Thanks for all the examples where Microsoft pulled a Trojan. I totally agree. If Win32 embraces the Linux kernel calls to make Docker work without virtualization, is there a possible Embrace, Extend, and Extinguish plan? Or are businesses smart enough to avoid a repeat of ActiveX?

Microsoft made a pact with Sun back in the day where they would support Java. If memory serves, Microsoft supported a Java that only worked on Win32 using J/Direct calls that natively supported Win32 instead of being OS independent like most Java apps.


Did/will Red Hat hire Miguel de Icaza next? First systemd, now this. Sounds like Suse/Novell.


Here is a headline I never thought I would ever see. I guess from a business perspective this makes a tonne of sense. Red Hat is a massive player in the Linux market.


Well, at least it explains systemd.

/s


Hahaha, came here to say exactly that.

So, Red Hat is the next Nokia now?


Embrace, extend, extinguish.


Oh... you heard about that?


So next version of Windows getting systemd?

Systemd would be right at home in Windows.


Awesome. I can't wait to receive my Redhat voucher.


A cloud provider that offers Linux and Windows, this is cutting edge.


Sure, it's not cutting edge, but it's one of a bunch of signs recently that Microsoft has turned over a new leaf in the last few years.

Visual Studio's page now mentions Git, a GPLed piece of software built for Linux kernel development. They've come a long way from "GPL is cancer!"


And there's git-tf: http://www.microsoft.com/en-gb/download/details.aspx?id=3047... a command line git-alike that will work with a TFS server.

Not nice to use, in my experience, but it is officially from MS.


Git is actually integrated into the latest VS, and a lot of recent documentation assumes that you are using it by default. Also, the latest release of ASP.NET MVC uses Bootstrap as its front end. That's not GPL, but it seems to be part of the larger trend of Microsoft integrating popular FOSS tools into their workflow where they can.


They have made improvements, but I still don't trust them. Embrace, Extend, Extinguish.

They laid a lot of groundwork for mistrust, so they'll have to undo that, and it won't be quick or easy IMO.


I can't tell if you're being sarcastic or downright honest, so I'll just put it out there: we've had cloud providers offering both for at least 5+ years.


Sarcastic


Considering fewer enterprises trust Windows 10 with every new headline about how it spies on you, I guess they had no choice but to offer Linux as an option as well.

Also relevant to "Microsoft's love for Linux":

https://plus.google.com/+SimonPhipps/posts/c636Vp4kKbf


Windows 10 uptake is remarkably good in enterprises, perhaps because they don't fall for click-bait headlines.

Microsoft Says 1.5 Million Enterprise Users Have Deployed Windows 10 https://redmondmag.com/blogs/the-schwartz-report/2015/08/ent...

Bank of America CTO Talks Windows 10 Plans, Security http://www.informationweek.com/strategic-cio/executive-insig...

"Reilly promised a Windows 10 upgrade is on the horizon for Bank of America. "We're looking to adopt as early as we can," he said. Such a project will be a massive undertaking given the sheer multitude of Windows devices within the organization, but he appears optimistic about the process."

I was a Microsoft sceptic, but Windows 10 has me convinced http://www.theage.com.au/digital-life/digital-life-news/i-wa...

"Then, last week, at an event hosted by CIO magazine, where I gave a keynote, I spoke to a group of Chief Information Officers of large and midsized companies about technology trends. The vast majority said they were buying Microsoft's Surface Pro tablets for their users and upgrading desktop machines to Windows 10."

See also: Configure telemetry and other settings in your organization https://technet.microsoft.com/en-us/library/mt577208%28v=vs....



