Compare VMware's 2011-03-31 net income to its 2010-12-31 net income: https://www.google.com/finance?q=NYSE:VMW&fstype=ii - note how it has flattened.
Since VMware is public, quarterly results are vital to the naive MBA. Said MBA might decide to improve profitability by pushing small customers toward cloud providers (like Amazon) who are already VMware customers.
This also has the advantage of eliminating VMware as a competitor to some of its largest customers.
ESXi is already free, with limitations. This will definitely be the reasoning if we see VMware make Workstation free, thereby bringing in even more potential users.
Lots of FUD is coming from the community - at first glance it sounds bad, but a lot of us should do the math before getting the pitchforks ready. (Script to help "do the math": http://www.lucd.info/2011/07/13/query-vram/ )
We will actually be saving money with the vRAM licensing changes.
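For anyone who wants to sanity-check that claim before reaching for the pitchfork, the arithmetic fits in a few lines of shell. Every number below is hypothetical; the 24GB-per-CPU entitlement is the figure being discussed in this thread, so check your own edition's actual entitlement:

```shell
# Back-of-the-envelope vRAM pooling math (all numbers are hypothetical).
hosts=4
cpus_per_host=2
entitlement_gb=24        # vRAM entitlement per CPU license (assumed)
configured_vram_gb=160   # total vRAM across all powered-on VMs

# Entitlements pool across the whole vCenter instance.
pool_gb=$(( hosts * cpus_per_host * entitlement_gb ))
echo "pooled entitlement: ${pool_gb} GB"

if [ "$configured_vram_gb" -le "$pool_gb" ]; then
    echo "covered by existing licenses"
else
    shortfall=$(( configured_vram_gb - pool_gb ))
    # Round up to whole licenses.
    extra=$(( (shortfall + entitlement_gb - 1) / entitlement_gb ))
    echo "need ${extra} more licenses"
fi
```

If your configured vRAM fits inside the pooled entitlement, the change costs you nothing extra; otherwise the shortfall tells you how many additional licenses to budget for.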
I've been involved in a project where we have actually had to change the direction of technology as well as deployment schedule to avoid incurring a massive (think over $100 million) license upcharge.
a) 1 x Large Server + VMware licenses
b) N x Small Servers
If b) is more cost-effective, then VMware definitely dropped the ball here...
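A back-of-the-envelope version of that a) vs. b) comparison, with every price made up purely for illustration (substitute your own quotes):

```shell
# All prices are assumptions for the sketch, not real quotes.
big_server=20000              # a) one large 2-socket server with lots of RAM
vmware_license_per_cpu=3500   # assumed per-socket VMware license cost
cpus=2
option_a=$(( big_server + cpus * vmware_license_per_cpu ))

small_server=3000             # b) a commodity 1U box, no hypervisor license
n=8                           # how many small boxes replace the big one
option_b=$(( n * small_server ))

echo "a) \$${option_a}   b) \$${option_b}"
```

With these made-up numbers b) wins, but the point of the exercise is that the answer flips depending on the consolidation ratio and the license cost - and the license cost is exactly the knob this change turns.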
One of the key advantages of VMware from an operational point of view is the management capability, which is either much better than what most people have for physical hardware or much cheaper. Even with this new licensing model, VMware still offers a positive ROI for a large customer, since that customer would otherwise also need to buy more licenses for products like IBM/Tivoli, CA Unicenter, BMC, etc. Those products are mega-buck, and enterprise customers are/were realizing cost savings by getting rid of them.
It's hard to see this from a small/mid-size enterprise perspective. Imagine $2M in recurring licensing charges and $750k in annual consulting expenses for a product functionally similar or inferior to Nagios. Or spending $20M annually on maintenance on software that you don't use. This happens every day in the Fortune 500 and government spaces.
I'd say a 1:3 ratio of sysadmins to servers is actually pretty common -if- the sysadmins also do desktop support for the organization. At one of my ISP jobs the customer/production servers had a 1:50 or so ratio (but a lot of time was spent on new products/features, maintenance could easily have been 1:200 and our automation was mediocre) but the internal IT dept was around the 1:3 mark.
And he said "IT is just me and another guy", not "systems is just me and another guy", so I suspect that's the case for him.
I mean, going from 0 physical servers to 1 physical server is a pretty big marginal jump; you need someone who knows how to replace drives and deal with other hardware problems, and you need that person on pager.
But that person is going to spend a few days getting the thing set up, then maybe they will touch the hardware once a year ongoing. (as you scale up you can lower the setup time to a few minutes per server by using cobbler or another auto-provisioning setup, which is probably going to take a few days to set up in and of itself. After that, you can bank on spending some significant time every time you buy a different hardware configuration, but either way, most of the time spent on hardware will be when setting up.)
Adding more servers just means you have to run down and replace those drives more often; but like I said, if you have to physically touch each server more than once a year or so, something is seriously wrong.
(I'm sole sysadmin for a search engine, and I handle 200 servers.)
If you run out of one of these your choices are a) consolidate (which may imply virtualization) or b) relocate (which is astronomically expensive to do with no downtime).
That's why people will spend money on VMware.
Certainly smaller hosts with DR capability can be done for lower capital spend, but you need more capable admins to manage it. VMware does that for you out of the box.
I.e., VMware is usually a better deal, but just barely. Drives me insane.
We eventually decided to build a KVM-based cluster, and while we were already extremely glad we did before the vSphere 5 licensing change, this latest development only serves to confirm the wisdom of our choice. We have enterprise-level support if we need it, the virsh command-line interface is very straightforward and easy to pick up, and we have not shackled our fates to an organization that can yank the rug out from under us whenever they like. Moreover, for Linux-based folks who find themselves pondering their options, consider that VMware requires that you run Windows in your cluster if you want the full advantages of vSphere (e.g., live migration). Because of the added maintenance and security concerns, we were quite loath to introduce any Windows operating systems into our cluster environment, and the unfortunate state of the VMware world is that it's extremely Windows-centric.
The best part? Not only do we have a fast, rock-solid virtualization solution in place, but the easy-to-use GUI we wanted is also on the horizon. Take one look at the Archipel Project at http://archipelproject.org/, and I think you'll agree its interface puts vSphere to shame. Archipel is not yet ready for critical production environments and (last I checked) currently lacks full support for libvirt-based storage APIs (e.g., for LVM-backed virtual storage pools), but development is progressing steadily. I, for one, am looking forward to getting the best of both worlds (liberated + easy-to-administer software) in our data center when the time is right. If that interests you, head on over to https://github.com/primalmotion/archipel and fork away!
I'm all about getting FOSS some visibility in my company.
As an aside, Antoine (the author of Archipel) just told me that beta 3 will be released next week, after which he'll be focusing on expanding the VM storage options.
Compared to using ZFS on OpenSolaris with Xen 3.1, this process is incredibly cumbersome and unreliable.
Plus I just don't get the need for libvirt. It seems an incredibly complex and useless abstraction over a simply documented configuration file for Guests, and a tool to start/stop/add/remove them.
I'm sure a big part of that may just be that I'm not familiar enough with KVM. Still, it's 2011. Tying all your data to a single host with limited redundancy and depending on live-migration seems like a fundamentally flawed approach to me with a lot of needless complexity.
Update: Here's a blog related to your very question that was written just a few days ago:
So yeah, I'd strongly recommend that you use libvirt (or some other wrapper that handles things like locking the block devices) if you use KVM.
With xen, on the other hand, it handles that level of locking for you out of the box, so personally I see no reason to use libvirt. The libvirt devs seem pretty focused on KVM anyhow; Xen support, at least in the past, was pretty poor, so personally, I use the native xen tools for xen.
(I'm not saying this is a reason to use xen instead of KVM; I'm just saying that if you do use KVM, you should also use libvirt.)
I don't see the relation with KVM. There is nothing special in the way it uses open-iscsi or mdadm to access iSCSI targets and set up mirrors.
Maybe you're unfamiliar with these Linux-specific tools and mistakenly attribute to KVM your difficulties which are really Linux problems.
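To illustrate the point: attaching an iSCSI target and mirroring it is the same plain Linux tooling whether or not KVM is in the picture. The portal address, target IQN, and device names below are placeholders:

```shell
# Discover and log in to an iSCSI target with open-iscsi
# (the portal address and IQN are placeholders).
iscsiadm -m discovery -t sendtargets -p 192.168.0.10
iscsiadm -m node -T iqn.2011-07.example.com:store0 -p 192.168.0.10 --login

# Mirror two of the resulting block devices with mdadm
# (actual device names will differ on your machine).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
```

Nothing above knows or cares that a KVM guest will eventually sit on /dev/md0.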
> Plus I just don't get the need for libvirt.
I don't use libvirt; I wrote a couple of scripts that allow me to do what I need with KVM. libvirt is useful only if you need to manage a whole lot of VMs.
You're absolutely correct though. It's not really KVM's fault. It's the Linux iscsi, software RAID, and volume management capabilities that are so lacking.
It just happens that that makes using KVM much less appealing.
1. Reliable Snapshots and Replication
2. Simple Volume Management
3. Human readable device names
4. Consistency of volumes/devices between reboots
5. An "uncorruptable" FS backing the guests
6. Reliable Virtualization
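For what it's worth, most of that list is what ZFS delivered out of the box, which is why the comparison stings. A rough sketch of the corresponding commands (the pool, dataset, host, and disk names are all made up):

```shell
# Illustrative ZFS commands mapped to the list above (all names made up).
zpool create tank mirror c0t0d0 c0t1d0     # 2: simple volume management
zfs create -V 20G tank/guests/web01        # 3/4: stable, human-readable device names
zfs snapshot tank/guests/web01@nightly     # 1: reliable snapshots
zfs send tank/guests/web01@nightly | ssh backuphost zfs recv tank/web01  # 1: replication
zpool scrub tank                           # 5: end-to-end checksums detect corruption
```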
FreeBSD may be a good fit with VirtualBox, but VBox on OpenSolaris and OpenIndiana was unstable for me at least. I'd like to give FreeBSD a try sometime.
On libvirt, that's about what I expected. I guess I just wanted to complain about it in general. ;-)
sudo xm new /my/config/file
sudo xm start my_guest
sudo xm shutdown my_guest
sudo xm delete my_guest
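For comparison, the libvirt workflow is about as terse. The commands below assume /my/config/my_guest.xml is a libvirt domain XML file (not an xm-style config) and that libvirtd is running:

```shell
sudo virsh define /my/config/my_guest.xml   # cf. xm new
sudo virsh start my_guest                   # cf. xm start
sudo virsh shutdown my_guest                # cf. xm shutdown
sudo virsh undefine my_guest                # cf. xm delete
```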
I'm sorry, but I've regularly been in situations where I'm trying to get sign-off on a large tech project. The first question is always "what is the support package like?" The question that never gets asked (because caring about that kind of thing is below their pay grade) is "but is this software free and open?"
If you use open-source software, you don't need to procure anything, right? At one level, that's great, but to people who run the contracts unit or procurement team, that doesn't compute. In their world, you hassle people for a discount and fight over contract terms.
Why do you think that Red Hat sells support contracts in the guise of a software license?
Answer: Because the processes that enable big companies/government to spend money on services are vastly different than software. On a services contract, you need to negotiate statements of work, etc. For a software contract, you just need to buy a software SKU and a maintenance/support SKU.
Yes, it is.
Alternatives to VMware like KVM aren't just better versions of VMware that happen to be free; they're fundamentally better because they are free (as in speech, of course).
I know it's being a stickler for the tiny facts, but when you are a XenServer admin you find the differences rather quickly.
You wouldn't use just Xen as a hypervisor, in most cases. Amazon uses Xen plus their own stuff on top.
To make it more complicated, there's also XenClient, which also builds on Xen. It's also mostly open source.
People are still stuck with the stereotype that the most brilliant programmers work for corporations. This is obviously not true. Most corporations outsource their R&D and QA and spend on marketing instead. That is a very common strategy.
Now tell me - how does this strategy correlate with the quality of the code or services? ^_^
Oracle vs. MySQL is a very good example - high-quality community code is usually much better and well tested. (Hint: this is about comparing code quality, not feature lists.)
Whether you become attached and dependent (that is exactly what their marketing department is for) or not is your own choice. In some cases, like SAP, there are no community-supported alternatives, but in this case there is more than one.
Some people could say that we really need all those modern features, such as iSCSI per-LUN mirroring, etc. But it is exactly this code that is less tested and of lower quality.
One cannot compete with the Linux (Ubuntu/RHEL/CentOS) communities in matters of testing and code quality. No code is better tested than that included in the mainstream kernel or a popular distribution.
I know I'm feeding the troll here, but that is just naive.
There are many great open-source projects with terrific code quality, like parts of (and certainly not the whole of) the Linux kernel. There are also many commercial solutions which are far ahead of anything available in the open source world. In many spaces it makes great sense to use a proprietary solution over an open source one.
> Oracle vs. MySQL is a very good example - high-quality community code is usually much better and well tested. (Hint: this is about comparing code quality, not feature lists.)
Can you provide evidence that MySQL has fewer important bugs than Oracle? I have used both fairly extensively and have noticed more bugs with MySQL, but that's just one anecdote.
> Most corporations outsource their R&D and QA and spend on marketing instead.
Do you have evidence that VMware development is outsourced? Do you have evidence that outsourcing leads to a lower-quality product? Most studies in this area have shown quality to be lower with a push to lower costs, with no correlation to outsourcing.
Yes, there are exceptions this way and that, but in general, the open-source model works because it is community-driven. That means if your project, or some critical part of it (the scheduler, or the TCP implementation), is really important to users, it will receive almost constant code review and quality improvements. This was the story behind nginx, OpenSSH, and thousands more.
Of course, some parts of a project might be considered less relevant, and users are satisfied with code that just works.
The only way to achieve the code quality that fanatics, nerds, and very experienced programmers can produce is to hire such people to do what they like to do. Mediocre wage-workers can't produce anything near it. Look at the nginx, OpenBSD, or Postfix sources.
Please, try to imagine the number of installations of both products. MySQL runs on almost every crappy hosting provider in the world. And on Facebook. ^_^
I don't want to say that MySQL is a great product. What I want to say is that it is good enough and stable enough. Otherwise there would be no usage of it.
btw, outsourcing is all about cost reduction, not quality improvement. ^_^
I still need to try Eucalyptus sometime.
They need to do the right thing and adjust the vRAM entitlements. The sad thing is... people are so locked in to VMware infrastructure that VMware will likely make money short term, at the expense of pissing off customers. Oracle plays this game too..
Now I have to track vRAM usage and decide whether to purchase EP licenses for all available RAM or just go with usage + growth for the year... or something. I'm not sure how that's easier.
The irony is that they want to ride the new wave of public cloud computing, going so far as to sponsor development of an in-memory database (Redis), and then they go and do something regressive like slapping a 24GB-per-CPU memory limit on their core product without a price decrease.
I forget if it was the 2.x -> 3.x or 3.x -> 4.x upgrade, but they eventually dropped the hobby/academic cheap licensing and the price jumped to something like $150 per seat. I remember calling our sales contact and complaining about how much the jump sucked, and when they wouldn't budge, I essentially told no thanks and that I'd wait for the open source options to mature. He, of course, chuckled at this quaint notion.
A few years later I had to chuckle back when they started giving away the basic versions of VMware, since open-source options and Microsoft were by then eating their lunch on the desktop.
Funny how history repeats itself. I hope VMware lives to regret this latest trend in greed of theirs.
P.S. -- I also had a similar exchange with the Accelerated-X folks, back when XFree86 didn't do multi-head very well (or at all). I accused them of gouging their faithful customers and predicted that open source would catch up and they'd be toast. Three cheers for xorg!
But I guess if your box needs 24GB then you probably have cash, no?
But honestly, that's not the right way to think about the licensing. Sure, you need a license for each physical CPU, but beyond that you're just licensing vRAM. So the real question is: is 24GB of vRAM per license low?
Several people have calculated that a vSphere license costs much more than the RAM itself.
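That calculation is easy to reproduce. The license price and RAM price below are rough assumptions (vintage-2011 list/street prices), and the 24GB entitlement is the figure from this thread, so substitute your own numbers:

```shell
# All prices are assumptions; substitute real quotes before drawing conclusions.
license_price=3495       # assumed per-CPU license list price
vram_entitlement_gb=24   # vRAM entitlement per license, per this thread
ram_price_per_gb=15      # assumed commodity ECC RAM street price

license_per_gb=$(( license_price / vram_entitlement_gb ))
echo "license: ~\$${license_per_gb} per GB of vRAM vs. ~\$${ram_price_per_gb} per GB for the RAM itself"
```

At these assumed prices the license works out to roughly ten times the cost of the memory it entitles you to run, which is the crux of the complaint.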
Like Windows is dominant on the desktop, ESX pretty much owns the (paid virtualisation) ecosystem through many third party programs.
Microsoft is getting there, just much more slowly, but a price increase like this could be (though I doubt it) VMware's undoing.
In 1990, high-end engineering users were very happy with their Unix workstations. Windows NT in comparison kind of sucked -- but over time the suckage was not enough to justify paying $20,000 for a workstation versus $8k for a NT Workstation.
Ditto SQL Server. Back when I was a newbie Informix DBA, my older colleagues laughed at people messing around with SQL Server. Informix was an awesome product in many ways, but cost something like $40,000/CPU when SQL Server cost a lot less. Who's laughing now?
"Enterprise/Big Corp" will buy anything with the right sales person.
I say that as someone who was part of the RHEV support team at launch... RHEV 2 was a mistake. Red Hat should never have sold a product that depends on Windows. They just don't know how to support it.
There was also talk about bringing in parts of the JBoss stack... and I am not exactly sure why... but synergy might have something to do with it.
RHEV 3 will use more oVirt-developed technologies (libvirt, etc.), but now they have enterprise customers who need compatibility for an extended period... so it's going to be the ugly stepchild of RHEV 2 (aka Qumranet's product) and oVirt (Red Hat's R&D). I suspect that RHEV is going to be carrying around baggage for a while.