I already have. I was a perfectly happy VMware customer until Broadcom took over and fucked it all up. I still don’t have a license allocation for a product I paid for. Will never buy VMware/Broadcom again.
ETA: the context, for those who care, is VMware Workstation. VMware made that free for personal use right before the takeover, so Broadcom didn’t bother porting over the license database. If you have a pro/commercial key, make sure you back it up because Broadcom won’t help you recover it.
But if VMware is now free, why does this matter? Because the key words were “personal use.” They reserve the right to pull an Oracle and hit your organization up with an audit and massive fees for that one guy who installed VMware Workstation that one time.
We bought keys, but now Broadcom has no record of that. I’ve since switched all our virtualization scripts to use docker.
> They reserve the right to pull an Oracle and hit your organization up with an audit and massive fees for that one guy who installed VMware Workstation that one time.
It's funny because Oracle has done this with their proprietary VirtualBox extensions.
First we got VMware because systems were getting larger than the load we could push onto them. Then we got Hyper-V because it was free and VMware was getting needy and expensive. Then we moved to the cloud because it was "cheaper". Then we ran Kubernetes on a cloud node because it was hard to manage the workloads. Now we are having cloud cost difficulties.
Hang on a minute... deep thought ... surely we fucked up somewhere?
Get off the cloud. Go with a reputable VPS provider like OVH or Hetzner. Get a machine allocated for a fraction of the cost of your cloud bill, and install dokku (or whatever) on it. Done.
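On a fresh box that's maybe ten minutes of work. A rough sketch, assuming a Debian/Ubuntu VPS (the dokku version tag, app name, and hostname below are placeholders; check the dokku docs for the current bootstrap URL):

    # install dokku via its bootstrap script (version tag is an assumption)
    wget -NP . https://dokku.com/install/v0.34.4/bootstrap.sh
    sudo DOKKU_TAG=v0.34.4 bash bootstrap.sh

    # create an app on the server
    dokku apps:create myapp

    # from your laptop, in the project repo: deploy with a git push
    git remote add dokku dokku@your-vps-host:myapp
    git push dokku main    # builds via buildpacks or a Dockerfile and deploys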
I agree with the use of Hetzner. I use them: very cheap machines, very powerful, very simple and straightforward. People need to think carefully about the potential expense and complexity of AWS by comparison.
I doubt it's too late. There may be a cost to escape, but it may not be insurmountable.
At the height of the cloud hype, we were all more than happy to pay to lift-and-shift our way to higher operating costs. Surely, then, a lift-and-shift toward lower operating costs is comparably feasible.
If anything the barriers are social. At the height of the cloud hype, we all believed we were riding unicorns to the end of a rainbow with a pot of gold waiting for us. When you believe that, of course you want to spare no expense on ensuring you have the fastest unicorn so you can get that pot of gold before anyone else can.
Don't forget that this isn't an all or nothing idea. Sometimes it is (and should be), but there may be some things that are better/cheaper in the cloud, and some things that are better/cheaper done in a different way. Depending on how much of each you have it might be worth migrating the rest as well of course.
I work on a product with seasonal usage. If we ran it on in-house servers we would need 10x as many servers to handle the peak, but by putting it on AWS we pay for the servers only when we need them.
I doubt AWS is 10x more expensive when you consider all the costs - you have to pay for the humans to manage the servers if you do them in house.
Bollocks. We have to pay for cloud people to manage the cloud stuff, and they're the same people, except they have a cloud certification and can charge $8/hr more.
The labor savings and simplification of processes promised with cloud never happened; if anything they got worse, and OpEx is higher than ever.
There are several popular technologies where I would regard a general, background antipathy towards them as a positive hiring signal, and Kubernetes would be one of them.
I think it's just one of those things that people tend to use when they actually don't need something that complex. It seems to have supplanted a lot of other tools in domains where you're not doing dynamic scaling or any of the other things it's really good at.
If you already know it then fine - use a hammer to crack a nut. But I get the feeling that it's the first port of call for people who probably suffer the learning curve without really getting anything much in return.
But I think another way to express my original point would be "Being an uncritical advocate for K8s positively correlates with having a tendency to overcomplicate things".
I have come to believe that, when weighting design factors that influence overall product development and maintenance cost, nowadays most people will choose an exponential curve over a logarithmic curve.
Because when you're just getting started, you're just looking at what's immediately in front of you. Logarithmic curves go up quickly in the beginning. Exponential curves are nice and easy in the beginning. The fact that one levels off and the other keeps turning ever more sharply upwards doesn't matter. Thinking ahead like that, considering second-order effects, etc., is neither Lean nor Agile, and invites accusations of engaging in Analysis Paralysis.
Having already been through the Symantec acquisition, as soon as I heard Broadcom's name in connection with VMware I advised everyone I knew who was using VMware to start looking for alternatives.
It's unclear to me how Broadcom can excel at apparent ineptitude so well. I assume the analysis I've read, that they want to shed all the legacy non-Fortune 100 customers, is true.
>Broadcom can excel at apparent ineptitude so well
"Apparent" is the key word here. They don't care about all customers. They want to keep maybe 10% of the biggest enterprise customers, get rid of everyone else, raise prices 400%, and keep half of the profits while reducing costs to almost zero. Sure the product will die, but the acquisition will pay for itself multiple times over.
The sad part is that VMware's tech stack is for the most part pretty rock solid, especially ESXi. It is/was rigorously tested across a variety of hardware platforms, with extensive documentation on supported configurations. This is a massive undertaking, and their support matrix docs would make any QA nerd drool.
Broadcom is just a Private Equity firm cosplaying as a tech company. This is just what they do. They don’t care if it dies as long as they are able to get every last drop of blood from their -victims- customers.
The company I work for started migration planning right when the acquisition gained approval. We all knew what would happen and have a “no Broadcom” policy when it comes to software.
A fair number of Fortune 100 customers are leaving as well, but you only find this out by talking to people in settings where it won't get back to anyone, since the plans are confidential. But I suppose a couple of very, very large customers paying their prices is a lot of money.
It's extremely rare for an acquisition to benefit anyone but the acquirer. The acquisition is either to eliminate competitors, absorb proprietary tech and axe the product, or siphon off money for the larger org's coffers.
Most people don't buy a cow to benefit the cow. They either milk it or turn it into chops.
A former MSP I worked at is also looking to migrate customers off to something like Hyper-V or Nutanix. I told my old boss to hire a bunch of Linux gurus, go through the Proxmox partnership program, and get certified. As much as people knock Proxmox for its support not really being enterprise-ready, I do think it is well suited to gain some of that market share.
I've been using it in my lab for the past few years without any major issues, running an HA 3-node cluster. The only issue I ran into was a recurring kernel panic from a driver issue with my RAID card, but that really only affected me during reboots. Pinning to an older kernel worked. Oh, and it was also because my RAID card is like 10 years old.
So HN - if you are moving off VMware, what are you using?
No direct replacements, but there are a lot of options. Some of the options come from a different company that would be glad to find out what parts of vSphere you need and, for a price (which might or might not be reasonable), provide just those for you. Some are open source, and you can hire someone to add the features you need. Some are things like AWS, which are completely different in many ways, with tools that by their nature force you to change to their model, but which may be better once you pay that price. And of course nothing says you can't pick multiple of the above. But no matter what, moving to any of them will take significant effort on top of the cost, which is what Broadcom is counting on.
I really don't think it's necessarily "hard", it's just going to consume some time and planning. If Broadcom is actually banking on that, they're mistaken. I work for a very large multi billion dollar company and they want 3x our current costs. We told them to get fucked and we are moving. We didn't even try to negotiate, we essentially laughed at them. I got to be a fly on the wall for the call and it was hilarious.
It's overkill for small business. Makes more sense at "medium". Small business does fine with plain old QEMU/KVM and the open source tools: virt-install/manager, etc.
Understand that I routinely manage QEMU machine configs by hand, because it's trivial. Perhaps I lack perspective.
You've got this a little backwards. QEMU/KVM is too console-centric for small businesses. Proxmox's appeal is that it has an easy GUI web interface that hobbyists or "de-facto IT department persons" can work with.
It may be overkill in terms of features but it's a free as in beer tool that novices can install and run with quickly without much of a learning curve.
Ok, as I said, I may lack perspective. I version control VM configs and script VM deployments, and manage a herd of containers with ansible. We're talking about simple minded code monkey stuff here: the real work is elsewhere.
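To be concrete, the version-control part is nothing more exotic than this (a sketch; the directory and VM names are made up):

    # dump every libvirt domain definition into a git-tracked directory
    for vm in $(virsh list --all --name); do
        virsh dumpxml "$vm" > "configs/$vm.xml"
    done
    git add configs && git commit -m "snapshot VM definitions"

    # on a rebuilt host, re-register and start a VM from the repo
    virsh define configs/app-1.xml
    virsh start app-1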
For sure, and I think your way is probably superior overall.
There are just a lot of people where as soon as you say “code” or “text editor” they are out.
My proxmox stack is manually managed in the management GUI, but my Linux VMs within that setup are all managed with a Chef server that runs on one of the VMs. (I would say I only chose Chef because I use it at work and I have recent familiarity and figured I could learn more about it)
Basically I do the standard graphical Debian installer and then run a bootstrap script that registers each system as a client.
The next thing I’d like to do is figure out cloud-init with Proxmox so that I don’t have to do that manual installation process anymore.
This setup works okay for me as-is since I only have about 5 VMs at a time, it’s just a hobbyist setup.
> Basically I do the standard graphical Debian installer and then run a bootstrap script that registers each system as a client.
Great approach. Might try that.
> The next thing I’d like to do is figure out cloud-init with Proxmox so that I don’t have to do that manual installation process anymore.
Yeah. cloud-init...
The only advice I have there is: use only core cloud-init capabilities -- the stuff that's been around for years, works everywhere and is unlikely to change in subtle ways -- and factor everything else into a shell script that does the rest, which you invoke from cloud-init.
Oh yeah, for cloud-init I only need write_files, path, and runcmd. But also it can all just be runcmd.
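Something like this is where I expect to land, untested (the VM ID, storage names, and snippet path are placeholders; the Proxmox cloud-init docs have the real details):

    # attach a cloud-init drive to an existing Proxmox VM (ID 9001 is made up)
    qm set 9001 --ide2 local-lvm:cloudinit
    qm set 9001 --ipconfig0 ip=dhcp --ciuser debian --sshkeys ~/.ssh/id_ed25519.pub

    # or point it at a custom user-data snippet using write_files/runcmd
    cat > /var/lib/vz/snippets/bootstrap.yaml <<'EOF'
    #cloud-config
    write_files:
      - path: /etc/motd
        content: "managed by cloud-init\n"
    runcmd:
      - [ bash, /root/bootstrap.sh ]
    EOF
    qm set 9001 --cicustom "user=local:snippets/bootstrap.yaml"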
For Chef you put a client validator key in a specific location along with a first-boot JSON file that has details about the client's policy name and group.
After that it's pretty much just installing the Chef client and running it.
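For anyone curious, the bootstrap script boils down to roughly this (a sketch; the key and policy names are hypothetical, and a /etc/chef/client.rb pointing at the Chef server also has to be in place):

    # drop the org's validator key where chef-client expects it
    mkdir -p /etc/chef
    cp /root/my-validator.pem /etc/chef/validation.pem    # filename is made up

    # first-boot.json tells the node which policy to converge
    cat > /etc/chef/first-boot.json <<'EOF'
    {"policy_name": "base", "policy_group": "homelab"}
    EOF

    # install the client and run it against the Chef server
    curl -L https://omnitruck.chef.io/install.sh | bash
    chef-client -j /etc/chef/first-boot.json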
How is it "overkill"? Proxmox is so simple to install and administer, I cannot see any use case for manual QEMU/KVM. Who wants to manage machine configs by hand, like a cave person?
And what about redundancy and backups? Even a one-person business needs those, and small businesses arguably have fewer resources to spend on manual DR processes.
You're making a leap. In a small environment, GUI tools like virt-manager and friends are probably sufficient. Yes, Proxmox is obviously nicer, no question, but it's not really necessary.
As to the "cave person" nonsense: I version control VMs, I can reproduce the whole stack from git, script any conceivable deployment... I'm not a cave person. I've just been inculcated to this whole world of VM management since before proxmox was a thing. It really isn't the hairball you think it is. People routinely author docker compose scripts and collections of k8s yaml deployments that are far more intricate than a VM config, and consider themselves on plane with modern devops.
> And what about redundancy and backups?
What about them? There are many ways to solve that, and nearly all of them are better done using storage system tools: NAS, ZFS, NetApp, LVM, EBS, etc. I made that point elsewhere: the real problem with VM management isn't provisioning VMs. It's storage. Solve that and the rest is pretty basic.
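With ZFS under the VM disks, for example, backup and replication are a couple of commands (a sketch; the pool/dataset names and snapshot dates are invented):

    # snapshot the dataset holding the VM disk images
    zfs snapshot tank/vms@nightly-2024-06-01

    # replicate incrementally to another box over ssh
    zfs send -i tank/vms@nightly-2024-05-31 tank/vms@nightly-2024-06-01 \
        | ssh backup-host zfs receive -u backup/vms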
On the contrary, I haven't manually touched a QEMU/KVM config in a few years because of the above tooling and Proxmox. I think there were a handful of times I needed to make manual additions.
Well, I got past being dazzled by QEMU machine config files years ago. They're actually rather simple. Once you grasp dealing with QEMU at that level, it's a short, and possibly dubious, step to just CLI-ing everything. The GUI tools are efficient for picking up on new capabilities as QEMU/KVM evolves with time.
I've been known to just start VMs from the CLI directly, for experimental purposes. The tooling everyone uses (Proxmox, virt-*, etc.) obscures the fact that spawning a VM is just a command with a bunch of switches. I suppose that familiarity for me came from using QEMU to emulate other archs on x86 for embedded development purposes: launching ARM VMs, for example.
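For example, an ARM guest is just something like this (a sketch; the kernel and rootfs paths are placeholders):

    qemu-system-aarch64 \
        -M virt -cpu cortex-a72 -m 2048 -smp 2 \
        -nographic \
        -kernel Image \
        -append "console=ttyAMA0 root=/dev/vda rw" \
        -drive if=virtio,file=rootfs.img,format=raw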
VM count is a poor metric. Spawning lots of VMs is a matter of looping over virt-install or "virsh create" or what have you. The real bottleneck is storage: how are you solving that such that you can migrate VMs around to service things? Once you get past a couple (~2) network storage arrays you're probably into "medium."
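To be concrete about "looping over virt-install", it's on the order of this (a sketch; the image paths, bridge name, and OS variant are assumptions):

    # clone a golden image and register a handful of guests
    for i in 1 2 3; do
        cp /var/lib/libvirt/images/debian12-base.qcow2 \
           /var/lib/libvirt/images/app-$i.qcow2
        virt-install \
            --name app-$i \
            --memory 2048 --vcpus 2 \
            --disk path=/var/lib/libvirt/images/app-$i.qcow2,format=qcow2 \
            --import \
            --os-variant debian11 \
            --network bridge=br0 \
            --graphics none --noautoconsole
    done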
But I'll answer your question directly, nonetheless. At ~45 VMs, I consider the system "small."
Understand you're going to need intimate familiarity with the extant tools. Not that it's hard or anything. It's not point/click Proxmox style, though.
Broadcom is taking advantage of the fact that most CIOs are forced to be pretty spineless. If you are big and willing to badmouth them in public, they will come to the table. There was at least one US state that did so, IIRC.
Otherwise, pay the tax with minimum commit, and don’t allow creation of new VMs. They essentially created a business model that writes the justification for AWS or GCP.
When companies flip to rent-extraction business models, you need to change how you treat the business relationship. VMware is like an incandescent light bulb… if the cost of re-platforming and the availability of resources make it cost effective to move, you move.
MBAs exist to fulfill the needs demanded by the market -- e.g. the Milton Friedman "companies only exist to increase shareholder value" mantra, aka stock price must go up.
MBAs, and their behavior, are inevitable under capitalism. Don't hate the player, hate the game.