
I work for a Danish municipality with roughly 10,000 employees. I’m not sure if you know, but our public sector has been competing with Estonia for the title of most digitised in the world for a decade. We operate an estimated 300-500 different IT systems, some of them enterprise-sized SAP solutions deployed on old IBM mainframes with multiple layers of APIs to make their front ends somewhat web-based (don’t ask). Others are minor time-registration systems or automated vacation payouts. I said estimated because a day-care institution is free (though advised not to) to buy IT systems without talking to any part of the centralised organisation.

Microsoft has been one of our better partners in all of this. They aren’t cheap, but they listen to us when we need something. We have a direct line to Seattle, and parts of what we ring up as tickets have made it into the global 365 infrastructure. Stuff like making it easier to hide Teams emails from the company-wide Outlook address book.

More than that though, our tech team is trained and certified in Microsoft technologies. The combination of in-house staff and a 30+ year good business relationship makes Azure such an obvious choice for cloud. Some of the co-municipal systems we buy through a jointly owned organisation called KOMBIT operate in AWS (support and operations are handled by private-sector companies), and it’s not like we’re religious about not using AWS or anything else, but we’d need to build a relationship and retrain staff to do so.




> We have a direct line to Seattle, and parts of what we ring up as tickets have made it into the global 365 infrastructure.

This is great for one business, but bad for businesses as a whole IMHO. Microsoft shoehorns every little thing into their products, never asking themselves if they should. As long as they think they can, they will. This leads to stupid things like Durable (aka stateful) Azure Functions. Durable Functions are the product of a business either not knowing how to use FaaS properly, or misusing FaaS for something where they should have chosen a different tech. But Microsoft being Microsoft will try to accommodate any stupidity they can as long as it pleases some big customer. In the beginning they get away with it, but over the years that's how they always end up with half-baked, slow and buggy products which are inconsistent, incoherent and just awful to use. Azure is certainly on that trajectory from everything I've seen so far, and I use Azure every day at a client at the moment, so I know what I'm talking about.


Microsoft is here to stay mainly because once you've got your foot in the door at the big companies you stay forever. The reasoning is indeed "since we already have Microsoft guys, Azure will fit us well", but in practice none of these Microsoft guys will be able to help you with any Azure issue. So when Azure is involved at a big corp, Accenture is usually not far behind. The solutions this doom duo comes up with are absolute hell to deal with.

On upfront cost Azure looks better, but in general that's rarely the case. All the Azure APIs seem half-baked. Once you're doing anything more advanced you will run into issues; just look at the Terraform Azure provider's issue tracker for a bunch of problems people run into, because none of it is clear until you actually try out the APIs.

Here's another example: if you want to use shared storage on Kubernetes with any reasonable IOPS, Azure Files premium storage increases IOPS per gigabyte allocated. So if you want any kind of reasonable experience/price you have to either spin up your own NFS server, use Azure NetApp, or allocate a 10 TB shared premium file share per share, which is something like 70k a year.
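To make the scaling concrete, here is a rough sketch of the provisioned model in Python. The floor and cap figures are assumptions based on Azure's published formula around this time, not a current quote; check the docs before pricing anything.

```python
# Sketch of the Azure Files premium scaling model: baseline IOPS are a
# function of provisioned capacity, so buying performance means buying
# space. The floor (400) and cap (100k) are illustrative figures, not
# guaranteed to match current Azure limits.

def premium_share_baseline_iops(provisioned_gib: int) -> int:
    """Baseline IOPS: a small floor plus 1 IOPS per provisioned GiB."""
    return min(400 + provisioned_gib, 100_000)

# A 100 GiB share gets a paltry ~500 IOPS; to reach ~10k IOPS you must
# provision ~10 TiB, whether you store that much data or not.
print(premium_share_baseline_iops(100))        # 500
print(premium_share_baseline_iops(10 * 1024))  # 10640
```

The point is that performance is bought indirectly through capacity, which is why the 10 TB allocation in the comment above exists at all.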


I like AWS, but AWS EFS has the same problem. They've improved it a bit through some recent changes, but it's not much better.

The way it worked: they gave you absolutely pitiful base IOPS credits for EFS, and everything else was tied to disk space used. So the more disk space used (and paid for), the more IOPS. After that they'd completely destroy your IOPS if you used up all the credits. By destroy I mean IOPS at the level of an HDD from 1995.

I set up a Jenkins instance using EFS and initially it went well. It barely had any activity, and after about 2 weeks it had used up all the credits. After that even the login page would take 20 seconds to load.


I think it's throughput credits that EFS gives you (e.g. read/write MiB/s), not IOPS. AFAIK they don't document the available IOPS at all.

In my experience the latency for an individual I/O operation on EFS is always at the "HDD from 1995" level regardless of available burst credits. Something that does lots of small random I/O, like checking out git repos on Jenkins workers, is basically the worst case for EFS.

https://docs.aws.amazon.com/efs/latest/ug/performance.html
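The credit mechanism can be sketched as a toy simulation. The figures (a baseline of ~50 KiB/s per GiB stored and a starting balance of ~2.1 TiB) are approximations of the linked docs at the time, not exact billing math.

```python
# Toy model of EFS bursting-mode throughput credits: baseline throughput
# scales with data stored, bursting above it burns credits, and idling
# below it earns them back (up to a cap). Figures are approximations.

CAP_MIB = 2.1 * 1024 * 1024  # ~2.1 TiB starting/max credit balance

def baseline_mib_per_s(stored_gib: float) -> float:
    """Baseline throughput scales with data stored, not with need."""
    return stored_gib * 50 / 1024  # ~50 KiB/s per GiB stored

def credits_after(stored_gib: float, usage_mib_per_s: float, hours: int,
                  credits_mib: float = CAP_MIB) -> float:
    """Credits drain when usage exceeds baseline, refill when below it."""
    base = baseline_mib_per_s(stored_gib)
    for _ in range(hours):
        credits_mib = min(CAP_MIB, max(0.0, credits_mib + (base - usage_mib_per_s) * 3600))
    return credits_mib

# A ~5 GiB Jenkins home has a baseline of only ~0.24 MiB/s. Even light
# sustained traffic of 2 MiB/s empties the starting balance in roughly
# two weeks, after which you're throttled down to that baseline.
print(round(baseline_mib_per_s(5), 2))       # 0.24
print(credits_after(5, 2.0, hours=15 * 24))  # 0.0
```

This matches the anecdote above: a small file system with modest but steady activity coasts on the initial credits for a couple of weeks, then falls off a cliff.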


It's NFS, so the bad latency isn't surprising. The problem is that they don't have anything faster -- it tops out at 2 GB/s or something, even with hundreds of TB, even with multiple clients. You have to split your data over multiple EFS volumes, or build your own virtual Gluster, which are extremely shit options. It also makes any kind of big-data HPC impractical.

Bezos, if you're listening, fire someone. You should have had next-generation pNFS- or Lustre-like protocols by 2016.


They actually do have https://aws.amazon.com/fsx/lustre/

Latency of EFS is much worse than running your own NFS in my experience.


Doh, how did I not find that?

But also, how does the pricing work? It seems to be half the price of EFS? It almost seems like they are assuming an S3 read or Direct Connect to populate the FSx volume.


Throughput credits, you're right, my bad.

The agents were in ECS with no persistent storage, so that wasn't the problem. I was just running the Jenkins master off of EFS, for the persistent configuration storage.

And I don't think it's the latency that's killing EFS usage, it's the throughput. While the credits were there, everything went smoothly; once the credits ran out, the base throughput was fit for the '90s.


We had the same issue with a pgsql server. Started out fine, but to get decent performance you pay through the nose for higher disk throughput. It looks competitive when you're pricing things out and don't know you need to pay for that. When you find out, it's a classic sunk cost fallacy and most companies just eat the cost.


That sounds exactly like AWS though, doesn't it? EFS IOPS scale with allocated data size.


To be fair, Microsoft only decided on Kubernetes within the last year. Before that there was heated debate on whether to support K8s or Service Fabric, their "competing" standard. Now that all efforts are on K8s we'll see it improve pretty quickly.


> Microsoft shoehorns every little thing into their products

That's just how enterprise software works. It's not a Microsoft thing. It's an enterprise thing.

Consumer software often benefits from simplicity and elegance.

But for enterprise software, clients have hard requirements, so you provide the solutions they request.

There really isn't any choice to it. If you don't build it, they'll go with a competitor who will.

There truly isn't any other solution, unfortunately.

At best, you can spend more money trying to improve UX and interfaces. But when you have a set number of employees, and have to choose between improving the interface or building the features that will land another client, it's easy to see which choice gets made.


I agree, but I think that’s on Microsoft. I’m not sure they’ve ever adopted any suggestions from us that weren’t universally wanted. When Teams first became available in 365, it was automatically enabled for everyone. Today it’s not; we requested this change, but I really doubt we were alone in that.

That’s not really what’s important to us, but I should have made that more clear. What is important is the direct line, so that we can call Microsoft and get updates directly from the techs working on the issue when something breaks. Amazon also has genuinely great support, they were even quicker to resolve the GDPR issues that made sure no one outside of the EU will ever access any of our data, not even through logs. But other companies let you talk to automated scripts, and take days to get back to you. So that’s why we like the direct line to Seattle, because it’s better support than most of their competition.


Microsoft also has one of the best and most effective sales machines in the world.

Those direct lines help with customer retention and expansion of services just as well as it provides technical assistance.

Even if AWS has some tech or price superiority, good luck prying those sales teams away from the big orgs and convincing them to go elsewhere - especially after significant ecosystem lock-in. Which is another thing Microsoft is better at.

Microsoft simply has the enterprise sales machine completely dominant and optimized.

It was fascinating watching them roll out Azure with the full force of their developer- and CTO-focused marketing machine, which kept hitting me even though I’d never use Microsoft. Their ability to penetrate markets was fascinating to watch as an outsider.

This is something Google will never be able to catch up with. And it's a very important part of these cloud wars that gets overlooked while we debate the merits of Microsoft’s yes-to-everything-the-managers-ask engineering approach.


> They aren’t cheap, but they listen to us when we need something

Yet amazingly their interfaces and quality are incredibly horrible. Azure, like most other things Microsoft makes, feels like a half-baked and overfitted platform with serious quality issues.


I completely disagree - I find the Azure interface delightful to work with. And best of all, it's consistent, which can't be said for AWS's UI, which mostly looks very dated and differs from service to service.

Years ago the Azure UI was a bit slow (but still beautiful to look at), but those issues were solved long ago.


Having functionality splattered all through the god-awful, inexplicably horizontally-scrolling interface is not my idea of “delightful”. Parts of it are URL-stateful, other parts aren’t, and there’s no rhyme or reason for the difference.

AWS might look “dated”, but I can’t imagine the mindset that cares about that. If a console is how you’re evaluating your cloud provider you are several steps behind where you need to be. And, great news: in addition to having a really dire console, Azure’s APIs are just no good, particularly if you have to step into the shitshow that is Azure AD (and pay even more for Microsoft Graph, which should just be included) or IAM. They don’t measure up to either AWS or GCP, to the degree that I just won’t work for an Azure company again. Trying to get anything done in those environments makes me feel stupid and personally bad for having the temerity to try, between inscrutable errors where the Microsoft support is barely English-literate even when you’re paying for it and the aforementioned API barf, and that’s a bad way to live one’s life.


> Having functionality splattered all through

I haven't found functionality to be "splattered" at all. In general, you have a menu on the left for top-level functions, and a button-bar at the top for other things - the layout feels really consistent to me, with the only exception I've noticed being Function Apps.

> inexplicably horizontally-scrolling

At least coming from a culture with a left-to-right written language, it makes perfect sense to me. Alternatives would be several layers of dialog boxes, or expanding vertically, but I like the Azure portal's way.

> AWS might look “dated”, but I can’t imagine the mindset that cares about that

It just feels ugly to work with it. But the bigger issue is the inconsistency - some parts have small variations, others are completely different; it feels very fragmented.

> If a console is how you’re evaluating your cloud provider you are several steps behind where you need to be

Again I'm afraid I disagree. You can work with Azure from a console using cross-platform Powershell Core, or the cross-platform Azure CLI (my preference). There are a host of REST APIs too. I've worked with numerous different Azure services, and I really haven't had any issues here.

> Microsoft support is barely English-literate even when you’re paying for it

On the (free) Microsoft forums, yes, they are complete and utter shit, and I don't know why Microsoft even still hosts them. But I've found the paid-for support at Azure to be very good, and there certainly haven't been any English language issues.


I have personally had to re-document setting up a basic Azure AD connection 4 times in an 18-month period. Each time I go back to it, the UI has changed and key pieces of functionality are just 'elsewhere'.

The EC2 console is old and outdated; it's been the same since I started with it like 7 years ago. They are rolling out a new dashboard right now, a complete overhaul.

The difference - I don't have to re-document the EC2 Console for internal training.

The AWS CLI is a simple tool that just keeps working, and its documentation is pretty much all you need to look at.

The PowerShell interface to Azure AD was absolutely opaque: no documentation, hours to figure out how to configure claims, and when I did, it didn't even work due to hidden limitations that the paid-for support could not explain.

I think that your experiences have been quite different from others', certainly from mine.


Thanks for replying, because I like knowing I'm not crazy!

A lot of Azure fans like to play "hide the ball" on whether Azure AD "counts" in a lot of ways because it functionally requires Office/Graph and that's not "really" Azure. (Never mind that it doesn't play nice with any IdP, you have to basically pass an act of Congress to let it defer in any meaningful way to Okta...)


We have built a desktop app to manage AWS, and we will be rolling out some of the main Azure serverless pieces over the next few months. I think you might find our GUI much more user-friendly than what is out there now. https://getcommandeer.com


Glad you could find a place to shove in a plug for a piece of software that, if we're being frank, doesn't seem like the sort of thing I'd recommend to any user of a cloud environment. GUIs are failure modes for cloud operations. Using APIs first and last is the only way I've found to build a successful cloud team at any scale.

(And I have even less use for highly bloggable serverless stuff than I do baseline-working tools and systems. A "Dynamo tester" is less valuable in aggregate than, say, something that made Cloudmapper easier to deploy.)


You are being frank, I don't think we are. This tool is getting great usage for actually being able to view your cloud data, such as S3 and Dynamo, in a meaningful way. The testing suite is still in its infancy, but if you are doing serverless development and want to test your Lambda firing, and see the console logs from a Dynamo stream or an S3 file getting created, it is a great way to do so. Lastly, our tools for IaC are starting to roll out, so you can run your Serverless, Ansible, or Terraform files against any account and region at the click of a button and see the results, rather than having to fiddle with the command line.

APIs are very important, but enabling your developers to actually see the cloud infrastructure is equally important. Monolith systems are still popular because people can see the entire system. Serverless and event-driven systems are not easy in this regard, and the problem we are solving right now is that great tools are out there, but just getting insight into your system is difficult.


The changeover from the "old" Azure portal to the new one was a bit painful at the time - but that was a while ago now and the current one is, I think, pretty good.


If your screen isn't big enough the Azure portal scrolls in 4(!) directions, and even worse, it uses horizontal scrolling for navigating between related objects! It's completely impossible to use if your screen isn't huge and the window maximized.


I'm glad someone finds the interface usable and consistent


I don’t really like the admin interfaces or CLIs of either AWS or Azure. I’ve had an easier time with Azure than AWS for my personal projects, but maybe that’s just from being used to the Microsoft way.

I’m not sure why they are so much harder to use than smaller services like Heroku or DigitalOcean. You’d think the big guys would have the best and easiest services, but they sure don’t.


DigitalOcean has some of the best APIs, documentation, and support I've ever encountered. I wish more companies were like them.


At this scale, why not invest in your own datacenter, since you already have a lot of servers to maintain? I understand you have to use Office 365, but what is the value prop of Azure to you beyond that ?


Our setup is mixed; all our own servers are virtual, on rented space at a local server rental shop. I’m not a hardware tech, so I’m afraid that’s as technical as I can get.

So our SQL cluster, and most of our web applications run on our own iron. Anything external as well as a lot of managed services run in Azure where the added security and operations dev-tools are invaluable compared to what we had when we self-hosted.

We do make a conscious choice about everything that moves to the cloud. We’ve spent a lot of time figuring out if buying new rental space for our SQL cluster would be cheaper than moving it to Azure. With the current political climate, we’re also a little more hesitant about moving things to clouds operated by American companies, because there is risk attached, in that we may have to move it back rather suddenly. Not something we expect will ever happen, but we don’t like risk in the public sector.

So my post wasn’t so much a “we run everything in Azure” as it was “well, Azure is the obvious choice for the things we do operate in the cloud”. The relationship Microsoft has with enterprise, and the expertise staff have with their products, means they become the best choice for a lot of enterprises. Unless Microsoft solutions are radically more expensive than their competitors, they simply have an advantage by already being a big, and typically well-liked, player in most large organisations.


I say this as someone who used to firmly agree with this position, but in the last year my mind has been changed without reservation.

The time when this was a good idea has genuinely passed. There is no earthly way that any reasonable organization will be able to provide even the level of security you get by relying on a large-scale cloud provider.

There is no way they will come out on top money-wise either. The big cloud players have an _absurd_ number of servers, with _teams_ of some of the best CA talent the globe has to offer stressing to improve KPIs for them.

10k employees in a municipality is small potatoes to what azure / AWS / GCP have dedicated to their products.


I have literally never seen a setup where cloud came out less than ~twice the cost of dedicated hosting. I have seen instances where dedicated hosting comes out cheaper than putting your own equipment in a colo, but even that depends on being in a location where energy and property prices (and so colo rental costs) are high.

And this does factor in devops - when doing consulting I consistently earned more (because of more hours) from clients that went for cloud setups; often they'd end up spending more time solving problems that generally didn't exist in the first place in a dedicated setup.

I do see lots of people that keep assuming that the cloud players must be cheap because they're so big, but I just have never seen that bear out in practice - in part because of that attitude, the margins they can charge are far higher.

Cloud providers are great for ease and for the number of services they provide, but they are generally an expensive step up.


I suspect you’re doing it wrong, or there’s some niche type of computing you specialize in.

In general purpose IT, looking across an enterprise portfolio of applications, we consistently see customers of Tidal Migrations replatform their applications to cloud and save 95+% in OpEx vs dedicated on-premise hosting.

IMO, The first step to realizing those cost benefits is recognizing that the cloud is not your datacenter and you need to architect differently.

Yes, cloud spend can grow as you open up access to more developers, but that’s why we have a plethora of tools and governance people to help make that manageable. I believe the business benefit of the agility gains that come from instant and decentralized resource provisioning will always trump any cloud bill... especially if you’re in a competitive industry & don’t want to get left behind.


> IMO, The first step to realizing those cost benefits is recognizing that the cloud is not your datacenter and you need to architect differently.

I see you've never set up or had to deal with setting up SAP. There are a ton of legacy line-of-business applications which won't be close to "cloud" any time in the near future. And they all run, if you're lucky, on VM clusters; if you're unlucky, on bare iron, due to silly crap like per-CPU licensing based on where the software "might" run. Or, if virtualized, licensed on the sum of all the physical hardware's CPUs.

"Enterprise" software running on premises is... problematic at best. Good luck replatforming something like this. They ask for your arm, leg, firstborn child, and your great-grandkids' children for the opportunity to run their software.

I'm avoiding talking about the vendors that require up to or over a month of a contractor on site helping you "integrate and install" their application on your systems. That crap is so far removed from instant and decentralized resource provisioning it's like being in another universe. God help you if you need to change anything.


The parent is probably comparing the cost of on-prem to running VMs 24x7. Most IT departments are running software they did not write and don’t have the luxury of even getting access to the source code. If these customers want to do cloud, they have to do it ‘wrong’.


First of all:

> dedicated on-premise hosting

I did not say on-premise, for a reason. Most people are not well placed to host on-premise. For starters it tends to require ops staff on site, which in many countries means a minimum of 3 shifts of a minimum of 2 people. On-premise deployments rarely make sense.

I said dedicated hosting, which implies renting servers from providers like e.g. Hetzner.

But that said, you can replatform to anything from anything and save money in most organizations, because most organizations tend to be very bad at optimizing cost, so this to me says very little.

Most of the systems I've moved over the years were on the other hand carefully architected to be "cloud friendly" to start with. Some of them started out on cloud platforms and were migrated off to save money.

When you on the other hand start comparing the amount of compute and bandwidth you can get for the same prices, it becomes very clear how overpriced they are.

You can easily find bandwidth at less than 1/10th the price of AWS, for example, and in fact I've had clients whose bandwidth bill alone at AWS was bigger than their total hosting bill after I'd moved them elsewhere. No amount of architectural change to their systems will change that - at a minimum you need to reduce the data transfer from their AWS setup. Now, you don't need to move everything out of AWS to fix that - often the savings you can achieve by cutting the AWS bandwidth bill can pay for an entire CDN.
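The bandwidth gap is easy to sanity-check with back-of-envelope numbers. The per-GB prices below are illustrative ballpark figures from around this period, not quotes from any provider.

```python
# Back-of-envelope egress comparison, with illustrative prices:
# ~$0.09/GB for AWS internet egress vs ~$0.001/GB (about $1/TB)
# at a dedicated host.

def monthly_egress_usd(tb_out: float, usd_per_gb: float) -> float:
    """Monthly egress cost for a given outbound volume and unit price."""
    return tb_out * 1000 * usd_per_gb

TB_OUT = 100  # 100 TB of outbound traffic per month
aws = monthly_egress_usd(TB_OUT, 0.09)
dedicated = monthly_egress_usd(TB_OUT, 0.001)
print(round(aws), round(dedicated))  # 9000 100
# Bandwidth alone can dominate the bill: roughly a 90x difference here,
# and no application-level re-architecture changes the unit price.
```

Under these assumed prices, a bandwidth-heavy workload pays more for AWS egress alone than a dedicated host charges for everything, which is the pattern described above.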

Dedicated hosting also tends to give you far more flexibility in the precise hardware configuration to the point where savings can be similarly huge by substantially reducing the number of instances.

> I believe the business benefit of the agility gains that come from instant and decentralized resource provisioning will always trump any cloud bill...

Nothing prevents you from spinning up cloud instances when needed. Most dedicated hosting providers today also offer cloud instances, so you can typically do that even with a single provider. In practice, the cost difference between dedicated and cloud typically allows substantial overprovisioning while still saving money, but if you're prepared to use cloud to handle spikes, you can save even more with dedicated by going closer to the wire, because you know you can spin up cloud instances to take the peaks.

I've set up and operated systems like that, which balanced loads over colos, dedicated hosting and cloud instances seamlessly, several times.
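The trade-off sketched above can be put into rough numbers. All figures here are hypothetical (dedicated at a third of cloud rates, a steady baseline with occasional peaks); the point is only the shape of the comparison, not the specific values.

```python
# Sketch of "dedicated baseline + cloud for peaks" vs the alternatives.
# Hypothetical numbers: 1 cost unit per capacity unit on dedicated,
# 3 on cloud; steady load of 100 units, peaking to 150 for 5% of the time.

DEDICATED_RATE, CLOUD_RATE = 1.0, 3.0
BASELINE, PEAK, PEAK_FRACTION = 100.0, 150.0, 0.05

# All-dedicated must be provisioned for the peak around the clock.
all_dedicated = PEAK * DEDICATED_RATE

# All-cloud is elastic, so it pays for average usage at cloud rates.
avg_load = BASELINE * (1 - PEAK_FRACTION) + PEAK * PEAK_FRACTION
all_cloud = avg_load * CLOUD_RATE

# Hybrid: dedicated covers the baseline; cloud covers only the excess.
hybrid = BASELINE * DEDICATED_RATE + (PEAK - BASELINE) * PEAK_FRACTION * CLOUD_RATE

print(all_dedicated, round(all_cloud, 1), round(hybrid, 1))
# 150.0 307.5 107.5 -- hybrid beats both under these assumptions
```

The hybrid only wins while the peaks are short relative to the baseline; as the spiky fraction grows, the elastic all-cloud option catches up, which is why the choice depends on the actual load profile.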


It's amazing how people fail to see that public cloud is the equivalent of a hotel.

It's far cheaper than buying a house... in the short term.

Cloud providers are not getting hardware, datacenters, electricity and labor for free. You are still paying for it.


Public cloud is only the equivalent of renting a hotel if buying a house means hiring the builders full-time for continued maintenance.


I used to provide devops consulting services precisely because nobody but large organizations "hires the builders full-time for continued maintenance" for dedicated servers, any more than for cloud, because it takes really large systems before you need hardware intervention very often.

Even when working with clients that had multiple racks of hardware they owned, I spent on average a couple of days a year dealing with that.

On the contrary, clients with cloud setups "hired the builder" for far more hours on average than those with dedicated setups. For my billable hours it'd have been far more beneficial if more people went to cloud setups.


Hiring the builders full-time is only the equivalent of building a private data center if building a private data center means buying the entire companies of Intel and Supermicro.


I think this will always be the case when looking at the base cost of infrastructure itself (price of a compute/GB of storage in the cloud vs on prem)

However, the cost of cloud pays off so dramatically (in my past experience across companies) when you can see what new things the company can do with IaaS/PaaS and how quickly its done.

I've been at a large bank and at a small startup that was forced to use an external datacenter, and the result was the same until we went to AWS/GCP: infra needs were highly manual, and scaling often required purchase orders that took months. As soon as we moved to the cloud and embraced infra as code, things started to move 5x faster and we could focus on building software and products, not fighting legacy IT teams.


Nothing stops you from doing infra as code on dedicated hosting. All of my setups for the last decade or so have been built around VMs and/or containers with deployment systems where we spun up containers across multiple datacenters on servers we had full control over.

Many dedicated hosting providers now provide APIs for deploying servers, as well, so you can handle even deployment of the underlying servers in an automated way.

Several of these setups have combined cloud deployments with deployments to dedicated servers from the same container images, bound together in a single virtual network. E.g. I had a client that hosted across AWS, GCP and Hetzner, and migrated services between them with zero downtime. Eventually they moved everything to Hetzner because it cost them about 1/10th of AWS and GCP given their bandwidth use (at the time, outbound bandwidth at AWS cost 50x what it cost at Hetzner).

If organizational dysfunction means you're not allowed to order the resources you need, then that is of course a problem, but a very different one.


Maybe not your own DC, but colocation could still make sense. You don't need to be more efficient than Azure & Co. They have pretty solid profit margins; even being 30% less efficient should still be cheaper for you. And beyond a few thousand servers, I'm not even sure scale matters that much (for server virtualization, at least).


This varies by case - Office 365 phishing break-ins have been a bad epidemic for a long time now and the anti-phishing measures have not kept up well enough. I think MS still doesn't support any phishing-resistant 2FA method there...


FIDO is supported, which is credential-phishing resistant - but OAuth permissions phishing obviously can't be prevented if it's all 'legitimate' traffic to a bad app.


Haha. Here's another one: "At this scale, why not write your own OS, since you already have a lot of servers to maintain?"


Haha!


A potential problem is that now half of your IT recruits go to reinventing this stuff instead of working on domain problems. There aren't that many IT staff per 10k municipal employees.

(Of course this is assuming Azure specific hassles take much less staff time than running your own infra, not a given...)


One point (though I will admit this is the cynical part of my brain speaking) immediately jumps to mind.

If the server goes down, you can blame Microsoft. Even the least technical person can’t blame you for that. If you create your own data center however, if it goes down, you may potentially be on the chopping block if service is interrupted.

Self preservation is a strong motivator, perhaps the strongest in a business environment.


> Self preservation is a strong motivator, perhaps the strongest in a business environment.

And it's the most bullshit one. Outsourcing the risk does not mean avoiding it; it means putting it outside your control.

This is the typical manager bullshit attitude of "nobody ever got fired for buying IBM", which generally leads to adopting unsuitable, bloated, overpriced solutions to trivial problems, just because they do not have the balls to do things properly.

Running away from one's responsibilities should be a criterion for getting fired when things go badly wrong.

The outsourcing of the 737 MAX MCAS code to India is a perfect example of that.


Agreed that outsourcing risk does not free you from the responsibilities. However, for many many services you have to rely on others to provide the service better than you could do yourself, as you are limited in time, money and other resources. For running production grade databases at startups, it's much more cost-efficient to run on AWS RDS than to hire a systems engineering team with 24/7 standby. For more trivial systems you might be right and managers might be cowards. However, you have to realize it's a spectrum.

Whether you're running it yourself or you outsource it, things will go wrong at some time. When this happens, and you run it yourself, you also have to explain why it went wrong in a post-mortem or RCA. The problem is that your customers have no point of reference for your explanation (how likely is it that this occurs again?). In my experience, the following message goes down a lot smoother: "This is an Azure/AWS/GCP outage which affected not just us, but 1000s of other companies. We rely on {provider} to continuously learn from their mistakes and improve their service and they've shown this in the past. Here's their post-mortem report about this outage.". Note that not all cloud providers fit this bill.


> However, for many many services you have to rely on others to provide the service better than you could do yourself, as you are limited in time, money and other resources.

I am not criticizing outsourcing when an outsourced service does a better job. That is normal, and it should be like that.

What I am criticizing is outsourcing even when local/home-made/OSS solutions are a better and cheaper fit, even considering SLAs... just to avoid responsibilities.

This is, in my experience, common, especially if the management has no technical background, no trust in its team, and fears its upper layers.


Boeing didn’t outsource MCAS to India. The outsourcing contract was for display software.


Yes, surprisingly enough, people look out for their own self interests.

What makes outsourcing hardware different from the dozens of other software as a service vendors that most companies depend on?

Why spend the time developing competency in managing servers if that doesn’t give you a competitive advantage.

But, going with the biggest most stable vendor is usually good. If you bought IBM hardware in the 70s you can still buy new hardware that supports your software. If you went with their competitors - not so much.


> But, going with the biggest most stable vendor is usually good. If you bought IBM hardware in the 70s you can still buy new hardware that supports your software. If you went with their competitors - not so much.

For 20 times the cost of the commodity x86 hardware you should have bought by doing the right thing.

And that 20x cost will very likely also please your own competitors, because it makes you less competitive.

That's how you end up with entire airline industries or banks still running on COBOL with no possibility of migration.

IBM itself is very happy about it, however.


How much would it cost to rewrite everything? What were they supposed to choose in the 70s? Do you think that all of the people still using IBM and running legacy code are dumb, or just maybe they did a cost-benefit analysis and decided they didn't need to rewrite everything in Node and React?


Maybe you should ask why some sectors that "fear" failure more than anything else (banking, aviation) are still trapped in these systems while everyone else is not.

This is exactly related to what I was saying before.


>that generally lead to adopt unadapted, bloated, overpriced solutions to trivial problems.

There's almost no meaningful business downside to choosing poorly.

Most businesses just don't care that someone else thinks they didn't use the right tool for the job.

Should it be that way? Definitely not. But it won't change unless there are consequences (and when that happens, those products will die off quickly)


Bullshit means something is not true. People choosing to go with a vendor to avoid the risk of being blamed for failure, thereby reducing their chances of being fired, is a true phenomenon. And one that works. So how is it bullshit?


My experience with home-rolled solutions suggests there are plenty of hidden costs. Got a new project, and want to spin up some experimental servers? Gotta wade through a bunch of IT guys. Documentation and training around Azure, etc is also going to be better than homegrown alternatives, most likely. Things like that.

It’s hard to put a monetary figure on the friction costs, but they are there.


You're comparing one extreme (public cloud) to another extreme (sysadmin-managed infrastructure). The alternative, today, is either a private or a hybrid cloud, which have all the advantages of short-circuiting sysadmins that public clouds have.


Public institutions usually have a lot of systems but not a lot of traffic (compared to tech companies), so it makes sense for them to put things in the cloud instead of building everything themselves. Also, public institutions rarely get top talent and instead do everything via contractors; do you really think it is cheaper to get government contractors to build and run infrastructure than to let Microsoft do it?


Active Directory, Domain Services, Exchange Server and SharePoint Servers (which can use integrated Windows auth).

Also many internal Enterprise apps are integrated with AD for auth and permissions.


It would be a tremendous hassle for little to no payoff, as they probably don’t have the skill set to actually run a data center, and their size is probably not that large; they just run a lot of software, I suppose (that’s usually the case in public administration). It would be rampant malpractice to open your own data center for something like that.


At this scale why not contact all your neighbouring or peer municipalities and go for a joint cloud migration?


As someone who worked in the very same municipality... one word: bureaucracy.

More specifically: legal boundaries around ownership and the separation of taxpayer money. Of course, GDPR is also a big obstacle to joining forces at this scale.

I worked on a project which tried to create a joint venture between several municipalities. I'm not sure if that partnership has ended yet, but overall you could only make progress if there were people on both sides collaborating with a "better to ask forgiveness than permission" mindset. As soon as the municipality lawyers or their security officers got involved, all progress stopped immediately.


Or not dealing with infrastructure gatekeepers.


Errm. "Estimated" 300-500 systems?

I worked at a corporation with (at one point) 1891 systems; we had several ways of counting them... some ways said 44 (business platforms). But count them we could!

If I was your CIO, counting would be my number one mission!


I'm not sure why this was downvoted. I think it’s correct, and I think we could save a lot of money by collecting this knowledge. I say this because I’ve witnessed departments where the managers of two teams in the same building had bought two different info-screen display systems. You could literally walk twenty meters and see two different info-displays. Both with hefty expenses, and both completely unnecessary, because we have a centralised info system they could use for free that they just didn’t know about.

Gathering this information and centralising the decision-making to make sure duplicates like this don’t happen isn’t on our political agenda, however. In fact the political agenda is quite the opposite, favouring even more decentralisation, at least right now. I mean, you say CIO like that’s something we should have; we don’t. We’re 10 techies who run those 300-500 systems, develop our own, navigate the national strategies, implement and manage our national IT architecture (Rammearkitekturen) and support those 10,000 employees. ;)


Perhaps they are part of one team in a larger org, perhaps the org has terrible asset management, perhaps "systems" means "applications" that can be counted different ways and this post wasn't a report to the CIO.


"Stuff like making it easier to hide teams-emails from the company-wide outlook address-book" — isn't this an implementation thing rather than infrastructure? Say you use LDAP to manage users, permissions, etc.; it's up to you how you want to set it up. I'm curious whether you are using some out-of-the-box solution from Azure for this.
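For what it's worth, in on-prem AD with the Exchange schema extensions, hiding a mail-enabled object from the address book (the GAL) ultimately comes down to flipping a single directory attribute, so in principle it is indeed an implementation knob rather than infrastructure. A sketch of what that change looks like as an LDIF modification — the DN and OU here are made up for illustration; Exchange Online exposes the equivalent setting through its management cmdlets rather than raw LDAP:

```
# Hypothetical object; msExchHiddenFromAddressLists is the Exchange
# schema attribute controlling address-book visibility.
dn: CN=Project Team,OU=Groups,DC=example,DC=local
changetype: modify
replace: msExchHiddenFromAddressLists
msExchHiddenFromAddressLists: TRUE
```

The catch in the parent's case is that Teams-created groups live in Exchange Online, where tenants don't get raw directory access, which is presumably why it took a feature request to Microsoft rather than a local config change.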


I'm not managing anything near what you, and many people on here, do in your job. But from the perspective of a developer who has worked with several cloud platforms, Azure and DevOps make the barrier to entry very low, and the functionality available keeps growing.

For personal stuff I like using Firebase, but that's just because it's free. I just feel like you get what you pay for with Azure, and even for a small team like mine, someone at MS has always gotten back to us quickly when we needed them.


I'm curious why this got down-voted. Was my opinion invalid or something?


What sort of tech-team do you have btw? Inhouse? (Or should I say inmunicipality;)



