Azure is now bigger, faster, more open, and more secure (microsoft.com)
166 points by deegles on Jan 9, 2015 | hide | past | favorite | 106 comments



Bigger, faster, more open, more secure ... what's missing from this list?

Oh, that's right, how about more uptime?

  Provider               Outages  Total Downtime
  ---------------------- -------- --------------
  Amazon EC2                  12     2.01 hours
  Google Compute Engine       65     3.27 hours
  Microsoft Azure VMs        102    42.80 hours
Source: https://cloudharmony.com/status-1year-of-compute-group-by-re...

And this doesn't show the other Microsoft cloud services that have even worse reliability.

At my company we invested a lot of time trying to make Azure Service Bus and Visual Studio Online work. Both were so dreadfully unreliable as to be unusable.
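For scale, those downtime totals convert to uptime percentages with the usual formula. A quick sketch (assuming a 365-day year; the figures are from the table above, and `uptime_percent` is just my own helper name):

```python
HOURS_PER_YEAR = 365 * 24  # 8760

def uptime_percent(downtime_hours: float) -> float:
    """Convert total downtime over one year to an uptime percentage."""
    return 100.0 * (1 - downtime_hours / HOURS_PER_YEAR)

# Figures from the table above
for provider, downtime in [("Amazon EC2", 2.01),
                           ("Google Compute Engine", 3.27),
                           ("Microsoft Azure VMs", 42.80)]:
    print(f"{provider}: {uptime_percent(downtime):.3f}%")
# Amazon EC2: 99.977%
# Google Compute Engine: 99.963%
# Microsoft Azure VMs: 99.511%
```

So the aggregate Azure figure works out to roughly "two nines and a half", well short of the "three nines and up" the others show.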


For IAAS providers, you should care about the uptime of the regions (datacenter) that you are planning to use and the uptime of the specific services that your vms will use within those datacenters. In particular, I recommend paying close attention to the monthly/quarterly uptime trend in addition to aggregate uptime numbers.

And if you really care, try and find out the datacenter(s) your providers will be using and find out the historic performance of their different infrastructure and redundancy providers, and how often they actually test and upgrade their systems.

Look at the difference now when I filter by region and add the region data to the chart... it clearly shows just how hard these comparisons can be:

  Provider               Uptime   Outages  Regions Total Downtime
  ---------------------- -------- ------- -------- --------------
  Google Compute Engine  100%        0     1         0 hours
  Amazon EC2             99.9984%    7     3         0.13 hours
  Microsoft Azure VMs    99.942%     56    6         2.89 hours
https://cloudharmony.com/status-1year-of-compute-in-america_...


Yeah, aggregating downtime across regions seems like it would mess with the data. A single region with 40 hours of downtime is much worse than 10 regions with 4 hours of downtime each, but both show up as the same total.
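The distortion is easy to see with a toy calculation (hypothetical numbers, not from the article): the total downtime is identical, but the availability a customer pinned to the worst region experiences is very different.

```python
HOURS_PER_YEAR = 365 * 24  # 8760

def worst_region_uptime(downtimes_by_region, hours=HOURS_PER_YEAR):
    """Uptime % seen by a customer pinned to the worst region."""
    return 100.0 * (1 - max(downtimes_by_region) / hours)

# Same 40-hour total, two very different shapes
single = [40.0]        # one region down 40 hours
spread = [4.0] * 10    # ten regions down 4 hours each

print(sum(single), f"{worst_region_uptime(single):.3f}%")  # 40.0, ~99.543%
print(sum(spread), f"{worst_region_uptime(spread):.3f}%")  # 40.0, ~99.954%
```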


Hey, at least it's not Verizon cloud, which just told users to power down everything today in preparation for a scheduled 48-hour outage.

http://www.computerworld.com/article/2865802/verizon-warns-e...


In CloudHarmony's assessment of 49 cloud compute providers, Microsoft Azure was literally dead last in total downtime.

Number 49 out of 49 services.

If Verizon is offline for the entire time, maybe they'll be number 50.


Wow. That’s just bizarre.


Can't possibly be accurate. It's missing the major AWS outage from last year that took down the entire US-East zone for more than half a day.


US-East is a region made of multiple availability zones. Not all of those zones went down (none of my stuff was offline in 3 of the AZ's, for example).


You disingenuously left out the Region column from your copy of the table from the article you linked, which entirely alters the meaning of "Total Downtime". Any explanation for that?


That said, I'm a GCP certified engineer. I love GCP, but I can see Google losing the cloud war to Microsoft solely in that Microsoft has a more comprehensible offering (UI and libraries/services) and, most importantly, knows how to market its platform.


AWS is more comprehensive than both by quite a wide margin. MS does indeed have a long history of relying very heavily on marketing. Time will tell if marketing can paper over the large gap in features.


More comprehensive by a wide margin? Can you unpack that? I get the impression that there is no functional equivalent to Google App Engine on the Amazon side. You still have to "reserve" dynos, you still have to define how your app scales. There is no "autoscaling" feature that's like App Engine on the Amazon side. Or at least if it exists, it's not as mature as App Engine.


Is this the certification you are talking about?

https://cloud.google.com/developers/training/exams/


Those are the prerequisites for the 400-series exams. I had to pass all of those just to get into the 400-series classes. Then you have to pass the exams and do a project for each part of the platform. I did all that.


Could you please link me where I can find more information about the 400-series exams / how to become a certified GCP engineer?


I'm not really sure! I was invited into the program through the partner channel at Google because I was working for a company that had already done a lot of development on the app engine platform. I was one of the early adopters of GAE, as I started using it the first year it came out. One of the apps I co-wrote became a case study Google used.

See this is kind of an example of how Google is awesome at engineering but short on communication. I can't even explain how to get into their fold, so to speak.


There was a pretty painful, several-hour Azure outage recently, but 102 outages is over an order of magnitude higher than my Pingdom/Clicky uptime data for a few sites in US East last year.


I have a single instance, and other than that big one my monitor hasn't really gone off except during patching. I'm not sure what those hours are referring to.


Probably includes partial outages.


I'm going to show the Azure downtime numbers to my boss next to the downtime numbers for our private cloud next time I want to ask for a raise....


Way to set the bar...


When a company trumpets some bold statements for marketing purposes, I immediately think about the worst or most far-fetched interpretation of their language. I wonder if their marketing people are being coy and the honest truth is, in some ways, the opposite or far less than what they are saying.


With 'more' terminology, I always wonder what the original was. "More" than 1% secure is... not terribly specific. "Best ever" doesn't really mean it's good, just that it's always been worse.

As a side note, I'm not implicating Azure here, just a general context.


Maybe an automated corporate-marketing-speak-to-normal-human-language translator, with a UI like Google Translate, could be cooked up?


You probably know that security includes availability. Just saying...


We run 50+ Linux servers on Azure (West Europe) and it's a nightmare: unplanned VM reboots, VHDs disappearing, timeouts or unreachable blob files on HDInsight (with Pig), a slow load balancer that isn't really configurable, and very slow Websites. I wouldn't recommend it. It's very hard for us to maintain a professional SLA.

Maybe it's because we run Linux servers?

One good point is blob storage: slow, but cheap.


Is it feasible for you to evaluate an AWS deployment? I'd be very interested in the numbers you mention if we could compare them to something directly.


Nope, we don't want to deploy on AWS for contract reasons. But we are moving to fully dedicated OVH. I can't provide numbers for OVH, but so far we're at a 100% SLA on those servers.


I have a handful of issues with this...

* MS is managing said Key Vault, meaning they may well be under pressure from the NSA to provide access, without a warrant, and without a target knowing said access was requested.

* The local SSD storage can really only be used for temporary or cache-based workloads: if your image ships to another machine in the case of failover, you'll lose that data. That's not a bad thing in itself, but the high-performance (SSD-backed) disk storage is still waiting for general availability; I've been on the preview wait list for a while.

* The ready-made Ubuntu+Docker VM is cool, but I think it's even cooler that CoreOS is now generally available out of the box.


We run all our ec2 instances on ephemeral disk instances. Historically running on ebs was a great way to ensure application downtime.

Ephemeral drives mean you need to change the design of your application to be able to withstand full loss of machines. But it's really not that hard. A good replicated database (riak, cassandra) spanning multiple availability zones gets you 95% of the way there.


I wouldn't really trust doing so with less than say 40-50 servers, which C* can do quite well, but this is very much overkill as an expense for most people.


I'm not really sure what you mean. The number of nodes you need depends on the capacity you need and your replication factor.

Most C* deployments are in the 9-15 node range. You could safely deploy with 6 nodes and RF3 if 6 nodes gives you enough capacity in terms of both disk space and IOPS.
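For context, Cassandra's fault tolerance at QUORUM consistency follows directly from the replication factor; a small sketch of the standard arithmetic:

```python
def quorum(rf: int) -> int:
    """Replicas that must respond for a QUORUM read/write to succeed."""
    return rf // 2 + 1

def tolerable_failures(rf: int) -> int:
    """Replicas that can be down while QUORUM still succeeds."""
    return rf - quorum(rf)

for rf in (3, 5, 10):
    print(f"RF {rf}: quorum {quorum(rf)}, tolerates {tolerable_failures(rf)} down")
# RF 3: quorum 2, tolerates 1 down
# RF 5: quorum 3, tolerates 2 down
# RF 10: quorum 6, tolerates 4 down
```

Note that this is per replica set (the nodes owning a given token range), not the whole cluster, which is why a 6-node cluster at RF 3 can safely lose a node.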


True... but given that you can't control how many nodes come down at once, (up to a third typically in azure's case), or what data is on which node, it's less predictable. If I have 24 nodes each across 2 data centers, with a replication factor of 10, then I would consider the data relatively safe... short of that, you have a significant chance of down time should an outage occur.

Let alone in the case of a more significant issue, and again, if the server goes down, that data is effectively lost, since the individual node will no longer be there. You have to have multiple sites and higher replication factors and a good backup system.

That said, Azure storage actually works very well, aside from the relatively recent azure outage.


> ...with a replication factor of 10

RF 10 is insane overkill. There is probably someone out there doing it, but it is completely unnecessary for 99.999% of real-world deployments.

Getting back to your original comment, Azure persistent storage (or AWS, or Google) isn't giving you anything near RF10 * 2 DCs.


With Azure storage, each write goes to two local replicas and a third (optionally geo-redundant) location. That data will still be there if my VM reboots; the local/scratch disk is gone if my VM reboots. If a significant number of those VMs reboot, you will lose data. I do hope that you are at least backing up those nodes regularly to persistent storage.


I don't know how Azure works but AWS reboots are fine with ephemeral disks.

We back up all sstables to S3. We've never needed to restore a node from an S3 backup in 5 years of running in production.


"In another embodiment, a data storage system receives a request from a third party to access a user's stored, encrypted data, where the data is stored in the data storage system according to a predefined policy. The encryption on the data prevents the storage system from gaining access to the encrypted data, while the policy allows the encrypted data to be released upon receiving a threshold number of requests from verified third parties." - Cloud key escrow system, Microsoft Corporation

https://www.google.com/patents/US20120321086

Anyone here speak patentese?


I think Microsoft is doing a good job of reinventing themselves. Azure and Office 365 really are good products. The Linux support on Azure is great and Office 365 runs well on my Android and iOS devices, support on OS X is OK, and the web version of Office 365 is sometimes handy on my Linux laptop.

As far as privacy on their key store goes, I tend to trust Microsoft and Google more than average corporations.


I'm paying for Microsoft Office for the first time ever (when I've used it in the past it has been provided by school or work) because I can use Office on non-Microsoft platforms now. I can start an Excel document on Windows 8, edit it later on RHEL, and view it on the go on my iPhone. It's the era of the cloud. That's how things should be. It doesn't matter what system I'm running.


Unless you're doing something extremely complicated, linking to data outside the Excel file or abusing Excel for a purpose where some other tool would be far better, this interoperability has been provided by open/libre-office and the various mobile office suites (read-only) for many years. Microsoft's attempts at vendor lock-in notwithstanding. This cloudiness is the wrong solution, the right one would be for MS to stop actively sabotaging interoperability.


Sorry, but OpenOffice (and so on) is lacking in enough places to make it not worth my time. And believe me, I've put a lot of time into trying to make it work. Saving a document in OpenOffice to Dropbox and editing it on Android with Documents to Go or QuickOffice (both of which I've paid for at ~$20 each) and then trying to open it in Office on the school's computers is a recipe for disaster. What's even worse is if I didn't have the time to open it and save it in Word before the assignment was due. The teacher opens it in Word and finds out that OpenOffice saved it with 1.2" margins and Word interprets it as 1.25", or 12pt Arial font that Word thinks is 11pt Helvetica or that picture isn't lined up properly and now an English professor is giving me a zero on the paper because he doesn't know or care what free software means. He just knows I'm the only one in the class that did not follow the instructions he gave.

Worse still, out in the real world as a client-facing consultant who uses a RHEL-based laptop for day-to-day work. I keep a VM of Windows 7 running for when I need to submit a document to a client, because there is no way I am taking the chance on Word interpreting an OpenOffice document correctly.

Right solution or wrong solution, Microsoft Office is the solution. I have to wonder where people work (and where they went to college) that affords them the luxury of taking a stand on what office suite they use. I would also like to know what open/libre office suite you're using where I can use the same document with the same file format and the same rendering engine on any platform including the web (where the web version is also open/libre). Because honestly I don't think it exists right now, let alone having existed for many years, which makes your entire point moot.


You're clearly doing something wrong if you're running a VM of Windows just to create simple documents. It really sounds like you need to take inventory of how you're doing things. Since we're talking anecdote, I have never had a problem editing Dropbox-saved documents in QuickOffice, or having them magically messed up when I email them. I went to a couple of different schools that allowed students to submit documents in formats supported by open source formats. You're right that in the first half of the 2000s using proprietary pay software formats was a requirement, but in the last ten years schools and instructors have started accepting others, and the last couple of years I went to school I didn't submit any document in a proprietary format. The fact is that Microsoft Office is an enormous cost to consider versus something like the equally powerful and less obtuse LibreOffice. Also, your "Microsoft Office is the solution" makes you sound like a shill, seriously.


His example was creating a document in OpenOffice, editing it later in QuickOffice, and then having the recipient open in it Microsoft Word. He is absolutely correct that what the recipient opens in Word will have formatting discrepancies and not look like what he thinks he submitted. It might not even be the fault of QuickOffice, it's more of an issue that it wasn't created and edited in the target/destination environment (Word). This isn't even a new or unforeseen issue for people who work with a lot of office documents.

More broadly, in the business world, Microsoft Office is still very dominant. If your clients and business partners collaborate with Word documents, or Excel workbooks, or PowerPoint presentations, or whatever the case, running a VM so you can use MS Office and collaborate with them isn't "wrong" or something the parent needs to reevaluate: it's a business requirement, and it ensures professionalism. It isn't nearly as uncommon a use case as you think. I don't think he's shilling; he's being a realist given his needs.


If you think I'm doing it wrong at work, you should talk to my boss. I'm sure you can either convince him that Office (including Project and Visio) runs just fine on RHEL or that all the security consultants should be using Windows. He's gonna feel silly when he finds out he's been buying Windows and Office licenses for nothing.

But there's one thing he does know: sending an OpenOffice document to a client that is going to open it in Word is the fastest way to lose that client's business.


The issue is that you have to submit your papers as a Word file. When I was in college we would just turn in a physical paper, so it didn't matter what software we used. I'm surprised they don't let you submit your assignment in PDF form.

It's very bizarre to me that you're required to submit your assignment in a format which is only (truly) supported by paid software, as that would require all students to buy said software. Certainly you could make the case that they could use the software available in libraries and such, but I see no reason they wouldn't embrace a more equitable format if given the opportunity.


Why is that an issue? Word is the standard format for submitting papers of this kind, not just in education but at many companies of different sizes as well. There is no other format that works as well as Word, especially with change tracking when you're working in a group.

When I was in college, we didn't have to pay for anything to get MS products. The college paid for a "MSDN" library that provides students with access to many tools in MS's collection. MSDN is in quotes because that's effectively what it was but I recall it under a different brand for our college.

For IT students, we also had free access to all servers and client OSes, Office, Visio, etc etc etc. We never had to pay for anything.

This was true for the faculty too; they also had free access to everything, and that's why they can require students to submit specific formats.


Word can't even open documents from its own older versions correctly. LibreOffice opens them better.

Therefore "There is no other format that works as well as Word" is completely wrong. Relying on closed document formats and proprietary software is shortsighted.


I had the opposite experience in college. Most of my professors were pretty adamant that submitting .doc or .docx (or .pages or .rtf or .tex) files wasn't acceptable. You were supposed to export whatever your source file was to PDF, and submit that. Especially the case in my CS courses. Word was more common in the humanities courses, but I never had a problem submitting a PDF in those either, it just wasn't required.

Besides accommodating profs who don't use Word (the CS dept had a small but vocal minority of Unix graybeards), the other main motivation was to avoid having to deal with version compatibility issues. Opening a document created on one version of Word in another version would often (depending on the version pair) mangle some things, especially references and cross-references. And some files just wouldn't open at all. Things might be better with that now; I haven't had occasion to investigate in a few years. In the 2000s, there was a huge mess with the .doc/.docx transition, as well as compatibility issues between the Windows and Mac versions of Office.


When in school, Office is $25. Virtually every teaching assistant wants a Word document and that's it. They don't want to give a list of acceptable formats; they want it standardized.


When I was in school ~ 5 years ago, handing in anything not produced in LaTeX was an automatic fail for the assignment. This was a department rule. I majored in physics though. Never paid a dollar for Office nor Windows (since I don't use them), and I don't ever plan to.


You didn't have classes outside the physics department?


I use LaTeX and create PDFs. They look exactly the same everywhere.
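For anyone curious, a minimal document along these lines pins the formatting down explicitly, so it renders identically from any TeX installation (the margin and spacing settings here are just an example, not anyone's actual course requirements):

```latex
\documentclass[12pt]{article}
\usepackage[margin=1in]{geometry} % explicit margins, no guessing
\usepackage{setspace}
\doublespacing

\title{Assignment 1}
\author{A. Student}
\date{\today}

\begin{document}
\maketitle
The rendered PDF looks the same on every platform.
\end{document}
```

Compile with `pdflatex` and the output PDF carries its fonts and layout with it, which is exactly why it sidesteps the Word/OpenOffice round-trip problems described above.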


I'm interested to hear you've had a good run with O365. Our organisation (100 people) has been using it for 3 years and it's a nightmare: weekly outages, very poor performance, browser compatibility problems with anything that's not IE, missing emails, and to top it off, absolutely shocking support.


Ouch, that does not sound good. I don't use their email service. I signed up to get the 1 terabyte of OneDrive storage per member of my household, and almost as an afterthought started using OneNote, Word, and Excel.


Especially considering Microsoft is providing you $150 a month in Azure services for 3 years as part of your participation in their BizSpark program.


Yes, BizSpark is pretty nice. Years ago, when Amazon rolled out EC2, one of their marketing people gave me a $1000 credit to play with AWS. Microsoft BizSpark is even more generous.


The truth is that a lot of you refuse to learn how windows works at a deep level, at least as deep as you know UNIX. Then it doesn't work like UNIX... and then you're angry.

I have seen similar horror stories from AWS customers. They probably weren't early adopters like people on HN...who now have the kinks worked out.

MS should stop trying to impress the HN crowd. And unless you own stock in these companies you need to stop investing so much personal emotion in how they are doing compared to each other. AWS isn't some scrappy upstart from a Horatio Alger novel. MS isn't the Empire from Star Wars. And none of them give a damn about you.


A good example of openwashing. You can call anything open these days, even proprietary software!


The term "open" only appears once in the article on this line (aside from the title):

> Building on our openness with the availability of the first Docker Image in the Microsoft Azure Marketplace

What the hell does "building on our openness" even mean in that context? I've read it several times and it makes no sense. It is great that they added Docker images (really), but maybe someone technical at Microsoft should proofread the nonsense the marketing department spews out.


You mean aside from the title that every other news site will reproduce and which will be everything most people remember?


... and the Microsoft perestroika continues. :)



What do you mean? Do you mean this:

perestroika: ORIGIN Russian, literally ‘restructuring.’


Perestroika is the era in Soviet history when the USSR began to 'thaw' in its relationship to the West and reform its economic policy to be less collectivized and more capitalistic.

In this case, Microsoft would be the USSR and its policy of closed-source Windows/.NET domination would be the old Communist Party hegemonic philosophy. That would make Satya Nadella Microsoft's Mikhail Gorbachev.


Does anyone have any opinions on Google Cloud? We are investigating moving to Google Cloud but I personally am a bit skeptical because they appear to be a distinct #4 behind AWS, Azure and Rackspace. I'm worried that Google Cloud will not get the revenues they want and they will close up shop, like they've done with their other products that were wildly successful. I don't see Google having the same level of commitment as Bezos, who will believe in something and then see it through come hell or high water.

Does Google Cloud have the same functionality and flexibility that AWS or Azure have?


I really don't see how Google would exit the cloud service business. It's a core competence of theirs.

People have been saying "I'd rather stick with Evernote than Keep" as well. And less than 2 years later, Keep is still there and being updated (with not that many users I think), while Evernote had just laid off 20 people.

I also think most of Google's "spring cleaning" projects have been small projects that made no money - as in they didn't even have a business model (such as Reader). The cloud business seems to be pretty straightforward - we give you this, you pay us that.


Why not build your system out to be cloud agnostic and test out different providers?


The major problem with Google Cloud is that you don't know when they will be blocked in China (that's 20% of the Internet population, if that matters to you). On the other hand, Microsoft and Amazon have better communication with the Chinese government.


[deleted]


https://cloud.google.com/terms/

"5.1 Intellectual Property Rights. Except as expressly set forth in this Agreement, this Agreement does not grant either party any rights, implied or otherwise, to the other’s content or any of the other’s intellectual property. As between the parties, Customer owns all Intellectual Property Rights in Customer Data and the Application or Project (if applicable), and Google owns all Intellectual Property Rights in the Services and Software."


This has never been the case.


I actually think Azure is a nice platform, but they've effectively priced me out.

What I mean is that a basic VPS on Azure costs ~10€/mo, but there are many VPS providers who offer much better hardware for the same price.

I guess Azure is meant for bigger needs than mine, where you run 100-200€/mo by default and then scale up when needed; since my little blog + test/dev server won't need to scale, it just seems too expensive.


If you plan on building something, but your needs are not there yet you may want to apply for a BizSpark account that covers $150/mo if I'm not wrong. OTOH for a blog you may want to check the PaaS offerings (e.g. Azure Websites).


Am I comparing the right things for the pricing on AWS Cloud HSM vs Azure Key Vault?

AWS Cloud HSM - http://aws.amazon.com/cloudhsm/pricing/

Azure Key Vault - http://azure.microsoft.com/en-us/pricing/details/key-vault/

The metrics they use are not the same, so I'm not sure whether the AWS option is something dedicated while the MSFT one is something you share. Is there a different AWS offering that's more comparable?


I think the AWS Cloud HSM is dedicated, but not sure. They look to be about the same. If you don't need FIPS, AWS also has the new KMS service which is way cheaper than Cloud HSM.


The pricing model looks much more similar for KMS: http://aws.amazon.com/kms/


I've been using Azure Websites to host 10 web apps, monitored with NodePing, and have not had noticeable downtime in 2014. Maybe it's just their IaaS and not the PaaS.


Whenever I hear about Microsoft, I just think irrelevant. Am I bad? Sometimes I think I might have missed out on something, but in 30+ years of programming I've never done any real development on it (unless you count java), and I often go months without encountering it (except for remote desktop occasionally).

I think it was good to have competition Apple/Linux/Google, but it doesn't seem like they've kept up.


I recently had to use various Azure APIs like Service Bus.

It's a complete failure once you get lost in bugs, missing or hard-to-find API documentation, or examples like `var serviceBusService = azure.createServiceBusService();` Well, Mister API Designer, you failed!


It's really unreliable though, it seems they're battling outages every few days.


A very important issue with Key Vault is: what to do when the Hardware Security Module dies? All electronics fail or stop working at some point. How do you make backups of keys that were on the HSM?


You don't. Keys on an HSM never leave the HSM, is how I think it should work. But your keys in the HSM can encrypt secrets, separate from the HSM's keys, but stored with the same service. You could potentially distribute secrets to multiple HSM-backed services. It's equally possible that the service itself distributes your secrets amongst multiple HSMs.

YubiHSM back in the day, I recall reading, was designed so that you'd want two HSMs, one generates random secrets, the other stores the secrets using keys that never leave the device, if I recall correctly. And the reason it needed two is that the generator would leak parts of its keys with the random data it produced, I think, and so to securely store them, you needed a second device with key generating turned off. I could be out to lunch here, never bought a YubiHSM nor do I have experience with corporate ones. My point, is that there are different uses for HSMs, and it's easy enough to have an insecure use of HSMs, even as simple as generating secrets and storing secrets on the same device.

As to what to do if the key is lost, I suppose it's time to re-issue. :) The goal is to not make too many backups: keeping a key secret is more important than ensuring the key is widely available, right? So it's a balance....
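The "encrypt secrets with a key that never leaves the device" model described above is usually called key wrapping: the HSM holds a master key internally and only ever emits your data keys in encrypted (wrapped) form. A toy stdlib-only sketch of the concept (`ToyHSM` is entirely made up, and the SHA-256 keystream XOR stands in for a real wrap cipher; illustrative only, not real crypto):

```python
import hashlib
import os

class ToyHSM:
    """Illustrative only: the master key is created inside and never exported."""

    def __init__(self):
        self._master = os.urandom(32)  # never leaves this object

    def _keystream(self, nonce: bytes, length: int) -> bytes:
        # SHA-256 in counter mode as a stand-in keystream (not a real cipher)
        out = b""
        counter = 0
        while len(out) < length:
            block = self._master + nonce + counter.to_bytes(4, "big")
            out += hashlib.sha256(block).digest()
            counter += 1
        return out[:length]

    def wrap(self, data_key: bytes) -> bytes:
        """Return the data key encrypted under the master key."""
        nonce = os.urandom(16)
        ks = self._keystream(nonce, len(data_key))
        return nonce + bytes(a ^ b for a, b in zip(data_key, ks))

    def unwrap(self, blob: bytes) -> bytes:
        """Recover the data key; only possible inside the HSM."""
        nonce, ct = blob[:16], blob[16:]
        ks = self._keystream(nonce, len(ct))
        return bytes(a ^ b for a, b in zip(ct, ks))

hsm = ToyHSM()
data_key = os.urandom(32)
wrapped = hsm.wrap(data_key)          # safe to store anywhere
assert hsm.unwrap(wrapped) == data_key
```

Real HSMs use authenticated wrap mechanisms (e.g. AES Key Wrap), and losing the master key makes every wrapped blob unrecoverable, which is exactly the backup question being asked.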


I don't know about a device operating at the scale that Azure is using, but the key stores on smaller Thales HSMs can absolutely be backed up to smart cards.

Security of key material is all about procedures. With a private CA I helped to setup, we used a quorum based authorization scheme, and the collection of smart cards was distributed among different reporting lines to make collusion between employees difficult.


Makes sense. At that point it's probably easier to find another part of the software stack to attack instead of the secrets itself. E.g. instead of getting the keys to the kingdom, just exploit a weakness in some signing software. Reminds me of that Microsoft certificate signing service for remote desktop (or something like that) for the feds (okay, maybe not but still...) that ended up generating certificates that would pass Windows Update checks for from-Microsoft validity. Google reminds me it was called "Flame". Ah, here it is: http://www.securityweek.com/microsoft-unauthorized-certifica... And it was revealed roughly a year before we learned about PRISM and such.


The AWS HSM service supports backing up to your own HSM: http://aws.amazon.com/cloudhsm/faqs/


I'm not sure I like what MS is doing. They're innovating (or at least fixing things) almost exclusively in cloud/online services and products.


Board-level changes at Microsoft over the last year or two have placed a higher priority on platform-agnostic cloud services, vs. Windows. Hence new versions of Windows that are free, but collect usage data from bundled services, like Google does.


Windows Server will let you run your own Azure, but you can't run your own EC2 or GCE.


Amazon themselves don't provide a way to run your own EC2, but Eucalyptus [1] is an open-source implementation that works for some use-cases. I believe OpenStack and CloudStack also implement a good portion of the AWS APIs.

[1] https://www.eucalyptus.com/


Last I checked, Eucalyptus is missing a number of the APIs that customers really use once they are doing more than just hosting a few VMs: access control, security, VPN... The service providers are actually quite different, and any attempt to standardize is just going to be the least common denominator, which will be missing a great number of useful features.


Azure needs to provide SSD backed options for all blobs. As is, they encourage you to use local SSD storage, which gets wiped on reboot.


I assume Microsoft itself has access to the keys in the Key Vault?


Sure, you can make that assumption. Even with an HSM protecting things, they still own and manage the HSM, not you. But then the same can be said for everything else you run in Microsoft's cloud or any other service provider, really. Once I read that the NSA would sometimes take computers from transport and modify them, I realized the NSA is the type of persistent threat you simply can't avoid. It can't be helped in this day and age.

For more on the HSM service and how it works: http://blogs.technet.com/b/kv/


Can't be helped technically. The way you solve problems like this is with strong laws and vigorous enforcement, not more/different/better tech.


That's completely backwards when people break the laws with impunity and/or the threat to security is law enforcement and the rest of the legal system. At best, laws are a temporary stop-gap measure until the technology is capable and verifiable.


Why can't the keys be managed in an end-to-end fashion? Wasn't it Cloudflare announcing something like that a few months ago, with clients having their own key-servers that Cloudflare itself can't access?


They can be, but this avoids the round-trip time entirely. Microsoft's not forcing you to keep your secrets in the cloud if you don't want to, what they're saying is, "you don't have to run it all yourself if you don't want to" or for those already in the cloud, "it's more secure (or audit-able, at least) to store and share secrets using an HSM than to use plain-text on a hard drive". Of course, nothing's perfect, and even your secrets will eventually end up in RAM, but that's why they call it "defense-in-depth" right? Plus, it means if you're encrypting something, you can use the HSM to do it and know that only the HSM box has the keys to what you're encrypting, and it's dedicated and designed for that task. I personally like HSMs as a concept and look forward to lower cost options as we rely more on encryption in the cloud.
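The property described above — the HSM performs the cryptographic operation, but the key never leaves the box — can be sketched in a few lines. This is just a toy stand-in (the `ToyHSM` class and its methods are my own illustration, not any real HSM or Key Vault API), using HMAC signing as the example operation:

```python
import hashlib
import hmac
import secrets

class ToyHSM:
    """Toy stand-in for an HSM: the key is generated inside
    the object and is never exported to the caller."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # lives only inside the "HSM"

    def sign(self, message: bytes) -> bytes:
        # The caller gets a MAC tag back, never the key itself.
        return hmac.new(self._key, message, hashlib.sha256).digest()

    def verify(self, message: bytes, tag: bytes) -> bool:
        # Constant-time comparison to avoid timing leaks.
        return hmac.compare_digest(self.sign(message), tag)

hsm = ToyHSM()
tag = hsm.sign(b"deploy artifact v1")
assert hsm.verify(b"deploy artifact v1", tag)
assert not hsm.verify(b"tampered artifact", tag)
```

A real HSM adds tamper-resistant hardware around this same interface: applications can request operations, but even a root compromise of the application host can't exfiltrate the key material.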


I realize the "it's easier" part. That's why most of us use email over TLS/STARTTLS instead of PGP. However, I don't think Microsoft is going to address the "trust" issue foreign governments and companies have with American clouds right now.

Granted, I'm only picking on Microsoft because they are announcing this now, and I think they could've done better. But I assume Amazon and Google's encryption also relies on "trusting them" (+ the US gov).

They all need to adopt more end-to-end solutions from end-user services to enterprise cloud services. Perhaps especially for enterprise cloud services, since I think they have more to lose by putting their trust in the cloud providers instead of building their own clouds, and they could be more reluctant to adopt their services because of that.

Maybe the cloud companies aren't feeling this as much now, since there seems to be "growth" coming in anyway, but when the market stabilizes a bit, they will probably start feeling it. It's a bit like how BlackBerry didn't feel that it was banking on a bad strategy in the post-iPhone years: it was still seeing "growth" during that time, mostly from other markets, which hid the bad strategy; it was really only growing on brand inertia from previous years.


"and even your secrets will eventually end up in RAM" Maybe not necessarily in future: https://www.usenix.org/conference/osdi14/technical-sessions/...


I suppose nothing on Azure would stop you from connecting your Azure instances to your own HSM outside of their DCs (although that would make maintenance your problem instead of Microsoft's).

Granted, that solution wouldn't be as nicely integrated with their other services. I guess from a business POV, making it easier to be compliant with various security standards that require practices like encrypt-at-rest > building a solution that's secure even against state actors.


Why wouldn't it?

Very few businesses really have this requirement. End of the day, if the Feds show up with a warrant or warrant-like paper, I'm not going to jail for my employer. Hell, if I was locked up defending their data, I'd probably be expected to charge my accruals for my absence.


"It IS faster! Over five million..."


Bigger, faster, more open, more secure. Where does that leave Azure, given where it started? It means it's still small, slow, closed, and insecure.


I'm pretty sure they haven't addressed the old issue of keeping your FTP credentials in plain text. To me, that's not very secure at all.

Source: http://weblogs.asp.net/bleroy/azure-web-sites-ftp-credential...

"Notice how the password looks encrypted. Well, it’s not really encrypted in fact. This is your password in clear text. It’s just crypto-random gibberish, which is the best kind of password."

What exactly is "crypto-gibberish"?


  > What exactly is "crypto-gibberish"?
You generate a random password from the set of inputs that the system allows, usually printable ASCII characters. So instead of a non-gibberish password like "correcthorsebatterystaple" you end up with a gibberish password like "]'gf2~B;](0EnxW>/n%+b*q4{".
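As a sketch of that generation step (my own illustration, not how Azure actually does it), you can draw each character from the printable-ASCII set with a cryptographically secure RNG:

```python
import secrets
import string

# Printable ASCII minus whitespace: letters, digits, punctuation.
alphabet = string.ascii_letters + string.digits + string.punctuation

def gibberish_password(length: int = 25) -> str:
    """Generate a 'crypto-random gibberish' password, one secure
    random choice from the allowed alphabet per character."""
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = gibberish_password()
assert len(pw) == 25
assert all(c in alphabet for c in pw)
```

The point of `secrets` (rather than `random`) is that the choices come from the OS CSPRNG, so the password is unpredictable even to someone who sees other passwords generated by the same process.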

  > I'm pretty sure they haven't addressed the old issue of
  > having keeping your FTP credentials in plain text.
Would you complain if it were an API key that was provided to you in plain text?


Yes, because FTP by design is unencrypted and can be easily sniffed.
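For anyone unfamiliar with why that matters: an FTP login per RFC 959 is just plain ASCII commands over TCP, so the credentials appear verbatim to anyone sniffing the stream. A quick sketch of what the client sends on the wire (the username and password here are made up):

```python
# What an FTP client transmits during login: USER and PASS as
# cleartext ASCII commands. Nothing is encrypted or even encoded.
user, password = "deploy", "]'gf2~B;](0EnxW>/n%+b*q4{"
wire_bytes = f"USER {user}\r\nPASS {password}\r\n".encode("ascii")

# The password is right there in the TCP payload.
assert password.encode("ascii") in wire_bytes
```

This is why the strength of the password is largely beside the point on plain FTP: a passive eavesdropper gets it for free, gibberish or not.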


You're addressing a different issue; at-fates-hands was addressing the issue of "keeping your FTP credentials in plain text" whereas you are discussing that plain FTP is an insecure protocol (a point I fully agree with). The site does say:

  "The Azure dashboard doesn’t seem to give easy access to your FTP
   credentials, and they are not the login and password you use everywhere else."
Likely the difficulty of finding FTP credentials is because FTP isn't the preferred method of publishing your site.



