Also, having SQLPAL directly descended from the Windows SQLOS should mean that Windows and Linux SQL Server have similar performance, without paying the significant overhead that's typical of platform emulation.
This is a great thing for customers too. The systems will work very similarly and less code means fewer bugs and security risks.
It did come out of the UNIX-based Sybase SQL Server, though:
I wonder if there was an OS abstraction layer at that time (Sybase ported it to OS/2 together with Microsoft first) that was then thrown away by Microsoft after they licensed the source code and went their own way?
Encryption is easier to manage, and it has built-in solutions for HIPAA and PCI.
SQL Server Reporting Services and the entire ecosystem of apps that run on top of MSSQL.
Better clustering, HA, revision control, and backup.
Most likely better enterprise support, and you'll be able to port "legacy" applications that were designed with win/sql in mind without having to rewrite your entire DB access and query code.
What? HIPAA, point by point:
- Do periodic risk assessments and be able to produce the reports if demanded.
- Have a designated HIPAA compliance officer.
- Have a security policy, train employees on it, and ensure it has sufficient teeth to terminate violators.
- Follow the principle of least privilege when granting access to PHI.
- Limit physical access to facilities.
- No shared accounts, and lock workstations when idle.
- Encrypt data at rest and in transit.
- Log data access.
- Protect data against corruption.
I suppose an RDBMS can help protect data integrity through foreign key constraints and other validation tactics, but the rest of this can't really be performed at the database engine layer (unless your users are logging in to MSSQL directly?) so much as in the EMR application and the physical, config-management (full-disk encryption + screensaver password), and bureaucratic environment it's deployed in.
> - Encrypt data at rest and in transit.
> - Log data access.
> - Protect data against corruption.
It's these three that I imagine MSSQL can help with. I believe it's got a built-in storage encryption setup for tables/databases, and it can definitely be set up to log access to the database and the queries run against HIPAA data.
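A minimal sketch of what that looks like in T-SQL, just to illustrate; the database, table, and file path names (ClinicalDb, dbo.Patients, D:\Audit\) are made up, and which features you actually get depends on edition and licensing:

    -- Encryption at rest via Transparent Data Encryption (TDE).
    USE master;
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';
    CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate for ClinicalDb';

    USE ClinicalDb;
    CREATE DATABASE ENCRYPTION KEY
        WITH ALGORITHM = AES_256
        ENCRYPTION BY SERVER CERTIFICATE TdeCert;
    ALTER DATABASE ClinicalDb SET ENCRYPTION ON;

    -- Audit trail: log who reads or changes the PHI table.
    USE master;
    CREATE SERVER AUDIT HipaaAudit TO FILE (FILEPATH = 'D:\Audit\');
    ALTER SERVER AUDIT HipaaAudit WITH (STATE = ON);

    USE ClinicalDb;
    CREATE DATABASE AUDIT SPECIFICATION PhiAccessAudit
        FOR SERVER AUDIT HipaaAudit
        ADD (SELECT, INSERT, UPDATE, DELETE ON dbo.Patients BY public)
        WITH (STATE = ON);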
All the key management, split knowledge, key rotation and most importantly access management and a full audit trail are also a "few clicks" away.
Having a turnkey solution for one of the most annoying things to implement in a correct manner should not be underestimated.
Anyone who has done any compliance work knows just how many weirdly stitched-together solutions you can end up with.
Something along the lines of "we manage split knowledge of our keys by splitting the key and the KeePass file into 2 files and storing them on USB keys in separate vaults"...
Now, technically this can pass a compliance check, but it's a pretty darn silly situation, simply because there isn't any better solution that works out of the box. It also quite often means that any time you restart your DB you have an entire process of getting the keys, decrypting the tables, copying their content into temporary views, re-encrypting the files, and removing the keys from the server.
For MSSQL it wasn't that long ago that you had to do it "by hand" too: usually you'd have a stored procedure that handled the encryption, plus a key which was then encrypted by several KEKs. Whilst that did afford some split knowledge, it wasn't a true m-of-n scheme, which also meant that if you had more than 2 DBAs you had to make sure the DBAs knew all of the KEKs, and avoid a situation where the 3 DBAs who weren't off during that week all knew the exact same password for the KEK...
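For the curious, that "by hand" pattern looks roughly like the sketch below: a data key wrapped by several independent KEKs (here just passwords), where any one KEK unlocks it, i.e. 1-of-n rather than a true m-of-n split. All names, passwords, and the dbo.Patients table are made up:

    -- Data key protected by two separate KEK passwords (one per DBA).
    CREATE SYMMETRIC KEY PhiDataKey
        WITH ALGORITHM = AES_256
        ENCRYPTION BY PASSWORD = 'Dba1-KEK-passphrase!42',
                      PASSWORD = 'Dba2-KEK-passphrase!77';

    -- Cell-level encryption, typically wrapped in a stored procedure.
    -- SsnEncrypted would be a VARBINARY column on the hypothetical table.
    OPEN SYMMETRIC KEY PhiDataKey
        DECRYPTION BY PASSWORD = 'Dba1-KEK-passphrase!42';  -- whichever KEK you happen to know
    INSERT INTO dbo.Patients (Name, SsnEncrypted)
    VALUES (N'Jane Doe', EncryptByKey(Key_GUID('PhiDataKey'), N'123-45-6789'));
    CLOSE SYMMETRIC KEY PhiDataKey;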
FDE won't even work for data at rest unless you can do split keys and auditing.
This isn't about security, this is about compliance.
But in general, again, this isn't about security, it's about compliance, so it's more about doing things by the book, and a very specific book at that.
Edit: I am not familiar with much of anything Microsoft-related.
Windows automation kinda died when system admins stopped needing to know how to work with the Jet DB to do any serious AD work.
I can't think of a single CI or automation thing that works on Linux that you can't do on Windows; if pushed, I can probably come up with considerably more things where the opposite is true.
Licensing is really the only thing that makes Windows machines a pain to work with, unless you are big enough for a self-reporting volume licensing scheme.
> It's really not, it's that most devops engineers don't want to learn or even bother and most windows sysadmins simply don't know how.
From a hiring/project/management perspective, this is my definition of "hard".
From a sysadmin perspective, a huge problem is that the amount of knowledge needed to be an effective Windows sysadmin absolutely dwarfs -- by several orders of magnitude -- the amount of knowledge needed to be effective on Linux.
How many sysadmins have you heard suggest, with all credulity, that the solution is downtime and a reinstall on Windows? On Linux? Once a sysadmin knows how to move files around, run strace and grep, there is very little hidden from them on Linux, while on Windows they still don't even know how to take a backup. That isn't helped by Microsoft refusing to publish any documentation that makes them look stupid -- like it being impossible to take an online backup of a server running Microsoft Exchange 2007 prior to SP2.
> I can't think of a single CI or automation thing that works on Linux that you can't do on Windows; if pushed, I can probably come up with considerably more things where the opposite is true.
I agree wholeheartedly with that statement.
I hope that UNIX sysadmins want to learn, and bother to learn, the Windows environment better: UNIX sysadmins already know how a system is supposed to work, and it is by and large the sheer number of inexperienced Windows sysadmins that makes Windows look stupid. UNIX sysadmins have a better chance of running Windows well simply because they won't put up with "have you tried reinstalling everything?"
There are also some really good ideas in the Windows ecosystem that Linux (and even OS X) could really benefit from.
> Licensing is really the only thing that makes Windows machines a pain to work with, unless you are big enough for a self-reporting volume licensing scheme.
An aside: I'm not so sure this is true anymore, at least not with Azure for infrastructure, and Open programs for desktops/laptops.
> I hope that UNIX sysadmins want to learn, and bother to learn, the Windows environment better: UNIX sysadmins already know how a system is supposed to work, and it is by and large the sheer number of inexperienced Windows sysadmins that makes Windows look stupid. UNIX sysadmins have a better chance of running Windows well simply because they won't put up with "have you tried reinstalling everything?"
Powershell is the key that unlocks Windows administration, I think. Certainly, as somebody that does both platforms, learning Powershell made a big difference to how I work and think when using Windows.
Having said that, Windows Server is still fundamentally bad. The logging is a mess, remote access is awkward compared to SSH on Linux, the failure to properly integrate package management hurts every day, the Registry is still...there, and the list goes on. The disparity of DevOps tooling is another issue. I would agree with the comment that automating Windows takes several times as long as Linux. It's just not happy fun.
At this point, Windows is probably not retrievable. The Windows Server folks will keep improving it bit by bit, but I can't see an argument for continuing to flog the horse and use Windows Server outside of full-dress Microsoft corporate environments.
Also, WMI always worked and still does.
Logging is great, you just need to configure it and register your own triggers and monitors; it really depends on what you want to log.
It's very easy to log every action, even completely custom ones, and send it all to the event log. You can also register custom events if you want to add your own application logging, or use the stubs and just push the information into the text fields.
Again, this simply comes down to 2 things: tooling and knowing Windows internals well. Sadly, because there are usually very few negative side effects to rebooting a Windows machine, and because the market is flooded with GUI administrators holding MCSA/MCSE certs gained through memorizing exam dumps, it's hard to find people who know, or want to know, how to administer Windows machines to their full potential.
To me, that's sort of where Windows is now. It has some really impressive stuff: Windows 2000 was an avalanche of innovations, but most of it was built for the corporate LANs of a decade ago. Put Windows Server on a cloud, and most of this stuff is not relevant or useful, and Windows is not great at some fundamentals.
Logging is one of those. If you are interested in this kind of thing, I'd genuinely encourage you to spin up a Fedora or Debian system and watch the thing work. We are currently in transition between logging systems, but the feature sets are impressive and the error messages are useful. Windows logging is even more patchwork, and the (Microsoft) products that bother with the core logging system tend to fill it with chaff. It might get better one day, but it's not a good experience today, and hasn't ever been.
Another is software management. Package-based software management has to be deeply integrated into the OS for it to provide its full value. Debian have done this for years, and Red Hat have caught up. Microsoft have a muddle of multiple software management systems, and individual admins can't fix it. We just have to work around it and do the best that we can, or decide that the platform is what it is, and move on.
In an ideal world you wouldn't have to deal with 32-bit versions of Windows, or EOL'd 64-bit versions. The real world with lots of legacy stuff doesn't work that way. Meanwhile EOL'd versions of Linux chugged away just fine.
Iteration on the Windows instances was also quite a bit slower than with the Linux ones. Applying all the requisite compliance stuff resulted in a number of additional reboot cycles. I could get a Linux instance bootstrapped in half the time, without having to bake a custom image.
This is a tooling issue; don't expect to use a tool that was, for all intents and purposes, hacked together rather than a native solution, and get good results.
And I'm seeing three things:
1.) SCCM is not nearly as easy to automate as the alternatives.
2.) SCCM is expensive.
3.) SCCM has mediocre support for Linux.
And there ya go. Meanwhile, things like Ansible, Chef, and Puppet can handle your Linux, BSD, and OS X servers with aplomb. Where Ansible, Chef, and Puppet use mature, cross-platform scripting languages, SCCM uses PowerShell. Windows is the odd man out here, and skills from nearly every other OS don't really transfer well to dealing with Windows. That's why Windows is seen as more tedious.
And, sure, our Windows guys were dropping to Powershell constantly from their Chef cookbooks. But then you get to deal with maintaining PS 2 and PS 4 scripts. The versions of Ruby and Python you'll find with Ansible/Chef/Puppet are significantly better with backwards compatibility in this regard.
But, let's say that we had bought into SCCM. That doesn't really fix the problems with the other parts of the stack, nor does it fix the inconsistencies across Windows versions.
And you got this from what exactly?
Windows does automation just fine; you are throwing this word around without even trying to explain what it is you want to automate.
SCCM has ok to decent support for Linux; again depending on your use cases and distro.
I don't understand why you are even bringing up random Reddit and forum posts about a 5-year-old version of SCCM.
SCCM isn't expensive; it's usually "free" when you buy enough licenses.
Also you missed a very critical point; Chef, Puppet and Ansible can invoke the SCCM agent.
Their built-in agents for Windows suck, so you either write your own automation (e.g. Powershell) and invoke it via Chef/Puppet/w/e or you use a decent agent that was actually built for purpose.
You are constantly complaining about things that, in the end, stem from a single source: incorrect tooling.
Also, tools like Puppet and Chef, for example, have an easier time provisioning Unix-like machines than Windows (although this has been changing, and the situation could be much better now; I haven't dealt with it for a bit).
Also, you don't have to deal with licenses for Windows Server instances, which can be another huge pain point.
Oracle support is being dropped left and right; MSSQL is there because it was the de facto enterprise DB for ages, and it also has a free version via MSSQL Express.
And lastly having more options is never a bad thing.
By whom? I think "big" enterprise software (SAP, Peoplesoft, etc.) has and will support Oracle for a long time. Many of their customers have huge investments in Oracle, not only financial but training and institutional knowledge.
I'm genuinely curious about what these products are that run on Linux and use SQL Server.
Although if you're virtualizing Windows Server, it may be effectively free to run an additional Windows Server instance (since you pay per core regardless); but if your hypervisor's resources are already saturated, then you could take SQL Server off of it and put it onto an inexpensive Linux server.
Which other products did you compare against? There are many database systems supporting Linux: IBM DB2 and Oracle Database, to mention a few.
Anyone who has to do that is doing (or did, in the past) something seriously wrong. For me, the biggest reason not to change database after you accumulate a couple dozen terabytes of data is the inconvenience of dumping and restoring it and the downtime it'll cause.
Publishing that on a blog somewhere would make things easier for a lot of people. Have you considered it?
That's the argument against those things - it makes it difficult to switch your database. But there are also some compelling performance arguments in some cases, so in a well-designed system those parts of the code that live in the DB also tend to be important ones.
But if they only exist for the critical parts then porting is a lot easier.
In this case, it's still wrong, but faster than doing it right.
In fact, even SQL Server Management Studio is a Windows-only application.
The app stack is usually more platform-agnostic than you think. Even if it's ASP.NET, you're most likely going to be able to run it on, or easily port it to, .NET Core.
Basically, having cross-platform .NET is pointless without having cross-platform everything else, especially MSSQL.
I've moved more away from the MS environments the past few years now, but have missed some of the things that MSSQL makes very easy, and by comparison PostgreSQL makes relatively hard. Of course you're paying for it. Having the option to run under Linux is really nice though imho.
Could you give some examples?
IIRC geo indexing was easier in mssql, although much more configurable in pgsql.
Not on PG, but LINQ to Entities for .NET is really nice, and the PG adapter (IIRC) wasn't nearly as good. The Node adapters for each are about equal, though.
Out of the box performance on mssql, and query performance in general was better last time I compared (admittedly about 2 years ago).
The installs are on separate ISO's now if I'm not mistaken...
Still missing several things that SSMS has, but it's a nice start!
Also of note, in November MS added a number of previously Enterprise-only features to all editions of SQL Server 2016 (Service Pack 1 and later). Examples: in-memory OLTP and columnstore indexes are available even in the free Express edition now. Native Postgres, to my knowledge, doesn't have comparables for these yet.
It has surprisingly little effect on many workloads, though; the engine is fairly conservative about when it will use it, too (and there are easy ways for you to accidentally make it not consider parallelism).
For many workloads you are often better off using the many CPU cores to run different queries than having them cooperate on a smaller number.
Data warehousing and "big data" are a place where this isn't the case, though, and other features MS SQL Server has over PG and others shine here too: the columnstore indexes, in-memory processing, and so forth. And you get these in the free Express edition too, as of 2016 SP1, though that is only really useful for small projects and prototypes due to other limitations in that edition (10GB/database, max 4 CPU cores and 1.5GB RAM put to use per instance), so you'll be paying for Standard edition at least for real work.
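For anyone who hasn't tried them, here's roughly what those two features look like in T-SQL; the table and column names are made up for illustration, and in-memory OLTP additionally needs a MEMORY_OPTIMIZED_DATA filegroup on the database:

    -- Columnstore: typically a big win for analytical scans and aggregations.
    CREATE NONCLUSTERED COLUMNSTORE INDEX IX_Sales_ColumnStore
        ON dbo.Sales (OrderDate, CustomerId, Quantity, Amount);

    -- In-memory OLTP: a memory-optimized table for hot, latch-free access.
    CREATE TABLE dbo.SessionCache
    (
        SessionId  INT            NOT NULL PRIMARY KEY NONCLUSTERED,
        Payload    NVARCHAR(4000) NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);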
Some of the analytical tooling around MSSQL is pretty good too. Have had to track performance issues in the past. Even the general query analyzer integrated with Management Studio is great.
I do hope they consider going the electron route for cross-platform management studio, it's worked really well for VS Code, and can imagine it would be a great fit for this as well. Even if quite a bit outside their current tooling.
I might not mind paying for MSSQL if I also didn't have to pay for Windows.
PG is still very lacking when it comes to multi-node clusters for high availability for example.
We cannot run our BI on Postgres or we would. But with Microsoft's latest work, even products like Oracle Hyperion can run against SQL Server running on Linux.
Very interested to hear how you made the decision to use MS SQL and the test cases that drove that decision.
And before you ask, we spent A TON of time and money on configuration, paying for multiple Postgres experts to help with optimization.
I can't speak to their specific use cases, though. I've been a bit of a RethinkDB fan myself in terms of straddling paradigms. I hope they shake out into a foundation structure with as much as possible intact.
I do think that once PostgreSQL gets some of the clustering/HA features in the box and shaken out, it will become the default option for most uses. As it stands MSSQL does have some compelling advantages, some of which you pay for.
Or even an add-on that's not so difficult to configure. EnterpriseDB has made strides here, but justifying the license cost is harder for me now that they've moved to their new Unicore model.
`repmgr` from 2ndQuadrant is getting really close to ideal and it's included in the PGDG yum and apt repositories, my biggest issue is the manual plumbing involved to get clients to automatically pick up the new master after a failover and automatically rewinding a failed master and bringing it back up as a standby (both are done out of the box by EDB's solution).
Do I know you? :) Did you actually try HFM or Planning with realistic workloads? I'm very curious... I expect Oracle will soon say SQLServer on Linux is not supported, any day from now. Their Linux support is mostly geared towards Exalytics, which is geared towards Oracle licenses. Then again, these days they're so desperate to sell extra licenses, they might even accept it.
It doesn't matter anyway, the whole suite is slowly being strangled in favour of crappy cloud versions.
SQL Server Integration Services seems interesting.
I did run into several annoying errors when trying to import from Excel.
SQL Server came out on top.
Queries returned results faster in SQL Server, after checking we had set everything up optimally everywhere.
It doesn't surprise me though... MSSQL tends to do very well out of the box for most workloads.
It would have been nice if someone were able to explain why the above reasoning about selling licenses is incorrect. There is no other reason for MS to do this besides money.
I keep thinking it would be cool to see an interactive graph of connection latency and throughput for 1KB, 1MB, and 1GB transfers between the major data centers for the big cloud providers. If I could get compute from DO/Linode and use services from Azure and have under 20ms latency for data queries, I might use Azure for services, and Linode or DO for compute.
I know that's too much for some things, but it may be worth the cost in latency for many use cases... saving a few hundred a month on web/service VMs and leveraging Azure beyond that.
(as for downvotes, probably mickeysoft)
and for the people doing docker
Automating the installation of PostgreSQL on Linux with ansible is a couple lines of tasks in a playbook + a template for postgresql.conf and pg_hba.conf. Automating the installation of SQL Server on Windows (as well as the installation of Windows in general if you aren't using System Center) is a huge pain in the ass.
That's enough less "care and feeding" for me.
About the most annoying thing about running Windows is updates, as they mostly want to restart the box.
If you delay your updates, it can take 40+ minutes to restart once you decide to install them.
Also if you're installing it manually (and thus having to configure it manually) the odds are good that you're a developer in which case it'll also only be accessed locally.
It's a sane default to not listen to the network. A lot of users won't need it. Those that do can enable it.
The port thing is handled automatically when you have your network clients working correctly.
For the record I'm a SQLite/Postgres guy, the only reason I even know about the port thing is having to help a customer troubleshoot some software that uses SQL express a few weeks ago. Definitely not a MS defender normally, but your complaints are really silly.