What are the reasons for exposing an SQL server directly to the internet? Obviously there are plenty of people who have misconfigured things and accidentally exposed their systems, but are there real-world reasons to expose a SQL server directly to the net?
I would think most people who are sharing datasets would do so through some sort of API, where there's room for rate limiting, blacklisting IPs and so on. I would never expose a SQL server directly to the net, and I'm really curious whether there are reasons to do so.
It is just easier to log in and manage an SQL server if it is available on the internet. I know I'm certainly guilty of doing so for short periods when I'm off-site with no VPN and need to manage a server.
Some sysadmins just get lazy and leave it available on the internet with strong passwords 24/7, just so once in a blue moon they can log in and manage it. Or they're doing some kind of site-to-site data migration and are tired of the VPN connection dropping.
I've done a site-to-site SQL migration across two Azure zones because doing it the "right" way was too complicated/time-consuming/expensive. However, once the data was copied across I changed the endpoint config to protect the SQL server again.
I won't make excuses; it's just laziness/expediency. But that's human nature for you...
We do here where I work. We're running Bitvise SSH server on Server R2/2012 and you can use bitvise tunnelier or putty as the client(maybe others, just have used those two myself) and then connect via management studio once the SSH tunnel is connected. We also require both key and password auth. We also do RDP over the SSH connection as you can't remote to those machines directly.
Bitvise server is $100, but now you want both simple and free? It's well worth the $100 IMO as it's pretty easy to setup and manage accounts and keys. You only said "simple" before, and I consider it pretty simple to get up and running. I didn't pay for it, but $100 one time is pretty negligible for what we use it for.
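For anyone curious what that setup looks like without the Bitvise/PuTTY GUI, here is a rough Python sketch of the same idea using the third-party sshtunnel package: forward a local port to the remote box's 1433 over SSH, then point Management Studio (or any client) at localhost. Hostnames, usernames, ports and key paths are placeholders, and whether you need key plus password depends on how the SSH server is configured.

```python
# Rough equivalent of Bitvise Tunnelier / PuTTY port forwarding, using the
# third-party `sshtunnel` package (pip install sshtunnel).
# All host names, user names and paths below are placeholders.
from sshtunnel import SSHTunnelForwarder

tunnel = SSHTunnelForwarder(
    ("ssh.example.com", 22),                  # the SSH endpoint on the remote side
    ssh_username="dbadmin",
    ssh_pkey="~/.ssh/id_rsa",                 # key auth; add ssh_password=... if the server also requires it
    remote_bind_address=("127.0.0.1", 1433),  # SQL Server as seen from the SSH host
    local_bind_address=("127.0.0.1", 14433),  # what you point SSMS/clients at
)

tunnel.start()
print(f"Connect Management Studio to 127.0.0.1,{tunnel.local_bind_port}")
# ... do your admin work, then tear the tunnel down ...
tunnel.stop()
```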
I did this with an AWS system because we never configured security keys properly -- basically got tired of whitelisting one IP at a time for development tasks, especially since I had issues connecting from behind corporate firewalls.
The client told me they got a big I/O bill from AWS, and I was deathly afraid it was from bots just probing the SQL Server ports. Thankfully it wasn't (it was from S3; they're a media-heavy site with many users), but it reminded me not to just leave things accessible over the internet. Even though it seems irrational to think someone would probe you over and over, the fact is you have to pay the AWS bill while the attacker is just operating from a laptop... if the amplification is right they can cost you a lot of $$ at no cost to themselves.
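To put rough numbers on that amplification worry, here's a back-of-envelope calculation. Every figure in it is an illustrative assumption (amplification ratio, attacker bandwidth, egress price), not a measurement of any real service:

```python
# Back-of-envelope: what reflected/amplified traffic could cost the target.
# All numbers are illustrative assumptions, not measured values.
amplification = 25          # assumed response-to-request size ratio
attacker_mbps = 100         # assumed sustained spoofed request traffic
hours = 24                  # assumed attack duration
egress_price_per_gb = 0.09  # assumed cloud egress price in USD

egress_gb = attacker_mbps * amplification / 8 / 1000 * 3600 * hours
print(f"~{egress_gb:,.0f} GB of egress, roughly ${egress_gb * egress_price_per_gb:,.0f}")
# With these assumptions: ~27,000 GB out, roughly $2,430 -- paid by the victim.
```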
In the case of hosted SQL solutions (Heroku, AWS, Azure), it makes sense if you don't want to have to place your servers inside the same network/provider, or if you want developers to have access to the DB directly from their machines.
But regardless, you should be locking down access to the SQL host by network range, though I'm not sure every provider has that ability.
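On AWS specifically, that lockdown can be a security group rule that only admits known CIDR ranges on the SQL port. A minimal sketch with boto3, where the security group ID and the 203.0.113.0/24 "office range" are hypothetical placeholders:

```python
# Minimal sketch: allow SQL Server (TCP 1433) only from a known network range
# instead of 0.0.0.0/0. The group ID and CIDR below are placeholders.
import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # hypothetical security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 1433,
        "ToPort": 1433,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "office range"}],
    }],
)
```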
Agreed. As I see it, there are two main design approaches for overcoming this issue.
The first is better for 'anonymous' services: require the client to send a packet the same size as the buffer it wants to receive. It's network-inefficient, but for small requests it beats the delay of setting up an actual session, and it eliminates the possibility of amplification attacks.
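A minimal sketch of that first approach, assuming a hypothetical UDP service: the server simply refuses to send back more bytes than it received, so a request with a spoofed source address can never amplify traffic toward the victim.

```python
# Sketch: never reply with more bytes than were received, so spoofed
# requests cannot be used for amplification. Service details are made up.
import socket

BUF = 4096
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))

def build_reply(request: bytes) -> bytes:
    # Placeholder for whatever the service actually looks up.
    return b"some response payload"

while True:
    data, addr = sock.recvfrom(BUF)
    reply = build_reply(data)
    # Anti-amplification rule: the client must pad its request up to the
    # size of the response it wants; otherwise we drop it silently.
    if len(reply) > len(data):
        continue
    sock.sendto(reply, addr)
```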
The second is to establish /some/ kind of session. This might be as 'simple' as logging in, or it could just involve a few round-trip communications showing that the client /is/ listening to server replies. That eliminates every DDoS style except amplification vectors that require a man-in-the-middle position; and if MitM is possible, why bother with DDoS?
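And a sketch of the second approach, again for a hypothetical UDP service with a made-up wire format: the server hands back a small stateless HMAC cookie bound to the claimed source address, and only sends the real (possibly large) response once the client echoes it, proving it can actually receive traffic there.

```python
# Sketch of a return-path check: no large reply until the client proves it
# can receive at its claimed source address. Wire format is invented.
import hashlib, hmac, os, socket, time

SECRET = os.urandom(32)   # would be rotated periodically in a real service

def cookie_for(addr) -> bytes:
    # Stateless cookie bound to the claimed source IP/port and a one-minute
    # time bucket, so the server stores nothing before the client is verified.
    msg = f"{addr[0]}:{addr[1]}:{int(time.time()) // 60}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).digest()[:16]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))

while True:
    data, addr = sock.recvfrom(4096)
    if data.startswith(b"HELLO"):
        # Round trip 1: reply with a small cookie, never the real payload.
        sock.sendto(b"COOKIE" + cookie_for(addr), addr)
    elif data.startswith(b"COOKIE") and hmac.compare_digest(data[6:22], cookie_for(addr)):
        # The cookie came back from the same address, so the client really is
        # listening there; now it is safe to send the full response.
        # (Minute-boundary expiry handling is omitted for brevity.)
        sock.sendto(b"DATA: actual response goes here", addr)
```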
Isn't this service disabled by default? I've never needed to enable it, and I install new SQL Server instances just about every week. It's my understanding that nobody really needs a live list of all SQL Server instances on a server to be made available on the network, never mind the Internet at large.
This is one of those legacy services that Microsoft keeps around just in case somebody wants it, but it's been a long time since it had any changes or updates. What they really need to do is not include it with the product at all. It should be a download-only utility for those who really, really want it on their systems.