Hard to make a B2B Amazon tool these days.
There was a flaw in our product that had one of our customers pushing to use Amazon's offering. Turns out Amazon's service cost about 4x what it cost us to do it ourselves. It wasn't obvious at the time given Amazon's purposefully obtuse pricing.
Eventually we fixed the bug, brought our customer back in line with the rest, saved some money, and have continued providing this service much cheaper than Amazon. I think it works for us because of a few reasons:
1. The service we offer is challenging enough that most dev teams won't want to do it themselves; they'll outsource it
2. Amazon has little incentive to charge less (see #1) and faces little competition
3. We're small enough that we can still provide that face-to-face level of service and hand holding that's nearly impossible to get from a larger org
Amazon/Microsoft/Google may come into a particular market, but it doesn't automatically imply that they can (or even will) do it better/cheaper/faster.
Sometimes developers choose a niche that's either directly in the path of the vendor, or even worse, on the roadmap of the vendor. In those cases, they don't really deserve our sympathy. It's almost like a game of PR, there's no way you're not going to have a fight on your hands.
Further discussed by Joel Spolsky
"A good platform always has opportunities for applications that aren’t just gap-fillers. These are the kind of application that the vendor is unlikely ever to consider a core feature, usually because it’s vertical — it’s not something everyone is going to want. There is exactly zero chance that Apple is ever going to add a feature to the iPhone for dentists. Zero."
This is why I’m bullish on cloud-agnostic tech. These practices don’t typically fare well in the enterprise space. This is why companies like MSFT are interesting to me. They partner and rarely kill. Amazon is the complete opposite.
How the times have changed.
Example: Do you really think that the warm embrace of VMware Horizon and Citrix on Azure is the beginning of a long and fruitful relationship?
Specifically within healthcare. Do you think large pharma companies will use AWS after the PillPack acquisition? Do you think once Amazon.com starts listing prosthetics, large life-sciences companies will run on AWS? What about providers? Do you think health systems will choose AWS as their cloud once Amazon launches their version of KP?
I don’t see Azure or GCP getting into these specific markets.
More likely would be that Google, Amazon, Microsoft and some affiliated company would be competing in a space where telemetry from something like a prosthetic was reporting information that had some value.
In any situation, the downside of renting something is that you lose control. It is something that you need to think about and incorporate into your business strategy in some scenarios.
Yeah, this is so typical of AWS to do this, and I know we're not alone.
A company I work for implemented an SFTP service where every operation simply translates to some SQL DB lookup. And a file download kicks off a larger SQL query and generates the report on the fly, streaming the result straight through to the SFTP client.
Works great! SFTP can be an API just like HTTP. Under the hood the protocol is reasonably contained and doesn't require a filesystem backend at all.
Depends a lot on the use case, of course.
The use case I see most often for SFTP (and hinted at in the parent's problem description) is generating one-off reports for third parties, or passing data to vendors who are stuck in the '90s, like financial services companies.
It's almost always read only (or read and delete), in which case implementing an API like this is pretty straightforward. Log unsupported commands perhaps and decide if you want to implement them later.
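That pattern can be sketched in a few lines (hypothetical schema and names, with sqlite3 standing in for the real database; a production version would wire these methods into an SFTP server library rather than call them directly):

```python
import sqlite3

# Hypothetical backend: each SFTP-style operation translates to a SQL lookup.
# "Directories" are report categories; "files" are generated on demand.
class ReportBackend:
    def __init__(self, db):
        self.db = db

    def listdir(self, path):
        # SFTP 'ls' -> look up report names under a category
        rows = self.db.execute(
            "SELECT name FROM reports WHERE category = ?", (path.strip("/"),)
        )
        return [r[0] for r in rows]

    def read(self, path):
        # SFTP 'get' -> run the query and stream the result to the client
        category, name = path.strip("/").split("/", 1)
        rows = self.db.execute(
            "SELECT line FROM report_lines WHERE category=? AND name=? ORDER BY n",
            (category, name),
        )
        return "\n".join(r[0] for r in rows)

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE reports (category TEXT, name TEXT);
    CREATE TABLE report_lines (category TEXT, name TEXT, n INT, line TEXT);
    INSERT INTO reports VALUES ('billing', 'oct.csv');
    INSERT INTO report_lines VALUES ('billing', 'oct.csv', 1, 'id,amount'),
                                    ('billing', 'oct.csv', 2, '1,9.99');
""")
backend = ReportBackend(db)
print(backend.listdir("/billing"))        # ['oct.csv']
print(backend.read("/billing/oct.csv"))   # id,amount \n 1,9.99
```

Read-only makes this easy: unsupported commands (write, rename, chmod) can just return "permission denied" and get logged.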
Yes... legacy apps... because no one would choose SFTP for a system designed in 2017.
Seriously, this is great. So many solutions rely on SFTP, but so many companies fail at managing the service. Having an SFTP service that just works and is secure (hopefully) will help a ton of companies.
That said, I wouldn’t be surprised if modern networking gear can handle CNAMEs but there’s no guarantee that they’re using modern gear or if they are that the questionable outsourced team even knows how to deal with the modern capabilities.
This will certainly help a lot of use cases though.
A network firewall doesn't see the DNS name that an internal system looked up in order to make an outbound connection; it just sees the source/destination IP/port. Processing a rule based on source/destination IP or CIDR and port is very fast, and all happens locally. Trying to make that device handle rules by DNS name is pretty tricky. Does it do a reverse lookup on the destination IP? That may not give a result that's even remotely like the name the client used, especially for cloud-hosted destinations.
For a lot of applications (probably including this one), a proxy is a good approach, because DNS resolution can be delegated to the proxy, and therefore the proxy can easily apply DNS-based rules as well as IP/CIDR-based rules. However, proxies tend to make people unhappy because they generally require at least some configuration on the client side. Microsoft used to sell a product that made this transparent for Windows clients, but obviously that doesn't help for most modern shops where a lot of the systems are Linux, MacOS, etc.
 Internet Security and Acceleration Server ("ISA"), later renamed to Threat Management Gateway ("TMG"), now deprecated and approaching EOL.
 It hooked into the network stack and rerouted requests based on a proxy routing rule table. Imagine a centrally-managed proxychains, but with the system configured to default to check the proxychains config file for every outbound TCP connection.
Renaming a folder that has a million files/folders inside is a single operation in SFTP, but 2 million operations on S3.
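The fan-out is easy to see: S3 has no rename primitive, so "renaming" a prefix means a CopyObject plus a DeleteObject per key. A rough sketch of the planning logic (pure function, no AWS calls; key names invented):

```python
def plan_s3_rename(keys, old_prefix, new_prefix):
    """Return the individual S3 requests a prefix 'rename' expands to."""
    ops = []
    for key in keys:
        if key.startswith(old_prefix):
            new_key = new_prefix + key[len(old_prefix):]
            ops.append(("copy", key, new_key))   # CopyObject
            ops.append(("delete", key))          # DeleteObject
    return ops

keys = [f"reports/2018/file-{i}.csv" for i in range(1000)]
ops = plan_s3_rename(keys, "reports/2018/", "archive/2018/")
print(len(ops))  # 2000 -- two requests per object; a million-file
                 # folder means two million requests
```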
Does it handle writing at arbitrary offsets within a file? Does it download the file first then let you start writing?
What about just writing a few bytes at the beginning of a large existing file and then closing your SFTP handle?
How about two users accessing the same file via SFTP at the same time?
The thing I love about S3 and cloud services in general is when I pay per request and can scale through the roof.
Whenever a service is metered by number of instances, my interest fades and I look for other solutions.
S3 has a very hands-off feeling to it :)
But I immediately extrapolate that this also means it has bandwidth limits and limits on concurrency, etc.
My eyes lit up when I saw this. We're an Azure shop, but I'm not afraid to use AWS for limited cases. Then I saw: $.30/hr (so, ~$219/mo). Really? REALLY?
Wouldn't it be comically easy to just add SFTP as a protocol option for S3? Why does this need a dedicated VM to run it? (Yes, I know this is PaaS and you don't manage the VM, but they're essentially pricing it that way.)
Edit: I stand corrected on this, AWS no longer requires dedicated hardware for BAA HIPAA: Sorry I didn't look this up, I had old information.
Alternatively, if you trusted users enough, you could use an Azure blob and use CloudBerry. That one is probably not HIPAA compliant though.
I don't even know if this new AWS SFTP plan is HIPAA compliant; don't you have to have a log of file check-ins/check-outs? And user login logs?
At the same time, Amazon isn't going to price it so it's attractive to everybody, because it sounds like they'd rather people not use it if possible. Sounds sensible to me, legacy stuff is always going to cost you one way or another.
Frankly, Amazon left a lot of revenue on the table here; I can think of a few orgs that wouldn't think twice about spending 100x this.
$214 (even $100) is a really beefy VM, though. I wonder what's provisioned and why.
"I'm HIPAA, the rest aren't, just pay for it".
It is uncommon for someone to charge you for signing a BAA. It is very common to tie these plans into enterprise only pricing. This is terrible because it adds unnecessary costs to the medical system (which get passed onto consumers) and it completely shuts out smaller players from entering the marketplace. (Cue me side eyeing every single error tracking software SaaS currently on the market who wants to start at 5K a year for their 'small' plan - get real guys)
"Pricing for Box Enterprise or Elite plans as well as the DICOM Viewer additional seat surcharge can be handled by our Box sales team once we know how many seats you are looking for across your company and what types of collaboration use cases you need."
“HIPAA seals”, which several vendors offer, are mostly BS and have no actual meaning under HIPAA.
I don't think any big vendor has them (Google's HIPAA page explicitly notes that no certifications are recognized by the government.)
Although this is much more expensive than Lightsail, the man-hours saved will make it worthwhile.
And each? Surely chrooting users would let you consolidate all of those servers into one (or one cluster for HA I suppose).
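For reference, the DIY consolidation is a few lines of standard sshd_config using OpenSSH's built-in chroot support (the group name here is arbitrary):

```
# /etc/ssh/sshd_config -- confine members of 'sftponly' to their own tree
Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory /srv/sftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

One gotcha: the ChrootDirectory itself must be owned by root and not writable by the user, or sshd will refuse the login.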
Think about the operational costs: someone needs to manage keys, logging, security updates, when S3FS coughs a lung and hangs you need to catch that problem and remount it to restore service, etc. This service reuses the existing authentication systems so you don't need to spend time configuring and managing integration with your customers’ LDAP/AD infrastructure, etc. If you deal with anything which hits PCI, HIPAA, etc. you need to be able to certify that your custom design meets those requirements as well.
That's not to say you can't do it yourself but for many places there's a fairly significant amount of work where the cost of doing it yourself is greater than 5+ years of managed service costs.
We're more interested in what happens when things break (and whose responsibility it is) than minor cost savings in calm waters.
They're cheap, stable, and dead simple to set up. This offering from AWS looks attractive, but at $.30/hour for the server it's $219/mo vs. $25/mo.
edit: just a satisfied customer
Somehow people don't use self-signed certificates all over the web, but for SFTP it's "fine" apparently.
 - https://tinyvpn.org/sftp/#lftp
For regular tasks you could also look at “rclone”, which is like rsync in many ways but can upload to S3, Backblaze B2, SFTP, and many more directly.
My only problem with the managed service (which I'd LOVE to switch to tbh) is I can't for the life of me get it to actually connect and upload a file. I suspect I'm doing something wrong in IAM, but the tutorials suck and it looks like IAM isn't even ready for this service yet. I can get a user authenticated, but it's like it's trying to figure out where "home" is and crapping out, connection closed. Nothing helpful in the verbose output, either. Bummer.
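For anyone hitting the same wall: the transfer user needs an IAM role that the service can assume, with S3 access scoped to the bucket backing the home directory. A sketch of the access policy (bucket name is hypothetical; check the service docs for the exact trust relationship with transfer.amazonaws.com):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-sftp-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-sftp-bucket/*"
    }
  ]
}
```

A "connection closed" right after auth is consistent with the role being assumable but unable to list the home directory's bucket/prefix.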
And they finally budged. This would've been so much easier.
No FUSE. Pure Go so it's low on resource usage and high in platform compatibility. No OpenSSH. No screwing around with Linux users or whatever. Just a single declarative configuration file. You can run this baby in a Docker container with some adjustments to the host if you want this on port 22.
I had to sourcegraph GitHub a bit to find this thing. SEO is so bad on this implementation. I don't know why.
I can see this being valuable for apps to get user content into S3 more efficiently from the server-side rather than funneling it through hosted servers. The one caveat is programmatic user management, which I'm sure is possible.
It's SSH File Transfer Protocol. When you say Secure File Transfer Protocol many people think about FTP over SSL if you don't emphasize it's about SSH.
Huh? Sure there's always potential for confusion but every time I heard anything about FTP over SSL (which no one seems to actually use) it's been called "FTPS"
In computing, the SSH File Transfer Protocol (also Secure File Transfer Protocol, or SFTP) is a network protocol that provides file access, file transfer, and file management over any reliable data stream.
Another issue is that if you have to support a partner with SFTP data-transfer requirements, you may have to support one with FTP/FTPS requirements as well. At that point you will have to go to a dedicated FTP server (or outsource it to another company) anyway, and the AWS SFTP service will be redundant in this scheme.
Lambda dumps mongo data to an s3 bucket which triggers another lambda to create a csv.
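The docs-to-CSV step of a pipeline like that can stay a pure function, which keeps the Lambda handler trivial (a sketch; field names are invented and the S3 read/write via boto3 is elided):

```python
import csv
import io

def docs_to_csv(docs, fields):
    """Flatten Mongo-style documents into CSV text (missing keys left blank)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields,
                            extrasaction="ignore", lineterminator="\n")
    writer.writeheader()
    for doc in docs:
        writer.writerow({f: doc.get(f, "") for f in fields})
    return buf.getvalue()

docs = [{"_id": 1, "name": "alice"}, {"_id": 2, "name": "bob", "extra": "x"}]
print(docs_to_csv(docs, ["_id", "name"]))
# _id,name
# 1,alice
# 2,bob
```

The S3-triggered Lambda then just fetches the dumped JSON, calls this, and puts the result back in the bucket.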
> You can write AWS Lambda functions to build an “intelligent” FTP site that processes incoming files as soon as they are uploaded, query the files in situ using Amazon Athena, and easily connect to your existing data ingestion process.
Will definitely adopt this!
My solution took a little more human-time to setup than the AWS service might, but once setup, it saves about $200/month.
A small company has even more of a reason to want as many managed services as possible. You can avoid hiring netops if you both have a third-party managed service provider to manage your network and have developers/architects who know enough to fill in the gaps.
Yes, $200/month is probably not any more than a couple hours/month of even a very lowly paid developer or ops person, once you account for benefits and overhead.
But once you needed to hire that person for any reason... their annual salary is already on the books. Giving them more work to do doesn't affect your budget. But another $2400 a year might. Yeah, if you can avoid hiring that person _at all_... but you probably had some reason you did have to hire a person or three already, and now you've got them.
The actual experience of working in a small under-resourced organization, in my experience, often looks like this.
When that one netops person leaves, it usually falls on the developers to manage it.
I literally can't parse this.
Bare metal vs. cloud hosting: resource for resource, bare metal will almost always end up being cheaper.
The only way you save money on managed services is the cost of management. Meaning every hour that someone doesn't have to spend maintaining infrastructure is a cost savings to the business. Every minute saved by allowing someone else to do the "undifferentiated heavy lifting" is money saved.
The price is quite high for small projects though: $0.30/hour ≈ $216/month ≈ $2,592/year.
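And that's just the floor. Back-of-the-envelope, using 720 hours/month and assuming a per-GB data transfer fee on top of the hourly endpoint fee ($0.04/GB matches the launch pricing as I recall, but check your region):

```python
# Assumed figures: $0.30/hr endpoint fee plus a per-GB transfer fee.
ENDPOINT_PER_HOUR = 0.30
PER_GB = 0.04  # assumption -- verify against current regional pricing

def monthly_cost(gb_transferred, hours=720):
    return ENDPOINT_PER_HOUR * hours + PER_GB * gb_transferred

print(monthly_cost(0))    # 216.0 -- before any data moves
print(monthly_cost(500))  # 236.0
```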
Many enterprises still use it, would love to see AWS support that as well.
Thanks for the tip, at least it's a step in the right direction.
But that seems kind of Rube Goldbergish. Why not make the change to the file, push it to Git, and use CodePipeline with Lambda?
What? Did I just become stupid?