
People are talking about the cost of Heroku, but it doesn't seem that outrageous to me.

I mean, $180/month for a running app that is making money isn't that much. Once the traffic grows and you're spending $500 or $1000 per month, yes, it becomes worth it to look into alternatives (because $1000 on Heroku can probably be replaced by $150-200 of more powerful hardware). But moving from Heroku to a less managed option just to save $80/month doesn't seem particularly worth it.

-----


As long as you can factor Heroku's costs into your per-customer acquisition cost, and they don't eat a lot of your profit, Heroku is a good choice.

-----


Heroku is not 100% management-free, and if you know how to deploy the same infrastructure they provide, using VMs, dedicated servers and whatnot, then there is little reason to use them at all. You get better flexibility and lower costs, with only slightly more management overhead than what you get at Heroku.

Of course, if you don't know how to do that, you pay them to do it for you, until you can hire someone to do it in-house.

-----


Unless it has changed recently, it's opt-out. The instructions are mostly for re-adding the key if you have removed it.

-----


> As opposed to plain text files, which magically survive having random blocks overwritten by hypothetical filesystem bugs or hardware issues without losing a single line of log content?

Neither system will protect you from ever losing a line of logs. The problem with journald is that a single corruption can cost you the entire log file.

-----


We use it to run a bunch of periodic report-building tasks: computing internal metrics, archiving support tickets, that sort of thing. Compared to plain cron, we get proper tracking and a history of failed runs, and a web UI to access the results (generated reports, in our case).

I wouldn't use it for production-sensitive crons, but for administrative stuff, it works fine.

-----


Scaling storage? The highest public GitHub plan allows 125 repositories. Unless you have a lot of huge repositories, 125 repositories fit on a single smallish hard drive. Even with 10x redundancy, that's still cheaper than GitHub.
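A quick back-of-envelope, with assumed repository sizes (the average below is made up, and generous for most codebases):

    # Hypothetical numbers: 125 repos at an assumed 0.5 GB average.
    repos = 125
    avg_repo_gb = 0.5
    redundancy = 10

    total_gb = repos * avg_repo_gb * redundancy
    print(f"{total_gb:.0f} GB")  # 625 GB, i.e. one consumer hard drive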

Frankly, there are hard problems in internal IT, but hosting a bunch of git repositories is a non-issue.

-----


That's not how it works on a decent VPN system.

If you try to do that, the VPN client will notice that the spoofed server isn't presenting a valid certificate or doesn't use a valid key, and refuse to connect. Same reason you can't "just" middleman an HTTPS connection.

Besides, there's no need to spoof. The point of the VPN connection is to protect against the wifi router (even the legitimate one!) reading the traffic. By spoofing, you're just replacing a dodgy wifi router with another dodgy router.
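For the curious, here's roughly the same check done by hand with Python's ssl module (the VPN client does the equivalent internally):

    import socket
    import ssl

    # Default context: loads the trusted CA store and enables hostname
    # verification, the same class of check a decent VPN client
    # performs before sending any traffic.
    ctx = ssl.create_default_context()

    def connect(host: str, port: int = 443) -> None:
        with socket.create_connection((host, port)) as sock:
            # The handshake raises ssl.SSLCertVerificationError if a
            # spoofed server can't present a valid chain for `host`.
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                print("verified:", tls.version())

    connect("example.com")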

-----


It's extremely easy to middleman an HTTPS connection.

Many PC antivirus/firewall programs do it right now.

-----


Programs running on your PC can do it because they have access to your certificate store, and can tell the system to trust their certificate.

Entities not in control of your PC can't MITM an HTTPS connection, barring a catastrophic bug. And such a bug would be catastrophic: if you have a way to do this, please tell everybody, because it's going to be the next Heartbleed.

The entire point of HTTPS is to prevent stuff like you're describing. And it does work, for the most part. Bugs happen, but they get fixed as they're discovered. It's definitely not "extremely easy."

Please go read up on this stuff before speaking authoritatively:

https://en.wikipedia.org/wiki/Transport_Layer_Security

-----


That's only because the antivirus/firewall products have access to your machine and install a root certificate on it, or, more likely, are just using a browser extension to rewrite the DOM on the fly.

More succinctly, the phrase "man in the middle" kinda loses meaning when the man in question is your own computer.
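A sketch of the mechanism in Python's ssl module ("proxy-root-ca.pem" is a made-up path):

    import ssl

    # Out of the box, only the system CA store is trusted, so a
    # certificate forged by an interception proxy fails verification.
    ctx = ssl.create_default_context()

    # What local antivirus/firewall products effectively do: since they
    # control the machine, they add their own root CA to the trust
    # store, after which the certificates they mint verify cleanly.
    ctx.load_verify_locations(cafile="proxy-root-ca.pem")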

-----


Reading the linked bugs and threads, the biggest issue seems to be that some people from the Debian installer team decided to write their own live CD and declared Debian Live obsolete, without ever speaking to the Debian Live people.

So a live CD will probably still exist, but it won't be the one called "Debian Live".

-----


Last spring, France passed a quite intrusive bill that pretty much legalized mass interception of communications. The general public didn't really care much.

-----


In theory, the point of Google Fonts is that it does user-agent sniffing to adapt the font formats and CSS to the user's browser, to get the best rendering. You would lose that advantage by hosting the fonts yourself.
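Easy to see for yourself: the same stylesheet URL serves different formats depending on the User-Agent (rough sketch below; the exact output varies over time):

    import urllib.request

    URL = "https://fonts.googleapis.com/css?family=Roboto"

    for ua in (
        # A modern browser gets woff2; an old one gets older formats.
        "Mozilla/5.0 (Windows NT 10.0) Chrome/46.0.2490.86",
        "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)",
    ):
        req = urllib.request.Request(URL, headers={"User-Agent": ua})
        css = urllib.request.urlopen(req).read().decode()
        print(ua[:30], "->", "woff2" if "woff2" in css else "older format")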

-----


In practice, I'd set up the CSS so that modern browsers render it beautifully and not give a crap about older browsers. Or make a separate CSS, without that font, for older browsers.

-----


Everything related to Glacier is ridiculously complicated.

The pricing is tricky (the per-GB price is cheap, but the retrieval can get horribly expensive). There's the fixed 4-hour delay for all actions (including listing stored files), which makes any interaction a pain. And there aren't really any good clients or high-level libraries that abstract away this complexity.

For disaster recovery, I would certainly go for something simpler and easier to use. When everything is on fire, the last thing I need is to be dealing with a tricky API to restore the company files.
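For a taste of the API, here's roughly what fetching a single archive looks like with boto3 (vault name and archive id below are made up):

    import boto3

    glacier = boto3.client("glacier")

    # Step 1: ask Glacier to *start preparing* the archive. Nothing is
    # downloadable yet; this just queues an asynchronous job.
    job = glacier.initiate_job(
        accountId="-",
        vaultName="company-backups",
        jobParameters={"Type": "archive-retrieval",
                       "ArchiveId": "EXAMPLE-ARCHIVE-ID"},
    )

    # Step 2: poll (for hours) until the job completes; only then can
    # you call get_job_output() to read the data. Even listing the
    # vault's contents is the same dance, via an "inventory-retrieval" job.
    status = glacier.describe_job(accountId="-",
                                  vaultName="company-backups",
                                  jobId=job["jobId"])
    print(status["Completed"], status["StatusCode"])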

-----


I've used it a bit, and I have to say I agree.

Look, Glacier is great and the prices are really good. But it isn't something an SMB wants to be using directly. A large enterprise that can dedicate engineers to it, sure, but an SMB really wants to be using Glacier through a third-party service, in my opinion.

I think it is wise to treat Glacier as cold storage. If you need recoveries RIGHT NOW, it may not be for you. If you can wait 24 hours? Sure (and yes, I realise you can recover faster than that, but between transfer times and actually starting the transfer, it can take a while).

-----


There are some companies selling tools to IT people that use Glacier natively. Veeam makes a very good product for backing up VMs that uses Glacier natively: no need to touch the API, you just select the VM you want to restore and it brings it down from your Glacier store. You have to remember that most IT people have no idea how to use an API, and aren't on Hacker News.

-----


The Glacier API is a miserable piece of crap. A couple months ago, something regressed on Amazon's end causing uploads to fail for no good reason. We could have worked around it client-side with some changes to Boto, but that would have been painful enough that it literally would have been less work to start from scratch on Google Nearline Storage.

For better or for worse, Amazon fixed it, so we're still using Glacier.

-----


There are Glacier clients that allow you to manage the "Restore Speed" so you don't get hit with ridiculous price hikes.

Glacier is PERFECT if you just need to restore a photo or document, and not the entire repo.
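One knob that can back this up (clients may also just pace their requests) is Glacier's account-level data retrieval policy; a rough boto3 sketch:

    import boto3

    glacier = boto3.client("glacier")

    # Cap the account-wide retrieval rate so a big restore can't run up
    # a surprise bill; slower, but the cost stays bounded.
    glacier.set_data_retrieval_policy(
        accountId="-",
        Policy={"Rules": [{
            "Strategy": "BytesPerHour",
            "BytesPerHour": 1024 ** 3,  # throttle to ~1 GB/hour
        }]},
    )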

-----


But you have to store things in larger archives, or the per-object overhead hurts your pricing. When Glacier first came out I really wanted to use it, but it added so much complexity over just treating it like an object store that I didn't. Then add the fact that S3 Standard kept coming down in price while Glacier just stood still (thus the name).

-----


I'm currently storing my entire photo collection of about 50GB and ~20,000 photos. It cost me about $3 to upload the entire library. I pay around 50 cents a month for storage. YMMV, but I'm very happy with it.

-----


Not that large. 25MB objects will cost you $0.002 per GB to transfer, much lower than the storage and bandwidth costs. 5-10MB objects are perfectly feasible at low cost.
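Checking the math, assuming the $0.05 per 1,000 upload requests price:

    # Assumes Glacier's request price of $0.05 per 1,000 requests.
    REQUEST_PRICE = 0.05 / 1000  # dollars per request

    def request_cost_per_gb(object_size_mb: float) -> float:
        return (1024 / object_size_mb) * REQUEST_PRICE

    for size_mb in (25, 10, 5, 1):
        print(f"{size_mb} MB objects: ${request_cost_per_gb(size_mb):.4f}/GB")
    # 25 MB -> ~$0.002/GB as above; even 5 MB objects stay around a
    # cent per GB, while very small objects start to dominate costs.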

-----



