Ask HN: Best free compute and other resources for startups?
389 points by spdustin 27 days ago | 143 comments
I'm interested in pursuing a few new concepts, and I'm looking for the best free resources for startups that aren't going through an incubator. Specifically looking for free tier or startup credits beyond what AWS / DO / GCP has to offer, or for off-the-beaten-path programs that single-founder startups might be able to take advantage of. I'll be using Stripe for processing any payments.

What do you folks use when you're looking to stand up your MVPs?




I'm something of a free tier connoisseur, and recently built an entire project with 13 forever-free tiers. It was more of a personal challenge than a business decision, but it might be a good reference for what's out there: https://www.simonmweber.com/2018/07/09/running-kleroteria-fo...

Some other services not mentioned there that I've used for free recently:

- humio cloud for log aggregation

- nodequery for server monitoring (though its future is unclear)

- braintree used to waive a big chunk of payment fees ($50k?) for startups, but I'm not sure the program exists anymore

Beyond things that are literally free, I'm a big fan of cheap VPS providers if your uptime can take it and you can do your own ops. One of my projects makes ~$200/month and is hosted for $6/year on a 256 MB VPS. LowEndTalk/LowEndBox is the usual place I find these.


> Was it worth it? Probably not, at least in the context of a side project. Reading pricing docs, planning Dynamo capacity, and setting up a local environment added days to what should have been a weekend project. That said, it was a fun challenge and the result is more robust than my usual vps setups.

I disagree! Learning experience and posting about it aside, almost a year later it's still running at $0pcm?

You, what, doubled or tripled the setup time? And cut the annual running cost from, let's say, $5 × 12 = $60 (plus taxes over there?) for some cheap-but-not-free setup, down to $0.

Seems to me that it was worth your while. Obviously increasingly so for as long as it survives (i.e., the companies don't fold or stop the free tiers).


Yeah, I suppose it's true that I'm more willing to keep these projects around when they aren't costing me each month. It also makes it easier to toss ideas out and see what's worth actually investing in.

On the other hand, I think I'm much lazier when it comes to marketing them, since there's no urgency to make a profit.


What OS and distro do you use on a cheap 256 MB VPS? Just wondering, and wishing to play around with one myself hehe


I've heard horror stories about the cheap VPSes offered on LowEndTalk and similar outfits. What's your average uptime? And can you recommend any service provider?


I once got a super deal on a VPS that had few resources but 4 (FOUR!) public IPv4 addresses, for $12/year.

The company went bankrupt in less than two months. I didn't even bother to wonder why.


It's hit and miss. I got lucky with mine, excellent uptime and good connectivity for $8/year (this offer no longer exists).

They do lag behind on security updates, however. Not something I'd use to store anything of real value in these days of Spectre, Rowhammer, ZombieLoad, etc.


Ping success rates from my instances to nodequery range from 98% on the low end to 99.9% on the high end. It looks like I'm at 500+ days of uptime at impactvps.com, though I used to see CPU and disk degradation somewhat regularly with them. I haven't had any major outages in a few years of use, though one provider went bust recently after an overly aggressive sale (AlphaRacks).

I generally run one instance + SQLite and accept the downtime. The next step up might be two VMs across two providers + a cheap managed DB.

I don't think I have any super strong recommendations. I did become a fan of BuyVM recently due to their $1/month shared SQL plan. But apparently they're planning to retire it, which would put them at a similar price point to something like Vultr.


Thanks for sharing the insights. Do you think one should stick with $5/month droplets, or take a chance on low-end VPSes?


Depends on the use case. I think the main factors are reliability and security: if it's not a crucial service and isn't handling anything sensitive, then there's no reason to pay more.


Simon Weber! You are a legend


Heroku's free tier is good. GCP App Engine is pretty similar, a little more powerful, but also more limiting in how you build your app.

My pick for MVPs is AWS Lambda. It's very easy to get up and running, and you get access to the rich AWS ecosystem. Yeah yeah, the "unlimited scale" benefit will be great when you're accelerating user growth, but that's not why it's a good pick for MVPs. Need a cron? Not too bad on App Engine, but pretty hard on Heroku. A cinch on Lambda. Aggregated logs? Easy on most managed platforms. How about distributed tracing? Ridiculously easy on Lambda. Need a quick queue? Horrible on Heroku, alright on Google Cloud, but on AWS? Create one on SQS, wire it up to a Lambda, done. There's a learning curve, but the power of the AWS platform is really what makes Lambda a great choice.
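To give a sense of how little wiring that takes: a minimal sketch of an SQS-triggered handler in Python (the process() function and the assumption of JSON message bodies are mine, just for illustration):

  import json

  def handler(event, context):
      # AWS invokes this with a batch of SQS messages; raising an
      # exception returns the whole batch to the queue for retry.
      for record in event["Records"]:
          body = json.loads(record["body"])  # assuming JSON payloads
          process(body)

  def process(message):
      # Placeholder for the actual business logic.
      print("got message:", message)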


The difficult part with AWS is local development. Waiting on AWS deploys is not a fast process and it kills momentum.

LocalStack tries, but IMO it's not great.


I agree; this is a rough part. There's always a rough part, no matter how you're writing your app.

I think things like the AWS SAM or LocalStack do a decent job at this. But, maybe more importantly, it requires a shift in developer thinking. Screw local development; yeah, I'll invoke locally during prototyping, but once the essential structure is done I'm going to deploy it and test directly on AWS.

I don't find the feedback cycle due to deployment time to be a blocker. SAM can deploy a function in 10 seconds. It's not an instant code reload, it's not ideal, but it's also not context-breaking like a 2-minute deploy would be (to be fair, SAM is capable of 1+ minute deploys, but that's usually only when you're adding new AWS resources like new buckets or event invocation rules, not when you're just changing the function code).


Since they added local testing support with `now dev`, I've enjoyed working with Zeit Now (https://zeit.co/now) quite a bit. They have the easiest testing story amongst lambda platforms, in my opinion.


Cleanly separate your logic from AWS dependencies using interfaces / protocols, summarily mock the required service by implementing said protocols (sparing you the pain of mocking the AWS services themselves), and voilà, you're running everything locally.

It's easy. But if that's too much for you, the serverless framework has some nice plugins for all of it.
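A rough sketch of that approach in Python, using typing.Protocol (the BlobStore interface and the shout() function are invented for illustration; only the boto3 calls are real):

  from typing import Protocol

  class BlobStore(Protocol):
      def put(self, key: str, data: bytes) -> None: ...
      def get(self, key: str) -> bytes: ...

  class S3BlobStore:
      # Production implementation: the only place boto3 appears.
      def __init__(self, bucket: str):
          import boto3
          self.bucket = bucket
          self.s3 = boto3.client("s3")

      def put(self, key: str, data: bytes) -> None:
          self.s3.put_object(Bucket=self.bucket, Key=key, Body=data)

      def get(self, key: str) -> bytes:
          return self.s3.get_object(Bucket=self.bucket, Key=key)["Body"].read()

  class InMemoryBlobStore:
      # Local/test implementation: no AWS, no mocking library.
      def __init__(self):
          self.blobs = {}

      def put(self, key: str, data: bytes) -> None:
          self.blobs[key] = data

      def get(self, key: str) -> bytes:
          return self.blobs[key]

  def shout(store: BlobStore, key: str) -> None:
      # Business logic depends only on the protocol, never on boto3.
      store.put(key, store.get(key).upper())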


One of the smartest, easiest things you can do early on during the development of a function is to treat the "core" of the function body as interface-agnostic. Write a function that just takes an object and returns an object, then write an adapter function which accepts, say, a Lambda event, calls your core handler, and returns whatever your Lambda function needs (an API Gateway response, etc.).

This enables you to, with a little more effort, swap out the interface-adapter part with, say, a CLI. Or, if you ever want to get off Lambda, it's a bit easier as well.
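A minimal sketch of that shape in Python (the greeting logic and event fields are illustrative, not anyone's real handler):

  import json

  def core(request: dict) -> dict:
      # Interface-agnostic business logic: object in, object out.
      return {"greeting": "hello " + request.get("name", "world")}

  def lambda_handler(event, context):
      # Adapter: API Gateway proxy event -> core -> proxy response.
      request = json.loads(event.get("body") or "{}")
      return {"statusCode": 200, "body": json.dumps(core(request))}

  if __name__ == "__main__":
      # Swapping the adapter: the same core driven from a CLI.
      import sys
      print(core({"name": sys.argv[1]} if len(sys.argv) > 1 else {}))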

Mocking out dependencies, like S3 buckets, isn't worth it during a prototype/MVP. As time goes on, sure, go for it. But early on, just use AWS. Don't use localstack or any of the other various tools that try to replicate AWS. They're all going to get it wrong, hopefully in obvious ways, but usually in subtle ways that'll crash production, and you're just creating more work for yourself. Just use AWS. Just use AWS. Just use AWS.


The agnostic core part is pretty much what I do. Mocking the interfaces is just done to put together the different cores in integration tests and check that the system itself works, independently of what it relies upon.

Then, everything is once more tested using AWS. I stay away from replicating AWS services locally.

It greatly simplifies refactoring and overhauling the core itself, as well as trying out new approaches.


I understand what one can do to create a leaky abstraction for local development. And it is leaky, and it can't be relied upon for correctness, and so while it makes "getting things done" possible, it makes getting things done correctly much more difficult than it needs to be.

And local development when you've opted to use systems like Cognito is well beyond its scope, unless someone has something very clever that I haven't seen.

From a lot of experience doing this on AWS and on other clouds, I have learned that it is better to use systems that you can operate, and then use hosted versions where applicable. RDS is great, but it's great mostly because you can run PostgreSQL locally and inspect its brain. DynamoDB, SQS, etc. tend to be untrustworthy and should be avoided unless you have a bulletproof story for local testing (and none of the fake implementations are bulletproof).


I avoid systems like Cognito for that reason! SQS I've found to be OK when used as a backend for an abstraction. E.g. Laravel has a generic queue library that can be backed by file or Redis or SQL, and using Redis locally with SQS in production worked quite well with this.
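The pattern isn't Laravel-specific. A Python sketch of the same idea (the push-only interface, env vars, and queue names are my own simplifications):

  import json
  import os

  class RedisQueue:
      # Local development backend (assumes Redis on localhost:6379).
      def __init__(self, name: str):
          import redis  # pip install redis
          self.client = redis.Redis()
          self.name = name

      def push(self, job: dict) -> None:
          self.client.rpush(self.name, json.dumps(job))

  class SqsQueue:
      # Production backend.
      def __init__(self, queue_url: str):
          import boto3
          self.client = boto3.client("sqs")
          self.queue_url = queue_url

      def push(self, job: dict) -> None:
          self.client.send_message(QueueUrl=self.queue_url,
                                   MessageBody=json.dumps(job))

  def make_queue():
      # Application code never knows which backend it got.
      if os.environ.get("APP_ENV") == "production":
          return SqsQueue(os.environ["QUEUE_URL"])
      return RedisQueue("jobs")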


SQS has different characteristics from a Redis queue or a RabbitMQ queue. That's the source of a lot of my nervousness around it: when those abstractions break and somebody-who-isn't-me has to debug it.

(I actually have an answer for local dev with Cognito because my current employer already had Cognito when I showed up, but it amounts to "have configurable signing keys and sign your own JWT in dev".)
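For anyone wanting to copy that trick, a rough sketch with PyJWT (the issuer, expiry, and key handling are made up; real Cognito tokens carry more claims):

  import time
  import jwt  # pip install pyjwt[crypto]

  def mint_dev_token(user_id: str, private_key: str) -> str:
      # Dev-only: sign tokens with our own keypair.
      claims = {
          "sub": user_id,
          "iss": "dev-issuer",  # stand-in for the Cognito issuer URL
          "exp": int(time.time()) + 3600,
      }
      return jwt.encode(claims, private_key, algorithm="RS256")

  def verify(token: str, public_key: str) -> dict:
      # In prod the key comes from Cognito's JWKS endpoint; in dev it's
      # just the configured public half of our own keypair.
      return jwt.decode(token, public_key, algorithms=["RS256"],
                        issuer="dev-issuer")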


I don't use Cognito, and all authentication / authorization takes place upstream, in API Gateway.

Our local tests' leaky abstractions don't care about anything happening upstream. They only care about testing our core "agnostic" logic.


Serverless framework has an "invoke local" feature that solved this for me. I too was annoyed at the speed of deploy.


It's not the running that's the problem--it's the dependencies. You can run Serverless on Localstack, there's a plugin for it, and you can run locally while pointing at Localstack for your "AWS" dependencies, but the impedance mismatch remains pretty high.

(Also, getting a bit further away from that, I find API Gateway kinda shifty and I dislike that it's so hard to run something like NestJS inside of Serverless. Doable, there are examples, but it kinda sucks.)


But there's no deployment delay for the dependencies, so why can't you call dependencies that live in the cloud?

Are you saying you need an environment to test in that is 100% disconnected from the internet? Given the interconnectedness of APIs in 2019, that seems like making things harder than necessary.


For at-work projects, I don't want to/don't have the bandwidth to be responsible for developers not cleaning up after themselves or exploding DynamoDB with a million write requests or whatever. My team has not yet demonstrated that they can self-manage cloud dependencies and I don't have time to do it for them.

For personal projects, having to futz with Pulumi--and, I should note, Pulumi is the one I like--just to write some code on top of a web server just really sucks. Iterating on a heckin' cloud template just to be able to write some code sucks. Waiting (and as somebody who works on devops projects primarily at work, I am very familiar with how long one waits) destroys motivation to work on anything, both at the start of a session and in the middle--like, blowing away all those dependencies when I need to reset takes way longer than `docker-compose down && docker-compose up`. I am also incredibly cheap for personal projects, because I cannot rationalize spending money on something I'm not certain I'm going to ship, and so I can't adequately ensure that AWS is not going to start dinging my credit card for resources.


The issues highlighted in this thread around the need to develop locally against cloud resources while dealing with a litany of dependencies are one of the things we are trying to solve at Stackery.

This post talks about how our team experienced much of what’s in this thread and what we built to solve it: https://www.stackery.io/blog/how-do-we-setup-a-proper-server...

VS Code folks seem to be pretty excited about our plugin https://marketplace.visualstudio.com/items?itemName=stackery...

We’d love to have you give it a try and let us know how it works for you.


This is, unlike most plugs, actually not bad; the tool looks neat. But I am not your audience. I already have a strong grasp of AWS offerings and tools and know exactly what I want; what I don't have is the inclination to do it for development. If I did, I have Pulumi when I need to build cloud infrastructure and I'm allergic to visual tools at pretty much every level that isn't literally making a GUI.

The problem that I have, and it is probably intractable, is that 1) I don't want to manage developers YOLOing dev stuff around at work and having a gigantic bill show up because a developer randomed something expensive and didn't know/didn't care to monitor it, and 2) I don't want to deal with the expense or the slowness (of deploys and redeploys, and that includes the AWS resources--ever gone "ugh, I need to burn this down and start fresh" on a dev environment in AWS? it takes forever) of using AWS for development on personal projects.


Thanks for replying with your initial reaction. It's good to get your perspective.

Visual tools: If you are proficient at writing CFN templates or have found another tool to do that, cool. FWIW, our customers tell us they find the visualization useful when onboarding other engineers onto a project and then when they have a split screen between the template and the visualization to see how the CFN template is built.

Yoloing Developers: Heh. I'm stealing that phrase. But seriously, this is a common concern and a reason why managing accounts and namespacing for dev/test/staging/prod environments is a big deal. (plug - we do that too) At AWS dev managers get daily reports on the cost of each developer's accounts. While serverless tends to be much cheaper for dev environments than containers or EC2 instances, there are indeed ways to run costs up.

Deploying and redeploying: That's precisely why our CLI enables development against live cloud resources. When you use stackery local invoke, it assumes the function's IAM role and fetches the function's env var values. This enables rapid, and accurate, local function development. On the infrastructure side, it's common for our users to make five or more changes to their architecture in a day. The more you can do with the building blocks, the less code you need.

Thanks for taking a look.


The Serverless framework's offline plugin makes this fairly easy if you're going fully down the Lambda route.


Deploying to lambda is literally uploading a zip.


Yes. Do you know what deploying in my local environment, for my own projects, is? It's "control-S". (Because a file watcher reloads the application.)

This is acutely top-of-mind for me because I used to run a devops consultancy and I currently head devops, including release management, for a startup built heavily on AWS Lambda. I am doing my level best to reduce the loop as much as I can for the dev team.

It's still way, way slower than control-S. And always will be.


What is it that makes this a problem? It has literally never been an issue for me.

I test the lambda body locally with a single command. Only upload a new zip when you hit the release candidate stage and that's not frequent enough for the few seconds it takes to upload to lambda to be a problem.
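That "single command" can be as small as a script that feeds a canned event to the handler (module and file names here are hypothetical):

  # invoke_local.py - run the handler without deploying anything
  import json
  from handler import lambda_handler  # hypothetical module with your handler

  with open("sample_event.json") as f:  # a captured or hand-written event
      event = json.load(f)

  print(lambda_handler(event, context=None))

Then `python invoke_local.py` gives you the same feedback loop as any local script.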


Until you need a database.

And an API.

And a website.

And a CDN.

And...

And if you get as far as CloudFormation I just have one word of advice. Run!


I used terraform instead of Cloudformation but didn't find it too terrible for what I needed. I realize that it doesn't scale very well to very complicated stuff, but we're talking about using the free tier to prototype things. I'm also not convinced that it's significantly easier to develop locally if you need all that stuff.


CF isn't that bad... if you use one of the many abstraction tools like the CDK or SAM.

It's ugly. It has a TON of powerful functionality that the documentation dives into way too early for most people. It has some weird behavior, especially when you get into networking or stateful resources. It's not perfect. But it gets the job done.


I worked at AWS on a service team, and did a lot of work with CF.

CF's limitations are in part due to the fact that AWS services are all independent entities and the interoperability services have tremendous difficulty getting other service teams to do their part.

The CloudTrail team, for instance, went to great lengths to furnish documentation for other teams to configure their API logging; I know our team fucked it up repeatedly because it was always an afterthought.

You can see in CF that some resources are basically useless, and are plainly provided by CF so that the service owner could check a box. Read "Updating DB Instances" here[1] and then look at all the properties that require replacement.

CF's design reflects this reality. It does not try to understand how services interact or what objects are. Rather, a "resource" knows how to set up and teardown and a few other operations.

If you manage resources exclusively through CF, it can handle most simple use cases. Which is the other limit of CF: by design, they're trying to keep their core invariants simple.

For instance, CF offers no mechanism to stage deploys. As an example, the popular Serverless tool has to initially construct a stack with just an S3 bucket, and then issue an update to that to include your actual stack.

Internally, we proposed doing staged deploys (because we wound up hacking them together) and they weren't interested. And I think that's because they would rather keep the API simple even if it means users move away from CF; possibly a consequence of a loss-leader business model.

So, yes, CF does get the job done at first, but you will eventually find yourself outgrowing it.

[1]: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGui...


Why? CloudFormation is awesome.


Heroku has a nice cron addon you should investigate. It costs nothing (except it spins up a dyno to run your cron task so you will pay the prorated dyno rate). Very simple.


Heroku is my choice too. The free tier is good enough, with backend services like SQL databases, MongoDB, and Redis. And of course the scheduler.


App Engine isn't limiting in how you build your app; the old version was.


The new version also doesn't have a free tier.


If you publish a free Alexa Skill (easy to make a basic one) you can apply for $100 in monthly free AWS credits: https://developer.amazon.com/en-US/alexa/alexa-skills-kit/al...


Same thing for Google Assistant actions, which give $200 in monthly credits.


Even easier with Jovo and cross platform: https://jovo.tech


Great deal, thanks.


I think a combination of hardware and cloud services could give you a cheap way to run pretty much anything you want. I will try to share what I am using :)

Tooling

For CI I bought a cheap Intel NUC that just sits on my table and runs Drone, Node-RED, and my testing environment server.

1. From Drone I get all the test pipelines and most of my executable/Docker image builds, at much better speed than I would get with Travis, CircleCI and so on. My desktop runs a Drone agent too, so I'm never waiting too long for new builds.

2. Node-RED is used for various automation tasks: healthchecks for all my services with rate limits, somewhat more complex checks (rather than just a simple /health or /status endpoint ping), Mailgun webhook handling, GitHub brew formula updates, and so on. It controls my TV too :D

3. I also run some tasks on Google Cloud Builder, which is free and fast.

4. UptimeRobot (http://uptimerobot.com/) is also used, as it provides a free second opinion on whether my services are down or not.

Compute

For compute I opted for GKE (at the time it was the best managed k8s you could get, and it probably still is) on Google Cloud, with three 1 vCPU / 3.75 GB RAM VMs. That gives me plenty of resources to run my stack and any additional side-side-project on it. I tend to code in Go, so I don't need many resources. I use managed Postgres as a database so I can sleep peacefully.

For additional services I ended up choosing Vultr (https://www.vultr.com/) as they have many regions available and their pricing is really good. There is a nice, non-official CLI, and so far their uptime has been quite good (a few issues, but nothing major).

It really depends how much time you want to spend on ops. If you are fine with doing quite a bit of ops, go with Vultr or any other cheaper cloud provider. If you want to spend more time on features, go with GKE and fully automate deployment. Git push -> Google Cloud Builder -> Keel (https://keel.sh) -> new version deployed :)


Software Engineering Daily had a recent discussion with the founder of render.com, Anurag Goel, and I learned that it provides static sites for free and services starting at $5, so it's what I'm reviewing now. Render appears to be using K8s and Docker containers to create instances from GitHub repos. Anurag gave a reasonable explanation of why he thinks render.com is superior to Heroku at 29:15:5 in the transcript. https://softwareengineeringdaily.com/wp-content/uploads/2019...


Render founder here. Happy to answer questions.


I'll tell you from personal experience: don't let free tiers influence your tech decisions. Your early lock-in decisions are the most costly, and you will pay dearly long after the free tiers run out.


This is worth considering, especially when you can get a DO droplet for $5/month and run whatever free software you like on there on Ubuntu. That has to be as un-locked-in as it gets. You could run the same thing from a Raspberry Pi at home!

I am happy to use Netlify as a static file host as that isn't much of a lock in: I can move to S3 later, or even the DO droplet and Nginx!


Agreed. Unless you're intentionally building some low-margin business model that requires cutting these costs, you shouldn't box yourself in and ignore potential growth because of hosting costs.


Linode is $5/mo. and I got a $50 credit for just being at a software conference; I also don't think that expires (which is good because I'm not planning on using that credit atm). That's 10 months of free service.

It's also pretty barebones in terms of upsold services, but apparently the customer support is pretty top-notch. I'd take a look at it.


All my credits seemed to expire after two months.


This comment spooked me so I went back to my emails and checked what Linode said:

> The $50 credit can give you up to 10 months of hosting on our 1GB plan. You can also use the credit towards our add-on services such as backups, Block Storage, or NodeBalancers. Use the promo code today!

So they mentioned 10 months in the email, and I searched a bit further and didn't see anything about service credit expiration. You can also get service credits using referrals, coupon sites, and even blog posts:

https://www.linode.com/community/questions/8788/share-your-l...

Airbnb cereal and all that. https://medium.com/@austincoleschafer/a-short-story-about-ho...


Linode's credits don't expire once they're applied to an account. Promo codes themselves can expire, though, so you'll miss certain promotions if you don't sign up in time. Open up a Support ticket if you think credits haven't been applied or disappeared without being used up by services. (Linode employee here.)


Check out https://github.com/255kb/stack-on-a-budget for a great list of services/resources that offer free tiers to get you started.


Best compilation of free resources I've come across: https://github.com/ripienaar/free-for-dev

Also: http://www.lowendtalk.com


Linode has a $5 “nanode” VPS and scales very cheaply compared to AWS.

Namecheap does free ID protect for domains.

PayPal offers a good micro transaction fee rate if you plan to do small ($10 or smaller) transactions.

You can use services like Algo or Streisand to set up a relatively secure auto configured Wireguard VPN.

Nextcloud gives you a HIPAA-compliant personal cloud for email, calendar, chat, etc.

GitLab offers unlimited free private repositories and some basic CI.


Sheesh, I have been struggling with the last few steps of WireGuard/Algo. Gotta revisit it again.


Zeit Now 2.0 has a free serverless tier: https://zeit.co/now

MongoDB Atlas has a free cloud database tier: https://www.mongodb.com/cloud/atlas


I have been running a small scoreboard on Zeit's free tier for almost 3 years now. I set it and forget it, only use it once per year, and for a while I couldn't even figure out why it was still working or where it was hosted. Love Zeit!


Awesome list for Free for Dev resources:

https://github.com/ripienaar/free-for-dev#major-cloud-provid...


My entire app is “serverless” and I host it on AWS and netlify using the free tier. So far I have been able to keep resource use below the free tier thresholds.


I'd like to second this. The amount of free compute available for serverless infrastructure is insane! Serverless does have a learning curve, but it's worth learning.

The next best is Heroku's free tier.


Hasura on Heroku's free-tier for the backend + netlify's free tier sounds like the sweet spot.


I feel like Heroku got left behind in the current k8s craze but they're still a good choice for a lot of use-cases, cheap MVPs being one of them.


I love Heroku but the problem is the price. The free tier is amazing, but once you get above that, the price scales way out of proportion. It's worth it if you're profitable beyond the point of infrastructure costs and just need to outsource operational costs. But if you're strong on operations and short on cash, Heroku is pretty darn expensive.

It's cheaper than hiring an operations team, but it's a heck of a lot more expensive than, say, dokku on a Digital Ocean or Linode VPS.


Heroku is still significantly easier to use than Docker/K8s.


That depends on your mental model. The isolation and reproducibility of Docker is much easier for me than dealing with buildpack nonsense.


You can deploy Docker images to Heroku. https://devcenter.heroku.com/categories/deploying-with-docke...


Google Firebase has a very rich set of features and almost everything is included (but very rate-limited) in their free tier and a flat $25/mo gets you quite a bit further. On the downside, it's very much a PaaS solution so lock-in is a big concern.

https://firebase.google.com/pricing


GCP used to give an additional $200 credit if you registered through the GitLab referral, for a total of $500. I don't know if it's still working.

https://about.gitlab.com/solutions/google-cloud-platform/


Your own computer.


I understand why some are downvoting this but, with consumer internet connections reaching 1000 Mbps up/down and a Threadripper giving you dozens of cores for a few bucks, this can be a very handy solution, with quicker deploys and easier management.

Security might be too much of a concern for some applications, though.


> with consumer internet connections reaching 1000 Mbps up/down

Step out of your Silicon Valley bubble and recognize that this doesn't apply to everyone.

I only have 30 Mbps up/down with Frontier. I can get faster, but it gets expensive quickly, and I don't need more than 30 Mbps, even as a gamer. Xfinity only offers 150 Mbps down in my neighborhood, with a puny 15 Mbps up, and for twice the price.

And as others have mentioned, it creates a single point of failure and if you get hacked, your entire home network is compromised. The last thing you want to have happen is for some unsavory type to decide they don't like you or your services and DDoS your home internet connection. I've had it happen to me and it's not fun.


I disagree, but I do have Frontier with 12 Mbps down, 768 kbps up.

I worked on a startup, out of a dude’s house, he also owned the house next door.

I had three separate FiOS ONTs connected to the rackmount server in the living room.

Lots of live video streams.

The accounts were business, so when the gardener sliced one of the fiber cables, it was fixed in a few hours, and the other ONT was still working.

My other buddy lives 2 hours away and has symmetric 1Gbps service for $50/mo in a rural area.

I would love to be able to have a symmetric connection.


"Step out of your Silicon Valley bubble" ... I have Business Fiber Internet package from Bell in my house in Toronto. Residential area far from city core with an old houses, definitely not a Silicon Valley. 1000 up and down. It costs me $80 Canadian pesos. Not bad.


I'm in the same boat as you (350/350 for $35 tho), but we are outliers even in the Canadian market. Fast internet is only now getting to everyone at affordable prices, even though I've personally had gigabit for 4 years.


I'm in Texas


It's OK if you're OK with your service being down for days at a time. On vacation and the server breaks? Gotta wait till you get home. Internet down? My gigabit was just down for five days; a squirrel ate through the fiber and they had to pull a new one.

Running a server at home is fraught with peril but if those downtimes are ok, then go for it.


I've had a server made of old desktop parts running in my basement for years. It's rarely gone down, and never during vacations. I added a UPS two years ago, so unless I have an hours-long power outage, everything still hums along.


Yes I too have had multiple Frankenstein servers in my house with UPSs, for over 25 years. And most of the time they work great. Except when they don't.


Still more reliable and orders of magnitude more powerful than any free service.

You can also buy two.


> Still more reliable and orders of magnitude more powerful than any free service.

I don't think so. AWS Lambda has a generous free tier and runs on infrastructure that has constant monitoring by professionals, spread across thousands of machines in multiple datacenters all over the world, with an SLA.

Your one (or two) servers that share a single consumer internet connection (with no SLA) will never be more reliable.

Yes, you can get more powerful, but again, that's a business decision you have to make. Do you want power or reliability? If uptime is important to your business, then running it at home off of consumer internet is not a good idea.


Either way, when relying on free services you typically need a backup plan anyway. And I can hardly imagine that tailoring everything for Amazon is cheap or sensible.


> you typically need a backup plan anyway.

You need a backup plan for paid services too.


So, what happens when the SLA is violated in the free-tier?


It works as long as your internet connection and electricity are up. They will go down at the most unexpected moment.


5 days?

Was this a business or consumer account?

Do you only have one ISP?


I think it's hard to justify running your own setup in 2019 when you can get Linode/DO for $5 a month.

Sure, CPUs are cheap and (if you're lucky enough to live somewhere that offers it) consumer internet can be absurdly fast. But if your app goes down at a critical moment because your consumer-grade ISP has an outage, or someone tripped over the Ethernet cable, or you had a brownout because you live in the UK and everyone put the kettle on at the same time, you're going to look like a bit of an idiot.


It’s not a bad idea if your ISP gives you a static IP and you’ve got some spare hardware lying around (just in case closing your laptop causes downtime). If you can isolate that box from the rest of your home network, even better.

Personally I’ve been using $3 Lightsail instances on AWS. There’s a lot you can do with one CPU and half a gig of RAM. An MVP Rails app will squeeze into that easily enough.


Even without a static IP, a free dynamic DNS service is good enough to prototype on a local machine as long as you aren’t using enough bandwidth to tip off your ISP. At that point, however, you’ve probably got enough market validation to move to another host.


Cloudflare's free tier is really good for that :-) Since all traffic terminates at Cloudflare, you can change an IP address and it redirects the traffic in seconds without having to wait for DNS propagation. Try it.
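The repointing itself is one API call. A sketch against Cloudflare's v4 API (zone ID, record ID, token, and hostname are placeholders you'd fill in):

  import requests  # pip install requests

  def repoint(zone_id: str, record_id: str, token: str, new_ip: str) -> None:
      # Update an A record; with Cloudflare proxying in front, traffic
      # follows the new origin within seconds.
      resp = requests.put(
          "https://api.cloudflare.com/client/v4/zones/%s/dns_records/%s"
          % (zone_id, record_id),
          headers={"Authorization": "Bearer " + token},
          json={"type": "A", "name": "example.com",
                "content": new_ip, "proxied": True},
      )
      resp.raise_for_status()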


I will! This is a good tip.


I have that stack and security concerns are holding me back. Any suggestions for ways to secure a home network for public hosting?


Put the server on its own DMZ network. If your router doesn't support this, you don't have a good setup for self-hosting.

I did all of this for my family's software company 20 years ago. We were serious about it, too: SDSL line (good at the time!) with an SLA, natural gas standby generator, redundant hardware... never again. I spent all my time futzing with infrastructure rather than producing any business value.

I did learn a ton, and that continues to be valuable (it surprises me how often I'm the only dev in the room that has experience configuring IP routing). But it's a huge time-sink, and I'd never consider doing it to save money (the amount of money saved would be marginal, at best).


Definitely not a money-saver, but being in control and knowing how the underlying pieces fit together is quite valuable.


Cloudflare Argo tunnels. Reverse HTTP tunnel, similar to ngrok and others.


Reasons this might be a bad idea:

* Electricity costs for networking/compute hardware often equal VPS costs

* Massive single points of failure WRT power, compute, networking

* ISPs don't like when people run their businesses over consumer networks and will retaliate (forced to change to business plan, account terminated, etc)

* Vastly increased overhead managing the full infra stack (power, HVAC, networking, compute, firmware, OS, etc)

* Massively increased security liability (your site goes viral and now Hacker News and Reddit are sending traffic to your house, on the same network where your personal laptop with your CC info saved on it lives)

XaaS services abstract away so very many of these concerns, for prices that are oftentimes below what you pay for the electricity to run your own infrastructure.


I think this works for university students, when 99.99% uptime isn't your highest priority. My university has static IPs and nearly 700 Mbps up/down to each dormitory. So by investing in an old workstation I could easily work on various ideas and build MVPs. I ended up running a crawler service for months before I migrated to a VPS.


You can't really do NAT traversal on a university network though, so it's pretty useless for servers.


Each dorm student at my university had a public IP assigned to them.

I also remember downloading quite a bit of MP3s hosted on an SFTP server in a friend of friend’s dorm in Philadelphia.


You can get a $5 VPS, set it up as a reverse proxy and VPN host, then VPN to it from your locally hosted server to get around that. Not "free", but this method lets you use whatever fancy hardware you have, which would be significantly cheaper than an equivalent VPS.


If you set up a domain and a C corp/limited company, you can apply for $1,000 of AWS credits under the AWS Activate scheme. Careful though, because if you get the $1k, you won't be eligible for further AWS credits worth $100k+ under an incubator.

https://aws.amazon.com/activate/

Not sure if they still offer the $1k credits for startups outside of incubators. Here's a thread where someone else got some credits.

https://www.indiehackers.com/forum/i-just-earned-1-000-in-aw...


You can run a decent amount of stuff off OVH/Hetzner/Online.net dedicated server with monthly rental price under $100/mo.

At OVH you can have an Intel Xeon D-1540 (8 cores / 16 threads, 2 GHz / 2.6 GHz) with 64 GB RAM for about 90 euros/month + VAT (US link: https://us.ovhcloud.com/products/servers/infrastructure-serv...)


OP is asking for free tier stuff and you respond with servers that start at $69/month?


StackPath has a startup program [https://www.stackpath.com/resources/propel-startup-program/] and a sandbox for testing out serverless scripts [https://sandbox.edgeengine.io/]


As others have mentioned, Linode is great value. Pair it with Dokku and you've got an even cheaper / more customizable (but not scalable) Heroku.

However, be weary of Linode's "backup" service if you intend to store a lot of files (as opposed to lots of data / large files). It's file-based (not block-based) backup, and I can confirm it does fail. In our case, we weren't storing tiny files either, they were images uploaded to our infrastructure. Granted, don't do that(!), use S3.

Also, regarding AWS/S3, there's a plethora of ways to get into AWS's Activate program (https://aws.amazon.com/activate/) which includes a decent chunk of free credits.


> However, be weary of Linode's "backup" service if you intend to store a lot of files...

Huh? Wouldn't it be a lot better to be wary of that service, thus lowering the risk that you'll be weary of it at some point?


Sorry, I was weary when I wrote 'weary', whilst I meant to write 'wary'. I'll be more wary of being weary next time.


This might come in handy a bit later for you, but we were having headaches with the ramp-up pricing of JIRA+Confluence. We explored open-source alternatives and found OpenProject to be a great product. Free of charge, of course. Once you get some of the free/cheap VMs mentioned by other posters, feel free to check out OpenProject. https://aws.amazon.com/marketplace/pp/B00T6OCWRU for EC2. https://www.openproject.org/ for the main website.


Other people have mentioned GCP/Firebase for the $300 free trial, but I'm going to specifically mention their startup program: https://cloud.google.com/developers/startups/

I'm hosting my game servers today on $1000 of free credit from this program. If you have VCs and whatnot, they'll be able to get more credits out of the same program for you, but this was great as a solo founder.


Same. I got $1300 credit simply for applying.


I don't think the best (or even good) options will be free past a certain tier (e.g. AWS only gives you a t2.micro, and only for 750 hours a month).

Most of the cloud providers have some kind of startup program where they offer free credits to startups.

We (my company) got credit benefit from IBM, it's been a real savior for us. You can check it out here https://developer.ibm.com/startups/


Having some uncomfortably close experience with IBM Cloud very recently, I would have a hard time recommending it for just about anything.

I'd go to Azure over IBM Cloud, and I have perturbed many electrons about how frustrating Azure has been.


At Userify (https://userify.com), we offer a free tier for SSH key and user management for up to 20 servers free (billing doesn't begin until 21 servers) with our SaaS offering (versus 10 free for our on-premise product). Try it out at https://userify.com, no credit card required.


Quite an important question. Indie Hackers might be a good place to ask as well. I got introduced to the open-source tool Hasura. It gives you a free backend service and handles a lot of scalability issues. Please DM for more details. https://hasura.io/


Azure has 25+ "always free" services besides the $200 free credit.

https://azure.microsoft.com/en-us/free/?v=17.39a


You could always use www.evolute.io with free credits and simply shift between the free usage tiers, across platforms. If you develop your software in a “serverless” fashion, you might even find a way to be “free” for the foreseeable future.


What do you need to run? Many live systems can run on pretty cheap hardware.


I see a lot of Linode recommendations here, but if you are targeting a mostly European audience, scaleway.net has very cheap VMs starting at €2 per month, while not being that much slower than Linode.


online.net is the sister company of Scaleway. Scaleway has some cheap ARM bare metal, but you sometimes see issues if you try to run software optimized only for x86.

Kimsufi has a €3.99/mo bare-metal server, but it's very hard to get one (you have to keep trying).

Hetzner Cloud has cheap VPSes and also runs auctions for servers. I got an i7 with 8 cores, 16 GB RAM, and a 1-4 TB HDD for €22/mo. They are usually closer to €30/mo, but likely your best bet for budget dedicated hardware with better specs.

I have a few other dedicated servers (old Xeons) running at a Bulgarian service provider.


ARM servers at Scaleway actually start at €3/mo. I find the others a bit more difficult to set up. Hunting for the best auctions isn't the best way to spend your time in a startup. You are better off doing something that improves the quality of your product; up to a point, server quality/price isn't your concern at all.


Hi, if your business is data-driven and based on a cloud data warehouse, you can check out repods.io. There you can start out with a fully functioning pod for free, with no expiration.


Dedicated Hetzner.


And Hetzner Cloud, starting from 3 euros.


Don't try this at $HOME, but I have a friend who ran an early "cloud computing" start-up in the '80s called "Computer Time Share Corporation", which actually sold time on another company's computers! We affectionately referred to them as "Computer Crime Share Corporation".


Buy a .edu email, use Google Compute till you have an MVP to sell.


How do you “buy” a .edu email (without signing up for a long and expensive course or college)? I’ve seen some hacks online to get one for free, but not sure if they work.


Anything for free GPUs?


zoho.com for email hosting.


Bizspark


Bizspark changed and is no longer available. I was in the last round when they shut down early.


They still seem to have something going:

  Available for qualified Startups

  Technical enablement
  Microsoft technology to help your bottom line:

  Up to $120k of free Azure cloud for two years
  Visual Studio Enterprise cloud subscription
  Office 365 Business Premium
  Dynamics 365 for Customer Engagement and Talent
  Enterprise grade Azure support
https://startups.microsoft.com/en-us/benefits/


I think you have to be introduced into the program by an accelerator now.

I'll miss it, it was a pretty awesome program.


I believe you can email your pitch deck in and get a code to apply.


That's very interesting to hear. Thank you.


Your own laptop.


I started this list https://gist.github.com/mcapodici/25d225848eda987071a0263b31... - I keep meaning to keep it up to date.


What netiquette have I broken here?


Best guess: you said it hasn't been kept up to date, and on visiting, it doesn't have many services listed, all of which have already been mentioned elsewhere in the comments.


First, you leverage existing blockchain technology to create a simple program that allows you to access the GPU, CPU, network, and storage of the PC it is installed on. Second, you advertise it to people, telling them they can earn cryptocash you have invented in exchange for the resources used on their PC. Third, any surplus access you have, you sell to a third party, or you mine a more popular cryptocurrency.



