
Shutting Down NavHere - jermaustin1
https://jeremyaboyd.com/post/shutting-down-navhere
======
dpcan
The most important words in this are near the end:

"What becomes less fun is the after-building part of running the business."

So many of us want to turn our hobbies into side-hustles, or even full
companies. But once the "new" feeling wears off, or we finish building the
product, it's just business. It's ALL business.

When your hobby becomes a job, a problem, a customer support problem day in
and day out, the fun is gone.

I had games in the app stores. I started an escape room business. I tried to
make what I love my job, and that all turned out to be "not fun" anymore.

So, now I'm trying to focus on game making, escape room building, etc., as a
side project that I can share with friends and others for free and then just
shut it down if it's not fun. I'll keep developing websites and being bored
when I need to make money.

~~~
mikestew
As a brew-pub-owning home-brew friend said, "I took a fun hobby and turned it
into work." I'm the same way; I've had some software businesses. Man, sure is
fun building that product. Man, sure does suck doing customer support/business
stuff/dealing with local government/pick your poison. That slurping sound?
Yeah, that's the fun being sucked out of it. So about half the time (probably
less, actually) I go work for The Man...until I once again think that this
time will be different.

------
fencepost
If there are companies paying $thousands per month, you may simply not have
been looking at the right potential customers. At 10-20x the price with
service guarantees (maybe spread it across DO, AWS, Azure and emphasize your
independence from problems with any) you might have been taken seriously by
some of the companies paying higher bills. For enterprise customers the
processing cost of paying $5-10/month recurring charges might be higher than
the actual charges.

Edit: that doesn't mean it's bad to serve small clients, but if the concern
is significant revenue...

~~~
jermaustin1
The concern wasn't significant revenue, it was really just to cover the cost
of the more robust infrastructure without a single point of failure.

The initial product was just an nginx config for my and Austin's shortlinks
and domain forwards, and it actually ran just fine for months with no
downtime.
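
For reference, a forward like that is only a handful of nginx lines,
something like this (domains made up):

    server {
        listen 80;
        server_name old-domain.example;
        # 301 to the new domain, preserving the path and query string
        return 301 https://new-domain.example$request_uri;
    }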

The problem is that once you get customers, you want to promise them no
downtime, so you have to allow for failover and high availability, and that
adds complexity that really isn't needed... until it is. In almost 3 years,
we had 1 database fail over to a slave that got promoted to master, and it
actually stayed that way for months before I manually failed it back to the
bigger server.

~~~
cercatrova
Why not increase your prices and target larger customers then?

~~~
jermaustin1
I had a letter of intent for a large order right before the company froze
budgets and the point of contact left. It was for $9k/year if I built an API
for them to manage their domains in-house. At the time they had a vendor they
were paying $15k/mo, plus $150 each time they wanted to add or edit a link or
domain. It was a win/win: our service remained viable, I could afford to make
it better, and they would get a 90% reduction in cost for that service.

Even if it had worked out, it still really wasn't worth it to increase
revenue by $9k unless I was closing a deal of this size at least monthly.

~~~
satvikpendem
Interesting. I would think that $9k a month rather than a year would've been
better. $9k a year is really undercutting it. I've also seen that higher
prices actually make customers more willing to buy, not less, especially big
ones.

~~~
jermaustin1
This was after many back-and-forths. So we put together a contract that would
be the same price as if they were a customer using our UI, but they would pay
all at once instead of each time they added a domain, and they would get an
API to manage their domains.

------
josh_carterPDX
Thanks for sharing this. Having gone through this myself, I can totally
relate.

"By the end of 2018, we had hundreds of free-trial users, and were serving
100s of thousands of forwarding requests each day."

This is probably my biggest lesson from my own entrepreneurial journey. The
days of "freemium" are over. Long live fast profitability!

~~~
jermaustin1
Yeah, that was my biggest challenge: since people were coming from an already
free (but broken) product, they needed to see it work before they would pay
for it.

~~~
codegeek
True, but as a solo founder or small team, freemium is not sustainable.
Instead, double down on trying to get those previous free users to actually
pay for the service by providing something much better that solves a real
pain for them.

~~~
jermaustin1
Even at 100% conversion, we wouldn't have made enough money to make the
service actually WORTH pursuing long term, but that wasn't the goal.

NavHere's losses were a budgetary rounding error across my larger
organization, and it did provide us customers that actually USED the system.

It was never meant to be a profit center or a loss leader, but I had hoped
from the start it would cover the cost to run it.

It lost a total of around $1200/year, and that money would have just gone
toward some other random business expense anyway that wouldn't have captured
customers.

All in all I'd say it was a worthy loss, just not one I want to continue
maintaining.

~~~
codegeek
Take it as a business lesson/course which you learned practically. Now you
know what NOT to do next time :).

------
dan1234
> But we had only converted a few paying customers (it was in the low 10s of
> paying customers), and most of those sales were VERY hands on. I was having
> to explain DNS to small business owners who all they know is their GoDaddy
> stopped working. It would sometimes take over a week from when they first
> contacted us to when they had finally set up their DNS records, and another
> week before they would set up their DNS records correctly, and all of that
> for only $9/year. It wasn't sustainable.

Looks like the real money would've been a premium plan offering hands-on
support and setup for non-technical users.

~~~
jermaustin1
Converting people accustomed to a free service that was already integrated
into their DNS wasn't very easy, even with a free trial and a cost of only
$9/yr.

------
gumby
There are two opposing factors: the mass market for a product consists of
people who don’t understand the product domain. You used to have to be your
own car mechanic; nowadays people not only don’t know how to change their oil
but don’t know they have to, yet the dashboard tells them to go get some
service done.

This is a good thing, but doesn’t favor the new entrant. GoDaddy’s service is
aimed at the lowest-information customer. If you help their customers you’ll
need to provide oodles of support (as the poster experienced).

Converting the mass customers would have required an expensive ad campaign
because most people who need the product don’t even know to look for it.

The poster was able to get a tiny number of knowledgeable customers, which
proves the value of the product. But getting paid for that value is hard.

~~~
jermaustin1
If I ever do it again, it will be a free service with premium add-ons you can
pay for (domain masking, email, etc.).

------
juped
A 301-serving service is an interesting concept for a small product, though.
I would probably assemble it from Kore ([https://kore.io/](https://kore.io/))
and SQLite, or maybe Postgres; the hosting requirements would be pretty low.
Maybe I'll do it this afternoon for fun.

Then it would just sit there, despite the fact that it could probably be sold
for $15k/mo, because that's not a skill I have. If someone signs up for
$15k/mo, it's worth a couple extra afternoons to make it more robust for them.

I'm pretty sure you could have done the same, but selling this stuff is a
pretty difficult task!

e: another thing you could do is resell domains through it; you wouldn't be
competing with registrar builtins then.

~~~
jermaustin1
I agree. It was a simple afternoon project on a down day when I didn't have
much to do client-wise.

The original code was only a single nginx config to fix the problem on my own
sites; when I realized there were others having the issue, I built out an
application.

If I were redoing it today, I'd probably just have a running Caddy server,
since it does HTTPS as well.
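
Something like this Caddyfile would do it (hypothetical domains), and Caddy
takes care of the certificates:

    old-domain.example {
        # 301, keeping the original path and query string
        redir https://new-domain.example{uri} permanent
    }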

~~~
juped
Still pretty cool. There's probably a lot of easy stuff like this that could
sell.

------
crispyporkbites
I think the problem this product solves falls into the chasm between teams
that can fix this in a few hours themselves and people who don’t know what
DNS is. Unfortunately there aren’t many people in that gap.

~~~
jermaustin1
Yeah, we were going after the mom-and-pops for sure. My misconception was
that if I was willing to pay $9/yr, another small business would be too.

Looking back, I shouldn't ever have scaled it beyond my personal needs; I
should have kept the architecture simpler and not "highly available". The
cost would have been only 10% of what it was, since the service could have
run on a single $5 droplet running Caddy.

------
azhenley
We really did start NavHere as just a service to solve our own problem, and
later thought it could help others after seeing all the complaints on the
GoDaddy support forum.

It was quite the adrenaline rush getting our first few signups, but then it
was so sad seeing it not get traction. We learned a lot though!

------
chrismorgan
Speaking purely about the technical side of things, I’m curious about the use
of six servers at $55/month. I’d expect to do such a thing with one and
$5/month, or two and $10/month if I wanted to absolutely minimise downtime.

It’s well documented that making systems distributed to achieve high
availability very often actually _reduces_ availability, because distributed
systems failure modes are gnarly, and so there’s just a lot more _complexity_
in the system as a whole, a lot more moving pieces to go wrong. (Relatedly, it
is also well understood that error-handling code is about the buggiest code
out there.)

For starters, I can’t think of any reason to separate the web application
server from the forwarding servers, so that’d lop it by one.

Then I’d lop the three database servers and put the databases on the
forwarding servers as well, with something like a simple streaming replica
arrangement if I was keeping both of the forwarding servers rather than
reducing it to one.
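
With PostgreSQL, that streaming replica arrangement is mostly configuration;
on the standby, roughly

    # postgresql.conf on the standby (PostgreSQL 12+); hostname illustrative
    primary_conninfo = 'host=primary-box user=replicator'

plus an empty standby.signal file, and the standby just follows the
primary’s WAL.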

This is pretty much guaranteed to be more robust, because now requests are
depending on one machine only, perhaps with a second just hanging out in the
background paying attention to what’s going on; whereas in your distributed
system, there are at least _two_ machines that can take things down by failing
(if the DB server you’re talking to goes down, it’ll still be a short time
before the system sorts itself out and switches to another one).

I would love kernel live patching to be better understood, tooled and used;
then you could safely keep your single machine up indefinitely, just
gracefully restarting your HTTP servers and databases as needed. As it is, if
you go down to a single box, you’ll want to do the occasional quick restart.
Ten seconds of downtime every so often doesn’t feel _too_ terrible to me, and
if you happened to have an HTTP proxy in front of you it might even turn that
into just a slow request. But yeah, the second server might be warranted so
you can avoid this altogether: just switch which server is the primary and
direct all traffic to it, and then freely reboot what is now the standby
machine.

~~~
jermaustin1
The application server was actually a shared box with a bunch of my dotnet
core applications. I used a 3-node database cluster because I've had
single-node failures with Mongo on $5 droplets, so I used two $10 droplets
for the slaves and a $20 one for the master; it only failed over once. Then I
had the primary forwarding application on a $10 droplet, with a secondary $5
droplet that just stood by until a failed heartbeat (checked every 10
seconds), at which point my floating IP would switch to the backup.
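
The reassignment itself boils down to a single DigitalOcean API call,
roughly (variables are placeholders):

    curl -X POST \
      "https://api.digitalocean.com/v2/floating_ips/$FLOATING_IP/actions" \
      -H "Authorization: Bearer $DO_TOKEN" \
      -H "Content-Type: application/json" \
      -d "{\"type\":\"assign\",\"droplet_id\":$BACKUP_DROPLET_ID}"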

It was overly complicated for the sake of robustness. I tested multiple
database crashes and servers going down in one zone, and it would always fail
over just fine, because the application code would connect to a slave if it
could not connect to the master. So forwards would keep working unless there
were outages in multiple data centers at once.
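
That fallback is also what the driver gives you via read preference; a
connection string roughly like

    mongodb://db1,db2,db3/navhere?replicaSet=rs0&readPreference=primaryPreferred

(hostnames, database, and set name are placeholders) sends reads to the
primary while it's up and falls back to a secondary when it isn't.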

Now, switching the floating IP to the backup forwarder did take about 5
seconds. I could have made that better by putting a proxy in front, but that
just moves the point of failure up a level, and it was never needed anyway.

~~~
chrismorgan
Ah, MongoDB. I was guessing something more like PostgreSQL, which is decidedly
more robust. I haven’t used MongoDB at all for something like six years now,
but the impression I get from the odd thing I hear about it is that although
it’s better than it used to be it’s still decidedly less reliable than the
likes of PostgreSQL.

It sounds like you’ve tested the simple failure modes, machines dropping off
altogether, which is good; but I’d still be sceptical about the system’s
robustness, because those aren’t the _interesting_ failure modes, and the
interesting failure modes are the ones that give the most trouble. Systems are
easy when a node is there or not there, but when a node becomes _unreliable_
(e.g. overloaded or network troubles of a variety of kinds), that’s when the
real trouble happens, and when it’s most common that measures that were
supposed to make things available actually make them unavailable. (This stuff
is _hard_.) That’s why avoiding a distributed architecture until you actually
_need_ it can be well worthwhile.

~~~
jermaustin1
If I were to do this all over again, I don't think I would do it this way at
all.

I would use something like Caddy, which manages its own config and has an API
for live updates.

SQLite for the user management and administration.

I would possibly have 2 Caddy nodes though, with a LB between them (instead
of my potentially flawed heartbeat/floating-IP mechanism). The application
would do an atomic update to both Caddy servers; if either failed, it would
roll the update back.
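
Each Caddy node exposes an admin API (localhost:2019 by default), so the
update would just be a POST of the full config to each node, which Caddy
applies atomically or rejects. Roughly, assuming the admin endpoint is
reachable from the app server:

    # Push the new config; repeat for caddy2, re-POST the old one on failure
    curl -X POST "http://caddy1:2019/load" \
      -H "Content-Type: application/json" \
      -d @new-config.json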

But you are still looking at around $30-40/mo for hosting, so not much money
saved, but it would be much simpler and save headaches.

------
jmuguy
Do you mind sharing some of the use cases for the service? Like why would a
small business use something like this?

~~~
jermaustin1
It can be for SEO, or just maintaining old links and pointing them to their
new location.

Let's say you are trying to capture keywords in a domain and path for luxury
highrise apartments in NYC's Tribeca neighborhood. You might buy the domain
newyorkluxuryhighriseapartments.com and point the link
/tribeca-luxury-apartments/ to a highrise you have in Tribeca.

Or maybe the domain you have wanted forever finally went up for grabs; you
buy it and move your blog over to it, but now all of the links to your old
domain are broken. Now you need to forward the old requests to the new blog.

Or let's say you are a company that has bought another company and folded all
of their content into your website under different URL structures. You now
need to point their old URLs to the new blog posts/product pages/knowledge
base/etc.
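
In Caddyfile terms, the first example would be roughly (target site made
up):

    newyorkluxuryhighriseapartments.com {
        redir /tribeca-luxury-apartments/ https://example-tribeca-tower.com/ permanent
    }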

