Most VPS hosters "share" CPU time between users on a host server (which means CPU type/speed doesn't tell you much).
So it would be great if you could filter for that on serverhunter to paint a fair(er) picture, because a few hosters give dedicated cores by default.
When we ran benchmarks on the Hetzner Cloud, we found that some servers got as little as 10% of a CPU on a crowded host server, which explains why they charge 8 times more for dedicated CPU cores.
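For anyone who wants to reproduce this kind of measurement, here's a minimal sketch (not the benchmark the commenter used): time how much CPU a single busy thread actually receives per wall-clock second. Roughly 1.0 means an idle core, while ~0.1 would match the "10% max CPU" case.

```python
import time

def cpu_share_estimate(seconds=1.0):
    """Estimate the CPU share one busy thread actually receives per
    wall-clock second (~1.0 on an idle core, ~0.1 on a crowded host)."""
    wall_start = time.monotonic()
    cpu_start = time.process_time()
    while time.monotonic() - wall_start < seconds:
        pass  # busy-wait; the hypervisor decides how often we actually run
    wall = time.monotonic() - wall_start
    cpu = time.process_time() - cpu_start
    return cpu / wall
```

Run it a few times at different hours; on a shared host the result can vary a lot depending on the neighbors.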
That makes sense! Unfortunately, it'll be hard to obtain benchmarks for every plan with every provider, and it'll be even harder to get enough benchmarks with each provider to paint an accurate picture.
Even with the same provider, there can be massive differences between VPSs: it just depends on whether you end up on an idle host node with no noisy neighbors or one where everyone is busy mining their favorite cryptocurrency.
Regardless, we do understand the need for accurate performance numbers, so this is something we will explore further in the near future.
Yes, it's safe to assume every VPS has shared cores. I think we'll either add a "Hybrid/VDS" product type or a "Dedicated resources" feature in the near future. :)
I'm running asterisk on a ramnode 256MB RAM "OpenVZ SSD VPS (SVZ)" system for $28.80 a year, and it serves home phone for myself and my in-laws. I destroyed it last year because I dist-upgraded to an unsupported version of Debian and man, was it unsupported. Other than that, though, my application has no problems with being on a shared system.
I would strongly recommend against using any OpenVZ VMs. You're forced to use the same kernel as the VM host, and if you want to enable any kernel modules (e.g. TUN, TAP), your host needs to do it for you.
Hetzner's dedicated CPU price is only ~2x, not 8x.
You need to compare the correct hardware: the CX31 with the CCX11. Both have 2 vCPUs, 8 GB RAM, 80 GB NVMe flash and 20 TB of traffic.
The CX31 costs €10.59 and the CCX11 €23.68.
They are cheap and good.
We compared the lowest plan because higher plans usually have to provide a higher share due to disk and RAM resources being limited when split up in higher amounts.
And yes, they are cheap and very reliable; we use them a lot.
If you distinguish broken and non-broken IPv4, maybe you should also distinguish broken and non-broken IPv6? Currently, it's impossible to distinguish offers with a proper /48 from broken crap with a single /64 or even less, which makes the comparison mostly useless for me, as there is a lot of broken crap in that market to wade through.
Since IPv4 space is starting to become a significant cost, they make up a large percentage of the price for the cheapest VPSs. This is why some providers have started offering NAT VPSs - ideal for backups, monitoring, etc. Since NAT IPv4 is not the norm, this is something they clearly advertise on their website, and hence something we can easily distinguish.
With IPv6, the information from the provider is usually limited to "IPv6: yes/no", if they even mention it. The actual IPv6 space varies wildly from /48 to /56 and /64 - some providers even offer just tunnels or 10 IPv6 addresses. Since this is usually not mentioned on their website, we can't actually distinguish between them either. :(
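To make the gap between those allocations concrete, the stdlib `ipaddress` module can count how many routable /64 networks each assignment contains (the prefixes below are documentation examples, not any provider's actual ranges):

```python
import ipaddress

def count_64s(prefix: str) -> int:
    """Number of /64 networks contained in an IPv6 assignment."""
    net = ipaddress.ip_network(prefix)
    return 2 ** (64 - net.prefixlen)

# A proper /48 contains 65,536 /64 networks; a bare /64 is just one,
# and "10 IPv6 addresses" isn't even a subnet you can route.
count_64s("2001:db8::/48")  # 65536
count_64s("2001:db8::/56")  # 256
count_64s("2001:db8::/64")  # 1
```

That four-orders-of-magnitude spread is exactly why "IPv6: yes" alone says so little.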
Well, I see the problem, but then, a search site that doesn't allow you to use effective filters is ... kinda useless?
Though I think there is a more important thing to note here: Maybe you should explicitly support handling of unknown values in the UI? Failure to do so is IMO one of the biggest problems of most specialized search sites.
Sure, there are attributes that cannot always be determined reliably if at all--but instead of either ignoring the attribute completely or forcing it to "false" or "true", or some other explicit value, why not allow the user to specify whether entries for which the attribute is unknown should be shown? That way, if my requirement is proper IPv6 support, for example, I can decide whether I am in the mood to wade through tons of crap, but still have you filter out all entries for which it is known that proper IPv6 is not supported, or whether I want to ignore those and only look at the (probably more expensive) selection of providers that clearly advertise that their product isn't broken.
I understand what you're saying, but for the vast majority of people, just knowing whether it has IPv6 available or not is enough information. We do have to strike the right balance with the UI between power users like yourselves and people less familiar with hosting.
Right now we do support unknown attributes, they show up unless explicitly filtered out. For example, a server might have an unknown CPU speed, in which case it will show up in the results unless a CPU speed filter is applied.
Just to make sure we're on the same page, you'd like to see the size of the IPv6 range assigned to the server, correct? Is there any more information you'd like about the IPv6 support?
> I understand what you're saying, but for the vast majority of people, just knowing whether it has IPv6 available or not is enough information.
Well, that's somewhat of a chicken-and-egg problem, though. It's not that those broken networks are necessarily good enough for the purpose people use them for; it's that no one is even aware the network setup is broken and that that is the problem they should be fixing.
> Right now we do support unknown attributes, they show up unless explicitly filtered out. For example, a server might have an unknown CPU speed, in which case it will show up in the results unless a CPU speed filter is applied.
That's exactly what I think is broken, in that unknown values are not handled explicitly in the UI, but forced to an explicit value. When I specify "I need at least 2 GHz", say, that says nothing about whether I want to see servers for which you don't know the CPU speed, as such a server very well might have a 5 GHz CPU. All it says is that I don't want to see servers for which you know that the speed is less than 2 GHz. So, you shouldn't make assumptions about whether I want to see those servers, but rather let me choose what to do with them.
Now, I don't know whether this is important for clock speed, but often there are cases where there is a particular attribute that is known for many products to have a value that doesn't fit my needs, so it would be great if you could say "not any products where this attribute has value X" ... except that suddenly half of the remaining candidates vanish, too, because that attribute happens to be unknown for those, which then means you have to do all the filtering by hand after all, instead of using the search/comparison site to weed out 90% of the candidates and then investigating the remaining 10% by hand.
> Just to make sure we're on the same page, you'd like to see the size of the IPv6 range assigned to the server, correct? Is there any more information you'd like about the IPv6 support?
Well, yes, prefix length (or number of addresses for offers that don't even have a prefix, I guess), and I guess whether that prefix can be routed completely to the server, as that is, as ryanlol noted, something many hosters fail to do correctly as well. If you can figure it out (or for providers that are willing to feed you machine-readable data), it would also be useful to know whether the reverse zone of the IPv6 prefix can be delegated.
This is very useful and way overdue, thanks. Being able to filter by BGP session support has already saved me hours of work this month (if any DigitalOcean sharks are reading: please match Vultr soon in BGP capability).
I trust you've shared it on LowEndTalk. Now: keep it up to date, not slammed with ads, moderated for accuracy, and profitable. Best of luck.
My main feature request would be the ability to filter out specific hosts that I have had bad experiences with or do not trust (perhaps an X by their username when browsing).
It would be nice to also incorporate ratings somehow. I chose some filters, then searched LowEndBox for the VPS provider that came up first, and there are some truly awful reviews. I recommend anyone check there first before purchasing.
We have reviews on our website but, unfortunately, most providers haven't received any reviews yet. We're hoping this will change as our website grows and we're working on getting more reviews on our website.
Do note that, as with most businesses, it's easy to get bad reviews and difficult to get good reviews. Make sure to check multiple sources of information to make an informed decision. :)
I like the idea, but there still seem to be some glitches: when I look for VPSs with at least 5 GB RAM, I get a lot of VPSs with 1.5 GB RAM (e.g. https://www.serverhunter.com/?search=D21-C9F-3C1).
Also, when you use the "reporting" feature, the report seems to get sent, but submitting it clears the "Issue description" field, which is then highlighted in red; at first this made me think something went wrong... It should probably just close the modal window after submission?
The idea of creating an index of server providers is good, but all implementations end up less-than-ideal, and in the worst case useless. The core problem here is data quality; the amount of noise is usually high:
* The server providers do not give out machine-readable data.
* It's difficult to extract some details from specifications, such as vCPUs versus dedicated cores.
* Promotions and discounts cannot be recognized programmatically.
* There's no reliable way to detect discontinued or withdrawn plans.
Ultimately, people will find the result of their search is either withdrawn or unusable for some reason, and they still have to purchase the server from a well-known provider... And lurking on lowendtalk.com is still a better way to find a good deal.
Improving the quality of data sounds like a good machine-learning project.
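Before reaching for machine learning, a lot of this normalization is plain pattern matching over free-text specs. A hedged sketch (the regex and field names are illustrative, not Server Hunter's actual pipeline):

```python
import re

# Hypothetical normalizer for free-text plan specs scraped from a
# provider page; real pages are far messier than this pattern covers.
SPEC_RE = re.compile(
    r"(?P<count>\d+)\s*(?P<kind>vCPU|dedicated core)s?.*?"
    r"(?P<ram>\d+(?:\.\d+)?)\s*G(?:i?B|ig)\s*RAM",
    re.IGNORECASE | re.DOTALL,
)

def parse_spec(text):
    m = SPEC_RE.search(text)
    if not m:
        return None  # noise we cannot classify; flag for manual review
    return {
        "cpus": int(m["count"]),
        "dedicated": "vcpu" not in m["kind"].lower(),
        "ram_gb": float(m["ram"]),
    }
```

The unclassifiable leftovers (`None` here) are where human review, or indeed ML, would have to take over.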
You're absolutely right, getting accurate data is one of the biggest challenges we're facing at the moment. I'd like to believe we're making great progress with it though. :)
Initially, we started off with static data together with a management panel for hosts to add/edit/delete their offers. As you might have guessed, most providers didn't feel like doing this, and those that did made mistakes entering the data or didn't keep it up to date.
This is why we've put most of our effort into automating the data feeds and getting up to date information. Right now, 60.3% of our offers are automatically retrieved from the provider and kept up to date every 24 hours.
Whenever a provider adds, edits or deletes an offer from their website, it's automatically synced to our website accordingly. We also try to do this for stock, but some providers do not expose this information before ordering.
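The add/edit/delete sync described above boils down to diffing the freshly fetched feed against the stored copy. A minimal sketch, assuming offers are keyed by a stable offer ID (the structure is illustrative, not Server Hunter's actual code):

```python
def diff_offers(remote: dict, local: dict):
    """Compare a provider's current offers (remote) against our stored
    copy (local). Keys are offer IDs, values are offer dicts."""
    added = {k: remote[k] for k in remote.keys() - local.keys()}
    removed = {k: local[k] for k in local.keys() - remote.keys()}
    changed = {k: remote[k] for k in remote.keys() & local.keys()
               if remote[k] != local[k]}
    return added, removed, changed
```

Run on a 24-hour schedule, `added` gets inserted, `removed` gets delisted, and `changed` (price bumps, spec edits) gets updated in place.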
While I'll freely admit we still have mistakes on our website, I hope that it's still useful in finding the right server for your needs, and appreciate your patience while we keep working on improving it every day. :)
P.S: We have deployed some small scale machine learning and are experimenting with it as we go, but since our knowledge in this field is lacking, progress will be slow.
Thanks for your detailed response. I sincerely hope your project will be successful, so I don't have to spend a week, looking at all the details such as BGP tables and available payment processors, just to find a usable server in the future ;-)
With only ~11,000 entries and what looks to be a pretty defined set of searchable fields, it should be a simple setup. I'd guess the search is powered by either Elasticsearch or Postgres/MySQL.
I think the more interesting part of the product is the procurement of available servers, for which I don't have any guesses beyond public APIs + web scraping.
What was interesting to me is the search ID, https://www.serverhunter.com/?search=157-857-0B2 for example. Each change results in a new search ID. If you toggle the same option on and off, the search ID is different every time. I originally thought it was a way to reference a unique combination of the facets but it seems to create a new search session for each query.
It looks to me like the default result set has 11k or so documents, which is very small. A couple of years ago I helped build a specialized vertical search engine that provided faceted search over a similarly-sized dataset. We used django on the front end with elasticsearch on the backend, and it was blazingly fast. I credit our talented django developer with the speed and fluidity of the front end, and I credit elasticsearch with providing amazingly fast aggregated results across that admittedly tiny set of documents.
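For readers unfamiliar with how faceted search maps onto Elasticsearch: a facet is just a `terms` aggregation attached to the filtered query, so one round trip returns both the matching documents and the per-facet counts. A sketch of such a request body (the field names `price`, `provider`, `virtualization` are illustrative):

```python
# Elasticsearch query DSL: filter by price, aggregate facet counts.
query = {
    "query": {"range": {"price": {"lte": 5}}},
    "aggs": {
        "by_provider": {"terms": {"field": "provider"}},
        "by_virtualization": {"terms": {"field": "virtualization"}},
    },
    "size": 25,  # page of results alongside the aggregations
}
```

Over ~11k documents, both the filter and the aggregations are effectively instant, which matches the "blazingly fast" experience described above.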
Doesn't Elasticsearch require a decent-sized server just to run, even with no data? Curious about the cost of running Elasticsearch for a small number of documents compared to the low cost of running Django. For example, the frontend could be static and hosted on S3 for pennies, while Elasticsearch seems to recommend at least 8 GB of RAM, which would be $40/mo with some low-cost VPS providers. Pretty big price difference for a smaller site with a small record set that wants a fast search experience.
I have Elasticsearch running on our development area with just 256 MB of RAM assigned. In production it's quite a bit more, but fortunately, we used Server Hunter to find some great deals for our own infrastructure as well ;)
Not yet! We've actually switched stacks and rewritten the backend a few times already, which might make for a great future story. I'll try to make some time after our upcoming server migration. :)
Sure, at the moment it's just LAMP with Elasticsearch to speed things up. We also have a list of all the libraries and services we use here: https://www.serverhunter.com/licenses/ :)
I am wondering who is providing the information? Vultr lists their "Dedicated" as 8 CPUs, when it is not dedicated but a cloud VM, which I presume is shared. And its CPUs are not cores but threads.
These differences make the comparison harder, because with dedicated servers you talk core count, while with cloud VMs you talk thread count (vCPU).
Sorry about that! They should all be cores rather than threads. In the case of Vultr, we get the information from their API: https://api.vultr.com/v1/plans/list_baremetal - "cpu_count":8
Usually, we rely on our own database for core count and speed, but Vultr doesn't actually mention the CPU model in their API either.
For now, I've hardcoded a fix in our backend and asked Vultr whether they can clear up their API in regards to cores vs threads.
Thanks for letting us know! :)
P.S: Every offer has a "Is some information missing or inaccurate? Let us know!" link at the bottom, in case you find more mistakes.
You didn't miss it, we're still trying to figure out the more complex pricing systems of providers like AWS/GCE/Azure. Just to give you an idea, the JSON file describing the EC2 pricing is 650+ MB. :)
We have added AWS Lightsail since the pricing is a bit more straightforward, and I'll double-check tomorrow whether there's anything from GCE and Azure we can add.
Just got a reply (within 1 minute!): "I'm afraid this isn't something we offer any more - In order to install windows, you will need to provide your own ISO with VirtIO drivers included."
Are the sliders keyboard accessible? I use Jaws for Windows as my screen reading software and am unable to adjust the price slider in either Firefox or Chrome.
We have plans to do some more comprehensive accessibility testing in the near future, but in the meantime, please let me know if you come across any other accessibility issues.
Just gave this another look and the sliders do work with my screen reading software, which is an improvement. As far as I can tell, the Page Up and Page Down keys don't move in larger increments, and Home and End don't jump to the minimum or maximum either. Because of this, when looking for servers priced between $0 and $5, I had to hit the left arrow a bunch of times. Ideally I could have gone to the maximum price slider, hit Home to set it to 0, then hit the right arrow several times to find my maximum price. I assume this is behavior of the noUiSlider framework and not code you have written.
I'm very sorry to hear this, we use noUiSlider, which mentions "Accessible with ARIA and keyboard support" as one of their features, but it looks like our implementation might have broken it.
I've added this as a high priority issue and will update you when I make any progress. I'll also see if we can improve our accessibility testing to avoid this in the future.
Thank you! Unfortunately, I haven't found many providers offering this, but I'll look into adding it for the ones that do.
In the meantime, you could ping their domain name to get a rough idea (if they don't use CloudFlare or block ICMP) and contact their sales team to see if they can hook you up with something more accurate.
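When ICMP is blocked, timing a TCP handshake against any open port gives a comparable rough estimate. A small sketch (the host and port are whatever the provider happens to expose, e.g. their website on 443):

```python
import socket
import time

def tcp_rtt_ms(host, port=443, timeout=3.0):
    """Rough latency estimate from TCP handshake time, in milliseconds.
    Works even when ICMP ping is blocked, as long as the port is open."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; closing it is all we need
    return (time.monotonic() - start) * 1000
```

Note the same CDN caveat applies: if the hostname resolves to CloudFlare, you're measuring the distance to the nearest edge, not to the host node.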
Question for hosting and DB experts: should one go for SSD or HDD when hosting? I'd think HDD is better because you can anticipate failure using SMART; SSD has the advantage of loading data faster.
On that note, are database systems now able to make better use of SSDs? Most optimizations centered around contiguous storage in the HDD era.
You'd always want some kind of RAID mirroring with off-site backups, regardless of HDD vs SSD. If you take care of that, you can safely go with SSD. :)
Our ads won't always show up or be the first row, you could simply use .advertisement { display: none } for now.
Do note that our advertisements are not served via a third-party provider, there's no tracking, no chances of malware or rogue JS and they're actual offers from our website based on your search filters.
It would be appreciated if you kept them, but in the end, that's entirely up to you. :)