Scaleway ARMv8 Cloud Servers (online.net)
171 points by edouardb on Apr 27, 2017 | 129 comments



Scaleway can "disrupt" the industry as much as they want. As long as they don't have a solid product, it's useless for production use (backups, weird storage system, unpredictable network, etc.).

They should just offer cheap Xeon D VMs and not this ARM stuff. The CPU is way too slow and the ratio is completely off: 2 of these ARM cores for 2 GB of RAM? Even a shared Xeon or Xeon D core offers better performance than those ARM CPUs.


I've been using C1 in production since the beginning, along with a bunch of others. Their network is OK, no worse and no better than anyone else's. Their weird storage is actually quite nice; nobody else offers anything similar in terms of space and performance in that price range. And I don't understand the point about backups: I haven't used a hosting provider's backup feature in fifteen years, it's kind of a vendor lock-in feature.

And these new ARM CPUs could satisfy performance needs of like 99% of projects.


That's just not true.

- The network is worse than the competition for sure; just benchmark it and you will see. And if you benchmark it over a longer time period you will see that it's very unpredictable. If you only need 5 Mbit/s all the time, sure, it's the same as anyone else's. The good thing is that you don't pay for traffic, but I would rather have a stable system/network. Also, the AMS location has a bad carrier mix, which is why they route a lot of the traffic through Paris, and that adds latency.

- The storage system is a joke. Max 150 GB per volume, and it's very slow depending on where your VM lands. If you only look at the price, sure, it's quite good, but that doesn't make it production-ready.

- How is an additional automated backup system vendor lock-in? It's quite nice to restore backups with a click; how is that a bad thing? You could still do your own backups, but at least having the option would be great.

- No, the ARM CPUs don't satisfy 99% of projects, because most web projects need single thread performance and that's exactly where ARM CPUs are garbage.


I can confirm this. That's my experience as well. I run an uptime monitoring tool[1], and the node running on Scaleway is the most unreliable one due to network issues. Roughly once every two weeks the instance disconnects from the internet completely for 3-5 minutes.

They are also very slow at stopping instances. Mine takes hours just to stop.

--- [1] noty.im


> most web projects need single thread performance

What makes you say that? Near as I can tell, most web projects need I/O more than anything else. You're waiting on one end or another of a socket, a disk device, or a memory bus way more than you're doing anything chewy.

(I agree with you that Scaleway isn't serious, however.)


I/O is important, I agree. RAM is important as well, and both are probably more important than the CPU.

But as soon as the CPU comes into play, proper single-thread performance is in most cases better than multiple slow threads. And if you have the choice, why would you choose ARM cores when you can have better single-thread performance?


You are generalizing a lot and citing no sources. You claim these are unsuitable in every case. Please provide concrete examples, links to benchmarks, and other reasons to believe what you say.

I ask because you appear very biased and your experience contradicts mine. You could be right, though. If you are, I would want to know, because I might make mistakes if you don't correct me in an objective way.


I think your instincts are correct. I use Scaleway for clusters of mediocrity feeding fat clients, and to feed Cloudflare for vanity sites.

Needing the fastest CPU, great networks, and accepting backup lock-in are examples of laziness in design. Sourcing commodity materials and already having a few nodes in every role is my preferred bet. Being able to upgrade node performance by 10 times if scaling plans fail won't hurt either.


I agree with you, though I disagree with one tiny part. I agree with you entirely on any sort of lock-in; lock-in is bad and constrains future options. Trading known cost up front for unknown cost in the future should generally be seen as a business risk.

The tiny place where I disagree is that some tasks might take unbounded CPU/GPU time and network bandwidth. Your task clearly does not and I agree your workload looks like a common workload.

When working for the Air Force Weather Wing, it seemed clear that our decision to use the super computers we used was a budgetary decision and that is what bounded the size of the data we fed into our systems. Because we generated forecasts we had tight time constraints to meet, a 72 hour forecast that takes 100 hours to make is useless.

Weather modeling and forecasting is still advancing, and even the latest models are not stable simulations (small fluctuations in inputs can cause large fluctuations in output), which makes the most nuanced data possible desirable. While most commercial systems used 25km polygons covering the earth, ours used 17km polygons when I left. This meant that we had more precise sims and could accurately model further into the future, perhaps by several days. Even faster CPUs and networking might lead to a drop to 8-11km polygons, allowing even better predictions. Ideally the meteorologists want to skip the rounding to regional polygons and use the parcels-of-air model that local forecasters use for the entire world. Presuming Moore's law holds up and weather modeling is amenable to newer GPUs, I do not see weather modeling backing away from needing the fastest hardware money can buy in the foreseeable future.


Scaleway customer here. I read a review on HN a while back that got me to buy into their service. While on the surface, it seems like a great value, there are inconsistencies I cannot reconcile that have kept me on DigitalOcean and GCP since. For example, there's quite a bit of funny business going on in all of their stock kernels, so much so that you can't even run things like Docker or Minikube. If you try and install your own kernel mods, you'll find that an uphill battle as well.


I'm running a fully functional Kubernetes cluster on VC1S servers (x86) using this guide[1].

In case you're stuck compiling kernel modules or running Docker, check out how it's done in the provisioner[2] that comes with the guide.

[1] https://github.com/hobby-kube/guide [2] https://github.com/hobby-kube/provisioning

Disclaimer: I'm the author of this project


Mandatory read:

https://thehftguy.com/2016/11/01/docker-in-production-an-his...

https://thehftguy.com/2017/02/23/docker-in-production-an-upd...

If you have issues with kernels and Docker, it's not just you, and it's not entirely the fault of the provider. ;)


He has some good points, but...

>All CI pipelines in the world which rely on docker setup/update or a system setup/update are broken. It is impossible to run a system update or upgrade on an existing system. It’s impossible to create a new system and install docker on it.

If the loss of package repos counts as a critical outage for you, you should be running your own package mirrors. Especially if you're not paying for commercial support and don't have anyone to escalate issues to.


The package repo is not lost. It is online and serving packages with corrupted signatures.

That's a crypto error interpretable as an active MITM attack on the repo, which causes apt-get and its subcommands to terminate abruptly with a critical error.

Last but not least, the repository is mirrored and cached, the mirrors simply reproduced the corrupted source repo :D

It is not a mere case of offline repository, though I can understand the confusion ;)


I'm well aware of how traditional "dumb" mirrors work (e.g. ftpsync), and those would indeed just mirror the packages with their incorrect signatures. But for an internal network, surely you'd want something more like aptly[0] so you can roll back to a working version?

[0] https://www.aptly.info/doc/overview/
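
For reference, the basic aptly flow is only a few commands (the repo name and upstream URL here are illustrative):

    # mirror upstream, snapshot it, publish the snapshot to your clients
    aptly mirror create docker https://apt.example.com/repo ubuntu-xenial main
    aptly mirror update docker
    aptly snapshot create docker-20170427 from mirror docker
    aptly publish snapshot docker-20170427
    # if an upstream update is broken, keep serving the last good snapshot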


aptly 1.0.0 => released on 27 March, exactly one month ago.

First commit done in 2014.

It's funny how people always mention workarounds that were non-existent or non-viable at the time of the issue. ;)

I have worked on mirrors a few times in my career. I am sadly well aware that even the best mirroring solutions are rather poor. Not gonna argue that they have flaws. Not gonna argue that we wish for better.


I suspect it was usable long before the 1.0 release - v0.1 from 2014 appears to have had repo snapshot support: https://github.com/smira/aptly/tree/3f82edf5d6c50ed6988297a4...

Nonetheless, it probably wasn't nearly as stable, and certainly wasn't as well-known then. So I'll concede that it may not have been viable at that point.


Honestly, it's reasonable to live with an apt outage one day a year. Any workaround that adds critical components to the distribution chain is guaranteed to make it worse.

The Docker crypto fuckup was quite peculiar. The distribution pipeline can't handle that. It also broke their other repos (including Ubuntu) and propagated to the mirrors.


I run a C1 instance with Docker, and there weren't any immediate issues with stock kernels for me (I think I've switched between at least 4-5 during the year). Except when a systemd upgrade broke things and I had to reconnect the drive, but that wasn't Docker-related. However, I haven't had anything fancy there.

(Docker multiarch support - or better say lack of one - is still a complete mess though. But that's also not a Scaleway issue.)

There are tricks to run your own kernel there (via kexec); for Arch it's neatly packaged here: https://github.com/stuffo/scaleway-archkernel (but new servers don't have Arch as an option)
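
The core of those tricks is just kexec; a minimal sketch, assuming your kernel and initrd are already on disk (paths are illustrative):

    # load the custom kernel with the current command line, then jump into it
    apt-get install kexec-tools
    kexec -l /boot/vmlinuz-custom --initrd=/boot/initrd.img-custom \
          --command-line="$(cat /proc/cmdline)"
    kexec -e    # boots the loaded kernel without the vendor boot path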

Still, it's all terribly inconvenient. Possibly fun to mess with once, DIY/learning-style... but if you just want a server that works (and works in a way you want things to be, not how vendor had set it up) it could be sort of unpleasant.


It's very unpleasant, considering that half of their GitHub scripts, including the one you listed, in my experience just flat out don't run as intended anymore.


I've never had issues running docker on their instances. Was that on ARM or x86? I've only used the latter so far.


scaleway customer since 09/2017. It is true that the CPU is slow: apt-get update/upgrade is about 10 times slower than on my desktop. The SSD is not very fast. There are network slowdowns sometimes, but it is generally a lot faster than my home access. I use it for hosting a couple of personal sites. My Apache site is faster than cheap entry-level shared hosting. My node.js site is blazingly fast (most data is cached in memory and disk accesses are mostly async).

I also use it as a proxy to work around an enterprise internet filter (the Blue Coat filter blocks archive.org).

With a Bluetooth keyboard, an ssh client and tmux, my phone can be used to maintain my servers from anywhere.

It is really dirt cheap compared to the service it gives.


Worth pointing out that these are new ARM processors, potentially 3x faster than the (very) old ARM processors they were using before.

I still think they should've gone with Qualcomm's Centriq 2400 [1] platform, though, which I think will be significantly ahead in performance and perf/W than either Cavium or APM's processors. I guess they went for that sweet price/performance value, but I think in ARM's case, it's worth going with the best there is, if nothing else to not continue the "too slow" stigma that ARM gets in servers, which can only hurt any ARM-based vendor.

In my view, it would be better to offer similar or better performance than Xeon D (or other Xeons, depending on the number of cores) for 20% less than to offer half the performance for 50-60% less.

[1] https://www.qualcomm.com/news/onq/2016/12/07/meet-qualcomm-c...


> scaleway customer since 09/2017

Traveller from the future!


typo: 09/2015


> They should just offer cheap Xeon D VMs

They do, they call them "Workload Intensive", and they start at 25 euros for 6 cores and 8^H [edit: 15] gigs of RAM. https://www.scaleway.com/pricing/ That said, I have no idea how they perform in practice.


I have a $3 ARMv7 Scaleway server, and I can vouch for the unlimited bandwidth. I've run tens of terabytes (in a single month) through it as a torrent mirror for various Linux ISO images.

The disk speed is a bit slow, and 200 Mbit/s is a bit low, but I still wish they had a US data center so I could use them as a fail-over host.


Great pricing, but literally no customer support. Weeks ago I wanted to change my billing address. I was surprised not to find the option within my account settings, so I asked customer support to do it, and they replied that they simply can't.

I can only close my current account and then recreate it with my new address, and of course there are no migration tools.


In my case, customer support was very helpful a few months ago when I changed my billing address.


There are a lot of comments in here about the CPU/RAM, but I think the important thing (at least to me) is that you can get 200 Mbit/s for 3 EUR/mo. Saturated, that's 65 TB/mo, which works out to $0.00005/GB. Most of the big clouds and CDNs charge on the order of $0.08/GB. Sure, you can get unlimited bandwidth on some of the bigger dedicated server packages, but not for 3 EUR/mo.
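
(For anyone checking the math: 200 Mbit/s is 25 MB/s, and a month has roughly 2.6 million seconds, so 25 MB/s × 2.6M s ≈ 65 TB; 3 EUR / 65,000 GB ≈ 0.00005 per GB.)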

And yes, all networks are not equal. 65TB on a shitty network isn't worth as much if it has 800ms latency or something like that but at least there's an option.


There is a pretty great comparison of major VPS providers that looks at the networks. Scaleway's network wasn't great, but very tolerable.

https://www.webstack.de/blog/e/cloud-hosting-provider-compar...


The comparison overall is very good, but ultimately not very useful to me, since the test only covers Germany (which makes sense, since it's a German company serving German customers doing the comparison). The providers vary a lot when it comes to transatlantic networking, and their performance within Europe is usually very different.

I've been testing VPNs with various combinations of source -> VPN/no-VPN -> destination, where source, VPN and destination are any combination of US, Europe and Asia, and I've found that this is really variable. So much so that there are cases where you can actually improve bandwidth/latency by using a VPN, because of the network on the other side.


I am super excited for this. I've been running an ARMv7 server on Scaleway almost since they were released.

However, I don't see any news that the new ARMv8 servers will support native IPv6.

This is a big limitation: if you want to be IPv6-accessible now, you have to run a 6in4 tunnel. The x86 servers (physical and VM) from Scaleway all support native IPv6. The explanation so far was that the ARM backplane doesn't support IPv6.
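
For anyone curious, the tunnel itself is only a few commands; a minimal sketch with placeholder endpoints, assuming a tunnel broker on the other side:

    # 6in4: wrap IPv6 packets in IPv4 toward the broker
    ip tunnel add he6 mode sit remote <broker-ipv4> local <server-ipv4> ttl 255
    ip link set he6 up
    ip -6 addr add 2001:db8::2/64 dev he6
    ip -6 route add default dev he6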

So will these new ARMv8 servers support native IPv6? Or are they built on the same backplane as the ARMv7 servers?

Edit: It's supported, I can't read.


They actually say in the article that these new servers have native IPv6 support.


Thanks for pointing that out, I totally missed that!

Their main page still doesn't mention this though. [0]

[0] https://www.scaleway.com/pricing/


When creating a new ARMv8 server, IPv6 is off by default. You just need to tick it and that's it.


And kill the IPv4 address (if you don't need it) and save 1€ per month.


Too bad all they offer is a single IP, or a /127 to be exact. Very strange decision.


Is there any specific reason that you went with the ARM server, rather than an x86? They honestly don't seem that much cheaper than, say, VMs from other cloud providers, and I don't imagine they're able to beat an x86 in terms of performance.


> Is there any specific reason that you went with the ARM server, rather than an x86?

1. I work a lot with ARM in my personal and professional life, so I actually like that Scaleway is offering ARM servers because it means I can take armhf binaries I build at home and run them in Scaleway

2. I run Linux on the server, so I'm not really impacted by the fact it's ARM. All the packages I need are available for ARM. I'm not running any legacy software that would require x86.

> I don't imagine that they're able to beat an x86 in terms of performance.

It's primarily a web server, so I don't give a hoot about performance. I care more about the unlimited bandwidth Scaleway offers, even though I don't consume that much. For 3 EUR per month it's perfect for me.

VPS providers usually overprovision (unless they really suck at being VPS providers), which means you might have a great time or a terrible time depending on your neighbours' resource utilisation. Being the only tenant means that you aren't impacted by what other people do, and as I said, it's 3 EUR/month, which is fantastic for running a personal website.


They do have IPv6 - I just checked the admin dashboard.


What advantages are you hoping to get from IPv6? It's in a broken state right now and will be for years to come.


Most providers will give you a /64 IPv6 subnet, so you can assign services distinct IPv6 addresses.
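
A minimal sketch of what that looks like (2001:db8::/64 is the documentation prefix; substitute the one you're delegated):

    # give a service its own address out of the /64
    ip -6 addr add 2001:db8::10/64 dev eth0
    # then bind the service to it, e.g. in nginx: listen [2001:db8::10]:443;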

Apart from that, I'd like people on IPv6 connections to be able to access my server directly. There are a lot of providers using CGNAT for IPv4 connections in Asia and Europe today due to address exhaustion, and I've had bad experiences with people behind CGNAT.

Plus it's 20-freaking-17. Why aren't they offering IPv6?


Each container having its own public ipv6 address or each customer's requests coming off of a different bindhost


Can you elaborate on what constitutes "broken"?


Sure, how about routes not working, or being very, very slow, via IPv6? Just an example I had a problem with recently: DTAG (the largest German carrier) had no route at all to a DC in the Netherlands via IPv6. I had no way to reach DTAG from this specific DC, which means basically most Germans. It took days (!) to get sorted.

The next example is Cogent: they had huge packet loss via IPv6 for days (we are speaking 50%+). After a ticket was opened it was fixed very fast, but it seems no one cares about IPv6 as long as they can put a stamp on it saying "We support IPv6". Most people who shout for IPv6 are not even using it, so I am always a bit sceptical.


Can anyone explain to an outsider why this is disruptive? Are these significantly faster than the old x64 offerings? A lot cheaper?

For that matter, has anyone done a real-world armv8 vs. x64 benchmark?

--

edit: would like to test them myself, unfortunately they only allow payment through a "physical" credit card: https://www.scaleway.com/faq/billing/#-What-forms-of-payment...


All I can say right now is that it's dirt cheap for bare metal. Scaleway was already the cheapest option for bare metal and this looks like a significant performance bump.


I was fooled as well. Apparently these are not bare metal. "Cloud server" is a misleading way of saying "VPS".


“Cloud server” means both bare metal and VPS at Scaleway.


It says "C1 - A true metal ARM server running in the cloud." on their product page.


They offer cloud server and bare metal.


[deleted]

Edit: Appears to be a cloud server, aka VPS, so not much to see here. Removed the now-erroneous comment. I had mentioned OVH's Kimsufi, which does have cheap (though not as cheap as these "cloud servers") actual dedicated boxes: http://kimsufi.com/


Scaleway is owned by Iliad. Iliad also owns Online.net. Online.net is Kimsufi's (OVH) main competitor:

- €8.99/mo: Intel Avoton C2350 (2x 1.7 Ghz), 4GB ECC RAM, 1x 1TB HDD or 1x 120GB SSD. 1Gbit/s unlimited.

- €15.99/mo: Intel Avoton C2750 (8x 2.4 Ghz), 16GB ECC RAM, 1x 1TB HDD or 1x 250GB SSD. 2.5Gbit/s unlimited.

I have one of the latter ones, and it's amazing. The only problem I have with it is that the bandwidth between Online.net and my ISP in Denmark goes through NL-IX, and something there doesn't jive. Even though the server can easily hit the 2Gbit/s mark, I'm barely hitting 3Mbit/s from home due to the NL-IX pipe being overloaded.

I'm using a proxy at OVH (Kimsufi), which at least gives me 100Mbit/s (the peering to OVH goes through HE).

Edit: If you're wondering why there are so many HDD servers available vs the SSD ones, it's because they have been hit by a fan-of-death issue. They have fans in their PSUs that vibrate so much (or maybe hit the resonance frequency of the racks) that they prevent the HDDs from spinning up. Online still "sells" those broken racks, even though dozens if not more servers are impacted by this. The problem has been ongoing since November 2016 [1].

[1]: https://status.online.net/index.php?do=details&task_id=720


Care to point out where you are getting 2.5Gbit/s? https://www.online.net/en/dedicated-server/dedibox-xc only lists 1Gbit/s. A 10Gbit/s card is still awfully expensive.


They are 2.5Gbit/s interfaces, as indicated on the page you linked. You get 1Gbit/s "guaranteed" bandwidth. Anything on top is just the cherry.

https://imagebin.ca/v/3KXVRWYjcZBw



Interesting. Thanks for the heads up. I've had the 8.99 offering for a while now and haven't had any issues... yet.


The servers in the announcement are not dedicated, but virtual. The CPU is the Cavium ThunderX with 48 cores.


Urgh. Missed that. "Cloud Server" is an unfortunate moniker. If this is just a VPS, then there's not much to see here.


Not much to see? If you like ARM, AArch64 VPSes are very exciting!


I've played around with it. These CPUs are fast enough (certainly faster than Heroku and the like). However, Scaleway still does not provide automated disk backups, and a manual snapshot requires you to shut down the instance first, so I went back to Linode.


> However, Scaleway still does not provide automated disk backups

Does anyone offer that at this price point? Scaleway is ridiculously cheap. Lack of backups is par for the course for these low cost providers.

> and manual snapshot requires you to shut down the instance first

Yup, that does suck, and I wish they would fix it.


Is running a simple rsync cron backup really that tough?

Sure I understand it's suboptimal but I've always had pretty good luck with it.
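
For reference, the whole thing can be a single crontab line (host and paths are placeholders):

    # root's crontab: nightly incremental push to an offsite box at 03:00
    0 3 * * * rsync -az --delete /srv/ backup@backup.example.com:/backups/myserver/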


> Sure I understand it's suboptimal but I've always had pretty good luck with it.

"Luck" is not a feature of a backup system.


I believe Digital Ocean offers this type of backup. I know I'm paying for something backup-related to that effect.


Is ZFS available for Linux on ARM? You can take snapshots and use zfs send/receive for zero-downtime, consistent backups.
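
The workflow, sketched (pool/dataset names are made up):

    # snapshots are atomic, so the copy is consistent without stopping anything
    zfs snapshot tank/data@2017-04-27
    zfs send tank/data@2017-04-27 | ssh backup-host zfs receive -u backup/data
    # later runs only ship the delta between two snapshots
    zfs send -i tank/data@2017-04-27 tank/data@2017-04-28 | ssh backup-host zfs receive -u backup/data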


I'm not sure about ZFS (and it could be complicated - there is a problem with Scaleway servers in that they use their own kernels, so you have to mess with kexec if you want to run your own), but one can certainly do the same with LVM or even "bare" device-mapper.

It's non-trivial, though. At least if one hasn't dealt with that stuff before.
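
A rough sketch of the LVM variant (VG/LV names are made up; the VG needs some spare extents):

    # reserve 1G of copy-on-write space for a point-in-time view of the volume
    lvcreate --size 1G --snapshot --name rootsnap /dev/vg0/root
    mount -o ro /dev/vg0/rootsnap /mnt/snap
    rsync -a /mnt/snap/ backup@backup-host:/backups/myserver/
    umount /mnt/snap && lvremove -f /dev/vg0/rootsnap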


It's apparently supported (so says zfsonlinux.org), but I wouldn't use it for anything you care about without ECC RAM -- I'm willing to bet these have cheapo controllers which don't support that, though...


Well, I would be more hesitant to use some other file system without ECC RAM than ZFS. However, "these cloud servers are backed by Cavium ThunderX SoCs, high end processors designed for the cloud, and DDR4 ECC memory," says the article. Nevertheless, I would not consider ZFS on these to be "production ready" except for very small values of "ready."


Let's challenge ourselves to not "disrupt", or "make the world a better place" in 2017. I'm trying to stem the flow of ribbing we will get 20 years down the line from our peers for using these types of marketing tropes.


You can add 'beautiful' to your list.


And "modern". And "blazing fast".

"X is modern, blazing fast..."

How about we bar marketers from using adjectives altogether?!


There is no ImageHub for the new ARM servers - the only images offered are Ubuntu and Debian Jessie, i.e. no ready-made Docker setup.

Maybe this is obvious, but the new servers can't run Raspberry Pi 3 Ubuntu binaries. The old ARM servers also have issues with that, because the Scaleway ARMv7 CPUs lack features provided by the Raspi 3 ARM CPUs (e.g. they lack NEON).

Summary: This is not a hosted Raspi replacement :-) But still quite nice.


Full-stack dev here with almost no real hardware knowledge.

Why would you want to use ARM instead of X86 on a server like this? The architecture switch comes with dependency headaches -- what counterbalances those to make it worth it?


At the moment you almost certainly would not want to use them. The production volumes are currently far too small to say whether ARM servers will end up being much cheaper or better in other ways than Intel-based servers.

These particular servers are based on the Cavium ThunderX, which has (up to) 48 cores per socket and usually two sockets per server. Each individual core is rather slow. The servers themselves are not noticeably cheaper than x86-based servers. As noted above, it's very early days.

There are some more interesting servers coming along. In particular Qualcomm have a 48 core ARM server SoC which has great performance per core, and should ship in a few months. (They are already shipping final silicon to partners and a few developers)

What could make the difference would be:

- ARM partners currently ship around 40x as many cores annually as Intel. Of course the vast majority are 32-bit phone/tablet cores, but that kind of volume might mean that future ARM servers become very cheap, or have an incredibly high density of cores.

- Multiple vendors are allowed to design ARM SoCs, so we may find that particular vendors optimize for more niche use cases, and if your use case happens to be that niche, that could be a win for you.

As an example of the last point: could your application use 16 Ethernet ports and some custom hardware to process SDN dataplane (e.g. OpenFlow) tables, all on a single system-on-chip? Good luck getting Intel to design such a chip for you. However, an ARM partner would be entirely free to develop such a chip, and maybe hardware like that will form an important part of telecoms/OpenStack cloud deployments in the future.


> The servers themselves are not noticably cheaper in price than x86-based servers

You mean given the rather slow cores? I can't think of any provider offering x86 bare metal at this price.


I mean if you buy the servers. This site has a selection of these Cavium ThunderX servers for sale with prices (in GBP): https://www.avantek.co.uk/store/arm-servers.html (Edit: Fixed the link)


It all comes down to performance per dollar.

Currently, ARM can't touch the performance of a mid- to high-end x86.

But ARM is extremely cheap and matches a solid low- to mid-range x86.

If top-end performance isn't needed, and you have the ability to recompile your software to run on ARM, you can't beat ARM pricing.

Just look at the OP link.

$12/mo for 8 cores.

For x86, you'd easily pay 10x that per month.


> $12/mo for 8 cores.
> For x86, you'd easily pay 10x that per month.

Scaleway offers baremetal x86 servers with 8 cores for 17.99€/month (C2M). [0] But they are based on Intel Atom, so the performance is quite low.

[0] https://www.scaleway.com/pricing/


I don't care if a VPS has 1 core or 1024 cores. What I care about is how much work it can get done.


> Why would you want to use ARM instead of X86 on a server like this?

For €12/month you get 8 (slower) cores and 8 GB of RAM. At a place like Hetzner roughly the same price gets you 2 (faster) cores and 4 GB of RAM. For some workloads, more slower cores and more RAM give better performance than fewer faster cores and less RAM.


If you don't do platform-dependent development (low-level[*] C/C++ or assembly development) then you're going to see pretty much no difference.

The main difference is that you get an 8-core machine with a 200GB SSD and 8GB of RAM for the price of a 1-core, 1GB RAM, 30GB SSD instance on DigitalOcean and similar. Scaleway is still developing some extra features[**]; for most hobbyist to medium-scale usage, I think it's going to be just fine (plus, the monetary savings).

It should be said that each of those 8 cores is likely to be individually slower than a usual x86 core, but you get 8 instead of 1 and they're fully yours: you don't have to share cpu time with other users, as it happens on pretty much all vps providers: you're renting a whole machine.

[*] = applications written in C/C++ will work flawlessly, as long as they don't exploit x86-specific features. If you're doing web stuff, you're unlikely to hit this spot.

[**] = see C14, an S3-like storage service.


> they're fully yours: you don't have to share cpu time with other users, as it happens on pretty much all vps providers: you're renting a whole machine.

That's not true with these. These are VPSes backed by a 48-core physical server. Scaleway does have an older bare-metal ARMv7 server offering though.


Anything that runs on a JIT is effectively platform dependent because the underlying JIT has to support the hardware - that includes Java, JS, etc.


At resin.io we keep looking for a solid ARM host, as we are currently emulating a ton of ARM builds on x86. I guess our use case is somewhat unique, but we still haven't found something that works for us.


Totally offtopic (sorry), but when I saw you mention "resin.io" I immediately thought of the Java appserver from Caucho, which was a pretty okay product back in the day; I assume you're not related?


Primary reason would be finally being rid of the Intel ME. For the truly paranoid.


I can't find a technical reason, but at least it is a way not to give in to Intel's monopoly on CPUs.


Cost.


Performance/TDP/Cost


    We currently offer Ubuntu 16.04 as base Operating System
I see a lot of providers offering Ubuntu these days. Is Ubuntu now more popular with providers than Debian? And why did it become so popular? Does it have any technical advantages over Debian, or is it because many developers have it on their local machines and therefore want the same OS on the server?


Just one data sample: every client I've worked for (we do contract development and consulting for [mostly] startups) over the past few years has deployed on Ubuntu. I can't remember encountering a Debian box since the late '90s. As to why? I don't know. Mind share, probably, and a load of "to make $X work, go get an Ubuntu machine...".

For our own in-house deployments we have been migrating from CentOS to Ubuntu, again "because everyone uses it" and because it seemed to offer the best chance of working on new hardware at a time when hardware was changing rapidly a few years ago (the introduction of USB3; Atom SOCs, etc).


Just guessing here: Ubuntu has more recent packages, since Debian is very conservative by design. Also, Ubuntu has a company backing it, which can be of great strategic use, as you might collaborate, cross-market, etc.


>Ubuntu has more recent packages since Debian is very conservative by design

Not really, it depends on Debian's freeze date


One factor that I've seen impact the decision is the LTS period. Ubuntu guarantees five years, whereas Debian's Security Team only supports three (assuming the current release cycle of two years). There's now an LTS team in Debian to extend that to five years, so that may help.


Note that Scaleway bills in euros, not dollars, pounds, etc. They also have no pre-pay option, so you cannot pay a lump sum upfront to cover many months of usage. This means that you can be hit by significant monthly fees if you are paying via another currency. But of course, different banks have different policies, and YMMV.

My €3/mo was (IIRC) more like €5/mo once conversion fees were taken into account. This €2 flat fee would not have been a problem had I been paying for 12 months at a time, but Scaleway does not offer this, in contrast to many other hosting providers on the market. After inquiring, I was told there is no current plan to change this policy.


...so it's the exact same problem as when the rest of the world has to pay for US services in USD.


The issue isn't with the currency, but the lack of pre-paid plans and how that interacts with flat conversion fees.

It's also easy to overlook the euro issue when the majority of popular hosting providers charge in dollars - and I can already see people making this mistake in the thread.


US Dollars and Euros are the national currency for roughly the same number of people.

340 million people live in a Eurozone country.

320 million people live in the USA, plus there's Ecuador and El Salvador.

(516 million people live within the Single Euro Payments Area, where international bank transfers in Euros must cost no more than a national bank transfer. My employer in Denmark has a Euro-denominated bank account; this is presumably typical for businesses in the non-Euro countries in Europe.)


I don't understand why you would be hit by significant monthly fees just because you can't prepay.

You mean that you can't prepay when you think the exchange rate would benefit you? Well, if you think you're at a currency-exchange sweet spot, go ahead and exchange. Then just leave the money in your bank account for next month's bills.

What's that? Your bank won't let you hold a balance in another currency? That sounds like a problem with your bank, not your hosting service.


While your point isn't invalid, having to contact my bank and work through the process of setting this up is another layer of hassle that I don't have to deal with on another, more flexible provider.


In what country is a bank account in another currency free, or available at no additional cost beyond the local currency account?


As I'm in the UK, I use Monzo cards: one for business transactions and one for personal overseas purchases.


Reminded me of this article "Cellphone Chips Will Remake the Server World. Period." [1] from 2013.

And also of SeaMicro [2].

Looks like the wimpy core lost this particular battle. Or maybe they only make sense at a particular scale.

[1] https://www.wired.com/2013/01/facebook-arm-chips/

[2] https://en.wikipedia.org/wiki/SeaMicro


These seem very inexpensive for the memory and storage, regardless of how powerful the processor is. Looks like they'd make great little VPN servers.


Why do you need memory or storage for a VPN server? I would say the processor is more of a concern; supporting multiple clients at high speed might hit a bottleneck.


8GB RAM and a 200GB SSD for $11.99 is significantly cheaper than Linode's 8GB RAM / 96GB SSD for $40. Then again, all of Linode's plans are cheaper than DigitalOcean's. I am curious: is this price gap due to the cheaper ARM SoC?


They just have higher margins. Look at OVH [0], for example: they offer not only ARM SoCs but x64s at the same price and performance.

[0]: https://www.ovh.de/virtual_server/vps-ssd.xml


The hardware is cheaper, but most importantly it consumes much less power.


I have a bunch of Windows Server keys through BizSpark; Scaleway was the cheapest option for hosted Windows Servers ($5/month). It takes some effort though (adding Scaleway drivers to a Windows Server ISO).

Awesome


Any info on which drivers are required?

Edit: anything to add to https://community.online.net/t/installing-windows-server-201...


Omg. I'm embarrassed. I was using Vultr for various Windows Servers... not Scaleway (sorry for the mistake).


Where does Scaleway offer Windows? I don't see it in their panel.


They don't. What you could do: install Proxmox (via Debian 8), and then inside Proxmox (KVM) install Windows.

Prior to installing Windows you could create a snapshot of your machine to keep around in case of reinstalls.

This is what I have tested.
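
For the record, the Proxmox side is roughly this (VM ID, ISO name and sizes are placeholders):

    # create the KVM guest, attach the Windows ISO, boot it
    qm create 100 --name win2012 --memory 4096 --net0 virtio,bridge=vmbr0
    qm set 100 --cdrom local:iso/windows-server.iso
    qm start 100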


They don't. I personally ran it in a VM, but that took me a whole day to set up and it would break every time you upgraded the host Ubuntu system.


I mixed it up with Vultr. Sorry (see my parent post).


A little bit confusing in the admin interface: it seems like these new processors replace the old C1 offer. This is the cpuinfo of the new one:

    cat /proc/cpuinfo 
    processor       : 0
    BogoMIPS        : 200.00
    Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics
    CPU implementer : 0x43
    CPU architecture: 8
    CPU variant     : 0x1
    CPU part        : 0x0a1
    CPU revision    : 1

    processor       : 1
    BogoMIPS        : 200.00
    Features        : fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics
    CPU implementer : 0x43
    CPU architecture: 8
    CPU variant     : 0x1
    CPU part        : 0x0a1

For comparison, this was the old C1, their prior ARM offering they mention in the blog post:

    Processor       : Marvell PJ4Bv7 Processor rev 2 (v7l)
    processor       : 0
    BogoMIPS        : 1332.01

    processor       : 1
    BogoMIPS        : 1332.01

    processor       : 2
    BogoMIPS        : 1332.01

    processor       : 3
    BogoMIPS        : 1332.01

    Features        : swp half thumb fastmult vfp edsp vfpv3 tls 
    CPU implementer : 0x56
    CPU architecture: 7
    CPU variant     : 0x2
    CPU part        : 0x584
    CPU revision    : 2
    Hardware        : Online Labs C1
    Revision        : 0000
    Serial          : 0000000000000000


I take it the BogoMIPS are just false info in the kernel interface.

For performance, I ran `sysbench --test=cpu --cpu-max-prime=1000 run` on both (first I tried that with max-prime=20000, but that did not finish in a reasonable time span on the old server).

New:

    Number of threads: 1

    Doing CPU performance benchmark

    Threads started!
    Done.

    Maximum prime number checked in CPU test: 1000


    Test execution summary:
        total time:                          0.6541s
        total number of events:              10000
        total time taken by event execution: 0.6507
        per-request statistics:
             min:                                  0.06ms
             avg:                                  0.07ms
             max:                                  0.14ms
             approx.  95 percentile:               0.06ms

    Threads fairness:
        events (avg/stddev):           10000.0000/0.00
        execution time (avg/stddev):   0.6507/0.00
Old:

    Number of threads: 1

    Doing CPU performance benchmark

    Threads started!
    Done.

    Maximum prime number checked in CPU test: 1000


    Test execution summary:
        total time:                          10.1997s
        total number of events:              10000
        total time taken by event execution: 10.1909
        per-request statistics:
             min:                                  1.02ms
             avg:                                  1.02ms
             max:                                  1.22ms
             approx.  95 percentile:               1.02ms

    Threads fairness:
        events (avg/stddev):           10000.0000/0.00
        execution time (avg/stddev):   10.1909/0.00

When using the additional cores (2 for the new, 4 for the old):

New:

    Number of threads: 2

    Doing CPU performance benchmark

    Threads started!
    Done.

    Maximum prime number checked in CPU test: 1000


    Test execution summary:
        total time:                          0.3282s
        total number of events:              10000
        total time taken by event execution: 0.6523
        per-request statistics:
             min:                                  0.06ms
             avg:                                  0.07ms
             max:                                  0.28ms
             approx.  95 percentile:               0.06ms

    Threads fairness:
        events (avg/stddev):           5000.0000/2.00
        execution time (avg/stddev):   0.3261/0.00

Old:

    Number of threads: 4

    Doing CPU performance benchmark

    Threads started!
    Done.

    Maximum prime number checked in CPU test: 1000


    Test execution summary:
        total time:                          2.5527s
        total number of events:              10000
        total time taken by event execution: 10.1983
        per-request statistics:
             min:                                  1.02ms
             avg:                                  1.02ms
             max:                                  5.49ms
             approx.  95 percentile:               1.02ms

    Threads fairness:
        events (avg/stddev):           2500.0000/1.87
        execution time (avg/stddev):   2.5496/0.00

The new processors seem to be a lot more powerful. I run my blog on that old C1 (that might have skewed the results somewhat if they were close, but they are not). I think I'll move it over ASAP.

Edit:

I have another Scaleway instance, a C2S, that runs https://www.pc-kombo.de; the C2S is the x64 offering also mentioned in the post. It is more expensive, as it has more cores, but matched core for core the ARMv8 actually seems to be faster. That's the performance of the C2S:

    Number of threads: 1

    Doing CPU performance benchmark

    Threads started!
    Done.

    Maximum prime number checked in CPU test: 1000


    Test execution summary:
        total time:                          0.8497s
        total number of events:              10000
        total time taken by event execution: 0.8480
        per-request statistics:
             min:                                  0.08ms
             avg:                                  0.08ms
             max:                                  0.13ms
             approx.  95 percentile:               0.08ms

    Threads fairness:
        events (avg/stddev):           10000.0000/0.00
        execution time (avg/stddev):   0.8480/0.00
        
With 2 threads:

    Number of threads: 2

    Doing CPU performance benchmark

    Threads started!
    Done.

    Maximum prime number checked in CPU test: 1000


    Test execution summary:
        total time:                          0.4489s
        total number of events:              10000
        total time taken by event execution: 0.8752
        per-request statistics:
             min:                                  0.08ms
             avg:                                  0.09ms
             max:                                 20.09ms
             approx.  95 percentile:               0.08ms

    Threads fairness:
        events (avg/stddev):           5000.0000/23.00
        execution time (avg/stddev):   0.4376/0.01
Seems like I should move that as well.


These ARM servers, do they use ECC RAM?


As with x86-based servers, it depends on the chipset (or SoC for ARM). For the two APM X-Gene 1 based servers that I have, the answer is yes. The Cavium ThunderX also takes ECC RAM.


Slightly OT, but can anyone recommend ARM servers for home use? I have an x86 server but it is too noisy and overpowered, and a Raspberry Pi which is too weak. Can someone recommend some hardware in between?


An RPi 3 with an overclock isn't as weak as the previous ones… and you can boot it over network or USB. Still, the lack of SATA and Gigabit Ethernet is disappointing.

So I currently use a Mac mini '06 as my home server. Used to be my main desktop back in the day… I upgraded the CPU (Core Duo → Core 2 Duo), flashed the firmware from the '07 model (allows running OS X Lion, but idk if that did anything else), installed an SSD (yeah, SATA 1 limits the bandwidth, but latency is sooo much better & I was slightly concerned about the old drive failing) and installed HardenedBSD (because this machine has 32-bit firmware, that was more complicated than usual — I compiled GRUB 2 for 32-bit EFI! And that happily boots the 64-bit FreeBSD kernel. From ZFS, even.)

One downside of that machine is called "Marvell Yukon 88E8053 Gigabit Ethernet". It just… stops working after a while: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=206567 So my home server is currently on Wi-Fi :D

Another thing is that you have to shove a resistor into the DVI connector to let the machine boot w/o a display https://soledadpenades.com/2009/02/10/mac-mini-as-a-headless...


Not ARM, but I've used a mixture of Intel NUCs and Atom SoC-based Supermicro machines like this: https://www.amazon.com/gp/product/B00HS4NLHA (both are fanless and quite cheap but run stock x86 binaries). The Supermicro machines have a proper IPMI interface, which is nice for when your box falls on its face while you're on vacation.


Depending on your requirements you could have a look at the ODROID (http://www.hardkernel.com/main/main.php).


I have two Odroids in different houses, one is three years old, the other four.

They required some fiddling to get Apache not to exhaust the available RAM. I use one for backups, and the other for a web site / image gallery + UPnP media server + storage + BitTorrent seed.

They're OK, but the initial setup took a while due to the lack of the clear documentation that x86 or a Pi has.


Maybe a CubieTruck (CubieBoard 3); it is one of the few boards that has both SATA and Gigabit Ethernet built into the SoC.

My actual home server is an HP MicroServer though, totally silent and uses ECC RAM.


Marvell Espressobin or ODROID. Also check what upstream kernel support you have, as the community size will be a fraction of the Pi's.


That's perfect to run Tor relay services on the cheap


They still lack two-factor auth...



