For those who are curious: I was wondering what kind of 20GB SSDs they're using; they seem to be way slower than virtualized storage.
Ballparking the size of the case based on the comparison of a single server to a business card, it looks like they're using roughly a 7U case. Anybody know more about the specs of a case-load of servers? I'd love to know what the power consumption is. If you could put 6 of those cases in a cabinet, that would be some pretty incredible compute density.
I hope they consider selling the system externally. At first blush at least it seems like it would be an ideal setup for hosting colocated trading applications.
If you watch the video carefully, you'll notice that their rack/case contains 16 cartridges, and each cartridge holds 18 servers. That works out to 288 servers per rack, and at 4 cores each, 288 x 4 = 1,152 cores. So when they said 912 computers per rack, I believe they were counting cores rather than whole servers, though even that figure doesn't quite match.
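As a quick sanity check of those counts (the 4-cores-per-server figure comes from the C1 spec quoted elsewhere in the thread):

```python
# Rack math from the video: 16 cartridges, 18 servers each,
# 4 cores per C1 server.
cartridges_per_rack = 16
servers_per_cartridge = 18
cores_per_server = 4

servers_per_rack = cartridges_per_rack * servers_per_cartridge
cores_per_rack = servers_per_rack * cores_per_server

print(servers_per_rack, cores_per_rack)  # 288 1152
```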
The first and main reason is that when you have multiple containers/VMs on a single server, "peak" or "burst" CPU is what matters most for perceived performance, whether the user is a developer running tests or a visitor browsing a website.
Only in very memory-intensive tasks would these servers outperform (because the memory bandwidth is dedicated to just your server and is not shared). Then again, being able to bump from 2GB to 4GB of RAM (provided the application can take advantage of it) might well minimize the issue through caching or other optimizations.
Second, 2GB of RAM is (sadly) just not enough. As an example, the Zimbra mail server barely runs in 2GB, and many other Java-based programs are only fast once they have chewed up a couple hundred MB of RAM.
last pid: 91361; load averages: 0.28, 0.28, 0.25 up 9+19:03:28 14:37:36
74 processes: 1 running, 73 sleeping
CPU: 0.0% user, 0.0% nice, 0.0% system, 1.2% interrupt, 98.8% idle
Mem: 38M Active, 512M Inact, 133M Wired, 8404K Cache, 87M Buf, 32M Free
Swap: 2000M Total, 71M Used, 1929M Free, 3% Inuse
2GB is enough to run a LOT of things.
The hosting market is huge. There's a niche for this, especially if they can exploit the strengths of bare metal non-virtualized hosting without the cost premium. A $5/month bare metal quad-core would be ideal for a whole bunch of applications.
You can get $0.99/month VPSs now. Not a dedicated server, though, of course.
The only downside: I would like to know the pricing before I start using this.
If anyone from Online Labs is reading this, please let us know about the pricing!
I will give the API a try tonight.
About the API: you can have a look at our Python SDK (https://github.com/online-labs/ocs-sdk) or at the CLI that one of our users developed (https://community.cloud.online.net/t/getting-started-manage-...)
Nice little setup. It's slow, of course (certainly not fast enough to encode 720p H.264 in real time, for example), but the ARM architecture is bound to get faster.
That said, they will make a juicy acquisition target for someone who wants their tech. So I am not saying they wasted their time either.
The preview is free! You should expect good prices as we designed our own hardware for the cloud.
Yes, but how much does it cost?
>The C1 server is a 4-cores ARMv7 CPU with 2GB of RAM and a 1 Gbit/s network card. It is designed for the cloud and horizontal scaling.
These would be very interesting for low-latency, network-heavy applications if each machine had a latency-optimized 1 Gbit/s connection to the core switch wherever they're hosted. Virtualization might be fine from a throughput point of view, but I've seen hypervisors impose a fair amount of latency "jitter" on heavily loaded hosts; it's one of the reasons bare-metal servers can be better. I'm thinking of core network router functions, certain kinds of games, etc.
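To make the jitter point concrete, here's a minimal sketch that summarizes a set of ping round-trip times. The RTT samples below are made up for illustration, but the shape (occasional multi-millisecond outliers on a loaded hypervisor versus a tight spread on bare metal) is what you'd be looking for:

```python
import statistics

def summarize_rtts(rtts_ms):
    """Summarize round-trip times: jitter shows up as a high
    standard deviation and a large gap between median and p99."""
    return {
        "median_ms": statistics.median(rtts_ms),
        "p99_ms": sorted(rtts_ms)[int(0.99 * (len(rtts_ms) - 1))],
        "stdev_ms": statistics.stdev(rtts_ms),
    }

# Hypothetical samples: a bare-metal host vs. a loaded hypervisor guest.
bare_metal = [0.21, 0.22, 0.21, 0.23, 0.22, 0.21, 0.22, 0.24, 0.21, 0.22]
virtualized = [0.25, 0.31, 0.24, 1.80, 0.27, 0.26, 2.50, 0.25, 0.33, 0.26]

print(summarize_rtts(bare_metal))
print(summarize_rtts(virtualized))
```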
Another area where I can see this excelling is high-security applications, like a cloud node that is in charge of signing things with well-protected secret keys, acting as some kind of certificate authority. Virtualization has a pretty good security record, but for high-paranoia applications bare metal is better. If you offered the ability to upload your own pre-encrypted image, this would be very interesting. It's not quite as good as homomorphic encryption, but that isn't quite "there" yet -- still too slow to be usable. At the very least, an attacker would have to crack into the hardware and dump the RAM to break into the system and steal a key.
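A minimal sketch of the signing-node idea, using Python's stdlib `hmac` as a stand-in for whatever a real certificate authority would use (the names here are hypothetical): the secret key lives only on the dedicated box, and clients only ever see signatures.

```python
import hashlib
import hmac
import os

# Hypothetical: on a dedicated signing node, the secret exists only in
# this machine's memory; clients send blobs and get back signatures,
# never the key itself.
SECRET_KEY = os.urandom(32)  # in practice, loaded from protected storage

def sign(blob: bytes) -> str:
    """Return an HMAC-SHA256 signature for a blob."""
    return hmac.new(SECRET_KEY, blob, hashlib.sha256).hexdigest()

def verify(blob: bytes, signature: str) -> bool:
    """Constant-time check of a blob against a signature."""
    return hmac.compare_digest(sign(blob), signature)

sig = sign(b"certificate-request")
print(verify(b"certificate-request", sig))  # True
print(verify(b"tampered-request", sig))     # False
```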
Finally, make stability a high priority. With low power, low heat dissipation, dedicated hardware, and solid-state everything, you should have an easier path to cheap "many nines" high-reliability service. That kind of service is expensive in the hosting world right now, so you'd have some pricing power there.
ZeroTier One, a network virtualization engine for inter-container and inter-VM networking as well as VPN access, officially supports 32-bit ARM/Linux:
It's also possible to use it with Docker very easily:
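For anyone curious, the usual pattern is just to hand the container the host's TUN device and the NET_ADMIN capability (the image name below is illustrative, not an official one):

```shell
# ZeroTier needs a tun interface, so the container must be given the
# host's TUN device and the NET_ADMIN capability.
docker run -d \
  --name zerotier \
  --device=/dev/net/tun \
  --cap-add=NET_ADMIN \
  example/zerotier-one
```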
I decided to create an official ARM build and support that platform since there were so many users on the Raspberry Pi and similar devices, and as far as I know those binaries will run on this architecture too. I signed up for a preview of Online.Net, so I will test once I have a "box." :)
Tested with your free trial via the web terminal, and the ARM build from the above download link works flawlessly as long as you "modprobe tun" first:
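For reference, the steps were roughly the following (assuming the stock `zerotier-one` CLI; 8056c2e21c000001 is the well-known ID of ZeroTier's public "Earth" network):

```shell
# Load the TUN driver, start the ZeroTier service in the
# background, then join the public "Earth" test network.
modprobe tun
zerotier-one -d
zerotier-cli join 8056c2e21c000001
```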
Then I pinged my laptop on the desk next to me, which also happens to be on the "Earth" virtual LAN. Fun stuff. :)
Even the €5.99/month offer seems much more generous than DigitalOcean's.