
Instant Cloud – SSD Bare Metal Servers - edouardb
http://instantcloud.io
======
growt
I made a habit out of measuring the disk performance of cloud servers, so here
it is:

    
    
      ubuntu@instantcloud:~$ dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
      1024+0 records in
      1024+0 records out
      1073741824 bytes (1.1 GB) copied, 11.0101 s, 97.5 MB/s

      ubuntu@instantcloud:~$ sudo hdparm -tT /dev/nbd0

      /dev/nbd0:
       Timing cached reads:   2180 MB in  2.00 seconds = 1090.44 MB/sec
       Timing buffered disk reads: 268 MB in  3.02 seconds =  88.85 MB/sec
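
For a read figure that bypasses the page cache entirely, hdparm can also do a direct-I/O timing pass (a variant worth running, assuming your hdparm build supports the --direct flag):

    
    
      # time O_DIRECT reads, skipping the kernel page cache
      sudo hdparm -t --direct /dev/nbd0
    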

~~~
venti
How does that compare to other cloud servers?

~~~
rshm
Tested on a random instance, with no I/O-intensive applications running.

VULTR 2 GB INSTANCE

    
    
        dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
        1024+0 records in
        1024+0 records out
        1073741824 bytes (1.1 GB) copied, 8.11451 s, 132 MB/s
        
        hdparm -tT /dev/vda
        
        /dev/vda:
        Timing cached reads:   25106 MB in  2.00 seconds = 12568.51 MB/sec
        Timing buffered disk reads: 496 MB in  3.02 seconds = 164.16 MB/sec
    

DIGITALOCEAN 2 GB INSTANCE

    
    
        hdparm -tT /dev/vda
        
        Timing cached reads:   8926 MB in  2.00 seconds = 4468.52 MB/sec
        Timing buffered disk reads: 542 MB in  3.00 seconds = 180.43 MB/sec
    
    
        dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync 
        1024+0 records in
        1024+0 records out
        1073741824 bytes (1.1 GB) copied, 32.0005 s, 33.6 MB/s

~~~
antouank
Did the same test on a VM I have in DigitalOcean... (Ubuntu, 512 MB droplet)

    
    
      ~# dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
      1024+0 records in
      1024+0 records out
      1073741824 bytes (1.1 GB) copied, 2.35872 s, 455 MB/s
    

~~~
hobarrera
Tip: add four spaces at the start of a line to mark it as code (both to keep
lines from being merged and to get a fixed-width font).

(Yes, it's just Markdown here.)
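
For example, indenting the dd command from upthread like this:

    
    
      dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
    

keeps it on its own line and renders it in a fixed-width font.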

------
chetanahuja
Back at the office, we were recently talking about the possibility of really
cheap (in terms of power requirements) cloud servers: the equivalent of
Raspberry Pis with soldered-on flash storage in the 32-64 GB range. I'd bet
you can pack a shitload of these in a 1U box and still have power and cooling
to spare. The only expensive part might be budgeting for all those Ethernet
ports on the switch and the uplink capacity (for bandwidth-intensive servers).

One of the engineers tried running our server stack on a Raspberry Pi for a
laugh... I was gobsmacked to hear that the whole thing just worked (it's a
custom networking protocol stack running in userspace), if just a bit slower
than usual. I can imagine making use of loads of ultra-cheap servers
distributed all around the world... IF the networking issue can be solved.

Perhaps the time is right for a more compact and cheaper wired networking
solution... maybe limit the bandwidth to 1 Gbps but make it ultra-compact with
much cheaper electronics. Sigh... a man can dream.

~~~
static_noise
Small systems are good for realtime applications, where the resources for an
application have to be available at all times.

In terms of space/power/reliability/scalability, large systems win. Sure, a
single Raspberry Pi doesn't draw much power. But it doesn't provide much
computing power either. Throw in a few hundred of those systems and you feel
the heat, yet the combined computing power is comparable to a single rack
server. You want RAID 6 on the Raspberry Pi? Sure, it can be done, but you
need four times the number of storage devices. Compare that to a rack server
with 16 SSDs configured as RAID 6, where the data is shared by hundreds of
virtual machines. If you compare the "energy per bit" or "energy per
operation", high-powered server CPUs beat almost anything out there.

So I'd say:

* If you can justify a real server, do use one instead of dozens of "simple machines".

* If you can use the cloud, do so instead of providing your own hardware.

* Exceptions may apply where security or reliability is concerned. (You wouldn't run your heart monitor in the cloud when a small dedicated system does the job.)

~~~
Varcht
Makes sense, like semis vs trains.

~~~
tracker1
More like a semi vs. a dozen Prii... the semi still wins.

------
mrmondo
A truly great way to demo a product while also making use of unallocated
resources.

Unfortunately, the latency to the servers from Australia is so poor that it's
practically unusable.

Here is a trace from my 100 Mbit fibre:

    
    
      samm ~ % mtr -n --report 212.47.250.196
      Start: Fri Apr 10 19:36:45 2015
      HOST: samm-mbp                    Loss%   Snt   Last   Avg  Best  Wrst StDev
        1.|-- 192.168.0.1                0.0%    10    1.7   2.0   1.6   4.7   0.7
        2.|-- 150.101.212.44             0.0%    10    2.7   3.1   2.5   4.2   0.0
        3.|-- 150.101.208.65             0.0%    10    5.3   6.9   2.7  38.1  11.0
        4.|-- 150.101.33.28              0.0%    10   28.3  20.8  14.6  28.3   5.6
        5.|-- 150.101.33.149             0.0%    10  170.6 170.4 170.1 170.8   0.0
        6.|-- 62.115.33.97               0.0%    10  170.8 170.8 170.1 171.4   0.0
        7.|-- 213.155.135.156            0.0%    10  240.9 240.7 240.2 241.2   0.0
        8.|-- 80.91.251.103              0.0%    10  322.3 335.0 321.4 413.3  30.2
        9.|-- 213.155.136.209           20.0%    10  319.7 329.8 317.9 355.6  14.4
       10.|-- 62.115.40.86               0.0%    10  319.1 319.3 318.7 320.0   0.0
       11.|-- 195.154.1.41               0.0%    10  332.5 333.0 332.3 333.9   0.0
       12.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
       13.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
       14.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
       15.|-- 212.47.250.196             0.0%    10  395.0 334.8 319.4 395.0  23.5
    

Edit: Spelling (Typo)

~~~
TheSpiceIsLife
I recognise those Internode IPs at a glance. I used to work in the ADL6 data
centre.

I'm told that traceroute isn't a reliable way to determine endpoint latency.
I can ping 212.47.250.196 and get a round-trip time of ~365 ms. That the
intermediate hops each take 170-333 ms to respond in the traceroute is
meaningless. Or so I thought? Maybe I'm missing what you're getting at?

(Edit: I mean iiNet. I mean TPG.)

~~~
mrmondo
That's mtr, not traceroute, so it actively measures latency to each hop. You
are correct in saying that the connection is from Internode (owned by iiNet).

Here's an example from PIPE networks (TPG):

    
    
      root@dev-samm:~  # mtr -n --report 212.47.250.196
      HOST: dev-samm                    Loss%   Snt   Last   Avg  Best  Wrst StDev
        1.|-- <removed>                  0.0%    10    0.9   1.1   0.9   2.0   0.3
        2.|-- <removed>                  0.0%    10    0.5   0.5   0.4   0.6   0.1
        3.|-- <removed>                  0.0%    10    1.1   2.4   1.0   9.1   2.8
        4.|-- <removed>                  0.0%    10    1.1   1.1   1.0   1.3   0.1
        5.|-- 203.219.106.21             0.0%    10    2.5   2.8   1.1   4.7   1.1
        6.|-- 202.7.171.25               0.0%    10   11.2  13.1  11.2  14.6   1.3
        7.|-- 203.29.129.195             0.0%    10   11.1  12.9  11.1  14.7   1.2
        8.|-- 64.86.21.53                0.0%    10  210.2 212.2 210.0 229.9   6.2
        9.|-- 64.86.21.2                 0.0%    10  373.1 373.2 372.9 373.9   0.3
       10.|-- 66.198.127.1               0.0%    10  382.3 382.4 382.2 383.1   0.3
       11.|-- 66.198.127.6               0.0%    10  382.1 382.0 381.8 382.1   0.1
       12.|-- 66.198.70.21               0.0%    10  378.7 379.3 378.5 381.6   1.2
       13.|-- 80.231.130.33              0.0%    10  380.6 381.0 380.6 383.0   0.8
       14.|-- 80.231.130.86             20.0%    10  373.9 374.1 373.9 374.7   0.2
       15.|-- 80.231.154.17             10.0%    10  371.3 371.5 371.3 371.9   0.2
       16.|-- 80.231.153.58             10.0%    10  379.4 379.1 378.9 379.4   0.2
       17.|-- 5.23.24.6                 10.0%    10  354.1 354.4 353.9 355.6   0.5
       18.|-- 195.154.1.39              10.0%    10  355.2 355.2 354.9 355.5   0.2
       19.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
       20.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
       21.|-- ???                       100.0    10    0.0   0.0   0.0   0.0   0.0
       22.|-- 212.47.250.196            10.0%    10  354.5 354.3 354.0 354.9   0.3  
    

A throughput test struggles to reach 1 Mbit/s:

    
    
      samm ~ % iperf -c 212.47.248.211
      ------------------------------------------------------------
      Client connecting to 212.47.248.211, TCP port 5001
      TCP window size:  129 KByte (default)
      ------------------------------------------------------------
      [  4] local 192.168.0.22 port 56958 connected with 212.47.248.211 port 5001
      [ ID] Interval       Transfer     Bandwidth
      [  4]  0.0-11.1 sec  1.38 MBytes  1.04 Mbits/sec

~~~
TheSpiceIsLife
Ah, yes, sorry, I was interpreting the numbers incorrectly. Should probably
sleep more. Thanks.

------
drinchev
Interesting. With no information required from the user (no credit card), I
really wonder how they prevent this box from being used as a hacking machine.

~~~
carlesfe
That was my first thought. But then again, all my private servers have been
hacked or have come under intense attack from bots, so maybe it is my own
brain which is now compromised.

Anyway, I think this is a super cool and creative service, kudos to the
author.

Edit: Possible bug, I get nothing on Safari after 1 minute of waiting:
[http://i.imgur.com/pKoNPrV.jpg](http://i.imgur.com/pKoNPrV.jpg)

Edit 2: Ah, on trying again, I get an error that all servers are busy. Maybe
we killed it again, HN :)

~~~
drinchev
You are right. I've been hacked a couple of times too. What the hacker usually
does is turn my machine into a DDoS bot that acts on demand.

Anyway, with a service like this it would be far easier for anyone to make
their hacking attempts untraceable.

------
Lennu
Tried it for 30 minutes. I installed apache2 and made a quick mirror of my
personal homepage with wget.

After the 30-minute session the window closed and told me the session had
expired, although the actual server was still up at the IP. About 5 minutes
later Apache shut down, so I guess the server was destroyed.

This is great for experimenting with things. For example, you could quickly
test "sudo rm -rf" at the root and see what it does. Nice one!

~~~
volent
You can already do `docker run -ti ubuntu bash` and not be limited to a
30-minute test :)

~~~
Lennu
Sure, but you need a larger platform than just a browser to do that.

I mean that total beginners, for example, may find this truly helpful.

~~~
icebraining
_Sure, but you need larger platform than just a browser to do that._

Do you? JSLinux (by Fabrice Bellard) disagrees :)
[http://bellard.org/jslinux/](http://bellard.org/jslinux/)

------
mrsirduke
FWIW: We recently talked about their servers, hardware and such:
[https://news.ycombinator.com/item?id=9309459](https://news.ycombinator.com/item?id=9309459)

------
de_dave
Seems like a really smart way to use spare capacity to market your services.
Especially so when your services aren't quite the norm (i.e. ARM rather than
x86)

------
kamilafsar
Apparently they also have object storage, which seems to be way cheaper than
AWS Glacier:

€0.02 per GB per month, with unlimited requests and transfer.
[https://www.scaleway.com/pricing](https://www.scaleway.com/pricing)

------
Retr0spectrum
My password had an "0" in it - I had to guess whether it was a number or a
letter.

~~~
Omie6541
Definitely. I'm brute-forcing mine, since it has two stick characters and I
can't tell whether each is l, I, or 1.

~~~
astrodust
Note to developers: exclude similar-looking characters from your alphabet when
generating passwords, or spell them out phonetically.
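
A minimal shell sketch of the first option (the character set here is just one
illustrative choice, dropping l, I, 1, O and 0):

    
    
      # 16-character password from /dev/urandom, excluding the usual lookalikes
      < /dev/urandom tr -dc 'a-km-zA-HJ-NP-Z2-9' | head -c 16; echo
    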

------
taksintik
The tag line "Get your 30 minutes free server" is extremely confusing.

Do you mean to say "Get a free server for 30 minutes"?

~~~
arthurfm
I'm confused too, since it could also mean "Get a free server in 30 minutes".

------
ck2
If OVH ever puts SSDs in their Kimsufi servers, they will blow all the
micro-servers out of the water.

[http://www.kimsufi.com/ca/en/index.xml](http://www.kimsufi.com/ca/en/index.xml)

Meanwhile, for $42 you can get a dedicated server with an SSD:

[http://www.soyoustart.com/us/essential-servers/](http://www.soyoustart.com/us/essential-servers/)

The only problem with all of these is the lack of ECC memory.

~~~
martius
Kimsufi does offer servers with SSDs in France. The KS-2 SSD: Atom™ N2800,
4 GB of RAM, 40 GB SSD, 100 Mbps, for €9.99/month.

[http://www.kimsufi.com/fr/](http://www.kimsufi.com/fr/)

------
mijoharas
I just get what looks like a Mac window div (judging by the class `terminal`,
I guess it's supposed to be a terminal to the server) saying 'the server
refused the connection'.

~~~
edouardb
Try to refresh?

~~~
mijoharas
Nope, happens every time. ubuntu chrome `net::ERR_CONNECTION_REFUSED`.

------
prattbhatt
" _Oops!

All C1 servers are busy, please retry in few minutes :)_"

~~~
keyme
And the back button is broken as a result.

------
vorotato
Isn't the point of "cloud" computing high availability and high uptime? If
there's a hardware failure on your "cloud machine", it just fails and there's
no recovery. This sounds like regular hosting rather than cloud hosting;
please correct me if I'm misunderstanding how this is set up.

~~~
Gracana
Individual nodes can be unreliable (and therefore very cheap); availability is
maintained by distributing load to the nodes that are online. That's my
understanding of it, anyway.

~~~
contingencies
The technique is known as 'high availability' (HA) or 'failover' clustering.
[https://en.wikipedia.org/wiki/High-availability_cluster](https://en.wikipedia.org/wiki/High-availability_cluster)
'Cloud' computing is simply another term for "using someone else's
infrastructure as a service" (IaaS), which is essentially a restatement of the
centralized computing paradigm.
[http://en.wikipedia.org/wiki/Centralized_computing](http://en.wikipedia.org/wiki/Centralized_computing)

The gaping problems with such paradigms (chiefly survivability/evolvability
over time) were well highlighted for large-scale, general systems by the
internet itself. RFC 3439 (2002) puts it thus: _Upgrade cost of network
complexity: The Internet has smart edges ... and a simple core. Adding a new
Internet service is just a matter of distributing an application ... Compare
this to voice, where one has to upgrade the entire core._

My take: cloud computing is about to get smart edges; cloud providers are
about to be commodified; and we are about to add an appropriately flexible
layer of abstraction to the entire field of computing, one that pushes us
further towards treating computation as just another service and networked
communication itself as a means of economic exchange.

------
nnq
Mostly off-topic, but I have to ask: what's the state of Go compilers for ARM
nowadays? Do they generate code as optimized as the x86 compilers do?

------
Retr0spectrum
My session lasted for 35 minutes - the web app closed after 30, but my VNC
connection kept going, presumably until someone else took my slot.

------
Maro
No offense, but "30 minutes... with a click" is pretty standard in the age of
AWS. It's not a good headline anymore :)

------
jradix
Here is the message I got: "All C1 servers are busy, please retry in few
minutes :)"

~~~
waitingkuo
got the same message..

------
jfoster
I'm very impressed that it lives up to the promise of getting it "with a
click."

------
breakingcups
I just get a timeout in the terminal-like window. SSH-ing to it also doesn't
work.

------
ponytech
Are there any restrictions on what you can do? Are any inbound/outbound ports blocked?

------
Filligree
Your page breaks the back button, at least on a busy error. Not cool, guys.

~~~
weavie
How so? You can still click the Get My Server button when the error message
shows up at the top.

~~~
Filligree
Yes, but if you press 'back' you get a split-second view of the previous page
before being redirected to the error page again.

In general, don't use unconditional in-page redirects, whether JavaScript or a
meta refresh. If you want a redirect like that, you can have your server serve
a 301 and the browser will collapse history appropriately; but if you _must_
do it in JS, then use history.replaceState.

------
curiously
This is great. I just wish DigitalOcean would deploy faster: it says 60
seconds, but more often than not it takes several minutes to load a snapshot,
and destroying one can take ages (and you're still billed for idle and
powered-off servers).

------
dd3344
I put a fork bomb...

~~~
afandian
I think that's one of the arguments in favour of this approach. Your
neighbours on AWS might notice. On individual real servers, they wouldn't.

~~~
jordanthoms
It's not such an issue these days though - all the non-toy AWS instances have
dedicated cores, sometimes dedicated processors.

------
dd3344
Limiting Fork Bomb In Docker

[https://devlearnings.wordpress.com/2014/08/22/limiting-fork-bomb-in-docker/](https://devlearnings.wordpress.com/2014/08/22/limiting-fork-bomb-in-docker/)
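
A quick sketch of the general idea (assuming a Docker version with the
--ulimit run flag, which may differ from the article's approach; the exact
limit value is illustrative):

    
    
      # cap the processes the container's user can spawn, so a fork bomb
      # hits its own limit instead of exhausting the host (nproc is per UID)
      docker run --ulimit nproc=256:256 -ti ubuntu bash
    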

