

Putting Teeth in Rackspace Public Cloud - pquerna
https://journal.paul.querna.org/articles/2014/07/02/putting-teeth-in-our-public-cloud/

======
lnkmails
IPA is a fantastic idea. Some questions:

1) I am actually curious how IPA is deployed to the ramdisks. Any pointers?

2) The turnaround time for provisioning is now dependent on download speed,
etc. When provisioning in batches, this could be a problem, right?

3) Did you use any kind of CDN (for image persistence) when dealing with
provisioning in different availability zones?

4) Does IPA also implement SSL/Auth?

~~~
jroll
Great questions!

1) We boot a CoreOS image over PXE. IPA is built using Docker, exported as a
filesystem, and runs in a Linux container via systemd-nspawn. It can take
config options via the command line or the kernel command line. The build
system is here. [1]
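
To illustrate the kernel-command-line part, here's a rough Python sketch of
how an agent might pull key=value options out of /proc/cmdline at startup.
The ipa.api_url name below is made up for illustration; the real option
handling lives in the build system linked in [1].

    def parse_kernel_cmdline(path="/proc/cmdline"):
        """Parse space-separated key=value options from the kernel command line."""
        with open(path) as f:
            tokens = f.read().split()
        options = {}
        for token in tokens:
            key, sep, value = token.partition("=")
            options[key] = value if sep else True  # bare flags become True
        return options

    opts = parse_kernel_cmdline()
    api_url = opts.get("ipa.api_url")  # hypothetical option name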

2) It could, yes. Images are downloaded directly from Swift, and both the
client and the server have 10-gigabit links. We're also investigating
multicast and BitTorrent as alternatives for image distribution.
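
As a sketch of what the download path looks like (not the actual agent code;
URL and device path are placeholders), the important property is streaming
the image in chunks straight onto the target disk rather than buffering the
whole thing in memory:

    import hashlib
    import requests

    def stream_image(url, device, chunk_size=1024 * 1024):
        """Stream an image download in 1 MiB chunks directly onto the
        target device, hashing as we go for a post-write integrity check."""
        digest = hashlib.md5()
        resp = requests.get(url, stream=True, timeout=60)
        resp.raise_for_status()
        with open(device, "wb") as out:
            for chunk in resp.iter_content(chunk_size=chunk_size):
                out.write(chunk)
                digest.update(chunk)
        return digest.hexdigest()

    # e.g. stream_image("https://swift.example/v1/AUTH_t/images/img.raw", "/dev/sda")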

3) Not sure if you mean agent images or OS images... regardless, at Rackspace,
each region runs as its own standalone cloud - so there shouldn't be any
communication between data centers when provisioning. Does that answer your
question?

4) We're working on implementing client certificate checking for communication
between IPA and Ironic. The agents also live on an isolated VLAN that is only
accessible by Ironic and Swift.
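
The general shape of the client-certificate check, in Python's ssl module (a
sketch of the technique, not our actual code; file paths are placeholders):

    import ssl

    # Server-side context that refuses any agent not presenting a
    # certificate signed by our CA.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")
    context.verify_mode = ssl.CERT_REQUIRED  # client must present a cert
    context.load_verify_locations(cafile="agent-ca.crt")
    # context.wrap_socket(sock, server_side=True) then enforces the check
    # during the TLS handshake.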

[1] https://github.com/openstack/ironic-python-agent/tree/master/imagebuild/coreos

------
avelis
Very insightful. I enjoyed reading the personal view into OnMetal's journey @
Rackspace.

~~~
pquerna
Thanks! Author here, happy to answer any questions...

~~~
voltagex_
What did you have to do at the BIOS & firmware level? It's the first time I've
heard of being able to customise that kind of thing.

------
mwcampbell
Do you plan to offer lower-end OnMetal nodes? For example, 16 GB of RAM, a
quad-core Xeon E3 processor, and a 256 GB SSD? Presumably you could put
several such nodes in one chassis to make that kind of configuration
economical.

~~~
pquerna
_avoiding too many forward-looking statements_

As a general principle we will add more form factors.

For your specific example hardware, that kind of specification could be
achieved using the Open Compute Micro Server / Server Card designs, instead
of the full-on two-processor Windmill designs:

http://www.opencompute.org/assets/motherboardandserverdesign/Open_Compute_Project_Micro-Server_Card_Specification_v0.5.pdf

As you make servers smaller, things like HA networking become a larger
portion of the cost too, so it might be more feasible if we dropped the
2x10G bond and went to a single 10G port. Would losing the HA networking,
but keeping those kinds of specs, be interesting to you?

~~~
wmf
I wonder if 2x2.5G (if you can get it to work) would be better than 1x10G.

------
jxf
Not directly related to the article, but asking here: what did you use to make
those diagrams? It looks like Unicode box-drawing characters, but I was
wondering if it came from a tool or was handcrafted.

~~~
russell_h
This is what Paul used: http://bramp.github.io/js-sequence-diagrams
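
For reference, it takes plain-text input along these lines (the participants
and messages here are made up for illustration, not the article's actual
diagram source):

    Title: Provisioning (illustrative)
    Nova->Ironic: provision node
    Ironic->Agent: prepare_image
    Agent-->Ironic: image written
    Ironic-->Nova: node active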

