Then call it FaaS - it's not serverless, and that term is misleading marketing bunk. Even the title of the post describes the servers used in its deployment. I'm quite sick of this term - I don't find it at all helpful when describing application architecture.
To me, "Functions as a Service" is massively more obvious than "Serverless".
Google App Engine, Google BigQuery, AWS Lambda, AWS Athena
The name (which is a marketing term, not a technical one) just reflects that you don't deal with servers but with services.
Nothing fancy about serverless.
It's not terribly complicated either - if you are only concerned with code, then your deployment is 'serverless' in the sense that _you are not concerned with servers_.
Of course 'servers' are involved; the point is whether that's of concern to your deployment.
In an old job, in an effort to disambiguate, I would always use "computer" to mean hardware (a losing battle).
I would say "machine" means the hardware and/or the operating system of the "server" (which is a fancy word for "computer").
The "and/or" part being very important!
This is the problem with most abstractions in IT I guess - they're beautiful and clean until reality bites.
If you can draw the application architecture without anything that is best labeled a server, wouldn't that make it a "serverless" architecture?
But really, I think the primary use case is cost. Actually having access to that many physical machines to play with in a classroom or home learning environment is sort of new! The market hasn't really had such accessible linux computers at "Ehh, if it breaks I'll just buy a new one, no big deal" prices. It's educational, and the more stable the ARM support is, the better a student's skills will transfer over into the real world of systems administration.
Try 3.5 watts, not counting overhead of most USB power bricks being incredibly inefficient.
A current-gen 35W laptop CPU will be some 10 times faster than a RasPi, have much faster storage available (SATA3 or NVMe versus… USB2), much faster I/O (GBit LAN and GBit Wifi versus… USB2), and a lot of other benefits. (Like an integrated screen and battery and keyboard and …) It also won't need external hardware to communicate with other cluster members – that 10-port ethernet switch will need power, too.
One RasPi is relatively energy efficient; RasPi clusters… not so much.
> But really, I think the primary use case is cost.
 http://raspi.tv/2016/how-much-power-does-raspberry-pi3b-use-... , see the numbers for "Multi-threaded CPU Tests", which is the most applicable for server workloads
 Running that script manages ~9 runs/second on an i7-6700HQ, vs. ~0.9 run/second on a RPi3.
And at $5 each, if we're talking hardware costs for setting up a "toy" cluster for, say, self-learning or student labs, that's hard to beat. I suppose you could do better using VMs for a virtual cluster, but that adds other complications unrelated to the clustering task. But I agree there doesn't otherwise seem to be much practical purpose here, and the overhead of running an OS on each Pi really cuts into performance compared to a single chip w/ multicores instead.
At the same time, you're comparing the power consumption of, let's say, 10 whole RPis/platforms to the consumption of a single processor. Stick that processor in a platform (laptop), and it's going to use much more than 40W.
Like you said, you get a lot more with the laptop, but given your benchmark (10x difference), my guess is that 10x RPis would still be more power efficient than a laptop with a 6700HQ at that specific task.
Gets you 11 Pi's ...
Gets you only 1 Intel CPU, no memory, motherboard, heatsink, fans.
Reminds me of the Celeron® Processor J3455... 10W rating on Intel's page. On average! Then when you see the real power usage under load for MB + CPU + 16GB memory, it's actually doing 35W.
Whereas the Pis are doing 3.7W max per piece. So even with 4 pieces to match the performance, you're still at half the wattage.
If Intel really scaled that well in power vs. performance, why aren't we seeing x86 phones everywhere?
It's the 3rd beowulf cluster I've ever built, the 2nd being from recycled PowerMacs and the 1st being built with Pentium IIs. It's the most powerful Beowulf I've ever built. It's also the smallest. It fits in my hand and it runs off USB.
Now that you know what it is, I'll tell you what I use it for.
The first problem I used it for was to approximate 1 billion digits of Pi. I started with Monte Carlo methods, but while they scale well, they're non-optimal. Eventually I managed to implement a Chudnovsky-type algorithm that worked despite the limitations of the Pi 3 head node and Pi Zero nodes.
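For anyone curious, the Monte Carlo approach is roughly this (a simplified single-node sketch in Python; on the cluster the sample count was just split across the nodes and the hit counts summed):

```python
import random

def monte_carlo_pi(samples):
    """Estimate pi by sampling random points in the unit square
    and counting how many land inside the quarter circle."""
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(monte_carlo_pi(1_000_000))  # ~3.14, but converges slowly
```

It embarrassingly parallelises, which is why it scales well, but the error only shrinks with the square root of the sample count - hence non-optimal for a billion digits.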
Most recently I wrote code to explore the Mandelbrot set. Using some custom software I knocked up, I set a start and finish x,y,z,w and h coordinate set and it renders individual frames which are then stitched together with ffmpeg.
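The per-frame work is the standard escape-time iteration; a minimal Python sketch of one frame's worth (the function and parameter names here are my illustration, not the actual custom software - the real tool farms frames out to the nodes and then stitches them with ffmpeg):

```python
def mandelbrot_frame(cx, cy, width, height, scale, max_iter=256):
    """Compute escape-time iteration counts for one frame,
    centred on (cx, cy) in the complex plane at the given scale."""
    frame = []
    for py in range(height):
        row = []
        for px in range(width):
            # Map pixel coordinates into the complex plane.
            c = complex(cx + (px - width / 2) * scale,
                        cy + (py - height / 2) * scale)
            z = 0j
            n = 0
            while abs(z) <= 2.0 and n < max_iter:
                z = z * z + c
                n += 1
            row.append(n)
        frame.append(row)
    return frame
```

Each frame is independent, so distributing them across nodes is trivial - the slow part is just that every Pi Zero grinds through its pixels one at a time.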
I need to rebuild the cluster because I made some booboos with how it was set up, and there have been substantial advances in the HAT configuration. I'm thinking of doing it over Christmas.
What I've found works best are:
* Learning about problems
* Learning about scaling problems
* Learning about scaling problems with solution constraints
* Learning about scaling problems with solution constraints over a very long period of time.
As long as you're not in a rush to finish calculations and don't mind picking something up, pecking at it and coming back later (like say, a week or so) the Pi is mostly fine. Although ISTR my final Pi approximation was in the order of minutes to run to a million digits.
I know other people host sites, I just like doing basic maths problems to improve my maths and algorithms knowledge.
 - https://clusterhat.com/
In terms of any performance benefit? No.
(Full disclosure: I am one of the authors)
You can build pipelines from S3, SQS, SNS, Lambda to do a lot of work very quickly in parallel with less overhead than similar self-hosted or self-managed solutions. You don't have to worry about spinning up extra VMs, or dealing with overprovisioning. It all just works.
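As a sketch of what one stage in such a pipeline looks like (a hypothetical SQS-triggered Lambda handler; the payload shape and the "work" step are made up for illustration - a real stage would write results onward to S3 or another queue via boto3):

```python
import json

def handler(event, context):
    """Hypothetical SQS-triggered Lambda: each record carries a JSON
    body; process it and return a summary. Scaling out is just SQS
    delivering batches to as many concurrent invocations as needed."""
    results = []
    for record in event.get("Records", []):
        payload = json.loads(record["body"])
        # Placeholder "work": in practice this is the parallelisable task.
        results.append(payload.get("item", "").upper())
    return {"processed": len(results), "results": results}
```

The point is that the fan-out, retries, and scaling all come from the queue/Lambda wiring, not from anything you provision yourself.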
(Apart from that, I think this is a misconception - Amazon seems to have convinced people you don't need a sysadmin anymore, whereas in fact once you start exploring the whole AWS infrastructure and its complexity, you quickly realize you still need a sysadmin's knowledge plus an understanding of how their services work and all their quirks.)
True serverless lets me offload that cost to AWS instead of having a sysadmin.
Whoever created them
> Application deployments
> testing, qa
QA / Customer Support
So, still no sysadmin.
Not saying it has to be this way, just saying originally that serverless can save you money.
In fact, we are in the process of moving all our APIs over to AWS Lambda w/ ES and it's going to save us 25-50% of our EC2 costs.
We might be able to do the same since we are rewriting in another language, but without AWS Lambda we would never have gotten that shot.
The "armhf tax" is that you tend to have to build your own images for stuff :( Then you need your own build infra (or "Heath Robinson" qemu builds) because Pis run out of memory building a lot of stuff... but mainly if C++ is involved, so YMMV.
That said, I got a rack of 8 pis doing nothing right now, so...
(unrelated: http://www.bitscope.com/product/BB04/ is handy if you want to rack a lotta Pis, not affiliated...)
There is probably a micro business for someone running a slick docker build system for armhf handling the qemu emulation or toolchain dirtiness "under the hood" in the cloud somewhere, on x86-64 boxes with a lot more than 1GB of RAM.
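The "under the hood" part is mostly just binfmt-registered QEMU plus a cross-platform image build; a rough sketch, assuming Docker's buildx plugin on an x86-64 box (the registry and image names are placeholders):

```shell
# Register QEMU binfmt handlers so the x86-64 host can run ARM binaries.
docker run --privileged --rm tonistiigi/binfmt --install arm

# Create a buildx builder and cross-build an armhf (arm/v7) image.
docker buildx create --use
docker buildx build --platform linux/arm/v7 \
    -t registry.example.com/myimage:armhf --push .
```

With plenty of RAM on the build host, the out-of-memory problem on the Pis simply disappears; the Pis only ever pull finished images.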
Then export a RAM disk from that machine and add the NBD disk as swap on the Raspberry. It would be slow, but builds would complete. Then you'd need only one low-to-moderate power machine (a PC presumably) in your Raspberry cluster, just with lots of RAM in the one PC.
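The setup would look roughly like this, assuming the standard nbd-server/nbd-client tools (sizes, hostname, and device names are illustrative, and both ends need root):

```shell
# On the PC: back a file with a RAM disk and export it over NBD.
mount -t tmpfs -o size=8G tmpfs /mnt/ramdisk
truncate -s 8G /mnt/ramdisk/swapfile
nbd-server 10809 /mnt/ramdisk/swapfile

# On the Raspberry Pi: attach the export and use it as swap.
modprobe nbd
nbd-client pc-hostname 10809 /dev/nbd0
mkswap /dev/nbd0
swapon /dev/nbd0
```

Swap over 100Mbit Ethernet (itself hanging off the Pi's USB2 bus) would be painful, but it beats the build dying outright.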
Could this work for speeding up builds of ARM images and then deploying locally?
I have a similar RPI rack collecting dust for the same reason, hence the question.
Now that I'm thinking about it, I'd like to see if going from the RPI's USB3->SATA adapter->M.2 adapter->16GB of Optane ($40+tax locally) would work, and if it did work (a big if), what performance is like.
Edit - Scratch that, I just remembered the Pi 3 is still USB 2.
As suggested here, the blog post only talks about installing Docker Community Edition (CE), which is published under the Apache 2.0 License. This is the official incarnation of the Docker product and is provided to offer:
- a consistent user experience across different Linux distributions
- strong security guarantees
- regular bug fixes and updates
Moby Project serves as the upstream for the entire Docker Product and includes all open source components that make up Docker, such as runc, containerd, notary, moby, infrakit, linuxkit, libnetwork, hyperkit, vpnkit, datakit, etc.
It's my understanding that the bulk of the components that make Docker work are fully open source, and some extra support and deployment related things are the only things that are commercially licensed. This would include Docker Swarm, as it's a part of Docker itself and not something separate. IANAL though.
I'm pretty sure the reason they go for plain Docker over Moby is sheer ease of use. Despite being a bit weird to understand under the hood, Docker is just dead simple to get up and running with, and using the clustering mode that's built right in is easier to teach new readers than Moby, which from its Github page is obviously designed for folks that are already rather comfortable with Docker. Straight from Moby's Github page:
"Moby is NOT recommended for: Application developers looking for an easy way to run their applications in containers. We recommend Docker CE instead."