I think the author is implicitly describing the paradigm shift from software and hardware abstractions to service abstractions. We used to make computers by soldering electronics together, then by assembling cases and components, then by buying premade servers, then by renting cloud capacity, and now we're starting to rent everything as services.
The cloud is about going from capex to opex, and the "capex" now is the initial work needed to define your own architecture and stack for every project (before you get to work on the actual project, i.e. the differentiating part). Amazon is eliminating most of this by offering building-block services that fit together with little hassle.
So the challenge for open source is how to move on to this era of service abstraction. It's no longer enough to just provide an NPM package or a configure script.
I think Docker is in a good position to bring us there, but it's currently stuck at the stateless container level. Something needs to evolve so that launching a scalable and auto-maintainable database cluster along with a connected web application cluster is as easy with Docker as it is by renting a few Amazon services.
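For illustration, here's a minimal sketch of what that single-container level looks like with the Docker SDK for Python (the "myorg/webapp" image and the names are made up). You get one container of each on a shared network, but scaling, failover and state management are still entirely on you, which is exactly the gap:

    # Minimal sketch with the Docker SDK for Python (pip install docker).
    import docker

    client = docker.from_env()
    client.networks.create("appnet")

    # One database container; a real cluster would need replication,
    # volumes and failover on top of this.
    client.containers.run(
        "postgres:13",
        name="db",
        network="appnet",
        environment={"POSTGRES_PASSWORD": "secret"},  # placeholder credential
        detach=True,
    )

    # One web application container, wired to the database by hostname.
    client.containers.run(
        "myorg/webapp",  # hypothetical application image
        network="appnet",
        environment={"DATABASE_URL": "postgresql://postgres:secret@db:5432/postgres"},
        ports={"8000/tcp": 8000},
        detach=True,
    )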
>The cloud is about going from capex to opex, and the "capex" now is the initial work needed to define your own architecture and stack for every project (before you get to work on the actual project, i.e. the differentiating part). Amazon is eliminating most of this by offering building-block services that fit together with little hassle.
Maybe they are eliminating it for themselves, but if they are eliminating most of that for you, then that contradicts your prior thesis. You aren't defining your own architecture then; you are just conceding to becoming a part of Amazon's architecture. Amazon will lock you into their ecosystem as much as possible. As Amazon's ecosystem fills up with services, there will be little to differentiate your service from any other Amazon-hosted/run service.
A much better idea would be to have architectures that don't depend on any specific vendor or company, which are deployable everywhere.
If you want to run your own infrastructure from the ground up, maybe. OpenStack Ironic (bare-metal deploys) + Magnum (container deploys with Kubernetes) is exciting in that area, but as someone who built and ran an OpenStack cloud, I wouldn't recommend it. For most people, something like ECS/GKE/bare Kubernetes on a cloud provider is a decent step in that direction.
Terrible usability, and it mainly targets the lower levels of the stack, not the level of actually deploying applications. It is far easier to get a container stack running than to bother with OpenStack.
IMHO the problem with many competing social networks (Tsu, Ello, Diaspora, etc.) is that they try to be exactly like Facebook. So in practice they become crappy subsets of Facebook. Then nobody has much interest in using them, because Facebook is a much better Facebook.
The more interesting competitors tend to pick a narrower use case (say, sending secret pictures) and become really good at it.
One thing I particularly like about CoAP is the /.well-known URI support (http://tools.ietf.org/html/rfc7252#section-7). It lets you implement services with easily discoverable data endpoints. E.g. IoT sensor devices can be made self-documenting in a lightweight manner.
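For example, here's a sketch of discovering a device's endpoints with the aiocoap Python library (the device address is made up); the response comes back in CoRE Link Format (RFC 6690):

    # Sketch: CoAP resource discovery via /.well-known/core, using aiocoap.
    import asyncio
    from aiocoap import Context, Message, GET

    async def discover(host):
        ctx = await Context.create_client_context()
        request = Message(code=GET, uri=f"coap://{host}/.well-known/core")
        response = await ctx.request(request).response
        # Payload is CoRE Link Format, e.g. </sensors/temp>;rt="temperature-c"
        return response.payload.decode()

    print(asyncio.run(discover("192.168.1.50")))  # hypothetical device address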
I wish Lambda could listen on arbitrary TCP ports and respond e.g. to HTTP requests or MQTT messages. In its current form it seems pretty limited for any scenario where generic clients initiate the requests, such as collecting device data and metrics. As I understand it, clients currently have to speak AWS-specific protocols to submit events/data.
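For instance, submitting a single data point means going through the AWS SDK and its signed API calls, roughly like this (sketched with boto3; the function name and payload are made up), rather than the device just opening a plain HTTP or MQTT connection:

    # Sketch: invoking a Lambda function with boto3 (AWS SDK for Python).
    import json
    import boto3

    lam = boto3.client("lambda", region_name="us-east-1")
    resp = lam.invoke(
        FunctionName="collect-device-metrics",  # hypothetical function
        Payload=json.dumps({"device_id": "sensor-01", "temp_c": 21.5}),
    )
    print(json.load(resp["Payload"]))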
The impression I get is that it is not really realtime: I'm guessing all requests get queued and then scheduled onto a suitable host soon after, but perhaps not within the time needed to respond to a TCP request.
We are also working hard on alternative Flocker backends to support other use cases. For example, we have a block device backend to OpenStack Cinder in the works (and afterwards AWS EBS), which is more suited to high availability than migration - watch this space!
Angular.js should be considered a tool that solves certain problems well and others less well.
In my book, Angular (when used with Yeoman's generator-angular-fullstack) is a high-productivity tool that immediately generates a fully functional, well-structured MEAN application that's very easy to extend. You get something useful running in less than an hour. That's perfect for prototypes and internal enterprise apps where productivity is more valuable than performance or SEO.
OTOH if you're working on a big web app that will take months to develop, you might as well spend more time building the foundation, and also choose a framework that provides server-side rendering, high DOM performance and things like that.
I've found it depends on the depth of 'flow' you're in. When trying to force yourself to get something done (perhaps using various productivity techniques), eliminating distractions can be quite beneficial. OTOH when actually enjoying what you're doing and being naturally focused, the distractions don't really matter and can even add enjoyment to the work.
Personally I can detect this easily by trying to watch TV or movies at the same time as I'm working. If it feels good, then I'm naturally focused on the work. If it feels distracting, then it's probably best to eliminate all distractions.
Docker on ARM is cool, but it also adds some complexity and heaviness.
- You can't access some things from inside containers, e.g. Bluetooth LE.
- Processes that run in separate containers don't share library pages in RAM (each image carries its own copies of the libraries), so they weigh more. At least I believe so.
I recently built 5 Raspberry Pis for IoT data collection, and they were slow and unreliable when running multiple Node.js apps in Docker containers. Moving the Node apps to run under plain Arch Linux systemd made the Pis noticeably more reliable and efficient.