(I work at Heroku) Do you have more details on what sucks? Anything we're not already tracking to fix in our public roadmap? https://github.com/heroku/roadmap/issues
Glad to hear there's a roadmap. In my PaaS thrashing I've got quite a few notes:
- no wildcard subdomain ssl
- poor metrics, nothing per instance
- poor dyno granularity (jumps from 2.5GB RAM straight to 14GB; no CPU/storage control)
- no transparency on what each dyno actually is
- external postgres replication disabled (deal breaker)
- no first class postgres metrics and access logs/alerts (kibana recommended, but not great)
- no external postgres backups (e.g. S3)
- no deletion locks on dynos and add-ons, esp databases!!!
- no warning when add-ons like databases are being deleted as a result of apps being deleted
- deleting an add-on also irreversibly deletes all replicas and backups
- painfully inconsistent naming for databases via their connection string env vars (see the sketch after this list)
- dyno types for build processes use Perf-M? Not configurable
- No lambda or github action style computing
- No scheduled scaling
- Unable to choose aws-us regions
- Hard 30s timeout limit
- Limited to 1 API key per user; no labels, configurable permissions, or usage logging
- no http/2
- frustrating enterprise offering. massive over-sell, near zero value
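To make the connection-string naming gripe concrete, here's a rough Python sketch of discovering whatever Postgres attachments have been injected into the environment. It assumes the usual pattern of a DATABASE_URL alias plus color-coded HEROKU_POSTGRESQL_<COLOR>_URL variables; the exact names on a given app will differ.

    import os
    from urllib.parse import urlparse

    # Rough sketch: collect every Postgres connection string that appears to be
    # attached. The primary database is usually aliased as DATABASE_URL;
    # additional attachments get color-coded names such as
    # HEROKU_POSTGRESQL_CRIMSON_URL (the color is assigned per attachment, so
    # the names here are only examples).
    def postgres_attachments():
        attachments = {}
        for name, value in os.environ.items():
            is_alias = name == "DATABASE_URL"
            is_attachment = name.startswith("HEROKU_POSTGRESQL_") and name.endswith("_URL")
            if is_alias or is_attachment:
                parsed = urlparse(value)
                attachments[name] = {
                    "host": parsed.hostname,
                    "port": parsed.port or 5432,
                    "dbname": parsed.path.lstrip("/"),
                }
        return attachments

    if __name__ == "__main__":
        for name, info in postgres_attachments().items():
            print(f"{name}: {info['host']}:{info['port']}/{info['dbname']}")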
> dyno types for build processes use Perf-M? Not configurable
What are you looking for here? Larger build dynos? Heroku provides the build service for free, so we use Perf-M dynos to get fast builds at a reasonable cost (for us).
I have IKEA Trådfri (which is ZigBee-based). While the hub does connect to the internet for software updates (using ethernet - no WiFi, and no WiFi password to update), there's no "cloud" component. You can't create an "IKEA account", even if you wanted to. If IKEA went out of business tomorrow, my switches and bulbs would keep working. I like that approach a lot.
It would be perfectly feasible to leave the IKEA hardware in the house if we were to move.
IKEA are one of the outliers in this space unfortunately.
Last year I bought some Osram Lightify bulbs, as here in Europe they are compatible with the Hue hub (which like Tradfri also works offline). I also got their own hub to update the bulbs, and it is absolutely useless. It says it is connected locally, but still it takes seconds for the bulbs to change state, and usually in a group only some of them change. The same bulbs paired with the Hue hub work perfectly.
It is WiFi only, and I've had it randomly lose the connection and need to be reset. I haven't tried it offline, but I imagine it's not going to work.
I only got smart bulbs because I wanted to be able to change the colour temperature and brightness; they are just connected to dumb switches. Tradfri seems superior for this now, as their controllers don't even need a hub to control lights.
Works great so far. The iOS app was janky at first (it would forget about the hub it was connected to) but seems better now. I don't really use the app though - a recent update added Alexa support, and that works great. I've mounted the remote control on the wall and either use that or Alexa to turn the lights on and off.
I don't want purple or green lights in my living room, so the fact that IKEA bulbs are limited to warmth and brightness is fine for me (I like to just keep them on "warm").
> I'm aware of runC but don't know if Docker images are realistically portable
(It wasn't clear from your comment if you were aware of this)
Since last year (Docker 1.11), Docker itself is no longer a runtime; it uses runC as the default runtime (https://blog.docker.com/2016/04/docker-engine-1-11-runc/)
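If you want to see that on a box you run, here's a minimal sketch using the Python Docker SDK (assuming `pip install docker` and a reachable daemon); the exact keys in the info payload can vary by daemon version:

    import docker  # the official Python SDK: pip install docker

    # Ask the local daemon which OCI runtimes it knows about and which one is
    # the default. On Docker 1.11+ the default should report as "runc"
    # (the fields themselves show up on reasonably recent daemons).
    client = docker.from_env()
    info = client.info()

    print("Default runtime:   ", info.get("DefaultRuntime"))
    print("Available runtimes:", list(info.get("Runtimes", {}).keys()))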
Docker CE and EE are based on the Docker open source project and work the same way. You can also use a Docker CE client to talk to a Docker EE host or swarm, and vice versa.
The Docker Cloud team is working on improvements to reduce segmentation too - stay tuned.
We take backwards compatibility seriously. If you encounter problems updating from one version of Docker to the next (whether from 1.13.1 to 17.03, or from 17.03 to the upcoming 17.04), please open an issue on docker/docker so that we can fix the incompatibility and improve our change process.
Quoting from the blog post:
> The Docker API version continues to be independent of the Docker platform version, and the API version does not change from Docker 1.13.1 to Docker 17.03. Even with the faster release pace, Docker will continue to maintain careful API backwards compatibility and deprecate APIs and features only slowly and conservatively. And Docker 1.13 introduced improved interoperability between clients and servers using different API versions, including dynamic feature negotiation.
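As a concrete illustration of that client/server independence, here's a small sketch with the Python Docker SDK; `version="auto"` asks the client to negotiate the highest API version both sides support, and the version fields shown are what newer daemons report:

    import docker  # pip install docker

    # Negotiate the API version instead of hard-coding one: the client queries
    # the daemon and settles on the newest version both sides understand.
    client = docker.from_env(version="auto")

    server = client.version()
    print("Daemon ApiVersion:   ", server.get("ApiVersion"))
    print("Daemon MinAPIVersion:", server.get("MinAPIVersion"))  # reported by newer daemons
    print("Client negotiated:   ", client.api.api_version)

    # Or pin an older API version explicitly; newer daemons keep serving older
    # API versions, which is where the backwards-compat promise actually lives.
    pinned = docker.from_env(version="1.24")
    print("Pinned API version:  ", pinned.api.api_version)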
Docker takes backwards compatibility so seriously they wholesale block the client and server from communicating with each other if they differ by a single minor version.
Docker takes backwards compatibility so seriously they've released multiple versions of a docker registry all with completely new APIs.
> Docker takes backwards compatibility so seriously they wholesale block the client and server from communicating with each other if they differ by a single minor version.
That has been fixed. Note that this limitation (although it turned out to be annoying, which is why we removed it) did not actually break reverse compatibility in the API. It just made the client excessively paranoid about reverse compatibility. In other words, the client didn't trust the stability of the daemon enough, even though the daemon in practice almost never broke compat.
> Docker takes backwards compatibility so seriously they've released multiple versions of a docker registry all with completely new APIs.
I'm not sure what you're referring to, but I will look into it. Is this still affecting you? Or is it a past problem you are still pissed off about?
With all due respect, this is exactly the attitude that will prevent enterprises from ever taking Docker seriously.
Why should enterprises trust you on backwards compatibility when longstanding issues with backwards compatibility were just fixed and then glossed over like this ("it never broke in practice because we forcibly made you update")? Docker has repeatedly made poor decisions with really poor optics, both in the open source community and with their product; this is just one example. Asking enterprises to just trust you now, while not even providing the support terms most of the enterprise world demands, does the exact opposite of inspiring trust.
Do you honestly not remember sunsetting the Python docker registry just a year and a half ago and then introducing a brand new golang registry product with an entirely different API? Because that's precisely what enterprises pay to avoid: they don't shell out absurd money for LTS versions to hit a constantly moving target. And please don't patronize me with "past problem"; some of us lowly end users of your product had to clean up that mess just to get day-to-day operations working again. Forgive me if I'm gun-shy.
Some of your claims about breaking backwards compatibility above are incorrect. I am trying my best to point that out without seeming dismissive of your overall point - which I think is that Docker can do more to improve stability and backwards-compat. I agree with that point.
pdeuchler expressed skepticism about Docker's current compatibility statements based on Docker's historical compatibility practices.
Suggesting that this could be "a past problem [he's] still pissed off about" comes across as tone-deaf when the underlying issue is Docker's credibility when it comes to backwards compatibility.
The quarterly ("stable channel") CE releases (17.03, 17.06 and so on) are supported for 4 months, and will not get new features during that period. EE quarterly releases have a 1 year support period, and also won't get new features.
During the support period, bug fixes will get back ported to those versions and released as "patch" releases (e.g. 17.03.1).
When installing, you can choose to install either the "stable" (quarterly) channel, or the "edge" (monthly) channel.
This seems to carefully avoid making any promises about the future, whereas SemVer _does_ make promises.
Also, picking a date for versioning is weird, as it doesn't contain any information other than when the changelog was set in stone. Too bad this decision was made and Docker chose not to value the stability of SemVer.
We most definitely WILL respect SemVer where it matters: the API versions.
Docker is a collection of many different components exposing many different interfaces. SemVer for the Docker version doesn't make sense, for the same reason it doesn't make sense for Ubuntu or Windows.
Cool, thanks for stressing this! I'm fine with not choosing SemVer, but a date holds no guarantees on backwards compat, nor any other useful info. I do get that you'd like to version the components the same way to stress that they're meant to be used together.
Thank you. I think you buried the lede at the bottom of the page though - the certification program is free currently.
On the other hand it looks like you have to purchase an EE license to test your code for certification: "Content that runs on the Docker Community Edition may be published in the Store, but will not be supported by Docker nor is it eligible for certification".
So looks like a minimum of $750 for one node EE license to play?
Echoing @shykes below[1], Docker had premium paid products before this launch, but we're trying to make that clearer and to simplify the product lineup.
Note that Docker CE is _just_ as good as the Docker you were using yesterday. In addition, the version lifecycle improvements are designed to get new features into Docker users' hands faster (with monthly Edge releases) and to improve maintainability by overlapping the maintenance windows of free Docker CE quarterly releases.
> Docker CE is _just_ as good as the Docker you were using yesterday
The concern is that the Docker CE we'll be using in 2020 will be missing useful features that Docker EE has, and which vendors who ship containers/Dockerfiles for their products will rely on.