For most companies, the reasons you listed are exactly why they start with a clean, fresh new Version 2.
New system, new language, keep the good parts, rewrite the bad parts.
And a new language makes it more interesting for "younger" developers who started with e.g. Rust / Go / whatever. (That doesn't mean the code will be better.)
It's not really aimed at people managing other people's Trello boards. It's for anyone managing multiple boards; that could be an agency with boards for each of its clients. But those aren't the clients' boards, they are part of the agency's workflow and may be private from the clients.
There are other needs for things like this - I'd tried to build a system to get a dashboard view of multiple issue trackers, because as a consultant, I'm often having to deal with multiple client environments, and switching between multiple issue trackers is a pain. It was harder than I expected, and focusing just on value in Trello is a neat idea.
I'm also a consultant, and brought all my clients into Trello, at least for user acquisition projects (what I consult on). Having everything in one system makes life significantly easier. However, I still have to dig into each board to see what's happening, and even then there's no status dashboard; just an activity feed.
The obvious question is: what's the real problem with that?
A container is a container; as long as Docker itself has no bugs, a container can only harm its own content.
Most problems exist in the custom software running inside the container (e.g. web services with bugs, backdoors, ...), and those are a problem for Docker, VMs, real servers, whatever, too.
The real problem is the interoperability of different containers: if you link all your data, without any audit, to another container, you can have a problem, but that problem is not Docker-specific.
>> A container is a container; as long as Docker itself has no bugs, a container can only harm its own content.
Presumably a container has network access of some sort? Malicious code could start probing and attacking anything exposed that way.
>> those are a problem for Docker, VMs, real servers, whatever, too.
The implication is that you wouldn't get into this situation with a 'real server' so easily, because you wouldn't just download an image and run it without having an update/patch strategy, or without much more idea of what's going on inside it.
But you assume that a container HAS full network access.
A firewall must be configured, but a firewall must be configured for a VM too.
My point is that there is not such a huge difference for production systems.
>> But you assume that a container HAS full network access.
No, I'm presuming it has some sort of network access. A malicious container could (for instance) still probe other containers for vulnerabilities, serve malware, etc., without full network access.
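To illustrate the point above: any network access at all is enough to probe for reachable services. A minimal sketch (hypothetical Python, not tied to any specific container tooling) of the kind of TCP probe malicious code could run from inside a container:

```python
import socket

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: open a listener on an ephemeral local port, then probe it.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
open_port = listener.getsockname()[1]

print(probe("127.0.0.1", open_port))  # True: the port is reachable
listener.close()
```

Nothing here needs "full" network access; being able to open outbound connections to neighbouring containers is sufficient.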
>> A firewall must be configured, but a firewall must be configured for a VM too. My point is that there is not such a huge difference for production systems.
If you're downloading VM images from somewhere and running them without checking what's in them you'll run into the same problem, sure.
The problem being pointed out here is that when applications are bundled outside the purview of a packager like Debian, you -
- don't have as much trust in the origin of the app
- don't have an easy way to keep up on library patch levels etc. for security
Not true if the software in your VM or real server is managed by a package manager and comes from a place that issues security updates, patches etc.
One of the criticisms in the article is that much of what's going on now, either with containerisation or weird build systems like Hadoop's, misses out on this.
Problem: Meteor relies on a specific MongoDB feature, i.e. polling the DB for any change. I heard there is experimental support for PG, but it seems to rely on a complex hack using triggers... So the issue isn't really about Mongo queries, but whether it is possible to track DB edits from third parties or not.
That specific MongoDB feature is "tailing the oplog", where the oplog is the capped collection with the idempotent commands that represent the changes in the database. This is what Meteor uses to receive (be pushed) changes (not polling).
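A simplified sketch of what such oplog entries look like and why their idempotency matters: replaying the same entries leaves the state unchanged. The field names (`op`, `ns`, `o`, `o2`) loosely follow MongoDB's real oplog format, but this is an illustration, not the actual wire format:

```python
# Simplified model of MongoDB-style oplog entries. Updates are stored as
# absolute $set values (not deltas), which is what makes replay idempotent.

def apply_entry(collection: dict, entry: dict) -> None:
    op = entry["op"]
    if op == "i":                       # insert: full document keyed by _id
        doc = entry["o"]
        collection[doc["_id"]] = doc
    elif op == "u":                     # update: absolute field values
        target = collection[entry["o2"]["_id"]]
        target.update(entry["o"]["$set"])
    elif op == "d":                     # delete by _id
        collection.pop(entry["o"]["_id"], None)

oplog = [
    {"op": "i", "ns": "app.users", "o": {"_id": 1, "name": "ada"}},
    {"op": "u", "ns": "app.users", "o2": {"_id": 1},
     "o": {"$set": {"name": "grace"}}},
]

state = {}
for entry in oplog:
    apply_entry(state, entry)
once = dict(state)

for entry in oplog:                     # replaying the log is harmless
    apply_entry(state, entry)

print(state == once)  # True: the entries are idempotent
```

A consumer tailing such a log (as Meteor does) can therefore safely re-read entries after a reconnect.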
Certainly, logical decoding provides the necessary infrastructure to implement it (and also normal tailable cursors): thank you Andres, really nice work! :) But some work is also needed to transform the representation you are using in PostgreSQL into MongoDB's oplog entries.
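That transformation step could be sketched roughly as follows. This is a hypothetical example, not ToroDB's actual code; the input shape is loosely modeled on the JSON produced by the wal2json output plugin, and the output mimics simplified oplog entries:

```python
# Hypothetical mapping from a PostgreSQL logical-decoding change
# (wal2json-style dict) to a MongoDB-oplog-like entry.

def pg_change_to_oplog(change: dict) -> dict:
    kind = change["kind"]
    ns = f'{change["schema"]}.{change["table"]}'
    cols = dict(zip(change.get("columnnames", []),
                    change.get("columnvalues", [])))
    if kind == "insert":
        return {"op": "i", "ns": ns, "o": cols}
    if kind == "update":
        key = dict(zip(change["oldkeys"]["keynames"],
                       change["oldkeys"]["keyvalues"]))
        return {"op": "u", "ns": ns, "o2": key, "o": {"$set": cols}}
    if kind == "delete":
        key = dict(zip(change["oldkeys"]["keynames"],
                       change["oldkeys"]["keyvalues"]))
        return {"op": "d", "ns": ns, "o": key}
    raise ValueError(f"unsupported change kind: {kind}")

change = {"kind": "insert", "schema": "public", "table": "users",
          "columnnames": ["id", "name"], "columnvalues": [1, "ada"]}
print(pg_change_to_oplog(change))
# {'op': 'i', 'ns': 'public.users', 'o': {'id': 1, 'name': 'ada'}}
```

The real work is in the details this sketch skips: type mapping, transaction boundaries, and primary-key handling for tables without one.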
This is definitely what we are using in ToroDB to emulate it (currently under development).
Not enough. It's more fun to create software, but documentation is the part that no one likes.
It is absolutely necessary, in particular when others work with your code.
And we have many stages of documentation: the project documentation (what is it, what does it do, how, ...), the code docs, the REST API docs. And it gets really complex as development continues.
I think that is one reason for the explosion of small JS projects on npm / GitHub in the last few years.