In every project I've seen using it, it was a tremendous, unnecessary waste of time compared to writing feature/acceptance tests directly in Ruby. Gherkin is just such a poor language in terms of expressiveness compared to Ruby, not to mention things like the poor REPL experience when writing the scenarios.
They're successful mostly because (many) end users ignore patents; at least the ones who don't work at large companies with dedicated legal departments (and targets on their backs, because they have actual money).
Further, that's why Vorbis, VP9, AV1, etc. see less adoption even though they're free: for end users, the other formats are already baked into hardware that only works with the closed ecosystems, and the average person can ignore the patents and keep using that non-free ecosystem.
I don't know if this counts, though. It's not crippleware or control; it's a pretty modest nominal tax (between 2 and 3%) on a physical good that already costs money.
You could say the market pivoted away from things covered by the law and into the area of the exceptions, but that's a little too neoclassical for actual human behavior. I'd need to see significant direct evidence that a 2% cost increase was both passed on to the consumer and that consumers made meaningful purchasing decisions from that price-signal delta.
I'm going to guess that in practice the 2% difference was absorbed elsewhere: cheaper packaging, scaled manufacturing, larger margins allowed for QoS failures, or placement on cheaper areas of the distributor's shelf space, rather than being passed straight through to the consumer price of, say, $9.80 versus $9.99.
Even if it was passed through, the majority of consumers aren't that price-sensitive to such small deltas. But this is another topic entirely.
Regardless, that's the only thing I can find that disputes my initial claim.
> creating a container for any minor thing like npm + some js project
I am a backend engineer with a non-JS background, and to me that's a very useful thing. I really don't want to install a specific node version, yarn version, etc. to run some node/JS application. If I can just docker-compose up, that's much easier for me, and I know it doesn't affect anything globally. So removing it after playing around is easy and non-risky.
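For illustration, a minimal sketch of what I mean (the image tag and commands are assumptions, not from any particular project):

    # docker-compose.yml
    services:
      app:
        image: node:20          # the pinned node version lives here, not on my host
        working_dir: /app
        volumes:
          - .:/app              # mount the project from the host
        command: sh -c "yarn install && yarn start"

Then docker-compose up runs it, and docker-compose down (plus deleting the image) removes every trace from the host.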
I think the point he was making was that too many people and projects are turning to containers as a sort of "end all", abandoning local system dependencies by shipping applications with their own bundled versions of libraries.
My biggest problem with this shift is that people are running their distribution updates while containers are still shipping with vulnerable versions of libraries. So a server admin updates his servers thinking he's all good. No, no he's not, because $DEV hasn't updated their container's dependencies, and now that server gets pwned, because here's yet another layer of dependencies I have to deal with on top of the ones I'm maintaining with the OS.
So no, please don't slap a container on everything just because you can, because more often than not the container you never intended for production WILL be used in production, because someone was lazy and didn't want to deal with the dependencies.
Additionally, to your point: specific node versions? Specific yarn versions? Is it really that unreasonable to demand that libraries be backwards compatible, with warnings saying a method is being deprecated? I can understand it for major revision changes, but seriously: I should be able to run the latest version of node and have that code still work, with warnings telling me that the code I've written will eventually stop working in later versions due to changes in the underlying setup.
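For what it's worth, Node already has a mechanism for exactly this; a minimal sketch, with hypothetical function names:

    // Hypothetical library code: keep the old entry point alive and warn,
    // instead of breaking existing callers outright.
    function newApi(args) {
      return { ok: true, args }; // stand-in for the real implementation
    }

    function oldApi(args) {
      process.emitWarning(
        'oldApi() is deprecated and will be removed in a future major version; use newApi()',
        'DeprecationWarning'
      );
      return newApi(args); // delegate so existing callers keep working
    }

    oldApi('x'); // still works, but prints a DeprecationWarning to stderr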
My point being: don't lean on containers as a crutch for badly coded software. Operating systems have moved towards rolling releases; it's about time libraries and programming languages followed suit.
And might I add, some people are shipping containers with a full, unoptimized Ubuntu (or other large distro) to run projects that are kilobytes in size.
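A hedged sketch of the difference (image tags and the app name are placeholders):

    # FROM ubuntu:22.04           # drags in a full distro userland for a tiny app
    FROM node:20-alpine           # slim Alpine-based image, a fraction of the size
    COPY app.js .
    CMD ["node", "app.js"]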
Container escapes are a very real security threat that needs to be taken seriously, just like you wouldn't give all your users a remote shell on your machines even if they're locked down. Just because they can't get anywhere doesn't mean they can't find a way out. If someone is on the machine and can run whatever code they want, it's not a matter of if, it's when.
The admin may know what applications they're running in the container, but I can bet you they don't know every library that container ships with, and I have very little faith that admins are going to pin a laundry list of every single container they run, along with all the different versions of the libraries each container brings, and constantly check that list for CVEs. The problem grows with every single container you bring onto that server.
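To be fair, image scanners can automate part of this; a sketch assuming the Trivy CLI is installed (the image name is hypothetical):

    trivy image registry.example.com/some-app:latest

It lists known CVEs in the image's OS packages and application dependencies, but someone still has to run it, and re-run it, for every image on the box, which is exactly the laundry-list problem above.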
Edit:
I love containers, don't get me wrong. I've seen first-hand how incredible it is to set up an application inside a container and then remove it without any residual packages containing the libraries and dependencies that program needed. I get it. I just don't like how dependent we've become on them.
Now, I'm a network engineer first and a server admin second. I don't want to spend the majority of my time pinging container maintainers to update their dependencies whenever a new version of a library comes out. I expect to be able to update my local copy of that library and get on with my day, not worry about when the library is going to get patched in each of those containers.
Jira is slow, and Confluence too. I 120% agree: it's the biggest issue for me. The longer the page in Confluence, the worse it gets. I use it for weekly meeting notes, and every two quarters I have to start a new document because editing the current one makes the CPU go brrrrr.
It says "This mode protects a table against concurrent data changes" but it does not elaborate how. Is it similar in consequences to what MariaDB describes?
Basically, Postgres's SHARE lock ensures there is no active write transaction on the table before the query executes, and blocks new writes until the transaction commits. It's a table-wide lock, and as I said it's not the best solution, as it will slow the other event producers/consumers down a bit.
In our case, the active data-export process will slow the main application down a bit while it's reading new events.
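A minimal sketch of that pattern, assuming a hypothetical events table with an id sequence:

    BEGIN;
    LOCK TABLE events IN SHARE MODE;                  -- wait for in-flight writes, block new ones
    SELECT * FROM events WHERE id > 42 ORDER BY id;   -- concurrent reads are still allowed
    COMMIT;                                           -- lock released, producers resume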
The best solution would be to use an external tool built to handle streams, like Event Store, Kafka, or the new RabbitMQ Streams, but we prefer to stick with Postgres.
Exactly the same experience. I'm hearing-disabled and use subtitles whenever I can. The generated ones are not good at all: the more unusual the wording (i.e. full of technical terms or custom names), the worse it gets.