
What you say and the math don't agree. Do it 10x and in your scenario you've already consumed 1TB of those "hundreds of TB of writes".


Most laptops don't have 128GB RAM, they're probably 16GB or 32GB.

Samsung warranties their 990 Pro SSD at 600TBW for 1TB, 1200TBW on 2TB, and 2400TBW on 4TB.

So, let's assume you've got a laptop with 32GB RAM and a 1TB SSD, and you hibernate it 3x per day. After 10 years you'll have used 350TB of your writes.

(32GB * 3) * 365.25d * 10y = 350.64TB
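The arithmetic above, as a quick sanity-check script (the RAM size, hibernation frequency, and warranty figure are the ones assumed in this thread, not measured values):

```ruby
# Back-of-the-envelope: total SSD writes from hibernating a laptop,
# compared against the drive's warrantied endurance.
ram_gb         = 32       # full RAM image written to disk per hibernate
hibernates_day = 3
years          = 10
warranty_tbw   = 600      # Samsung 990 Pro 1TB figure quoted above

total_tb = ram_gb * hibernates_day * 365.25 * years / 1000.0
puts format("%.2f TB written over %d years", total_tb, years)      # 350.64 TB
puts format("%.0f%% of the %d TBW warranty", 100 * total_tb / warranty_tbw, warranty_tbw)
```

So three hibernations a day for a decade consumes a bit over half the warrantied endurance of the 1TB model.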


> Cucumber

In every project that I've seen using it, it was a tremendous unnecessary waste of time in comparison to writing feature/acceptance tests just in Ruby. Gherkin is just such a poor language in terms of expressiveness compared to Ruby. Not to mention things like poor REPL experience when writing the scenarios.


What about MP3, MP4, H264 etc? Aren't they good examples of successful formats despite the necessary licensing fees etc?


They're successful mostly because (many) end users ignore patents; at least the ones that don't work at large companies with dedicated legal departments (and targets on their back because they have actual money).

Further, that's why Vorbis, VP9, AV1, etc. show less adoption even though they're free: for end users, the other formats have been baked into hardware that only works with the closed ecosystems, and the average person can ignore the patents and keep using that non-free ecosystem.


There are private copying levies - essentially, blank media carries a royalty paid to lobbying groups. Here's the US version: https://en.wikipedia.org/wiki/Private_copying_levy#United_St...

I don't know if this counts though. It's not crippleware or control, it's a pretty modest nominal tax on a physical good (between 2 and 3%) that otherwise already costs money.

You could say the market pivoted away from things covered by the law into the area of the exceptions but that's a little too neoclassical for actual human behavior - I'd need to see significant direct evidence to support the idea that a 2% cost increase was both passed on to the consumer and the consumer made significant purchasing decisions from that price signal delta.

I'm going to guess that in practice, the 2% difference was absorbed in, for example, cheaper packaging, scaled manufacturing, larger margins for QoS failures, or placement on cheaper areas of the distributor's shelf space, as opposed to being directly transmitted to the actual consumer price of, say, $9.80 versus $9.99.

Even if it was directly transmitted, the majority of consumers don't discriminate over such small deltas. But this is another topic entirely.

Regardless, that's the only thing I can find that disputes my initial claim.


Those technologies aren't examples of formats succeeding purely on their merits, so much as being established safe paths through a legal minefield.


They're established even in places that don't care about U.S. legal minefields, which seems to prove the parent's point.


Is it more expensive than keeping an office for entire year, though?


Yes


...how fancy are your offsites exactly? :D


What's the source of this claim, especially the "still" part? I could never easily find recent data that would confirm it.


> creating a container for any minor thing like npm + some js project

I am a backend engineer with a non-JS background, and to me that's a very useful thing. I really don't want to install a specific node version, yarn version, etc. to run some node/JS application. If I can just docker-compose up, that's much easier for me, and I know it doesn't affect anything globally. So removing it after playing around is easy and non-risky.
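A minimal sketch of that workflow; the service name, node version, and commands here are hypothetical, just to illustrate keeping the toolchain off the host:

```yaml
# docker-compose.yml -- hypothetical example: run a JS project
# without installing node or yarn globally.
services:
  app:
    image: node:20            # the pinned node version lives here, not on the host
    working_dir: /app
    volumes:
      - .:/app                # mount the project source into the container
    command: sh -c "yarn install && yarn start"
```

Then `docker-compose up` runs it and `docker-compose down` tears it down, leaving nothing installed system-wide.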


I think the point he was making is that too many people and projects are treating containers as a sort of "end all", abandoning local system dependencies by shipping applications with their own bundled versions of libraries.

My biggest problem with this shift is that people run their distribution updates while containers still ship with vulnerable versions of libraries. So a server admin updates his servers thinking he's all good. No, no he's not, because $DEV hasn't updated their container dependencies, and now that server gets pwned, because here's yet another layer of dependencies I have to deal with on top of the ones I'm maintaining with the OS.

So no, please don't containerize everything just because you can, because more often than not the container you never intended for production WILL be used in production, because someone was lazy and didn't want to deal with the dependencies.

Additionally, to your point: specific node versions? Specific yarn versions? Is it really that unreasonable to demand that libraries be backwards compatible, with warnings when a method is being deprecated? I can understand it for major revision changes, but seriously. I should be able to run the latest version of node and have that code still work, with warnings telling me that the code I've written will eventually stop working in later versions due to changes in the underlying setup.

My point being, don't lean on containers as a crutch for badly coded software. Operating systems have moved towards rolling releases; it's about time libraries and programming languages followed suit.


Thanks, you got my point!

And might I add, some people are shipping containers with full unoptimized Ubuntu (or other large distros) for running projects that are kilobytes in size


In what scenario does compromising a container pwn the server?

If the server is running just for that container's application, then the server admin knows it is there and needs updating.


Container escapes are a very real security threat that needs to be taken seriously, just like you wouldn't give all your users a remote shell on your machines even if they are locked down. Just because they can't get anywhere doesn't mean they can't find ways out. If an attacker is on the machine and can run whatever code they want, it's not a matter of if, it's when.

The admin may know what applications they are running in the container, but I can bet you they don't know every library that container ships, and I hold very little faith that these admins will maintain a laundry list of every single container they run, with all the different versions of libraries each container brings, and constantly check that list for CVEs. This problem grows with every single container you bring onto that server.

Edit:

I love containers, don't get me wrong. I've seen first-hand how incredible it is to be able to set up an application inside a container and then remove that application and have no residual packages containing the libraries and dependencies that program needed. I get it. I just don't like how dependent we've become on them.

Now I'm a network engineer first, and server admin second. I don't want to spend a majority of my time pinging container maintainers to update their dependencies when an updated version of a library comes out. I expect to be able to update my local copy of that library and get on with my day and not have to worry about when this library is going to get patched in each of those containers.


I agree with every word, well stated.


Jira is slow and Confluence too. I 120% agree. It's the biggest issue for me. The longer the page in Confluence the worse it gets. I use it for weekly meeting notes and every 2 quarters I need to start a new document because editing the current one causes the CPU to go brrrrr.


This could end up on the pile of complaints.

https://ifuckinghatejira.com/


Why not use a page per meeting?


Can anyone comment on how to do it without race conditions? I described the problem here: https://github.com/RailsEventStore/rails_event_store/issues/...


At SurveySolutions we ended up using a shared lock during event readout from Postgres. This solution is not the best, but it's good enough for us. https://github.com/surveysolutions/surveysolutions/blob/5bc9...


Thanks for answering, I appreciate it. I found good documentation at https://mariadb.com/kb/en/lock-in-share-mode/ but I am not sure how it works in Postgres: https://www.postgresql.org/docs/9.1/explicit-locking.html

It says "This mode protects a table against concurrent data changes" but it does not elaborate how. Is it similar in consequences to what MariaDB describes?


Basically, a Postgres SHARE lock ensures there is no active transaction on the table before executing a query over it. This is a table-wide lock, and as I said it's not the best solution, as it will slow down other event producers/consumers a bit.

In our case, the active data export process will slow down the main application a bit while reading new events.

The best solution would be using an external tool to handle streams, like Event Store, Kafka, or the new RabbitMQ Streams. But we prefer to stick with Postgres.
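The shared-lock readout described above might look roughly like this; the `events` table name and the cursor value are hypothetical, not from SurveySolutions' actual code:

```sql
-- Hedged sketch: read events without racing concurrent writers.
BEGIN;
-- SHARE mode conflicts with ROW EXCLUSIVE, so this waits for any
-- in-flight writing transactions to finish and blocks new INSERTs,
-- UPDATEs, and DELETEs on the table until we commit.
LOCK TABLE events IN SHARE MODE;
SELECT * FROM events WHERE id > 12345 ORDER BY id;
COMMIT;  -- releases the lock, writers proceed
```

This is why it slows down other producers: every writer queues behind the lock for the duration of the readout transaction.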


RubyMine is fantastic most of the time. But when projects use too many DSLs and too much 'magic', nothing helps.


Exactly the same experience. I'm hearing-impaired and use subtitles whenever I can. The generated ones are not good at all. The more unusual the wording (i.e. full of technical terms or custom names), the worse it is.

