
Some of the main benefits of shipping every day are:

  1. Develop a system where you can turn code paths on and off with a toggle (see the sketch below).
  2. Become better at architecture. Learning to split a task into smaller chunks that are easier to reason about, both overall and individually.
  3. Learn to do multi-phase increments (rolling a change out in stages rather than all at once).
  4. Develop an automatic deploy/rollback system.
All are good practices, and essential if you need HA. It also builds velocity into your daily work, so if you need to deploy extra logging, a bugfix, etc., you can do it in minutes and not hours/days/weeks.
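
A minimal sketch of point 1, assuming flags are read from environment variables; a real setup would typically poll a config service so a toggle can be flipped at runtime without a redeploy. The flag and the pricing functions here are made up for illustration:

  import os

  def flag_enabled(name: str, default: bool = False) -> bool:
      # Hypothetical helper: reads FEATURE_<NAME> from the environment.
      # A real system would poll a config service so the toggle can be
      # flipped without restarting anything.
      value = os.environ.get(f"FEATURE_{name.upper()}")
      if value is None:
          return default
      return value.strip().lower() in ("1", "true", "on")

  def price_order(order: dict) -> float:
      # The new code path ships dark behind the toggle; the old path
      # stays as the safe fallback until the flag has baked in production.
      if flag_enabled("new_pricing"):
          return new_pricing_engine(order)
      return old_pricing_engine(order)

  def new_pricing_engine(order: dict) -> float:
      return sum(item["price"] for item in order["items"]) * 0.95

  def old_pricing_engine(order: dict) -> float:
      return sum(item["price"] for item in order["items"])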

Can you do all of that and ship on a weekly basis? Absolutely, I just haven't met anyone who does that.


> Can you do all of that and ship on a weekly basis? Absolutely, I just haven't met anyone who does that.

I'd go as far as to claim that you cannot, at least not at a level close to a highly available system updated with full CD without any gating or manual intervention.

Weekly deployments implicitly lead to merge trains, large PRs, and team members sitting on their asses waiting to approve changes. Each deployment is huge and introduces larger changes, which carry a larger risk profile. As deployments are a rare occurrence, teams have no incentive to invest in automated tests or mechanisms to revert changes, which leaves you with a brittle system and pressure to be conservative with the changes you do make. And because deployments are rare, a broken deployment becomes a major issue and a source of pressure.

To understand this, we only need to think about the work that it takes to support frequent deployments. You need to catch breaking changes early, so you feel the need to increase test coverage and invest in TDD or TDD-like development approaches. You also feel the need to have sandbox deployments available and easy to pull off. You feel the need to gate deployments between stages if any automated test set breaks. You feel the need to improve testing so that prod does not break as often. If prod breaks often, you also feel the need to automate how deployments are rolled back and changes are reverted. You also feel the need to improve troubleshooting, observability and alarming to know if and how things break, and to improve development workflows and testing to ensure those failures don't happen again.
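
As a rough sketch of the gating and automated rollback described here, this is what the core loop of such a pipeline tends to look like. The stage names, thresholds and stub functions are all invented; in practice they map onto whatever your CI/CD and monitoring tooling provide:

  import random

  STAGES = ["sandbox", "staging", "prod"]

  # Stubs standing in for real CI/CD and monitoring calls.
  _deployed = {stage: "v41" for stage in STAGES}

  def deploy(stage: str, version: str) -> None:
      print(f"deploying {version} to {stage}")
      _deployed[stage] = version

  def rollback(stage: str, version: str) -> None:
      print(f"rolling {stage} back to {version}")
      _deployed[stage] = version

  def run_tests(stage: str) -> bool:
      # Stand-in for the automated test set that gates promotion.
      return random.random() > 0.1

  def error_rate(stage: str) -> float:
      # Stand-in for querying monitoring after the deploy.
      return random.uniform(0.0, 0.02)

  def release(version: str) -> bool:
      for stage in STAGES:
          previous = _deployed[stage]
          deploy(stage, version)
          # Gate: only promote to the next stage if tests pass and the
          # error rate stays in budget; otherwise roll back and stop.
          if not run_tests(stage) or error_rate(stage) > 0.01:
              rollback(stage, previous)
              print(f"{version} failed in {stage}, rollout stopped")
              return False
      return True

  if __name__ == "__main__":
      release("v42")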

You get none of this if your project deploys at best 3 or 4 times a month.

Another problem that infrequent deployments cause is team impedance. You need more meetings to prepare for things which would otherwise be automated away. You have meetings for deployments to specific non-prod stages, you have meetings to plan rollback strategies, you have meetings to keep changelogs, you have meetings to finalize release versions, you have meetings to discuss whether a change should go in this or that release cycle, etc. Bullshit all around.


If your tests only run as part of deploy, you've already lost.

So long as you have a good set of integration tests and the right staging environments to run those tests against, when you ship doesn't matter. I've worked on multiple teams with test suites that give very high confidence in which changes are good and which are not; most tests were fast and could run before code was merged and block submission. The expensive integration tests ran against a staging environment, and when they started failing it was very quick to identify the range of CLs involved and understand what to roll back or fix.

For most of my time there those services only pushed to prod twice a week - sometimes less if the integration tests were failing on the day of one of the deploys. Not every day, not every commit. And yet we had all of those benefits that you claim are impossible: no queue of idle people waiting to approve changes, no "huge" deployments, infrastructure for automated tests, and more. There were no meetings - those two weekly rollouts were entirely automated, and unless an abnormal error rate was detected during the canary they proceeded entirely without human involvement.
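
For a concrete picture of that canary gate, something along these lines, comparing the canary's error rate against the stable fleet; the thresholds and function name are invented, and the real infrastructure naturally reads live monitoring rather than raw counters:

  def canary_healthy(canary_errors: int, canary_requests: int,
                     stable_errors: int, stable_requests: int,
                     max_ratio: float = 2.0, min_requests: int = 1000) -> bool:
      # Hypothetical canary gate: keep the rollout going only while the
      # canary does not error noticeably more than the stable fleet.
      if canary_requests < min_requests:
          return True  # not enough traffic yet to judge; keep watching
      canary_rate = canary_errors / canary_requests
      stable_rate = stable_errors / stable_requests
      return canary_rate <= max_ratio * max(stable_rate, 0.001)

  # Example: canary at 0.5% errors vs stable at 0.4% -> rollout proceeds.
  print(canary_healthy(50, 10_000, 400, 100_000))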

The world didn't fall down. Customers didn't ask us where things were. Oncall appreciated that there weren't constant builds at risk of breaking and falling over - they only had to monitor rollouts twice a week and otherwise could focus on failing tests, logs, and alerts.


It's unfortunately often people who have recovered from serious traumatic experiences who are the most empathetic. They know what it is like to be completely lost with no one to help.

Children are too, but somewhere along the way while growing up, many unlearn that.


That is fine to do, if you prepare the child. I mean, start by taking all the steps with them, gamify it, and at some point I'm sure most children would be proud to do it.

But just doing it with minimal preparation... That's bad parenting. I doubt that experience alone is the reason for being fiercely independent - that usually (and unfortunately) comes from being unable to rely on your caretakers for your needs as a child.


I agree. And it's worth remembering that Scrum is a starting point. The entire point is to adapt the process such that it fits the team/product/organisation, through retrospectives.


If you have the tools to work with metal, you can use those tools with few, if any, modifications and do the same thing with nylon.


Abstract math (type B) is a very rigorous discipline that underpins the other kind used by engineers (type A). Type A is indeed learned by repetition along with understanding. It is very important to simply do the math to become better at it and understand what you can expect from your calculations.

Type B, on the other hand, is far more about understanding. You will never understand the theory of a mathematical space and how to apply it by simple repetition. That is a far more theoretical and creative endeavour. You need to learn it and apply it to understand it. I suppose you could call the process of applying it some kind of repetition, but in my opinion the insight comes from applying it to concepts you already know.

A formal learning path is a very good idea, because people with more knowledge know which order you can progress in, so that you actually apply your knowledge in a natural way and build on previous learnings. And it is definitely a huge help that teachers can guide your learning when you are stuck.


Proofs in abstract algebra, for example, require the ability to quickly and correctly manipulate symbols on paper (using already discovered rules/lemmas/theorems).

The repetitive practice is in this manipulation of symbols. It takes a long time and deliberate practice to learn this skill. You just practice by doing symbol manipulation in different contexts, instead of doing the same thing over and over again like multiplication tables, because your symbol manipulation abilities have to be general [1].
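
As a tiny worked example of the kind of manipulation meant here, a standard group theory identity, where every step leans only on rules already established (associativity, inverses, identity):

  \[
    (ab)\,(b^{-1}a^{-1}) = a\,(b\,b^{-1})\,a^{-1} = a\,e\,a^{-1} = a\,a^{-1} = e,
    \qquad\text{hence } (ab)^{-1} = b^{-1}a^{-1}.
  \]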

If you try to teach, you will quickly discover that there is a wide difference in this ability among math majors by their final years. And the students who have poor symbol manipulation abilities inevitably struggle with higher level concept application, because they keep making mistakes in the symbol manipulations and having to redo them.

[1] Contrast the training of 100m sprinters (multiplication table), who only run 100m on a fixed track that they will eventually race on, and the training of cross country runners (symbol manipulation), who practice on a variety of routes, because their races are on different routes.


That's not hard if your production and/or materials are subsidised by the state in an attempt to outcompete Western companies.


> subsidised by the state

Like the $7b in subsidies each for Ford and GM, or the 100bn in free loans? Perhaps the 3bn in subsidies for Tesla.


Not saying the USA is not doing it, and both Germany and France are doing it as well in the wake of Corona. But those amounts are visible, and until we get more transparency, we can only assume that China is doing it even more, because they are known to do so for strategically important sectors.


I believe that the original definition, way back before most of us were born, was a test of the smallest possible unit, i.e. a single function, etc. At least according to Wikipedia, in 1969 the levels were defined as unit tests, component tests and integration tests.

In today's development world, the unit test is primarily a developer tool to help speed up development. One should not be afraid to delete unit tests when refactoring. But the long-term value is always in the integration tests.
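
As a trivial illustration of "developer tool" (invented names): a test this small exists to make the edit/run loop fast while the function is being written, and it is cheap to delete if slugify() ever gets refactored away; the lasting guarantees live in the integration tests.

  def slugify(title: str) -> str:
      # Tiny helper under development: lowercase and join words with dashes.
      return "-".join(title.lower().split())

  def test_slugify_collapses_whitespace():
      # Fast, isolated unit test: runs in microseconds, no setup needed.
      assert slugify("  Hello   World ") == "hello-world"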


Not the smallest unit, just an internal unit or component smaller than the whole of the system.

The important thing is not the size of the code affected by the test; the important thing is that a test should verify a single requirement, isolated from other requirements.

I believe the original distinction between unit test and integration test was that integration testing was for when parts developed in isolation by separate teams were finally integrated. But this tended to be a disaster, which is why continuous integration was adopted.


Just going off the Wikipedia page on unit testing: the 1956 paper on programming for the SAGE project refers to testing as parameter testing, assembly testing and shakedown testing. In that context, parameter testing is testing of individual subroutines (functions/methods). From there, the term unit test was used in the Mercury System as unit testing of "individual programs outside and inside the system". I suppose the ethos here is the same.

Obviously, when reading the rest of the papers, they are clear on the fact that it is the specifications the programmer has developed against that should be tested. At the time, that was synonymous with individual subroutines, and as such it was both the smallest unit and a logical testable unit at the same time. Since then, we've come a long way in programming smaller units with better composability.

I'm not sure I agree that the original meaning of a unit test was lost. Perhaps only one part of the definition was carried over to modern development practices. In any case, I stand by the view that the long-term value is in tests of the "API". Everything else is implementation details.
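
A small sketch of that distinction (invented names): the first test exercises only the public API and survives any internal refactor; the second pins an implementation detail and has to be rewritten the moment the internals change.

  class Cart:
      def __init__(self):
          self._items = []          # internal representation, free to change

      def add(self, price: float, qty: int = 1) -> None:
          self._items.append((price, qty))

      def total(self) -> float:     # the "API" the rest of the system relies on
          return sum(price * qty for price, qty in self._items)

  def test_total_through_the_api():
      # Survives any refactor of Cart's internals.
      cart = Cart()
      cart.add(10.0, 2)
      cart.add(5.0)
      assert cart.total() == 25.0

  def test_internal_list_layout():
      # Breaks the moment _items changes shape, even if behaviour is identical.
      cart = Cart()
      cart.add(10.0, 2)
      assert cart._items == [(10.0, 2)]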


If I'm understanding the specs correctly, it is basically an LLM, but for audio. So it requires some serious processing power to encode, because it is using the latest AI hype to achieve the result.


I have the exact same experience as you. I have built 50+ sets, 10 of them 1500+ piece sets and a few over 3000. I have never had a missing piece. I have had small pieces that were hiding or that I dropped on the floor, but never missing ones. There are always some additional pieces in the set, and I assume it's because they err on the side of caution.

