"There is an huge junkyard of technologies that failed to gain broad acceptance, many of them far more revolutionary than Rust (e.g.: Lisp, Smalltalk). I don't see why those technologies' story can be avoided."
Yeah, but I think more importantly much of the value that Rust brings would have been available 30 years ago if language development/selection wasn't so siloed, full of biases, and driven by (often undeserved) popularity.
Aside: Downvotes on HN can be an expression of age-related, self-righteous sniper pique. Opinions on what contributes to a conversation can be all over the place and are entirely subject to biases, which can be interesting (I guess). Doesn't really matter, and Hail Satan anyway. Also, "Q for Mortals" is an interesting book.
More like they were free beer compilers, on a free beer OS.
In my corner of the world we were mostly using BASIC, Pascal dialects, and Clipper, until Windows 3.x took over and made C relevant, alongside Petzold's book.
And by then, many of us would rather use Borland compilers with Object Windows Library in C++ and Turbo Pascal, than deal with Win16 directly.
It's no accident that Rust's bootstrap compiler was written in OCaml. Rust borrows a lot of ideas from it. Arguably, OCaml and ML are closer ancestors of the language than C.
I like Rust, I use it as much as I can. But, it actually doesn't bring many new features to the table. What it's done is successfully learn from the past, scooping up the good ideas and ditching the bad ones. Its real selling point is how it wraps all of those old ideas up into a modern, well-maintained, stable package that's ready for use in the real world.
> What it's done is successfully learn from the past, scooping up the good ideas and ditching the bad ones. Its real selling point is how it wraps all of those old ideas up into a modern, well-maintained, stable package that's ready for use in the real world.
Which is no small feat. I wish OCaml cleaned house and settled on just one stdlib and removed a lot of legacy baggage, and oh yeah, add built-in Unicode support, and 5-6 other things I am forgetting now.
Sure you can muscle through all that but having Rust around really makes you wonder if it's worth it and in my case I ultimately arrived at "nope, it is not" so I just use Elixir, Golang and Rust.
To be fair, many languages have their bootstrapper written in OCaml or another similar ML. When I took my programming language interpreters and compilers classes, that's what they mentioned to us, because those languages are uniquely well suited to writing recursive descent parsers.
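As an illustration (not from the thread), here's a minimal recursive-descent parser sketch in Rust, whose ML-inherited enums and pattern matching show why this style is so natural in the OCaml family. The grammar and all names are made up for the example:

```rust
// A tiny recursive-descent parser for arithmetic expressions.
// Algebraic data types (enums) + pattern matching, both inherited
// from the ML family, make each grammar rule map to one function.
//
// Grammar:  expr   := term (('+' | '-') term)*
//           term   := factor (('*' | '/') factor)*
//           factor := NUMBER | '(' expr ')'

#[derive(Debug, PartialEq)]
enum Expr {
    Num(f64),
    BinOp(char, Box<Expr>, Box<Expr>),
}

struct Parser<'a> {
    chars: std::iter::Peekable<std::str::Chars<'a>>,
}

impl<'a> Parser<'a> {
    fn new(src: &'a str) -> Self {
        Parser { chars: src.chars().peekable() }
    }

    fn skip_ws(&mut self) {
        while matches!(self.chars.peek(), Some(c) if c.is_whitespace()) {
            self.chars.next();
        }
    }

    // expr := term (('+' | '-') term)*
    fn expr(&mut self) -> Expr {
        let mut lhs = self.term();
        loop {
            self.skip_ws();
            match self.chars.peek().copied() {
                Some(op) if op == '+' || op == '-' => {
                    self.chars.next();
                    let rhs = self.term();
                    lhs = Expr::BinOp(op, Box::new(lhs), Box::new(rhs));
                }
                _ => return lhs,
            }
        }
    }

    // term := factor (('*' | '/') factor)*
    fn term(&mut self) -> Expr {
        let mut lhs = self.factor();
        loop {
            self.skip_ws();
            match self.chars.peek().copied() {
                Some(op) if op == '*' || op == '/' => {
                    self.chars.next();
                    let rhs = self.factor();
                    lhs = Expr::BinOp(op, Box::new(lhs), Box::new(rhs));
                }
                _ => return lhs,
            }
        }
    }

    // factor := NUMBER | '(' expr ')'
    fn factor(&mut self) -> Expr {
        self.skip_ws();
        if self.chars.peek() == Some(&'(') {
            self.chars.next(); // consume '('
            let e = self.expr();
            self.skip_ws();
            self.chars.next(); // consume ')'
            e
        } else {
            let mut num = String::new();
            while matches!(self.chars.peek(), Some(c) if c.is_ascii_digit() || *c == '.') {
                num.push(self.chars.next().unwrap());
            }
            Expr::Num(num.parse().expect("expected a number"))
        }
    }
}

// Evaluate an AST by pattern matching on its shape.
fn eval(e: &Expr) -> f64 {
    match e {
        Expr::Num(n) => *n,
        Expr::BinOp('+', a, b) => eval(a) + eval(b),
        Expr::BinOp('-', a, b) => eval(a) - eval(b),
        Expr::BinOp('*', a, b) => eval(a) * eval(b),
        Expr::BinOp('/', a, b) => eval(a) / eval(b),
        _ => unreachable!(),
    }
}

fn main() {
    let ast = Parser::new("2 + 3 * (4 - 1)").expr();
    println!("{}", eval(&ast)); // prints 11
}
```

In OCaml the same structure would be even terser (a `type expr = Num of float | BinOp of ...` plus a few mutually recursive functions), which is part of why ML dialects were the natural choice for Rust's bootstrap compiler.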
Yes, they are available; however, Modula-2 provides safer strings, arrays, and reference parameters, with no need for raw pointers for those use cases.
In Modula-2, the equivalent of unsafe code blocks is the IMPORT SYSTEM module.
EDIT: For those that aren't aware, GCC finally got GNU Modula-2 as part of the official set of frontends, no longer a side project, bringing to four (D, Go, Ada, Modula-2) the set of safer languages available out of the box in a full GCC install.
Evolution of a manager and scrum team achieving high-performance together:
Team: "We are having trouble with Bob."
Manager: "Ok I'll talk to him."
Team:"We are having trouble with Bob."
Manager: "Don't come to me, you guys need to deal with that in your retro."
Team: "We voted Bob off the island."
Manager: "Ok, I'll forward that to HR."
Autonomous teams get more done because they have eliminated management as a wait state and will proactively scale with other autonomous teams to maximize the amount of work not done. 21st century managers need to re-focus on flow efficiencies (as business engineers), and not people. 20th century managers won't have a job in 5 years.
Sure, maybe optimize further. Why have managers? The CD pipeline goes throughout the whole value stream, not just at delivery. Just like delivery automation (eliminate intervention of humans), there's also human automation (helping people get out of their own way). So I could see a use case where managers are just eliminated. You need people who can eliminate the noise and replace with enough signal that delivery teams can execute. POs and stakeholders can do that; managers not needed.
I think what you bring up though is an interesting point: maybe managers need to transition to business engineering roles.
This all will play out in the 21st century. 20th century management is a legacy artifact at this point. Best example is F500s, which are generally mediocre in their execution. If a SpaceX type org (14k employees, private) gets into their space, they're screwed. Even with politicians in their pockets, I don't see how rock swallowing dinosaurs like Boeing (170k employees, public) will make it. Just the energy they have to expend to get anything done compared to SpaceX is massive, due in part to their giant management bureaucracy.
Good points, and eye-rolling stuff for teams that don't understand the intricacies. For instance:
- burn-down / velocity charts: Teams use these at standups to make sure their sprint isn't drifting. That's why a burn-down should be tracked in hours and not points. With points, the data isn't actionable in a reasonable amount of time. If the team sees a problem, they might make use of a pre-determined emergency procedure to address it.
- retrospectives: Yeah, most retrospectives are horrible. Retros should be like a post rocket-test review - examine your telemetry deltas to see what changed (edge cases, governance, compliance, design, etc.). These conversations are not forced, but good teams always repeat the same data analysis unless they intentionally change it.
- poker sessions: Yeah this is totally misunderstood. Pointing stories is about snap reactions to comparing difficulties and complexity, and that's it, move on. Teams will tighten up estimates when they do an implementation plan in sprint planning. So they don't sweat estimates.
- daily stand-ups: The whole team is responsible for the sprint backlog, nothing is assigned, everything is volunteered. So if you're working on something that is going south, or you have some extra capacity, let your team know about it. The team will work together to scale capacity to get things done, which is how they can disappear on Friday afternoons.
- user stories and related tickets (eg in JIRA): Well yeah, Jira sucks. So do all the other major backlog tools. Jira gets add-ins, but the fundamental approach to backlog development hasn't changed in a decade. (I'm working on a solution from scratch, btw.)
Also, user stories are meant to be work-items with enough signal so they can be executed with certainty in a sprint. So that means a lot of refinement to the left of the story must occur to get rid of the noise (epics to features to stories to tasks). Once a team says a story meets their definition of ready, that story can be scheduled for a (timely) sprint. Team members may be doing hard-core story refinement because of some technical hurdles, so their time outside of development during sprint can be pinned down with the team's capacity plan. BTW, capacity plans and implementation plans belong solely to the team. They're nobody else's business, including managers to CEOs.
Sure, it comes from a prevailing view that estimating is bad. But that's only because estimating is treated like the answer, and then we just go with that answer.
The best scrum teams I've worked with maintain a very important survival notion: We don't know the full minimum survivable solution today, but that's ok because we do know enough to get to tomorrow at least. We survived another day. Point is, estimating was never meant to be an answer. It was just meant to kick off discovery. And that's a great way to start because humans tend to be exceptionally good at quickly comparing things for difficulty and complexity. It's an instinctive survival skill.
Ultimately, well after estimating this scrum team will do an implementation (tasking) plan in hours if needed during sprint planning. They will compare that plan against their team capacity plan in hours. If the implementation plan blows out the cap plan by say, 30% then that's a warning sign and they'll tell the PO they need to drop a story for the upcoming sprint. If the cap plan compared against the imp plan shows a 30% surplus, then they will pull in a stretch story and bump their velocity, or won't tell the PO and maybe go play golf at the end of the sprint.
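The capacity check described above is simple arithmetic; here's an illustrative sketch (my own, not from any real team's tooling, with a made-up `plan_check` function and the 30% threshold from the comment):

```rust
// Compare an implementation (tasking) plan against a team capacity
// plan, both in hours, and flag a roughly 30% mismatch either way.
// The threshold and messages are illustrative assumptions.

fn plan_check(implementation_hours: f64, capacity_hours: f64) -> &'static str {
    let ratio = implementation_hours / capacity_hours;
    if ratio > 1.30 {
        // Implementation plan blows out the capacity plan.
        "overcommitted: ask the PO to drop a story"
    } else if ratio < 0.70 {
        // Capacity plan shows a large surplus.
        "surplus: pull in a stretch story"
    } else {
        "plan fits capacity"
    }
}

fn main() {
    println!("{}", plan_check(270.0, 200.0)); // 35% over capacity
    println!("{}", plan_check(120.0, 200.0)); // 40% surplus
    println!("{}", plan_check(200.0, 200.0)); // exact fit
}
```

The point is just that the comparison is cheap and objective once both plans are in hours, which is what makes it usable as a final check during sprint planning.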
So the story pointing exercise just gets the team some data that helps them get to the next day (survival speak). There's still a lot of story refinement to be done, before they will tell their PO a story is ready to be pulled into one of their sprints.
When teams pull in stories to their sprint backlog, the team is committing to get that story done, so they need immediately actionable data on what's going on at every standup. If the burndown was done in points, the graph ends up looking like a straight line for days with sudden dropoffs toward the end of the sprint - that hides problems. So a sharp team will use task hours instead. Makes the plot a lot more actionable from day to day.
No, the prevailing view is estimating in hours is bad, hence the push for estimating in t-shirt sizes or other stand-ins for task complexity.
The survival terminology is really off-putting. And what’s the obsession with golf? This doesn’t sound like any scrum team I’ve worked with or would want to be a part of.
Well, a rather xor response, but that's ok. Let's go a different route: Someone is using a diet change (fixed) and the elliptical machine to lose weight. This person has no exercise background. The elliptical routine is basic - same program, one hour a day, 8 weeks. A body-fat test is done at the end of each week. Graph the results. What do you think the plot will look like? Why did the subject's weight loss stall out?
Estimations are meant to be exactly that, estimations. Don't try to make them anything else. They offer a grey answer, so use the best tools you have to deliver that grey answer. In this case, it's simply comparing challenges for size, complexity, etc. Don't try to do estimations in a different way just because you want an immediate answer. Estimations are meant to be a starting point to get to a minimum survivable solution. That's how humans use instinctive tools that use the least amount of energy to get to a point that they can survive a challenge. Think about the elliptical example.
In scrum, estimations are the conscious way to get this discovery kicked off; they aren't meant to be an answer, because you don't have enough data yet for a solution. They are meant to be a starting point. Iterate to the solution. When you think you have enough signal for a solution (the sprint), do your final check and balance, which is a tasking plan in hours, during sprint planning.
Another example: The year is 1929, and you're asked to guess the weight of the Empire State Bldg before construction has even begun (1931). What answer delivers more survival information? 180,000 tons, or this: "Well, we know each floor is going to have x concrete, y steel, but not sure about the other construction materials. So it will be something like (x+y+?)*#floors." If your life depended on it, which answer would you go with?
You are looking for a number, and that's why people don't understand what estimating is about. The incorrect approach is like, "Well these estimations are crap, so let's change the estimation methodology." Yeah, but it's still an estimation, and you already have a great instinctive tool to make those estimations via relative size comparison, which can be done so quickly it could literally save your life. Instinctively, we accept that; consciously we don't, so we turn into ill-tempered Veruca Salt.
Regarding the golf reference, I'm from Florida. Sorry if the attempt at levity was weak.
Unfortunately, we aren't understanding the intricacies of high performance. I've worked with maybe 150 scrum teams. 99% were mediocre. The remaining 1% understood energy usage, how to limit what they took into a sprint backlog, and how to manage their capacity.
When I asked a manager about a particular team, he said "I don't care if they do fight club in the morning, just let them keep doing what they're doing." To reframe, high-performing scrum teams are actually anti-legacy management; they get more done, are happy (low energy state), and stakeholders were satisfied. So their rolling sprint goal was taking a half day each month and going to Top Golf.
The key is, everybody on the team was a scrum SME, and understood how to work the process as a team to reinforce certainty and stability. They also didn't need a scrum master.
I've got lots of anecdotes from these teams: what kind of work their managers actually did, and the sneaky ways they made the process work for them. There's a lot more to scrum than a certification. Just the teams' approach to sprint planning was in a whole different class from what mediocre teams normally did.
As for the other 99% of teams, they were mostly management tools. Scrum had been co-opted, especially in orgs using the SAFe framework. This has been at F250's in my experience btw.
I wonder if this team could have chosen any framework and it would have worked; they just happened to use scrum. With the founder mode discussions recently there was this saying akin to "a successful process is a result of talented people" and not the other way around. It stuck in my head and I feel this is an instance of that. Although you definitely can force a process on a talented team and have their productivity diminish.
Agree. It's just that well-understood scrum is a great process for delivering certainty and stability, as proven by lots of teams (provided they did scrum correctly). It just saves energy to thoroughly vet the process before tossing it.
But as a first principle, this is a physics issue: whatever way a team comes up with to maximize the amount of work not done while delivering the same or better outcomes, I'm all for it.
Dig it. I use Balsamiq all the time. Some challenges when using Wine, so I have to open a cringey Klaus Schwab Windows machine. Would be great if this app showed Linux some love.