Amazon has a lot of bad internal tools, but this person's experience doesn't match mine (being here for 8 years) at all
> 40% of my time trying to tame the bad internal tooling I was forced to use to submit my code, get it merged, deploy it, check logs, etc…
The tools for code submission, pull requests, pipelines, metrics, and logging are fantastic. Google is better. Most companies aren't.
I have never spent 40% of my time battling internal tools....
> 20% of my time in meetings
Developers complain when they're not invited to meetings, and they complain when they're invited. On my team we brutally introspect the value of every meeting, and if it looks like it's not delivering value, we find a new process.
> 20% of my time writing unit tests to hit the 100% coverage requirement of the codebase I worked on.
This makes no sense. This isn't a company mandate; every team is free to determine what code coverage percentage makes sense for them. Give this feedback to your tech lead or your nearest Sr. SDE or PE: 100% test coverage should never be "required".
> 10% of my time tracking down bugs in other team’s codebases for either internal tools or frameworks and trying to get them to acknowledge the problem by filing tickets.
> On my team we brutally introspect the value of every meeting, and if it looks like it's not delivering value, we find a new process
Oh yeah, I love the multiple hours we have to spend every week 'introspecting' processes, just to throw out one of the dozen we'd already defined and add another one. And this 'introspection' typically boils down to the loudest, most ambitious mouthbreathers forcing their BS down everyone's throats so that they can jot down their amazing process contributions in their promo doc. Brutal is the right word.
But it's the difference between an individual carpenter making a rocking chair for himself and his family, or maybe making a couple to sell to his friends, and being a structural engineer.
Bikeshedding is not a necessary antipattern to the process, but large software projects absolutely need group collaboration, and a discussion of processes, tools, and best practices.
'Building things with code' is not just coding. It can include design, architecture, collaboration, planning, etc. There are many high-quality, highly complex open source projects that don't rely on many of the 'processes', meetings, toxic competitiveness, thought-policing, and bureaucracy found in AWS.
Your comment history (which consists of about 0% technically interesting topics and about 100% pro-Amazon 'tales from a super-senior principal engineer guyyyss') tells me you don't like engineering, you like politics and bureaucracy. Which is okay!
> I have never spent 40% of my time battling internal tools....
In my experience (not at Amazon), long-tenured employees get used to the quirky tools, but the impact on new employees can be massive. Same with bad code bases, bad documentation, and so on.
Maybe so, but don't you think I talk to new employees? It's half of my job to support my whole team and deliver through others.
I battled those tools when I started. I watched them get better.
I've seen what new hires struggled with 5 years ago and what they struggle with 1 year ago.
Night and day.
The tools have gotten a lot better.
Here's the other ugly truth: that "40% struggling with internal tools" may be saving the engineer 300% of the time it would take to implement the same thing from scratch. Software engineering isn't all algorithms and data structures. A lot of it is just boilerplate code hooking up A to B. And it's better to leave that boilerplate to an internal tool you have to figure out how to configure than to implement it yourself.
> Maybe so, but don't you think I talk to new employees?
Everyone below a certain level talks to new employees. Do I believe you take their concerns seriously and actively try to help? Based on my own experience with PEs as well as your comment history I think you absolutely do not.
In general my experience with PEs at Amazon led me to conclude that the vast majority of them are:
- entitled
- lazy
- egotistical
- less technically useful than the average l5 engineer
I really think this is true. Like compare the CR process to Github's PR process. To me, it's horrendous. But to others who are embedded, maybe they love it.
Yeah, comparatively, I find Github's process to be bizarre. Why do I have to fork a repo just to propose a change to it? Why does my "Pull Request" (really it's a Push Request, but it's called a Pull Request because of the 2 repo requirement) have to be associated with a remote repo in the first place?
(I wouldn't say I love the CR process - making multi-package CRs is definitely flawed, and I had a Sage question open with the Builder Team for nearly a year where they admitted as much - but I certainly prefer it to Github's)
But I'm pretty new to the outside world and really keen to understand alternative perspectives. What's good about Github's process to you?
You can do PRs from branches in the same repo on GitHub just fine if you want to. Having them in a separate repo makes more sense because 1) it doesn't require the dev submitting a PR to have anything above and beyond simple read access to the target repo, and 2) there's no concern that someone might depend on WIP branches created in those private forks, so they can be deleted or rebased at will.
The reason why it's called a "pull request" is because you're requesting the owner to pull your changes in; I don't see what this has to do with the number of repos in the picture.
> You can do PRs from branches in the same repo on GitHub just fine if you want to. Having them in a separate repo makes more sense because 1) it doesn't require the dev submitting a PR to have anything above and beyond simple read access to the target repo
Right, that's my point, though - why is there no distinction between "ability to create a real, actual, complete branch on a repo" and "ability to create a 'fake' branch that only exists for the purposes of diffs for a change request"?
For contrast, the flow I was used to inside Amazon was:
1. Clone the repo locally
2. Make my changes locally
3. Run a command that creates a 'fake branch' on the main upstream repo, which is used as the reference for the change. This works even if I don't have push-permissions on the repo (in which case, I can still push my change once it gets approval by clicking a button in the UI, whereupon a service account will push "on my behalf")
Whereas for Github, the process (if you don't have push-permissions) appears to be:
1. Fork the repo to my own Github account
2. Clone that locally (equivalent of 1. above)
3. Make my changes locally (equivalent of 2. above)
4. Push my changes to my own forked repo
5. Run a command (either CLI or UI) that creates a Pull Request from my repo to the target repo
Sure, it's only two extra steps - but I don't see why that friction has to exist in the first place. It gets _much_ worse if your change stays open for a while (which tends to coincide with the case where you don't have push permissions - that is, when you're contributing to code you don't own), because then you need to resolve the rebase locally and push to your fork rather than just updating in the UI (the Github UI doesn't support rebase-pulls, only merges).
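For concreteness, the five-step fork flow can be simulated end-to-end with plain git, using local bare repos as stand-ins for the hosted repos (all paths and names below are made up for illustration; the actual PR-opening step needs the hosted service and is only shown as a comment):

```shell
set -eu
work=$(mktemp -d)

# Stand-in for the target ("upstream") repo you can only read.
git init -q --bare -b main "$work/upstream.git"

# Seed it with an initial commit on main.
seed=$(mktemp -d)
git -C "$seed" init -q -b main
git -C "$seed" -c user.name=seed -c user.email=seed@example.com \
    commit -q --allow-empty -m "initial commit"
git -C "$seed" push -q "$work/upstream.git" main

# Step 1: "fork" the repo (a server-side clone you own).
git clone -q --bare "$work/upstream.git" "$work/fork.git"

# Step 2: clone the fork locally.
git clone -q "$work/fork.git" "$work/checkout"
cd "$work/checkout"
git remote add upstream "$work/upstream.git"

# Step 3: make the change on a topic branch.
git checkout -q -b fix-typo
echo "fixed" > fix.txt
git add fix.txt
git -c user.name=me -c user.email=me@example.com commit -q -m "Fix typo"

# Step 4: push the branch to the fork (not to upstream).
git push -q origin fix-typo

# Step 5 (needs the hosted service, not runnable here), e.g.:
#   gh pr create --repo owner/project --head me:fix-typo
git ls-remote --heads origin fix-typo
```

Note that the topic branch exists only on the fork; upstream never sees it until the PR is merged, which is the access-control point made above.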
I see your point. FWIW I personally like it because it lets me move between my desktop and my laptop almost seamlessly - just push any changes on one end, and pull them on the other. It also means that, if any (or even all) devices suddenly die on me, whatever I was working on is still safely in my private repo.
But, yes, this does mean pushing things routinely a lot, not just when it's time to make a PR.
Fair enough! I only have a single development machine, so that advantage was invisible for me - but that makes sense!
You could theoretically get the benefits of both approaches, though: have your own personal repo to which you push "in-progress" commits for durability and portability, but maintain Amazon's tooling which generates PRs with a diff between a _local_ commit and the target (by, behind the scenes, generating the ephemeral fork from which to Request a Pull), and permitting updates to that PR from local (not necessarily "pushed to an online repo") commits. That's _still_ advantageous over GitHub's model, because:
* If you don't want to have a personal repo, you don't have to
* Even if you do, the process of updating a PR is simpler and more flexible when executed purely with local Git commands rather than by manipulating a remote repo
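Mechanically, tooling like that only needs one primitive: publishing a local commit to an ephemeral, tool-managed ref on some staging remote, without the author ever maintaining a branch there. Plain git supports that directly - here's a minimal sketch, with a local bare repo standing in for the staging remote and a made-up ref name (`refs/heads/reviews/cr-123`):

```shell
set -eu
work=$(mktemp -d)

# Bare repo standing in for the staging remote the tooling controls.
git init -q --bare -b main "$work/staging.git"

# Local working repo with one commit; no branch is ever created
# for the review itself.
git init -q -b main "$work/checkout"
cd "$work/checkout"
git config user.name me
git config user.email me@example.com
echo "change" > change.txt
git add change.txt
git commit -q -m "Proposed change"

# Publish HEAD to an ephemeral ref on the remote; updating the
# review later is just another push to the same ref.
git push -q "$work/staging.git" HEAD:refs/heads/reviews/cr-123
git ls-remote "$work/staging.git" refs/heads/reviews/cr-123
```

The refspec form `HEAD:refs/heads/…` is what lets the tooling name the remote ref however it likes while the author works purely from local commits.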
Fair enough! I thought I remembered it being a (likewise invisible) branch on the upstream repo, not a separate repo entirely - but I've been out for a while and might be misremembering. Regardless, as you say, it's immaterial - so long as the tooling makes that invisible, the UX is the same.
I don't think Github's PR process is that great. E.g. it lacks the ability to diff two revisions of a CR when there's only one commit - which happens whenever the next revision is a fixup amended into the existing commit rather than a new commit. I usually used this to review changes incrementally. I find it rather sad that Github doesn't have it, and that one needs to push multiple commits just to make something incrementally reviewable.
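One common workaround, for what it's worth: record review feedback as `git commit --fixup` commits instead of amending, so each revision stays individually diffable, then autosquash them away before merge. A minimal local demonstration (file and message names are made up):

```shell
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.name me
git config user.email me@example.com

# Revision 1 of the change.
echo "v1" > feature.txt
git add feature.txt
git commit -q -m "Add feature"

# Revision 2: instead of `git commit --amend`, record the increment
# as its own commit so a reviewer can diff just this step.
echo "v2" > feature.txt
git commit -qa --fixup=HEAD

# Before merging, fold the fixup commits back into their targets.
GIT_SEQUENCE_EDITOR=: git rebase -i --autosquash --root

git log --oneline
```

After the autosquash rebase the history collapses back to a single "Add feature" commit containing the final content, so the merged result is as clean as if you had amended all along.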
> The tools for code submission, pull requests, pipelines, metrics, and logging are fantastic.
THANK YOU. I feel like I'm taking crazy pills whenever I see people bash Builder Tools - Pipelines in particular. Compared with what seems to be available in the Open Source world, it's fucking stupendous, and has silky-smooth integration.
> 10% of my time tracking down bugs in other team’s codebases for either internal tools or frameworks and trying to get them to acknowledge the problem by filing tickets.
So, software engineering?