There is no happy medium. Due to its complexity, the whole ecosystem of modern software engineering is inherently dysfunctional. Being able to understand existing solutions and being able to create your own solutions that aren't broken toys are both essential.
"A novice was trying to fix a broken Lisp machine by turning the power off and on.
Knight, seeing what the student was doing, spoke sternly: “You cannot fix a machine by just power-cycling it with no understanding of what is going wrong.”
"Don't build your own" does not imply "just use what's there blindly". You should have an understanding of the tools you're using, but of course if you try to audit every line of code you include into your app (God help you if you're using node and npm/yarn) then you'll never get any work done.
At some point you simply have to trust other developers, but don't be afraid to build your own, especially if it's smaller functionality (looking at you, left-pad...).
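For perspective on just how small that kind of functionality is, here's a minimal sketch of left-padding in plain JavaScript (the function name and signature are my own illustration, not the npm package's actual API):

```javascript
// Pad a value on the left to a target length with a fill character.
// A trivial re-implementation sketch; modern JS also has
// String.prototype.padStart built in.
function leftPad(str, targetLength, fillChar = ' ') {
  str = String(str);
  while (str.length < targetLength) {
    str = fillChar + str;
  }
  return str;
}
```

In a modern runtime you don't even need this much: `'5'.padStart(3, '0')` gives `'005'` out of the box, which is rather the point about pulling in a dependency for it.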
Dependency management is a skill all developers should learn, but it's not one that is actively taught or encouraged. Most (myself included) only really start learning once they've felt the bite of that one bad dependency, or fought through dependency upgrade hell.
In short, make sure you're asking yourself "Do I really need this as a dependency or can I build something myself?"
And how do you do that? By the way, even without reading it bit by bit you still have a problem, since you can't audit a binary with a full trust chain: the auditing tool needs to be audited as well, and so on...
The "don't build your own" advice has a more specific piece of wisdom embedded.
When you build and use a non-trivial piece of software, you also create a large body of non-trivial design and reliability issues, which will take you many iterations and possibly rewrites to get right. Mature software has generally had time to address those issues, and what isn't addressed is better understood by the users and implementors.
When you roll your own, you have to ask: compared to the thing I'm replacing, do I understand the problem domain well enough to anticipate the issues?
I definitely understand the sentiment of "don't build your own" when it comes to cryptography, but do people get mocked for not using Docker and instead using a traditional containerless setup? (Well, I guess people get mocked for anything.) Of course you shouldn't reinvent Docker; if you really need containers, use what's there (whether that's Docker or something else) and try to learn how it works. But there are more choices than just "use Docker containers" or "build your own containers".
We run our core compute in-house (well, in datacenters); our application isn't particularly well suited for running in cloud hosting environments at all. In our environment, we don't use Docker at all.
Well, with one exception. There is a cross-compilation build environment for an Intel XScale-based industrial computer which is deployed to several hundred remote locations. The previous developer was fond of working alone unchecked and insisted on creating a (pet snowflake) Docker host and a set of containers into which he installed the SDK for said industrial PC. God only knows where he pulled the original Docker base image from. He spent a year doing this unchecked (amongst a few other things) so that he could pad out his CV with the word "Docker". And after that was done, he left the business.
You start the Docker container and it fairly neatly builds the entire environment and creates a filesystem image which can be written to a CF card, installed in the physical industrial PCs, and handed to the field maintenance team.
My point is - the whole build process for this could run on a bare VM which would be under our regular configuration management. The Docker container and host really provide no benefit in this situation (except perhaps quick start up time for a fresh build environment) and are really just a hassle, because we'd rather not mess with Docker. The integration into our Jenkins instance was a complete nightmare - we spent hours poring over the Docker documentation which we found sub-par, or too new (the Docker version he had installed was ancient by this point) and running afoul of various Docker bugs.
We don't have enough other use cases for Docker to make it worthwhile for the rest of the team to learn in depth at this point. It's still on my TODO list to de-dockerize this build process and nuke that Docker host VM forever.
There is absolutely nothing wrong with building your own for personal projects, especially fun ones. The "don't reinvent the wheel" is more about how if you are ENGINEERING something, you should fully understand standards and best practices, how to use them, any configuration if needed, and composition of tools.
The professional will use and understand tools, it's the enthusiast who builds his own.
I'd say the difference is in what you do with the one you built.
In a number of disciplines, you might learn the basics of building a tool that you use, but you elect to use that knowledge to buy a good tool rather than mastering that problem domain.
So what do you do with the one you built? Can you bring yourself to 'build one to throw away'? We struggle with this mightily. Once we've put effort into something and we have a 'thing' to show for it, we have trouble walking away from the thing. It's our form of hoarding.
I worked at a place that was processing voting for a TV show. The CTO at the time insisted that everything be written in-house so that we would understand how it all worked.
Problem with that is that the code was full of bad documentation, obscure function names, unexpected behaviors, and vast areas of missing functionality. The people writing our software were less capable than the authors of the open-source equivalents. With the support of some other people, I was stumping for stewardship over authorship.
Basically, we should have been allowed to use smaller libraries of Open Source software as long as one or more employees were intimately familiar with the internals of that library. For the one I wanted to use, I already knew it fairly well so I could have reached that level with a month or two of work. Instead I had to keep adding features and bug fixes to our busted piece of crap, or (more often) reimplement functionality in the caller.
It's not something to have a crisis about. We all stand on the shoulders of giants. There's too much out there to understand every facet yourself (for a fun example of this, go read I, Pencil) and have any time left over to actually make anything. Like most things in engineering, and life, it's a trade-off. The only problem here is the smirking know-it-all nature of comics like that.
Sure! And this does not mean that once we are standing on them we can start to piss on their heads (e.g., the Python programmers who despise C, or the MATLAB programmers who despise Fortran). There is a worrying trend of ignorant programmers who dismiss "old-school" or "legacy" systems without realizing that they rely on them every day.
You can mock Perl, for example, but then you buy a brand-new MacBook Pro, and unless you run "file /usr/bin/* | grep -i perl | wc -l" you do not realize how much your computer depends on this language.
Possibly not building your own, but investing some time in researching how others have built the already existing tech?
That helps if what's out there doesn't work for you - either because it breaks (and you have to fix someone else's code that you use) or because it falls short feature-wise (and you _have_ to build your own, either by extending or redesigning it).
The happy medium is build your own, do an amazing job, and don't care what some clueless middle manager who secretly hates that you are actually able to write your own is telling you.
Also forget about sleeping or having any social or family life for the next eight years.
We're told not to build our own since that's a waste of time and our version won't be as good.
At the same time we're mocked for not understanding what we're using.
What's the happy medium?