I co-founded Newzbin (where we created the NZB file format) and worked on it from 2001 to 2010, and I’m now the co-founder and CTO of Cloudsmith (a Series B-funded startup in the artifact management space).
I recently wrote this short memoir on how tech and curiosity helped me survive severe depression, dropping out of school, and a lot of self-doubt, and how that journey eventually led me to 20 years of building startups.
It’s about growing up in a broken home, finding escape from the burnout of life in a beige Commodore 64, and building a life from very little. There are also a few odd tidbits about co-founding Newzbin, inventing NZBs, and (briefly) fighting Mickey Mouse (ish) and friends in court.
My two main takeaways (other than "isn't tech wonderful?") are: (1) university degrees perhaps don't matter as much as many might think, and (2) talking to people isn't an easy way out of depression, but it's a lot better than _not_ talking to people. Also: passion and grit can and will carry you (especially as a founder).
I’d love to hear from others who’ve taken a non-traditional career path or found stability through tech. I'm not sure if it’ll help anyone who’s already deep into their software career, but if nothing else, it might be a decent read.
No, not like that! I'm talking about that visceral feeling of getting yourself out there, of being vulnerable and outside of your comfort zone, in person or in front of a camera. Just you, the real you, and the tides of the Internet; dropping Internet cliches of pseudonyms, handles, cat GIFs and avatars. You do this to promote yourself, your product or some part of the ecosystem that you dedicate your life to.
For this I recently took part in a live stream (as in, a real-time broadcast, gulp) chat/interview with Darko Fabijan, co-founder of Semaphore CI (https://semaphoreci.com), as part of their Semaphore Uncut series, where they discuss IndieHackers' favourite topics: the problems that we face as software industry professionals, how we're solving them, and what we're working on that excites us.
For our session, we chatted about our experiences and passions as founders, about building enterprise-class SaaS products, about how package management (the domain of Cloudsmith, https://cloudsmith.io) integrates with CI/CD, and about what's coming up next for both of us. You can watch the full thing here (with a podcast to follow):
It was a pleasure chatting with Darko, although it was my first experience doing a live stream. Darko was obviously a lot more practised, calm and measured than I was; he was fantastic, and I can only thank him for directing the conversation smoothly. Having watched it back, I think I just about got away with it externally, but internally I was a maelstrom of nerves. My natural domain is certainly at a keyboard rather than as an orator, but we all know here that practice makes perfect.
Apart from one or two brain blips, it went well, although I didn't articulate the package management patterns and advice in the way that I would have liked. We had a follow-up conversation that was much smoother, as is typical, and I'm hoping to add some of that to the podcast. As seems inevitable for anything live, we also had a bit of a meltdown pre-stream when the streaming software and Skype both decided to crash just as we started to broadcast. Maybe they'll release an "off-cuts" series of bloopers in the future.
Overall though, it was great fun and extremely worthwhile; I would definitely recommend it as a soapbox. If not for your product, then for you personally. Do it to better yourself. Do it to grow. Do it because you can. Although I'm well-versed in talking to customers and doing demos, those are targeted, pointed "scenarios" that play out similarly each time. This, though, pushed me to the edge of my abilities. Would I do it again? Absolutely, and I will. Watch this space.
So Hacker News, my actual question is:
What have you done recently where you exposed the real you?
Did it work? Tell us more, and give us a link (if you can). Be polite. :-)
For context to others: unless you've been following developments in Rust recently, you may not have realised that Rust 1.34 [1] introduced the ability to point Cargo (the Rust package manager) at your own private registry, either self-hosted or managed.
So this is really exciting for anyone looking to privately develop or distribute Rust crates (packaged libraries), or to mirror some portion of crates.io for other reasons (e.g. availability, isolation, modification of public crates, etc.).
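As a rough sketch of how this looks in practice (the registry name and index URL below are hypothetical; consult the Cargo documentation for the details of hosting an index), you declare the registry in `.cargo/config` (or `.cargo/config.toml` on newer toolchains):

```toml
# .cargo/config.toml — "my-registry" and the index URL are placeholder examples
[registries]
my-registry = { index = "https://crates.example.com/git/index" }
```

Dependencies can then opt in per crate in `Cargo.toml`, e.g. `my-lib = { version = "0.1", registry = "my-registry" }`, and `cargo publish --registry my-registry` pushes to it (after authenticating with `cargo login --registry my-registry`).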
As others have stated, you could run your own registry or use an alternative service for private repositories to minimise or eliminate the attack vector.
By replicating the images (or packages) that you need into your own account, you can minimise the possibility of a bad actor replacing a well-known image with something untrusted.
An alternative is to side-cart a service like Notary (https://docs.docker.com/notary/getting_started/) in order to establish a chain of trust for images. If an image gets changed, Docker will refuse to use it and you will be warned that it is untrusted.
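As a minimal sketch of that approach (assuming a standard Docker CLI setup; `alpine:3.8` is just an example image), Docker Content Trust, which is backed by Notary, can be switched on per shell with an environment variable:

```shell
# Enable Docker Content Trust (backed by Notary) for this shell session.
export DOCKER_CONTENT_TRUST=1

# With trust enabled, Docker verifies signatures on pull and refuses
# unsigned or tampered images; signed official images pull as normal.
docker pull alpine:3.8 || echo "pull refused: image untrusted (or no daemon available)"
```

The `|| echo` is only there to illustrate that a refused pull is the expected failure mode for untrusted content; unsetting the variable (or setting it to `0`) turns verification back off.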
If you're missing the auto-build functionality, this can be achieved reasonably easily with any of the mainstream (and awesome) CI/CD services out there.
You can run your own private Docker registry, but you will still depend on base images pulled from hub.docker.com in your deploy chain unless you make sure to clone the base image's Dockerfile from GitHub and build it yourself. Even with this protected setup, you still have exposure to poisoned GitHub repos after this attack, because of the compromised GitHub access keys. I'm not sure you can eliminate this threat, even with third-party services. What a mess.
It might be OK for the Docker Hub aspect at least, with a caveat later on; the GitHub aspect is unfortunate and I completely agree. Direct access to source is rather dangerous territory.
Back to the images bit first:
Base images are only referenced/pulled at build time. So if you've already built your own image and stored it, it'll contain all of the layers necessary to run it without explicitly pulling from Docker Hub.
In the case that you're building new images (likely), it'll need to pull the base images from Docker Hub. However, if you pull the base image(s) from Docker Hub first, you can tag them and store them in your local (or hosted) registry, then refer to those explicitly instead.
For example (using a Cloudsmith hosted registry):
docker pull alpine:3.8
docker tag alpine:3.8 docker.cloudsmith.io/your-account/your-repo/alpine:3.8
docker push docker.cloudsmith.io/your-account/your-repo/alpine:3.8
Now, instead of the usual FROM directive:
FROM alpine:3.8
You can refer to your own copy of alpine:
FROM docker.cloudsmith.io/your-account/your-repo/alpine:3.8
As you can see, Docker's syntax doesn't make this especially pleasant, and you'll have to change existing Dockerfiles to point at the mirrored base images, but it's certainly possible to mirror your dependencies without rebuilding.
Caveat: The downside is that you have to trust those dependencies at the exact point you pull them down, so I concede it is still not perfect without rebuilding the lot. :-)
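One partial mitigation for that caveat (a sketch only; the digest below is a placeholder, not a real value) is to pin the base image by its immutable content digest once you've vetted it, so a later repoint of the tag can't silently swap the image underneath you:

```dockerfile
# Hypothetical digest pinning: after pulling and vetting the image, obtain
# its content digest, e.g.
#   docker inspect --format '{{index .RepoDigests 0}}' alpine:3.8
# then reference that digest instead of the mutable tag:
FROM alpine@sha256:<digest-from-inspect>
```

This doesn't help you decide whether the image was trustworthy at the moment you first pulled it, but it does freeze exactly what you audited.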
"It's a known problem and we're working on it."
No further details given.