For example, my current conundrum is how to deploy an Elixir Phoenix/MQTT app. Writing the app was a fun curve to climb. And I could use techniques like described here to learn from others in the actual programming. But how to build an executable I can wrap in a systemd process running on a different machine? Those are actions people do, not expressed so much in code I can look at. The few blogs I can find on the subject are mired in deep CI toolchains.
I want the blog that discusses the secret sauce: how to acquire the knowledge to work the raft of ever-evolving tools we have to work with nowadays. The "materials" (the languages) are the easy part now. It's the massively automated, complicated machinery we've built around the language of ideas that is my personal pain point of entry.
It now mostly boils down to: use `mix release`,
with the "secret sauce" being to set up a build server where you will build the production release. You'll want the same flavor/version of Linux you plan on deploying to, and then copy the build artifact (tarball) from your build server to your application server (or somewhere else in between).
One other thing to note: there's a good chance (because everyone does this) that you'll have some broken environment variables or module attributes, because you thought they were set at runtime but they are in fact set at compile time.
Maybe I should write that blog you're looking for...
By far the most frustrating thing when I started out learning to program with Python was the point at which I'd made something I wanted to show my friends and hit the "How do I produce a standalone executable?" question.
It's not only an issue with advanced deployment; even for very mainstream languages the tooling story is still often not great. When I taught a CS intro class in uni, this kind of stuff dominated student questions.
This is also the domain of the more extensive language tutorials and/or “Learn X” books. Elixir’s website’s getting-started docs have a very good section on using Mix, for example, including `mix release`.
(People tend to forget to re-check an ecosystem’s official getting-started docs as new tools are introduced into the ecosystem. I’d encourage everyone to give your favourite language’s docs a quick skim every year or two; something new might jump out at you!)
The time I've spent on the GitHub Actions is substantially higher than the time I've spent on the .rs files. Of course you can't "test actions before commit" the way you can actual code, so I kept having to make branches, make 15 commits like "try action fix again", followed by squashing them all down and merging.
Concourse gets this right: you can run a pipeline task as a one-off from your workstation until it's done, and only then check it in. You can even ssh into the build container to debug build failures.
1) find out if the runtime/framework is supported by Heroku or if there are any buildpacks available.
2) spin up a Dokku instance using Vagrant for local development and testing
3) deploy to a live Dokku server
If/when I encounter any issues I add Heroku or Dokku to my search query and 9 times out of 10 I’ll find an answer to my issue. Else I just dig into the Dokku docs and GitHub issues and figure it out.
So for instance, googling for deploying a Phoenix app with Dokku results in a few hits, such as this one.
There's also a lovely UI for Dokku being actively developed: Ledokku.
The only drawback currently is when you want to horizontally scale your deployment. You can use their Kubernetes or Nomad schedulers, but I think those are overkill in terms of complexity. You could also put a load balancer in front of multiple Dokku instances, but you then lose the ease of deployment, configuration, etc.
Which is why I think their Docker Swarm scheduler will be one of the most important features they could add. It's currently on the roadmap, but I'm sure with a bit of sponsorship and a few pull/merge requests it will become a reality.
The former sounds like a makefile, and the latter sounds like a Terraform plan (perhaps combined with something like Kubernetes manifests, but that’s getting more architecture-specific). These days I don’t think there’s any excuse to use the point-and-click approach for setting up infrastructure: it’s effortful, bug-prone, a security hazard, means everyone has to be trained in yet another area, and risks accidentally spending far more money than you intend (either by using surprisingly expensive services like Spanner, or by inadvertently leaving unused infrastructure running).
That said, I do agree that platforms like AWS are unnecessarily complex for the vast majority of CRUD web developers. The complexity makes sense for the small percentage of people who are genuinely setting up a very idiosyncratic and unique architecture, but the 98% of CRUD developers really need an opinionated platform, perhaps built on top of AWS/GCP/Azure and modelled on v1 platforms like Heroku, which would set up the infrastructure you need for the average web backend.
Were you trying to further illustrate the tooling point?
Made multiple contributions to the CairoGraphics project back in the day. Biggest problem? The insane "clever" Makefile structure one of the maintainers had set up. It worked as long as it worked. If it needed to change, one guy alone pretty much was able to tune/change it. It was a language unto itself.
A year ago I read an article by a person who self-published a book (print and ebook), a very well-written book about TypeScript in Polish; he wrote about how he did it and how he created the book in Markdown. When I read that article, I asked whether he could publish the code, and he said he was thinking about it. Recently he published his own blog system on GitHub and wrote another article, this time with a link to the repo. I'm still waiting for the book's code.
This makes it a no-go for younger programmers who are spoilt by friendly tooling: great build systems, package managers, superb documentation tools (compared to the ancient, decrepit dinosaur that is Doxygen), etc.
I've always been fascinated with talent acquisition and skill development, and I would probably give different recommendations today after having more experience and reading Ultralearning by Scott Young.
In the book does he reflect on any of this, or is it based on the MIT challenge at all?
Anecdotal, but I burned through 1/3 of a two-semester abstract algebra course in 3ish days of full-time studying, solving all the exercises. But in all honesty, the retention would have been very low had I not then begun a linear algebra course aimed at graduate pure math students (I am not a math student, nor do I have a math degree).
For such a challenge to work with topics like mathematics, the content needs to be planned such that every course studied builds on top of the previous one, so that the student essentially revises and uses the content studied the previous week.
Have you tried flashcards and spaced repetition? Perhaps these could fix what you perceived as a downside?
By the time of the examination, I have an extensive set of notes that I can search through, and have transitioned to solving the exercises for speed over precision since precision has already been attained. I also transition from solving on paper to solving in my head.
By solving for speed, I mean that after many repetitions the answer that I provide is coarser and distilled because I have good understanding of the finer details, and the finer details can only be attained by writing extensive notes and solving for precision.
At the end of the day, though, time is needed to fully absorb the content. The reason is that making structural changes in the brain is very expensive, but spaced repetition and use of certain pathways make them much more efficient.
In essence, the Feynman Technique is, imho, the best way for a scientist to self-study a topic, and flashcards in various forms, along with spaced repetition, help achieve that task.
Background on the book Ultralearning: it was written by Scott Young, who went viral for his MIT Challenge, teaching himself the MIT CS coursework in a short amount of time.
I would expand on the post and focus on the concept of direct learning. That is, if you're not really practicing a skill in the way you're going to use it in an actual real life situation then it's less optimal.
The example he gave in the book and I totally agree with is learning a language. People look to apps like Duolingo where you're working to recall vocabulary and language in a way that's much different than when speaking.
This isn't to take anything away from doing drills wherein you focus on a specific subset of a skill, like say, free throws in basketball.
The approach I discovered myself and outlined in this post is really a drill for doing code reviews in a language you're learning and learning idioms and patterns from the community. People don't usually look at these aspects because usually the advice is to build a project you're passionate about.
I'm bad at finding side projects to build from the ground up that I'm "passionate" about. I have a couple of drills to work on coding more actively than reviewing code. I take an open source library that I'm interested in, take the tests and write the code to make the tests pass. Or vice-versa where you write the tests for a library. You can make this as big or small as you'd like. I'd start with either a function or a module that's interesting.
This way you zero in on the coding aspect and you don't worry too much about designing the interface since it already has tests. It's also much more real world than doing leetcode algorithm problems. I was taking this approach when I was working on learning how the raft consensus protocol worked.
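As a sketch of that drill (with `chunk` as a made-up stand-in for whatever library function you pick): copy one of the project's tests, then write the implementation from the test alone.

```typescript
// Hypothetical drill target: a `chunk` utility whose test you copied
// from some library. Write the code until the assertions pass.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// The "borrowed" test, kept as plain assertions:
console.assert(
  JSON.stringify(chunk([1, 2, 3, 4, 5], 2)) === JSON.stringify([[1, 2], [3, 4], [5]])
);
console.assert(chunk([], 3).length === 0);
```

The point isn't the function itself; it's that the existing tests pin down the interface, so all your attention goes to the implementation.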
For me your article is a revelation; I hadn't realized that I could just read old PRs to learn how to contribute to an open source project. I've never seen this advice in any guide on how to contribute to FLOSS, and I've been working on open source for more than 10 years (I sometimes read "getting started" guides; when I was starting with OSS, there were no guides like this).
I would give this advice to anyone who wants to contribute; it's even more important than looking for good-first-issue or help-wanted labels. This should be the first thing a person does when trying to contribute: look at old PRs.
For me, learning a new language is a side effect. You always learn when you're practicing on a real project and working with more experienced developers. It doesn't matter if you work on a closed source program in your own team at work or on open source. But with open source there may be more people, and big projects are usually created by very smart people.
Second, look at existing open source, well written code that, again, solves a problem you're interested in. I always emphasize this: Things you're passionate about. That way you can master any language/framework. By master here, I mean you can code anything you want in the technology efficiently. Your final app will be: Easy to modify/enhance, easy to understand in terms of code. Memory and CPU efficient in terms of runtime.
Everyone says that but I can't think of any real problems I personally have that I could solve using programming.
A better approach would be to check out the repo at a commit before the fix and try to replicate the solution in a short amount of time. You would then build context around what the contributor had to figure out, and in the worst case you'll have a "gold standard" solution to fall back on (assuming the PR was successful).
For me it's the best advice on how to start with open source and be sure that your PR will be accepted. And as a side effect you will learn a lot, but that's true of any practice, including your idea; the difference is that this way you also make the project better. Your idea is as worthless as doing LeetCode or similar.
> 2. When you want to level up, start reading the diff, and review the code and changes yourself before reading the comments.
> 3. Finally, when you start feeling more confident, start leaving those comments on new PRs so that the maintainer doesn’t have to. You’re starting to contribute to open source!
The jump from step two to step three is pretty dramatic. I personally would replace step 3 with tackling an open issue related to code you reviewed before. I feel like to give feedback on a PR you need to be intimately familiar with the code, something you get from writing and/or making changes to it.
Doesn't work for Clojure though :)
For me it helps tremendously to see how the sausage is made.
It's a different way, but you will learn a lot of new libraries, ways to mutate objects, lists, all sorts of data structures, and new things really, really fast.
Among other differences, leetcode teaches you little about reading large unfamiliar codebases; debugging; organizing large software-engineering projects; working in teams; teasing out actual requirements; making incremental progress; real-world performance (and the tools you need to identify bottlenecks); and most of the libraries and frameworks that are common industry knowledge.
Doing Leetcode doesn't even teach how to build a 10000 LoC project.
For me those exercises are more about developing muscle memory than really learning a language.
OP's idea is good, but I think it fails in the same way. I don't think you'll get much value out of reading PRs until you have a certain familiarity. No amount of PRs will teach you what a monad is; you need to dive deep and conceptually understand the model (at least IMO).
This reveals a fundamental problem in coding. Best practices for performant code shouldn't require ad hoc digging into PRs, and as long as they do, we'll have code that is buggy and slow(er than necessary).
Learning from others, in any field, will always be a valuable source of improvement, but it just doesn't seem that, in software dev, it results in laying down solid incremental increases in general knowledge that makes its way back into the education of future devs or current devs in a language new to them:
If this were structural engineering, you'd have had to take a "materials" course and learn all about different types of materials: their properties, load capacities, degradation profiles, and how to evaluate new ones that come your way under the same criteria.
Maybe that's what we need for software development. A structural engineer wouldn't use a composite material without knowing its performance characteristics. Why should a programmer use something like string collection from a language without knowing its performance characteristics?
This is on us to demand and to standardize: not languages themselves, but the performance profiles and characteristics we must know about in order to choose which tool to use. And it shouldn't be that each user has to figure it out on their own, digging into PRs or whatever. Again, there will always be experiential learning. But too much is experiential right now.
Up front, I don't disagree with you, but let me throw out a parallel benefit of your scenario here:
For the most part, in software engineering, a building won't collapse if I'm fucking around with a language and doing sub-optimal things. If I need optimization, I probably know that going in, and would probably take the time to know exactly what language/features I should use.
Since most software built today is pretty low risk/inconsequential if it fails, we might be moving the state of the art forward faster than they might in structural engineering simply because we have the freedom to fuck around and learn. We can test our materials in production, whereas I hope the dude that built my office can't. Like, yeah, definitely don't do this with medical devices and airplanes, but with CRUD app of the day, I might learn something when people decide to use it all of a sudden and it grinds to a halt.
I dunno, I should say I'm not a real software engineer in the first place and am open to being totally wrong here.
Thanks-- I think that's a very concise response & reflection on my comment.
I still think we can and should do better, but you're right that the lower stakes probably lower the bar on acceptable crystallization of experience into best practices. Which is problematic because of things like writing a library for your own low-stakes project, where the library then gets published on GitHub and used by someone in something that isn't low stakes.
Maybe part of what we need are defined "stakes" levels and corresponding criteria for acceptable practices at each stage.
Totally! And I really like
> Maybe part of what we need are defined "stakes" levels and corresponding criteria for acceptable practices at each stage.
I think I'm going to start testing this with the TPMs and Engineers I work with. I'm going to ask a more senior TPM on my team to think about this and how it should be incorporated into our specs. My hypothesis is our engineers would be happier knowing about the risk profile of whatever failure modes we've id'ed, and they can design accordingly.
That said, I don't really work on high risk software, so this is all relative. Most of our stuff is in the "push the button again" category if it dies.
Successful open source projects are usually created by very smart and experienced developers, and big projects have a lot of them. Their code reviews are much better than any closed source team's will be, unless you're a junior developer on a team of senior developers.
Right now I'm thinking that at work we also have git (for an intranet application) and we have PRs; it may be a very good idea for newcomers to read the PRs behind a feature to understand how it was implemented, instead of just diving into the current code. This may be the best advice I've seen in a while. But maybe it's just my own idea that came from this article, which you've understood differently.
For me this article is about the advice "read closed PRs and you will learn a lot", here applied to open source projects, because OSS projects on GitHub are the biggest projects you can find.
As I said, experiential learning and learning from others will always be important & valuable, as it is in any field. I just think the balance between that and more established best practices is weighted too heavily toward the "figure it out for yourself finding ad hoc sources" side of things.
That's an interesting take – I like the idea of a catalog of standard tasks with implementations in several languages as well as their performance characteristics. I suppose Rosetta Code gets the ball rolling with this, but it's missing some performance metrics. It reminds me of Ben Hoyt's piece on counting unique words in the KJV Bible in different languages.
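The unique-word-counting task itself is small enough to sketch. A minimal TypeScript version of the idea (the benchmark pieces ingest a large file; this just shows the core counting step):

```typescript
// Count word frequencies: lowercase, split on whitespace, tally in a Map.
function wordCounts(text: string): Map<string, number> {
  const counts = new Map<string, number>();
  for (const word of text.toLowerCase().split(/\s+/)) {
    if (word.length === 0) continue; // skip empty tokens from leading/trailing space
    counts.set(word, (counts.get(word) ?? 0) + 1);
  }
  return counts;
}

const counts = wordCounts("The quick fox and the lazy dog");
console.log(counts.get("the")); // 2
```

A task catalog entry would pair a reference implementation like this with measured throughput and memory figures per language, which is exactly the part Rosetta Code is missing.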
My latest example is adding a state machine with XState to a project that fetches some data and formats a nice output. Do I need a state machine? Not really, but it's a good way to learn one. By the way, the goal of the project is to smooth attribution of Stack Overflow answers. I just started it, sorry for any bugs.
the app: https://stacktribution.vercel.app/
the code: https://github.com/aloisdg/Stacktribution
It could be small projects, or it could be well-known puzzles you already know: Fibonacci (iterative and recursive), fizz buzz, a sudoku puzzle solver, 8 queens, etc.
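The first of those warm-ups, sketched both ways in TypeScript as an illustration:

```typescript
// The same puzzle two ways, as a warm-up in a new language.
function fibRecursive(n: number): number {
  // Direct translation of the recurrence; exponential time, fine for a drill.
  return n < 2 ? n : fibRecursive(n - 1) + fibRecursive(n - 2);
}

function fibIterative(n: number): number {
  // Linear time: slide a two-value window up the sequence.
  let [a, b] = [0, 1];
  for (let i = 0; i < n; i++) {
    [a, b] = [b, a + b];
  }
  return a;
}

console.assert(fibRecursive(10) === 55 && fibIterative(10) === 55);
```

Reimplementing a puzzle you already understand means all the novelty is in the language, not the problem, which is the point of the exercise.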
Storytime: I would work on site when I worked in defense and do 15-hour days. I could sit there and monitor, as the job required, but I was also learning Perl for the job. I had no Internet, so I spent all my time writing tools and reimplementing every programming puzzle I could think of in Perl. In a very short time I became the go-to "Perl guy", even though all the "toys" I made in my spare time were "stupid and useless" according to coworkers.
I don't think you learn anything with that, unless you mean getting comfortable with the toolchain.
Then ES5 to the current standard shipped with better scoping, imports, map/reduce, promises, and async/await.
It's taken me from "this is a mess" to "ok I can work with this."
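A quick sketch of a few of those post-ES5 features together (written as TypeScript, but all of this is plain modern JS):

```typescript
const nums = [1, 2, 3, 4];

// Arrow functions + map/reduce instead of hand-rolled loops.
const sumOfSquares = nums.map((n) => n * n).reduce((a, b) => a + b, 0);

// Block scoping: `let` confines `i` to the loop, unlike old `var`.
for (let i = 0; i < nums.length; i++) {
  // `i` is not visible after the loop ends.
}

// async/await instead of nested callbacks.
async function delayedDouble(n: number): Promise<number> {
  await new Promise((resolve) => setTimeout(resolve, 10));
  return n * 2;
}

delayedDouble(21).then((v) => console.log(sumOfSquares, v)); // 30 42
```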
It sounds to me like you have a familiarity issue. When something changes drastically in something you're comfortable with it evokes a very strong natural rejection, because it's like someone's taking something away from you.
I really admire small and simple languages that don't change much over time. Lisps, SML, etc.
But once again, many of the improvements are quite nice once you learn them and get used to them. I wouldn't want to give up arrow functions, for example, now that I'm used to them.
The first time I learned it was back in the days when it was mainly for mouse rollovers. I want to tackle it as if it were a different language these days (which in many ways it is). But in years past I've been put off by what seemed like high volatility in the current best practices, to the point of flavor-of-the-month syndrome. I'm sure there must be a stable core that's worth learning and using, but as an outsider I have trouble spotting it.
Better to learn the piano or how to do portrait art.
For instance I’ve been super interested in SolidJS for months. I learned from reading the source that a lot of the work is done by its underlying dom-expressions compiler. And in reading its source, I learned enough about JS AST transformation that, when I had a need to do some AST transforms of my own for work, I knew enough to confidently timebox a proof of concept to two hours (and actually finished the work in that time!). All from reading code casually on my phone.
Sure, I front-loaded a lot of that work in my free time. But I did it because I was genuinely interested in the project I was learning from.
- First of all, find out what the typical toolchain is. What IDE do people tend to use? What compiler? How is package management done? These can be really complicated or super simple to answer.
- Compile a Hello World and see that it runs. If it's reasonably specific and supported by a bigcorp, there's often extensive downloadable examples. Android and iOS for instance will tell you a lot in their tutorials. If there's a book, get the book and see how the author presents it, just skim it for key concepts, don't get bogged down in the cpp templates SFINAE explanation, it will only make sense once you have done some coding.
- Find out how modules work in your language. Every language has this, and you need to know it before you can get anywhere, both reading and writing.
- Note down keywords from the tutorial code. Recurring things you see, look them up. If you're doing Rust maybe you see `match, await, clone, some, and unwrap` quite often. If it's iOS maybe `controller`, or if it's Android maybe `fragment`. Google all these things.
- Look for the libs that you need. If you need a websocket, look for that. Major frameworks will tend to have good examples in idiomatic style. You can't know all the libs you'll need, so just get the ones that are obvious. This will give you a better histogram of keywords and soon key concepts.
- Start to code your actual thing you want to make. As you run into issues the errors will give you keywords. This will improve your knowledge as you google those as well. After a short time you will run into larger issues than syntax, and those issues will turn out to have been mentioned in the appropriate books.
Note that some languages have pretty subpar standard libraries. This might have changed but ~10 years ago the Ruby standard library really left some things to be desired. I don't recall the details but I wasn't a fan of parts of it.
On the other hand, the Rust standard library is top notch.
That's a lot more hit-and-miss. On one end of the spectrum you have Java, where all of the lower-level, nitty-gritty work happens within the JVM anyway; on the other end you have C++, where it's "turtles all the way down" almost, with lots of repetitiveness, ugly hacks within the library to help the user avoid ugly hacks in their code, a big bunch of preprocessor macro checks for meeting innumerable compatibility requirements across language-standard versions and platforms, and so on. Yes, you will learn from it, but it will be painful.
Then one year ago I picked TypeScript for a long-term personal project. TypeScript is now my main language.
- I was still working in PHP occasionally (to get paid). After using TypeScript and ESLint, I decided I at least needed to use some linter in PHP. The linter had a very useful rule I did not have in ESLint, a rule that said: "This looks like commented-out code". Thanks to this, my new TypeScript project is now not polluted with commented-out code all over the place. I'm not sure I would have picked that habit up if not for this small detour.
- I was used to describing what every function does, even if it was obvious just from its name. This is a very common practice in some PHP projects, I presume because of the lack of a type system: you have to use PHPDoc to document function parameters, and if you're documenting parameters anyway, you may as well add a description of the function. Thankfully I took a detour and wanted to learn a bit of C++. I looked at some codebases, particularly Chromium, and was surprised to see how few comments it had compared to, say, WordPress (PHP). I immediately knew this was the right approach for me. I started dropping obvious comments, not repeating myself, and instead naming variables and functions more descriptively.
- I also looked a bit into Rust and saw how the language uses return results rather than exceptions. I compared both approaches and decided it would be better if I used return results rather than exceptions, since TypeScript has no way to annotate that a function throws, or what it throws.
My first PR at nixpkgs I had to close because I didn't understand what the maintainers were talking about; it was like they were speaking a different language.
It was only one month later, after reading other PR reviews, talking on IRC, and thinking about all of it, that I could make sense of what they wanted from me. Since then I've started doing exactly what the author described: commenting on other PRs about things that I was confronted with, and/or saw others be. By now I've gained enough knowledge to come up with my own criticisms, which indicates that I've learnt quite a lot.
One pet peeve I do have is with forums dedicated to helping people (e.g. Stack Overflow, the Arch forums, etc.): some users, albeit a small percentage, seem to think that the most basic things are "common" knowledge. I understand that we shouldn't handhold or let people do little to no work, but the attitude of certain responses rubs me the wrong way. People ask questions precisely because they are uninformed; why not point them to the documentation, or at least in the right direction?
I’ve used this approach to come in as a lead developer to unfamiliar languages and give meaningful feedback to developers who have worked in that language for 10+ years.
To me, I always learn the best when there's a necessity to ship my code. Side projects won't do, and I always revert to using my skills developed at work. Therefore, this sounds like a very good piece of advice to start with.
You will improve rapidly.
It's thought provoking and not what I expected. I'm used to hearing "If you want to learn, build something." That assumes some basic knowledge I simply don't have, so I haven't yet managed to pull it off.
It bucks up my language skills, design skills, debugging skills, research skills, framework skills, etc.
How do you read PRs while you catch up on emails?
Not to discount the overall advice, but this statement is kind of weird.
But I recently discovered coding livestreams, and the good ones are really amazing! It's really eye-opening (and sometimes fun) to watch experts talk through their thinking process, while deciding between language features or primitives, or while picking dependency libraries, observing their tooling and stacks, and watching them test and debug things.
I'm learning Rust right now, which I think is a deep and complex language, and watching these streams has been incredibly useful.