1. When I was introduced to QBasic
2. When I got to know how simple and amazing Lisp is
3. When I was able to code at the speed of my thoughts with VIM
4. When I got to know Express.js (after learning Django)
5. When I learned that everything in Smalltalk is a message, including if/else statements
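That Smalltalk idea, that a conditional is just a message sent to a Boolean object, can be sketched in Python (the class and method names here are made up for illustration; Smalltalk's actual selector is `ifTrue:ifFalse:`):

```python
# In Smalltalk, `cond ifTrue: [...] ifFalse: [...]` sends a message to a
# Boolean object, which decides which block to evaluate. A rough sketch:

class TrueObj:
    def if_true_if_false(self, then_block, else_block):
        return then_block()   # the True object runs only the "then" block

class FalseObj:
    def if_true_if_false(self, then_block, else_block):
        return else_block()   # the False object runs only the "else" block

def smalltalk_bool(b):
    """Wrap an ordinary Python bool in a message-receiving object."""
    return TrueObj() if b else FalseObj()

result = smalltalk_bool(3 > 2).if_true_if_false(
    lambda: "yes", lambda: "no"
)
print(result)  # -> yes
```

No `if` keyword is needed at the call site: the branch is ordinary method dispatch on the receiver's class.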
For example, most "cutting edge" web apps are better off as PHP monoliths. Facebook was a PHP file for a long time. But most apps in general should never make it past being shell scripts, which are better off staying as spreadsheets, or better, text files, which are better off as pieces of paper whenever possible. And all paper is better off left as thoughts whenever possible, and most thoughts should be forsaken.
> Disgust is the appropriate response to most situations.
Congratulations you just wrote my favourite comment on hacker news.
Thanks for the laugh.
You certainly make a strong case.
That’s it. For a long time I thought I was good at programming and I kept making the same mistakes over and over. All nighters, leet coding, runtime hacks, etc...
And then one day it struck me. It’s hard and my mind can’t keep up with yesterday’s cowboy coding.
Slowly I started putting defense mechanisms everywhere - good type systems, immutability, compile time instead of runtime, never overworking and sleeping the best I can.
Life is much better now and I'll never go back to thinking I'm good at programming.
I remember I was hanging around in a small computer book store, and there was a big copy of 'Code Complete 2' there. I had totally no idea what it was, just opened it out of interest, and I remember I flipped to a chapter about how to write a good 'if else'. And I was like 'holy crap, I wasted my entire life, I never thought that way, I must be the rookiest rookie in the entire programmer world'. I immediately purchased the book, and it indeed helped me a great deal in my career.
I agree with you on this.
Just weeks ago, the software I had just finished writing complained to me at first run that I had forgotten to configure a setting. Defensive coding ftw.
> never overwork and sleep the best I can
Your best work can be done while not at your desk. Don't mind taking frequent breaks.
Nowadays it really depends, some people for sure get it more than others. Some just dismiss a lot of the safeguards I value as some kind of functional programming hype. Teams are hard :)
- most tech is stupid simple, absurdly simple, hidden behind incredibly complex descriptions
- SOA is stupid and a monolith is almost always the way to go (maybe with a handful of simple microservices)
- microservices are not SOA
- event sourcing is almost never a good idea
- raw bytes over the wire or in storage are not as complex and mysterious as you might think
- new cool and shiny tech is cool and shiny for 5 minutes, until you implement it; then you realize it brings a ton of new complexity and a ton of wasted time, that you should have stayed with what worked before, and that the people who promoted it were just trying to make a sale
- mysql/mariadb will handle way more than you think
- if you want to waste resources, use java/jvm
- yagni should be tattooed on every programmer's hand, so that whenever they type on the keyboard they are constantly reminded not to waste time on stupid crap nobody will use
- don't think in "what if" terms or try to predict the future, just code what is needed right now, avoid being a smartass
- you can charge more for your services the more essential you are to the project and the harder it is to onboard new people, i.e. your value increases over time
- if you charge less now, you will get paid less tomorrow
- running your project entirely in the cloud will bankrupt you; use it to gather usage data, then move to bare metal. cloud is cool and "in", but it will eat your wallet for no good reason whatsoever
- single binary is always better than docker
...I could go on and on, but these are more guidelines than aha moments, so that's it for me.
People always say this as an argument against bare metal servers, but don't you still have that staffing cost even if you're using a cloud provider? Cloud servers are more reliable than bare metal, but certainly not at a 100% SLA yet. Shit happens regardless of cloud or bare metal, and we have to prepare for it whatever type of servers we use.
Most issues I've had with both cloud and bare metal servers were connectivity issues, which were addressed by the vendors or data center operators with me doing nothing but refreshing the status page vigorously. I've never had hardware issues, but that's probably because I always retire old servers after 4 years of operation.
I think their point is that it may not be cheaper if you account for staffing cost. Also, especially as a lone developer or a small team, it may be worth it to pay for someone else to manage the metal instead of doing it yourself.
Wait, isn't event sourcing the fundamental principle of Redux? And Redux is the most popular UI state manager library out there. Why do you feel event sourcing is not a good idea?
- whatever cool code you write, if you can't sell it, it is useless
- every dev should work as a support tech for some period. only then do you understand your customers' needs and the challenges of debugging
- Before adopting any new tech, always, always look at how difficult it is to debug when shit breaks
- keep things simple, so you can have peaceful sleep
- You will never learn everything, accept this. Learning is a continuous process. Just learn to stay curious, that's all.
- learn more and more CS fundamentals instead of getting certs in over-hyped products
- if you want to learn coding, code. don't watch video tutorials.
code can be valuable in more ways than just ‘someone will pay money for this’.
That's it. For a long time, when I thought of the hardest stuff to program, I was thinking of computer games. Mind you, not the latest AAA billion-dollar-budget, 300+ people on the team kind of games. No - given my age, I was thinking about a single programmer doing code and graphics and sound on a C64 kind of games. To me, it was incomprehensible how one person alone could manage to make a machine, especially with those hardware limitations, do all that.
And then one day it struck me. I had started to dabble with programming in my early teens and then never stopped and, I guess, had gotten better at it over time.
Like with every process that's evolutionary rather than revolutionary, there's a good chance of losing track of your progress. I remember clearly that after finishing college, I really thought that everything I knew about computers and programming I had already known even before I entered university. But then one day, a freshman asked me a question about a programming assignment in their class, and it was completely trivial to me. Yet, when I was a freshman, I would probably have asked the same question of someone else.
And yet, that was not my aha moment. It still took many years for me to realize that my idolization of game programmers was perhaps a bit much. Mind you, I do realize of course that someone specialized in any area will be able to produce better code than someone who's not - for any definition of "better". I'm still not a game programmer and never will be, so I still have the highest respect for their profession. But I do realize now that there's no otherworldly skill that separates the game coder from the crop of all other programmers. Anything is within reach - what you need is not some innate talent, it's just dedication.
Life is somewhat better now and I'll never go back to thinking I'm not good at programming.
One thing I always find amusing about such endeavours is the discrepancy in the level of difficulty when presenting their contents. Often you see (or read) them explaining programming concepts at a kind of baby level ("imagine a variable is like a box that you can put something in...") but then when it comes to video game math, they have no trouble assuming people know advanced concepts from linear algebra or even calculus. Cute.
Not surprised it didn't really go anywhere, but it's an excellent resource if you have the time.
2. Knowing the business is key to having personal buy-in for work. If you work in a bank writing bank software, understand the bank and banking, so you understand the context of the software. It's a real 10x thing to know why a requirement is a requirement.
3. The software you write will live a lot longer than you planned. Your experiments from 1. will haunt you.
There's another problem arising out of this for those of us that DO like to solve problems and move on - we're seen as not 'proper' techies if we're not completely obsessed with technology, able to bore for hours, with a github full of projects etc etc...
I like using technology to solve problems. I really don't give a stuff about the intricate subtleties of a versus b technology unless it absolutely matters, because for the vast majority of situations the obvious, pragmatic choice will do the job just fine.
It really irks me because we as 'tech' people don't think about time and money and that spending it in one direction stops you from another. It's very frustrating.
2. That programming and making stuff people want are two entirely different things, although programmers always assume that whatever you're building, it's the right thing. You can spend your entire career in programming, learning all sorts of goodness about things like Erlang innards, and never really understand what your job is.
3. That all of that mousing around, learning a new IDE every two or three years when I started was a complete waste of time. I automatically assumed that the more cool and shiny the programming environment, the easier it would be to code and the better my code would be. But no. grep and awk work (mostly) the same way now as they did 30 years ago, and any time I spent learning which hotkeys did that work in some long-lost dev stack was a complete waste of time.
4. Conversely, that UX beats internal architecture, every time. If folks are having a blast using your app, you win, even if it crashes every five minutes (How they could have fun if it crashed every five minutes is a good question. You might want to ask them)
5. The more smart people you throw at a programming project, the bigger mess you end up with. I know when I say that people are thinking of "The Mythical Man-Month", but it goes deeper than that. Even if you somehow manage to stay on-schedule, human communication around innovation stops working at a certain scale, and that looks like a hard limit. There are ways around it, using things like Code Budgets, following CCL, layers and DSLs, but nobody does that, so it doesn't matter. We, as an industry, have absolutely no idea how to staff or run projects.
ADD: One thing that was quite profound that I discovered late: if you code well, the simple act of programming can tell you something about the way you're reasoning about the problem that you couldn't learn any other way. Programming is by no means a simple one-way street where ideas come out of the head of the programmer and end up as bits on the tech stack. It's very, very much bidirectional. Our programs influence us as coders probably much more than we influence them.
Sometimes I feel like my job is to protect other programmers from finding out what their job is.
- Implementing small languages, and how it inevitably leads one to a deeper understanding of all the layers that make computing and programming possible
- The Make-a-Lisp project and the language-agnostic conceptual design at the heart of it - https://github.com/kanaka/mal/blob/master/process/guide.md
- Declarative nature of React (the view as function of state); using centralized unidirectional state management and actions
- Test-driven (and, similarly, type-driven) development
- TypeScript - Benefits and costs of type definitions, dynamic/gradual and structural typing; power of a language integrated with the editor environment
- Virtual machines, QEMU, and later Docker - Commoditized, reproducible builds for applications and their environments
- Build pipeline scripts
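On the small-languages point: even a handful of lines gives you a working evaluator for a tiny Lisp-like expression language, and writing one is exactly what exposes those layers. This is a toy sketch of the idea, not the MAL design linked above:

```python
# A toy evaluator for a tiny Lisp-like language: expressions are nested
# Python lists such as ["+", 1, ["*", 2, 3]]. Parsing is skipped;
# evaluation alone is enough to see the layering.
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def evaluate(expr):
    if not isinstance(expr, list):        # a bare number is self-evaluating
        return expr
    op, *args = expr                      # head of the list names the operator
    values = [evaluate(a) for a in args]  # evaluate arguments recursively
    return OPS[op](*values)

print(evaluate(["+", 1, ["*", 2, 3]]))  # -> 7
```

From here, adding variables, `if`, and user-defined functions is what gradually turns the toy into a real interpreter.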
2. Doing NAND to Tetris the first time. It taught me not only how computers work, but how powerful recursive layers of abstraction can be. I had absolutely no idea how my system looked on RTL, but I was still able to build it.
3. Also Lisp. I wish Nand to Tetris had picked something more lisp like in the second half to show how simple and powerful it is.
4. When I realized that, while coding, alternating writing and testing is much quicker than writing a bunch and then debugging it. For bigger projects, setting up CI/CD early can similarly save headaches.
5. The functional big three (map, filter, reduce), but for me even more so closures. I had gotten stuck trying to hardcode coefficients for a polynomial until I noticed I'd end up with a lot of duplicate code, which was what I was trying to avoid with FP in the first place. Then I realized I could just put the polynomial function itself in a closure and call it with the coefficients I wanted, when I wanted.
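The closure trick described in point 5 looks roughly like this in Python (the example polynomials are made up for illustration):

```python
def make_polynomial(*coeffs):
    """Return a function evaluating the polynomial with the given
    coefficients (highest degree first). The coefficients live in the
    closure instead of being hardcoded into duplicate functions."""
    def poly(x):
        result = 0
        for c in coeffs:
            result = result * x + c  # Horner's method
        return result
    return poly

# Each call captures a different coefficient list in its own closure.
square_ish = make_polynomial(1, 0, -1)   # x^2 - 1
cubic = make_polynomial(2, 0, 0, 5)      # 2x^3 + 5

print(square_ish(3))  # -> 8
print(cubic(2))       # -> 21
```

One general function replaces a family of hardcoded ones, which is exactly the duplication the comment describes trying to avoid.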
1. I can use math, intuitively, even if I don't know how the exact calculation works.
2. I can use a debugger and replay it again and again to understand what's happening.
3. I can diagram a high level design and not have the complete context in mind and still produce a 10K LoC OpenGL powered computer graphics engine.
After that project, programming never felt impossible to me anymore. After that project, I always had some confidence of being able to learn whatever I needed, as long as I have enough time.
Here is the engine (2013): https://www.youtube.com/watch?v=PH6-dLvZEiA&t=1s&ab_channel=...
So to answer the original question, my epiphany is that something being "hard" is a relative thing and that "hardness" should never be a barrier that keeps me from reaching my goals.
With these new and easy frameworks it's incredibly easy and quick to create new projects but creating and launching something is just the tip of the iceberg.
The main work comes after that, i.e. traffic, leads, conversions, optimization, etc., which unfortunately can only be done with good old tedious hard work and laser focus.
For me that quick feedback loop makes coding as fun as gaming.
I mostly do graphics stuff, and there are 70 lines of setup until I get to the 1 line that executes something based on those 70 lines. But even in non-graphics code my programs are 1000s of lines, all of which have to execute, whereas a REPL is one line at a time, so yeah, a video would really help to see how to apply this.
from IPython import embed; embed()
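That one-liner pauses the script wherever it sits and drops you into a live shell with all the setup already executed. A minimal sketch (assumes `IPython` is installed via pip; the `DEBUG` flag is my addition so the script can also run non-interactively):

```python
# Sketch: pause a long script after its setup and inspect live state.
# Requires `pip install ipython`. Flip DEBUG to True to get the shell.
DEBUG = False

# ...imagine 70 lines of setup here...
values = [x * x for x in range(10)]
total = sum(values)

if DEBUG:
    from IPython import embed
    embed()  # opens an interactive shell right here, with `values`
             # and `total` in scope, instead of re-running the setup

print(total)  # -> 285
```

That's the appeal for graphics-style code: the expensive setup runs once, and you experiment with the one interesting line interactively.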
I know there are people who can write code for a couple of days without running it (I've seen this with some Java colleagues I've had), and then it almost always works when run, but I am just not that kind of developer.
The first time (1993) I was logged in on a remote machine and downloaded a file to there from another remote machine. It was just magic that the machine I was touching had nothing to do with that file.
When I hacked together a mailinglist system in shell script.
When I realised that C is basically just all memory locations.
When I understood closures.
lsof to see files and ports opened up by a process
strace to look at what system calls a program is making to the kernel (and ltrace for library calls)
pstree to see subprocesses spawned
gdb to inject myself into infinite loops of programs written in c-based implementations (e.g. Ruby)
I'm a collector of aha-ish moments so if you want more, my YouTube account is linked to in my profile.
2. End of list.
So with Ruby, I was just pattern matching—I never had an intuition of what lambdas were doing until I moved over to JS.
The Feynman Method was another aha moment. Learning how to learn, in my case, by writing down stuff in my own words as if I were teaching someone else was probably the most important thing. That helped me develop a much deeper understanding of important concepts.
Similarly, not trying to learn everything was a big deal. Don’t try to learn all the niche technologies you see in job listings expecting you need to know those things in order to qualify for the position. If you just do dozens of tutorials, you’ll end up knowing a lot of things very superficially. It’s much better to know a handful of ubiquitous, related technologies very well and have a strong foundation of programming fundamentals. Those things can transfer over to other technologies, when you need to pick them up.
It's telling that comments like this inevitably involve Rails, not plain Ruby.
Rails's "magick" is obfuscatory and at times counter-intuitive. Ruby, the language, is pretty straightforward. I wish more people spent time playing with Ruby before diving into Rails.
This is one of my recent decisions. Previously I would jump on every shiny framework and try to learn every fancy programming language. This drained my energy so much I had to stop doing it. Instead, I have decided to learn ubiquitous tools, e.g. math, algorithms and data structures, etc.
- When I realized that someone might quit their job during the trial phase. It had not occurred to me that both sides are checking whether it's fun/good for them
- The main reason I'm successful is that, despite my character flaws (which have gotten better over the last 15 years), good software engineers are in very strong demand, and when I look at how hard it is to get good people, I just might never have a really hard life
- Soft skills are crucial: taking responsibility, being on time, being reliable, taking action when it matters without hesitation
- You had a salary negotiation or a discussion and something was decided? You can still come back to it 1-x days later and say 'you know, I thought about it and I'm not happy with the outcome at all. We need to discuss this topic again'
- Don't complain if shit is shitty. Either change it, try to change it, accept it, or quit. Stop telling others that it's shitty while doing nothing.
- Estimation is bullshit: it never works, never aligns, no one really retrospects on it, and if you ever find a team where it works, your team gets dismantled for whatever reason and you have to start from 0. Prioritise for relevance, optimize how you work, accept the outcome.
- Never accept a deadline. Without a deadline, your manager can't come back and say 'you promised', which leads to you doing overtime for a mistake your manager made: he/she mismanaged!
- Do less but better. Whatever you do shitty now will come back
- Not building something, because you actually figured out what the other person needed/wanted, is more often than not the better result, if it doesn't lead to writing more code
- Understanding how to program a computer game, for three main reasons: 1. memory allocation can fail 2. how the game loop works and how to program it 3. randomness
- Sentences to know:
-- I can try to get it done, but I can't promise
-- I have a date tonight, I can't stay (if they insist:) I have expensive tickets for <event>... (pressuring you into doing overtime just because is mismanagement)
This is one thing I need to learn to say. I have been doing lots of stuff other devs want and hating myself for not being able to say no.
As everything else in life. This was a life aha moment.
It always seemed so lame compared to neat new functional languages or distributed actor frameworks. And, not being much of an ops person, I almost wanted not to be any good at powershell (or any shell) so I wouldn't get ops assignments.
After getting familiar with it, though, it's improved my workflows and ultimately my quality of life. I'm not only more fluent in ops now, but I also get to spend less time on it, which was the rationale for not learning it in the first place (write a script once and never have to remember what I did when creating a new environment).
My next realization was that a well-defined and clearly communicated product definition is 10x more important than good coding.
Sometimes using "uncool" technologies and languages is ok, and they often have a good reason.
2. The concept of "innovation tokens" if you solve a business need
3. There is no silver bullet.
Learned the hard way after switching to the new "flavour of the month"
2. When I read K&R at the age of 14. Before that, I had just followed random tutorials that I found on the internet. From that point on, I went straight to the source (no pun intended) when it comes to learning new stuff.
2. Vendors are often pushing bad architecture
3. Architects often push things for the wrong reasons
4. Companies often push risk to their vendors, avoiding collaboration, inherently increasing risk
5. ORM's hide business logic, preventing a company from understanding its business and adapting to change
6. Relational Databases are an operational anti-pattern
7. Graph databases are excellent tools for common problems like security, social media
8. Document databases are excellent tools for Event Stores and CQRS
9. Cloud native development (FaaS) is awesome
10. Domain-Driven Design and Event Storming are excellent disciplines to add to a corporate development group
11. Corporations inherently don't understand Agile because they have to measure everything through strong process definition
Could you please expand on this? What is wrong with Relational Databases and what would be the "correct" pattern?
Another aspect of all of this is the misconception that we should design systems based on a relational data model. That only leads you through the whole impedance mismatch of ORMs and the question of how you serialize/deserialize your bounded contexts or objects. We should not do this at all. We should design systems on those bounded contexts and determine the data store accordingly. Operationally, a document database or event store is going to be the best tool.
We can fire change events at a data warehouse to store data in a relational manner for analytics and reporting, but none of that is necessary for our operational system.
In your operational system, if you need any sort of complex join, you've already created a poor design.
I will say this. This did not occur to me until I'd gone through the process of developing a complex system using Domain-Driven Design. Until you've done that, the old patterns will remain like concrete.
Simpler is better. Boundaries need to be respected. Tools should be selected based on need, not desire.
This inherently hides that logic from external review.
One of the things we need to enable as architects is to make sure the systems we build are adaptable. Any "framework" that begins to hide logic through efficiencies is bad architecture.
In this chapter the author demonstrates that it is wrong to solve a problem by creating a series of objects that do stuff. He shows that the right way is by abstracting the problem into its most essential elements. Do not try to emulate reality with code, but make it its own, abstract thing that solves the issue instead.
This chapter and its code felt like pure poetry to me.
2. No ORMs
3. No Frameworks, use libraries to build up your project
4. Use strong type systems for modelling (make invalid states not representable)
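Point 4 in Python terms: instead of one class full of booleans and optional fields (which permits nonsense combinations), model each state as its own type, so invalid states simply can't be constructed. The type and field names here are illustrative, not from any particular codebase:

```python
# A connection modeled as a union of distinct state types. A value is
# always exactly one of these; "connected but with an error and no
# session" is not representable at all.
from dataclasses import dataclass
from typing import Union

@dataclass
class Disconnected:
    pass

@dataclass
class Connected:
    session_id: str          # only exists while connected

@dataclass
class Failed:
    error: str               # only exists after a failure

Connection = Union[Disconnected, Connected, Failed]

def describe(conn: Connection) -> str:
    if isinstance(conn, Connected):
        return f"connected ({conn.session_id})"
    if isinstance(conn, Failed):
        return f"failed: {conn.error}"
    return "disconnected"

print(describe(Connected("abc123")))  # -> connected (abc123)
```

In languages with sum types and exhaustive matching (Haskell, Rust, etc.) the compiler also forces you to handle every state, which is where the technique really pays off.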
I can hit a hot key when hovering over any function and it’ll show me the docs and take me to the full page if I want. Hitting another hotkey it’ll auto fill parameter options. It’ll auto import the libraries I need. It’ll complain if I’m using the wrong type.
It gives me so much more confidence that the code I’m writing will actually work. It’s slightly more of a pain to write, no doubt about that, but the payoff is huge.
I was burnt by java in the past and can’t stand how every project seems to end up like this https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris.... But Go has shown it doesn’t have to be like that.
Although, I have a growing dislike for Redux.
Starting from there, I don't take any programming language seriously anymore. Whatever makes the job easier and faster to accomplish in context.
As a relatively new Clojure developer, the meaning of the parentheses was the ah-ha moment. I imagine it’s the same for other Lisp-y languages.
When starting out as a developer, I think there is a tendency to see the particular language/framework/syntax that you're using being all-important.
Over time, and with experience, you realise that the language and syntax are just the "fine detail" of how you solve problems as a software developer: as your understanding deepens, it's as if a kind of abstraction happens in your brain, because you stop thinking so much about the fine details of language and syntax, and start to worry about things like managing complexity and optimising design etc.
And at that point, the realisation for me was that I can apply that experience to any language/syntax/framework, which frees me up to pick the best way of solving any particular problem, and to not be stuck in a rut with any particular technology.
An added benefit is that a lot of the debate over stuff like "language X is better than Y", or "this code style is correct and that one is wrong" become unimportant, because you're thinking at a level that's not limited by specifics.
Of course, focusing purely on technical talent is one way to go, as I have chosen to.
I'll properly give Bulma, Tailwind or another popular framework a go for my next project - let me know if you can recommend any other.
Disclaimer: I built it.
- when I finished a student programming assignment a year later and realised that a well-chosen set of procedures constituted what we'd now call a DSL
- when I ported Martin Richards' very clean BCPL compiler and realized that you could write efficient code in something that wasn't Fortran
- when I read the Thompson/Ritchie paper on Unix
- when I completed my first reading of SICP
2) For an idea to be communicated well, it's not enough that it makes sense to you. People usually hate change; you need to understand all the details before opening your mouth.
3) I remember many aha moments regarding how the Internet works while reading "Computer Networks" by A. Tanenbaum.
4) When I was finally able to exit vim.
3. When I started learning Rust, had another aha about how the language is designed without GC. I never thought that was possible in a language.
These may be simple things for many, but I have no educational background in any of this, so I'm amazed by things that people have long gotten used to.
Conventions for syntax, names, logic, API, structure, vocabulary and so on.
Conventions are by nature arbitrary, influenced by culture, history, social behavior and a whole lot of human weirdness.
Don't try to learn all the conventions first, it comes faster with practice and exposure to it. Solve the problem, then find the conventions you need to apply the solution in your context.
The beginner's paradox is that they need to learn a few specific conventions (e.g. part of a language, one paradigm and a few libs) to start working on solving a problem, so it's a frustrating experience.
There is no easy way to derive the rest of the conventions from what you already know, because they're arbitrary.
It's also what leads people to say "don't learn languages, learn to program". Which makes no sense to you at all, until, well, you finally know how to program. But you got there by learning conventions on the way.
At a major company, over ten teams were automated in a year.
Beautiful and useful abstractions around data and data processing tasks that provided extreme value.
I could never replicate it, but it reminded me of the power of software and the extreme ability some have to wield it.
Later, in college, we were learning lower level programming details (like what C translated to in assembly and how it managed calls and the stack frame). Despite this being my third CS course in college, I hadn't really grokked recursion yet. But I had a flashback during one of the classes to the TI-BASIC programs I'd written using a stack, and realized I'd recreated recursion (but manually). After that recursion and loops were synonymous in my mind (as they should be, at least in many cases) since I knew how to translate between them. Whenever I saw someone managing a stack and looping until it was empty, I knew both that it could be and how it could be translated into recursion (and vice versa).
It seems to be one of the hardest topics for many of my colleagues (especially those without a CS degree, so lacking practice with recursion) to understand or ever use. But I can usually get them to understand it once I draw a few things out on paper and show the two solutions to a problem (recursive or iterative). This doesn't mean they like recursion, most still avoided it, but they started to understand that it wasn't magic, it was just the computer managing the same data structure they were manually managing.
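The equivalence described above is easy to show on paper with something like summing a nested structure: the iterative version manages by hand the very stack the recursive version gets for free from the language. A minimal sketch:

```python
# Summing a nested list two ways. The loop-with-stack version is the
# recursion made manual; the recursive one lets the call stack do the
# bookkeeping instead.

def sum_recursive(node):
    if isinstance(node, list):
        return sum(sum_recursive(child) for child in node)
    return node

def sum_with_stack(node):
    total, stack = 0, [node]
    while stack:                      # loop until our hand-rolled
        current = stack.pop()         # stack is empty
        if isinstance(current, list):
            stack.extend(current)     # "recurse" by pushing the children
        else:
            total += current
    return total

tree = [1, [2, 3], [4, [5, 6]]]
print(sum_recursive(tree), sum_with_stack(tree))  # -> 21 21
```

Whenever you see a loop draining an explicit stack, it can be rewritten as recursion, and vice versa, which is the translation the comment describes drawing out on paper.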
2. When I gave into Power BI finally and fixed a report someone else created, it felt so good. I don't think there is anything that compares to it and I only scratched the surface.
What would be a good resource that helped you get to that aha moment ?
Off the top of my head I can remember these:
2. Jerome Cukier blog posts
3. dashing d3 js
So on one side I was programming in Scala, and on the other side I was "hand-compiling" small C functions into m68k assembly...
The aha-moment came when I was hand-compiling a recursive function down to m68k assembly and I saw that I could completely eliminate ALL the recursive calls by re-arranging some register values and some values in the stack frame, inserting a small preamble in the assembly and then at the end of the assembly routine jump back to said preamble instead of making a recursive call.
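In higher-level terms, that hand optimization is tail-call elimination: once the recursive call is the last thing the routine does, it can be replaced by updating the working values and jumping back to the top, i.e. a loop. A Python sketch of the same transformation (not m68k, obviously):

```python
# A tail-recursive factorial and its loop form. The "recursive call"
# in the second version is just a jump back to the top with updated
# values -- the same trick as re-arranging registers and jumping to
# the preamble in assembly.

def factorial_recursive(n, acc=1):
    if n <= 1:
        return acc
    return factorial_recursive(n - 1, acc * n)  # tail call: nothing left to do

def factorial_loop(n):
    acc = 1
    while n > 1:          # the jump back to the "preamble"
        acc *= n
        n -= 1
    return acc

print(factorial_recursive(10), factorial_loop(10))  # -> 3628800 3628800
```

Languages like Scheme guarantee this elimination; CPython does not, which is why the loop form also avoids hitting the recursion limit for large n.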
2) Written code can take a long time to stabilize and is thus expensive to change and maintain
3) Ostensibly orthodox legacy systems are full of anti-patterns.
4) How easily a small problem can turn into an infinite number of other, small problems, and the intuition required for where to 'draw the line'.
For me there is no other way of learning anymore, and it serves as my real memory. Basically it gives me the confidence that something I learn now will still be known years later.
Anki is a program which allows you to learn/memorize using a spaced repetition technique, where newly introduced and more difficult "ideas/notes" are shown more frequently, while older and less difficult ones are shown less frequently, in order to exploit the psychological spacing effect. More about the technique at https://en.wikipedia.org/wiki/Spaced_repetition
I use it for everything I learn and also want to know in the future; this ranges from CS knowledge like a new algorithm, to an English word I don't know, to remembering the main concepts from books I read or the names of people I meet.
The flow goes something along the lines of:
1. Learn something new till you actually understand it
2. Summarize it into a note or a few notes with proper questions (in a way, this stage reminds me of the Feynman method)
3. Find/create (by create and find I mainly mean taking screenshots or saving pre-made images... ;)) some pictures (if applicable), as adding visuals enhances memory
4. Insert it into Anki!
Admittedly this flow demands way more time and energy than "just listening/reading" some content, but after so many years of programming I found that I'd learned so much but forgotten most of it... so I prefer to learn better and slower, and honestly, after doing this for around 1-2 years now, I've got to say it's absolutely amazing (I tell this to anyone willing to listen, not only programmers).
There is also a daily practice where you need to pass through some notes (this is the spaced repetition part); it usually takes anywhere from 5 to 45 minutes a day (for me, depending on how intensely I've been learning recently).
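The scheduling idea behind that daily review can be sketched in a few lines. This is a simplified version of the SM-2 family of algorithms that Anki's scheduler is based on, not Anki's exact behavior:

```python
# A simplified spaced-repetition scheduler: each successful review
# multiplies the interval by an "ease" factor, so gaps grow roughly
# geometrically; a failed review resets the interval and lowers ease.

def next_interval(interval_days, ease, remembered):
    """Return the (new_interval, new_ease) after one review."""
    if not remembered:
        return 1.0, max(1.3, ease - 0.2)   # reset; card reviews more often
    return interval_days * ease, ease       # grow the gap

interval, ease = 1.0, 2.5                   # typical starting ease
for day, ok in enumerate([True, True, True, False, True], start=1):
    interval, ease = next_interval(interval, ease, ok)
    print(f"review {day}: next in {interval:.1f} days")
```

The point of the geometric growth is that well-known cards quickly stop costing daily time, which is why the routine stays in the 5-45 minute range even with thousands of notes.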
You can download Anki for free and with no subscription fees at https://apps.ankiweb.net/
It also supports Linux/PC/Mac/Android/iOS (the iOS version costs money).
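The spacing idea behind the flow above can be sketched in a few lines. This is a simplified interval scheduler loosely inspired by the SM-2 family of algorithms, not Anki's actual implementation — the class, field names, and constants here are my own:

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval_days: float = 1.0  # days until the next review
    ease: float = 2.5           # growth factor for the interval

def review(card: Card, remembered: bool) -> Card:
    """Update a card after a review: remembered cards are pushed
    further into the future; forgotten cards start over tomorrow."""
    if remembered:
        card.interval_days *= card.ease       # spacing grows geometrically
    else:
        card.interval_days = 1.0              # reset to a short interval
        card.ease = max(1.3, card.ease - 0.2) # this card is harder than we thought
    return card
```

With these made-up constants, a card answered correctly every time is scheduled roughly 1, 2.5, 6, and 16 days out — the "older and less difficult shown less frequently" behavior described above.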
0. When I learned how to model systems in TLA+
1. When I figured out how to structure programs using monad transformers... sort of the moment where I started reasoning algebraically about programs on my own
2. When I learned how to specify propositions in types and verify my designs and programs
3. Learning JetBrains IDEA, the only IDE that I ever enjoyed using.
4. Learning red-green-refactor TDD. Now refactoring is something I do as a matter of routine, not dread.
5. Understanding the fractal complexity of Bash. It's weird how a language can make stream processing and parallelization basically trivial, while making reliably looping over non-trivial sets of files a PhD-level task.
6. Doing pair programming, and then keeping at it for four years because it was brilliant.
7. Installing Arch Linux, the first distro where things weren't constantly broken because of version drift.
This is why I love software so much and struggle with people. Software is all logic and mostly deterministic, while with people feelings are involved (which could be argued to be deterministic too, but then we get into philosophical discussions about free will).
But on the software/hardware side, any bug can be resolved by digging through the layers of abstraction and figuring out where the logic error is.
2. When I learned event-driven programming using Windows Forms in C# and I was able to create programs that resembled the ones I used
3. When I took a course in POSIX, programmed using fork and pipes and learned about stuff like race conditions
4. When I spent a year learning everything there is to know about coding (including assembly, lisp, smalltalk, rust) and realized I would never feel as happy as I felt during 1-3 because I had changed as a person
2. Hating working on a Java/Angular/OO project after 5 years of FP
- I would learn some technology to the point where it did what I wanted
- then I would move to some new technology, but find myself carrying over habits from the technologies I already knew
- this put me in a strange loop where reality wasn't lining up with my expectations, or I would do more work than necessary because of learned habits
- the aha moment was when I started learning the theory of the thing I was working with
* it was key to getting out of this rut
* turns out this applies to any theory, from abstractions all the way down to computational theory, type theory, automata theory, software analysis theory, and, hey, if I ever get there, maybe even software synthesis
In a nutshell, I would summarize this aha moment as "you can keep doing the same old tricks and eat the cost, or you can always dig deeper and see what costs can be avoided."
- what is the shape of the input data (requirements, configuration, dependencies are also data) of the thing I'm working with and what is the shape I'm looking to produce as output of this software (could be any combination of side effects, screens, or just data)?
- on a larger feature or set of features, I ask: is there a domain language to be discovered, or did I create a domain language, and does it hold true?
- what are the fundamental assumptions I was making and could they be improved from first principles?
- what is the expected and unexpected behavior of what I am building today or what I built in the past and why? (learning opportunities)
- the takeaways: my assertions are only as good as my understanding, and understanding requires detail; attention to detail is only as good as my checklist, and my checklist is only as good as the questions I'm asking. This creates a habit loop where I can hopefully improve outcomes with each iteration based on deeper introspection.
> (+ (expt 2 32) (expt 2 32))
def retry(block: =>T): T = ???
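The Scala signature above takes a by-name block and retries it. A hypothetical Python analogue — the retry policy, parameter names, and defaults here are my own, not from the comment — might look like:

```python
import time

def retry(fn, attempts=3, delay=0.0):
    """Call fn(); on exception, try again up to `attempts` times total,
    re-raising the last failure if no attempt succeeds."""
    last_exc = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            time.sleep(delay)  # optional backoff between attempts
    raise last_exc
```

In Scala the `=>T` by-name parameter delays evaluation of the block; in Python the same effect is achieved by passing a zero-argument callable.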
- The free/libre/open source community cares deeply about fundamental problems of our society and is trying to provide legal and technical tools to help take steps toward a better world. When I was younger, I thought "suckers! They're giving their compiler away for free!" It took me a while to internalize the free software ideals and even longer to become an active proponent.
- Corporate software, especially Microsoft, is in the business of creating a walled ecosystem, charging end consumers for their product and charging software developers to be part of that ecosystem. The first 'a-ha' moment was when I realized they, and others like them, were a racket, or at least trying to be one. The later 'a-ha' moment was when I realized there was a viable alternative to this game.
- Most (but not all) programming language flame wars about which language is better boil down to whether the developer prioritizes speed of program execution or speed of program development. Newer language wars center on safety and bug creation, so maybe this point is dating me.
- Programming languages are more about human psychology than about some mathematical proof of correctness. Programming languages are designed as a compromise between how a computer needs to be told to operate and how humans communicate. All the theoretical and mathematical infrastructure around language design is there to justify decisions once the language has passed the "psychology" test. This is the same idea behind JPEG, MP3, etc. compression: Fourier transforms and wavelet transforms don't inherently create savings; the benefit comes only because the human perceptual system is less sensitive to higher-order coefficients than to lower-order ones, so we can throw the higher-order ones away. That is, JPEG and MP3 are specifically tailored to the human perceptual system. The math is there as the building block, but which bits/coefficients to throw away is determined empirically. To belabor the point: programming language discussions arguing over functional vs. imperative, type safety, etc. that don't try to define measurable metrics of success, preferably by testing their hypotheses with data collected on actual usage, are expressing an opinion only.
I learned to focus in on just the first compiler error, and ignore all the rest.
Read the first error. Resolve the first error. Recompile. Repeat.
This is just one example of breaking big problems into smaller problems.
You can’t possibly know what a company you join will be like until you’ve actually been working there for 12 months.
For large code bases you can’t really rearchitect anything. You are stuck with how it works. Maybe on small scales you can refactor.
Don’t blindly apply design patterns. SOLID is good as a thinking framework rather than a code review gate.
Marketing isn’t what you think it is until you study it somewhat. E.g. it’s not glossy ads!
Does it actually take you 12 months, or did you mean that as a measured, sensible statement? Or are you perhaps looking at it from a bi-directional loyalty or advancement perspective and not just general culture?
For me, that would only apply to companies that I interpret as middling/unimpressive at first glance. The really good or bad companies stick out much more, so I can usually tell by the hiring and onboarding processes and the first couple of tasks you're given, even as a consultant.
Buy outs are a big factor plus reshuffles of management and teams.
Not only that, I’ve had excellent vibes at places in the first 6 months, only to find out later that the asshole factor is high. Also, even without any of this, stuff just changes, and I don’t see how that is avoidable. Change is constant! YMMV.
2. In software engineering it's always a people problem, no matter what they tell you. See point 1.
1. Writing a Scheme interpreter in C
2. Abstract data types and encapsulation
3. Functional programming and recursive algorithms
4. Grokking OOP and design patterns (this took me a long time)
5. Understanding how processes and scheduling are handled in an OS
More recently, not really an "aha" moment, but Git has been a game changer.
I learned C on Windows, and before I learned any dynamic languages. And before I had ever written a unit test.
I knew all the rules, but I was not good at making a reasonably sized, correct program in a reasonable amount of time.
But then I developed a good workflow for writing Python, shell, etc., and then went back to writing C, and it helped immensely.
C is a dynamic language in many respects anyway!
2. Realising that it's more about delivering than dreaming up the perfect abstraction (get it done).
3. what you think the user wants vs what the user thinks they want vs what the user actually wants vs what the user actually needs.
4. that there are always tradeoffs
5. building a product by yourself (whether on the side or starting up) will give you invaluable experience.
2. When I was going through Tim Roughgarden's Algorithms course and saw the derivation of runtime complexity for mergesort and finally understood/visualized what a logarithm actually did (in school it was just taught as some rote function to help you manipulate equations of the form y=b^x)
3. Learning how TCP works from the bottom up. I think the biggest aha moment was when the textbook I was reading explained the TCP algorithm as a state machine that's just running on the client and server machines with the rest of the underlying network just forwarding packets, i.e. "pushing the complexity to the edge of the network".
4. Working through the nand2tetris project resulted in a lot of "oh X is just basically this at its core"
5. When going through a textbook explaining how relational database engines are implemented and seeing that they're essentially just using disk-persisted hash tables and self-balancing search trees to build indices on columns and make `select`s O(1)/O(log n) time (I wasn't taught this in my uni's database course and assumed there was some fancy magic going on to make queries fast)
6. Realizing that I could just do a form of graph search/dependency resolution when learning a new codebase or trying to understand how a function works. I think before seeing someone do this in front of me, I would usually just panic at the thought of "thousands of lines of code" rather than just keep iteratively diving into the functions being called. Whenever I'm learning a new language, the first thing I'll do is set up the LSP plugin in vim so that I can quickly navigate up and down call graphs. Tbh I don't understand how some developers claim to not need this and instead just manually grep + open file in order to "jump to definition".
7. Forcing myself to derive the rotation logic for AVL trees. I was curious whether, given just the high-level properties of how an AVL tree behaves in order to guarantee O(log n) time lookups, I would be able to figure out all the rotation cases. It was a very rewarding exercise and something I plan on writing a blog post about (eventually...)
8. Learning about the log data structure and how state can be replicated by replaying/executing this stream of transformations/updates.
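The log idea in item 8 is easy to make concrete: if every replica applies the same ordered stream of updates to the same initial state, they all converge to the same state. A minimal sketch (the operation names and state shape are my own invention):

```python
def apply_log(initial, log):
    """Replay a log of (op, key, value) updates to reconstruct state.

    Any replica that applies the same log in the same order
    reaches the same final state -- this is state replication
    by replaying a stream of transformations.
    """
    state = dict(initial)
    for op, key, value in log:
        if op == "set":
            state[key] = value
        elif op == "del":
            state.pop(key, None)
    return state
```

This is the core trick behind write-ahead logs and replicated state machines: ship the log, not the state.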
Done various sections of this and would heartily agree with how illuminating working through the project has been.
2. When I started reading programming and software engineering books after having just "done it" – this brought so much thought and structure to what I used to improvise
The section on graphics and UI described BOOPSI - an object-oriented way of constructing UI elements with inheritance, etc. Had never been exposed to that before and it blew my mind.
2. Being an early adopter of Tulip (later renamed asyncio) and gaining an understanding of the event loop and concurrency without threads.
3. Understanding that all code is just a particular representation of some S-expression.
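The S-expression view in point 3 can be made concrete: an expression is just an operator and its operands, nested. A tiny illustrative evaluator (the representation and names are my own, using nested tuples in place of Lisp lists):

```python
import operator

# map operator symbols to the functions they denote
OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def evaluate(expr):
    """Evaluate a nested-tuple S-expression like ('+', 1, ('*', 2, 3)).

    An atom (a plain number) evaluates to itself; a tuple is an
    operator applied to the evaluated operands.
    """
    if not isinstance(expr, tuple):
        return expr
    op, *args = expr
    return OPS[op](*(evaluate(a) for a in args))
```

For example, `evaluate(('+', 1, ('*', 2, 3)))` walks the tree just as a Lisp interpreter walks `(+ 1 (* 2 3))`.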
My biggest "OK Moment" (1) was non-developers building tools via spreadsheets when they do not have a tech team or the team's bandwidth, which makes spreadsheets IDEs. They also function as:
- independent micro data stores
- data exchange mechanisms
The first computer I had access to was my family's Windows 95 PC. I learnt to write HTML in notepad and see it rendered in Internet Explorer. This was the beginning of my career, but so much of what was happening was accepted by my brain as simply "magic".
I would later chip away at that stack of magic and learn how more and more things worked, but even while being an accomplished programmer I still had this feeling that magic existed. It wasn't until I learnt to build my own computer from discrete logic components (thanks, Ben Eater) that I finally felt like I understood it. Computers are just machines.
I've since revisited textbooks (like Knuth) and the history of computing (starting with Babbage) and feel like my eyes are no longer obscured by my preconceptions about what a computer is.
Understanding what someone really needs should be a skill taught in CS.
2. React and Typescript
3. Jetbrains IDEs
I probably should transfer the really good findings to a notebook, or maybe take a photo. But I haven't yet.
To me technical debt is therefore defined as how many decisions you have to make in order to create a feature. Clean is when you don't have to make many decisions to get things done.
Example decisions, I'd say I spend at least 90% of my time developing on these decisions:
- What feature would be good to have?
- Is the feature worth the effort to build?
- Is the feature worth the compute costs?
- What language/framework should we use for this feature?
- How should we structure persistent data related to this feature?
- Where should the code for this feature live?
- How should we test this feature?
- How performant should this feature be?
- What name should this helper function/variable have?
The more of those you have to think about when developing, the slower you will make progress. Therefore the main productivity hack is to write down guidelines, roadmaps, or design documents for all of them so you don't have to think much while developing. This means: don't be a manager when coding; let someone else do that work, or do it before you start programming.
Things you can do to reduce mental cost of above decisions:
- Product roadmap with features that would be good to have.
- Discussions in the roadmap related to how much value said feature will provide and the effort to produce it.
- Discussions in the roadmap related to how expensive the feature will be to run.
- General guidelines on what language/framework you use.
- Have a very good architectural document describing how you structure persistent data.
- Have list of example commits showing where to put code for different features.
- Have a well documented testing strategy with examples pointing to commits with good integration and unit tests.
- Have guidelines on how much typical actions are allowed to take, like "page update should take 100ms at most".
- Try to write code where you don't need a lot of long, superfluous names; namespaces and anonymous functions are your friends.
- Lastly, as much as possible, try to provide reasonable defaults for shared code. If you have to make 20 configuration decisions in order to use a library, then you won't save a lot of time using it, and likely people will just copy the configuration from other places anyway, since making 20 decisions is too much work. For example, let's say your library has a flag that can speed up processing 2x in some cases but adds extra overhead most of the time. You might think that forcing the developer to decide in each case, to ensure no performance improvement is missed, would be a good thing, but in reality a 2x performance improvement rarely matters. So the cost of having every developer make this decision outweighs the performance benefit. Instead, have it as an optional config that they can set when they actually need the performance.
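The defaults-over-mandatory-configuration point can be illustrated with a hypothetical library entry point (all names and values here are invented for the example):

```python
# Every knob the library supports, each with a sensible default.
DEFAULTS = {
    "retries": 3,
    "timeout_s": 30,
    "fast_path": False,  # the hypothetical opt-in 2x speedup flag
}

def make_client(**overrides):
    """Build a client config: callers override only the knobs they
    have actually measured a need for; everyone else makes zero
    decisions and gets the defaults."""
    unknown = set(overrides) - set(DEFAULTS)
    if unknown:
        raise TypeError(f"unknown options: {sorted(unknown)}")
    return {**DEFAULTS, **overrides}
```

The common caller writes `make_client()` and thinks about nothing; the rare caller who profiled a hot path writes `make_client(fast_path=True)`. The decision cost is paid only where it buys something.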