Then I wrote an MVC framework from scratch because I thought the others were too heavy (ZF, Symfony, etc.). It helped me understand application development at a deeper level, figuring out the best strategy for code modularization, organization, and performance. It taught me that creating something that looks painlessly easy to use is a pretty difficult task (there will always be something ugly sticking out). It took me on a ride through what happens with extreme levels of abstraction, breaking up the code for each role, as well as a simpler route much like the Sinatra / Express bootstrap approach.
I've found a new level of respect for those who develop code at a macro level (frameworks / libraries). A lot of thought and love has been applied to them. DHH (Rails) is up there, as well as John Resig (jQuery). I'll also give a shout out to Jeremy Evans while at it, for creating Sequel, an ORM that is almost perfect. Evan You also particularly deserves a lot of credit for what he is doing with Vue.js in the front-end world. I like React too, it can be fun, but I did not like the whole Redux experience.
I think what makes a great programmer is the ability to think at a macro level: thinking about how 'others' will use and apply the code, and making it look deceptively simple.
Out of curiosity, anything specific that concerned you?
If you haven't looked at Redux lately, a lot of stuff has changed. We have a new official Redux Toolkit package that is now our recommended approach for writing Redux logic, our new React-Redux hooks API is easier to work with than the classic `connect` API, and we have an updated list of recommended patterns and practices in our Style Guide docs page that should result in simpler and easier-to-understand code.
To me, Redux is just a pattern for how to handle immutable updates. Redux itself isn't even important. Every layer of abstraction over these plain, pure functions and serializable objects just obscures the simplicity of the architecture, in my opinion. Some of the recommendations expressed in the docs are reasonable, but you don't need a library for that, just the patterns.
In my experience a lot of the confusion beginners have with Redux is because they think it is a framework that will do things for them. In that light, it looks like there is an unnecessarily large number of parts to understand. I think they often miss the fact that there is no magic, just function composition. The clearer that is, the easier it will be to work with the code.
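The "no magic, just function composition" point can be made concrete. Below is a minimal sketch of a Redux-style store in plain JavaScript: it is not the actual Redux source, just an illustration of how little machinery the pattern actually requires.

```javascript
// A minimal Redux-style store: just a closure over state and pure functions.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action); // the only way state ever changes
      listeners.forEach((fn) => fn());
      return action;
    },
    subscribe: (fn) => { listeners.push(fn); },
  };
}

// A reducer is just a pure function: (state, action) -> newState.
function counter(state = 0, action) {
  switch (action.type) {
    case 'INCREMENT': return state + 1;
    case 'DECREMENT': return state - 1;
    default: return state;
  }
}

const store = createStore(counter, 0);
store.dispatch({ type: 'INCREMENT' });
store.dispatch({ type: 'INCREMENT' });
console.log(store.getState()); // 2
```

Once you see the whole "framework" is a closure plus a pure function, the large surface area beginners perceive mostly evaporates.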
I dug around, I looked at Facebook's documentation and propaganda for the pattern, and I came to the conclusion that Facebook people didn't know how MVC worked and decided to invent their own thing as a replacement. Then, I started building something, and realized that Facebook's Flux dispatcher library was getting in the way.
I built my own from scratch, and suddenly I understood the elegance of the pattern. It was not at all what Facebook's marketing about it said. It was not an MVC replacement at all. It was something else entirely, and should never be treated as an MVC replacement. The explanations about how it solved problems with MVC were nonsense: it was just a different pattern, for wildly different use cases, and the MVC-replacement propaganda I read still made it seem like people at Facebook just didn't understand how to use MVC.
My dispatcher was lightweight and clean. It was one of the nicer bits of code I've written, and I'm still quite proud of it, years after leaving that job.
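For readers unfamiliar with the pattern, a Flux-style dispatcher really can fit in a couple dozen lines. This is a hypothetical sketch, not the commenter's actual code: stores register callbacks, and every dispatched action fans out to all of them.

```javascript
// A hypothetical minimal Flux-style dispatcher (illustration only).
function createDispatcher() {
  const callbacks = new Map();
  let nextId = 0;
  return {
    // Stores register a callback and get back a token for unregistering.
    register(callback) {
      const id = nextId++;
      callbacks.set(id, callback);
      return id;
    },
    unregister(id) {
      callbacks.delete(id);
    },
    // Every action is broadcast to every registered callback.
    dispatch(action) {
      for (const cb of callbacks.values()) cb(action);
    },
  };
}

const dispatcher = createDispatcher();
const seen = [];
dispatcher.register((action) => seen.push(action.type));
dispatcher.dispatch({ type: 'TODO_ADDED' });
console.log(seen); // ['TODO_ADDED']
```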
I was the principal dev on that project. After a couple of weeks of building a complete application with a deceptively simple architecture that did all the heavy lifting with ease, I took a day off.
It was one day. It was a Friday, which meant that for the people there a quarter of the day was taken up with meetings. Somehow, in the midst of this, disaster struck.
They told me I hadn't documented how to use the architecture well enough, so they had to rewrite it to get anything done. The result was half-broken, and I had to finish what they started just to be able to continue myself on Monday. They never actually did anything with what they created that justified the claim.
A couple months later, I learned they hadn't actually looked at the documentation I wrote at all. They just decided that my bespoke implementation must be wrong because I didn't use existing tools. Well, shit, I didn't need existing tools. The whole dispatcher was something like twenty lines of code. The rest of the basic structure was in parts even smaller than that. The "rewrite" of the system was probably a thousand lines of library code (I'm guessing; I didn't count) and close to two hundred lines of junk around them to make things fit together, all so we could write a bunch of boilerplate for every single thing we wanted to do thereafter.
(Note that my references to all this code exclude the React tools themselves, installed directly from npm because yarn wasn't available yet.)
Long story short:
I think I understand what you mean about a pattern, as opposed to a framework or toolset. Also, even if the origins of Flux (and, thus, ultimately of Redux) were in some Facebookers possibly grossly misunderstanding how to apply the MVC pattern, Flux as a pattern was an excellent approach to addressing some specific needs for application architecture.
- Babel transpilation aspect (time it takes to build, difficulties associated with ad-hoc debugging in production)
- Bundling/packaging aspect (same reasons as above; time it takes to build, debugging difficulties).
- Huge number of unnecessary dependencies, which add risks/vulnerabilities to projects.
- Transpilation and bundling don't have any costs.
- Dependencies (those needed to do all that fancy transpilation and essentially re-implementing the way the DOM works) don't carry any additional risk and are not a threat to application security, maintenance...
For background, please read my two "Tao of Redux" posts to understand why Redux was built this way to begin with. Specifically, it was designed to have a minimal core and be extensible.
In addition, given the existing Flux library APIs, _and_ that Immer didn't exist, there's no way the Redux Toolkit API would have been feasible at the time. We were only able to come up with it after years of watching how people actually used Redux in practice, and what libraries they were writing on top of Redux to solve specific problems. (People have pointed out that RTK feels kind of like Vuex, which makes sense - Vuex was inspired by Redux's API to begin with.)
Software design is an iterative process. It's based on trying to solve problems, at a specific point in time, with inspiration from existing tools and technologies, and constrained by what is actually available at the time. AngularJS, Backbone, Flux, Redux, Vuex, Immer, NgRx, Redux Toolkit... all of these tools have specific inspirations and problems they were trying to solve. Understanding _why_ tools were created is hugely important.
On that note, my post on "Redux Toolkit 1.0" covers how and why we were able to design RTK's API this way.
Thanks for the explanation.
The only problem with Toolkit is that it's not Redux. People don't know it yet, and I have to fight to convince people it's good enough. In an ideal world for me you'd be able to deprecate old Redux and just replace it with Toolkit.
It's still 100% Redux. You create reducers, add them to a store, dispatch actions, and read that data in your components. None of that has changed. You're just writing less code to do it.
What RTK does is eliminate the "incidental complexity" that came along with the original Redux usage patterns: writing action types and action creators by hand, writing complex nested immutable update logic by hand, making a mistake in that immutable update logic and causing accidental mutations, etc.
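To make that incidental complexity concrete, here is a plain-JS sketch of the hand-written pattern being described: an action type constant, an action creator, and a nested spread-based immutable update. This is the kind of boilerplate that RTK's `createSlice` generates for you (the todo example itself is illustrative, not from the original comment).

```javascript
// Hand-written Redux boilerplate that RTK's createSlice replaces.
const TODO_TOGGLED = 'todos/todoToggled';                          // action type
const todoToggled = (id) => ({ type: TODO_TOGGLED, payload: id }); // action creator

// Nested immutable update written by hand with spreads: easy to get wrong.
function todosReducer(state = { items: [] }, action) {
  switch (action.type) {
    case TODO_TOGGLED:
      return {
        ...state,
        items: state.items.map((todo) =>
          todo.id === action.payload
            ? { ...todo, done: !todo.done }
            : todo
        ),
      };
    default:
      return state;
  }
}

const before = { items: [{ id: 1, done: false }] };
const after = todosReducer(before, todoToggled(1));
console.log(after.items[0].done);  // true
console.log(before.items[0].done); // false (the original was not mutated)
```

Forget one `...state` spread in a deeper structure and you have an accidental mutation bug; that is exactly the class of mistake Immer-backed `createSlice` removes.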
I do understand the suggestion to "replace the Redux core with RTK", and I take it as a great compliment that people suggest that. However, it can't happen, because a lot of people are already using Redux with their own customizations and abstractions. Changing the Redux core to suddenly include a bunch of other dependencies would not be what they want.
The other aspect is that the Redux core was designed from the beginning to be unopinionated, while RTK is _deliberately_ opinionated. Not all Redux users want RTK's opinions. As it is, we've had folks who didn't like the fact that RTK _is_ opinionated and requires use of Immer in our `createReducer` and `createSlice` APIs.
We had a long discussion issue about whether it made sense to add things like this to the Redux core (which was actually part of what prompted us to create RTK in the first place).
FWIW, we explicitly recommend RTK as the standard way to write Redux logic, and now that Redux Toolkit 1.3 is out with some new APIs, my next task is to create a new "Quick Start" tutorial page for the Redux core docs that actually teaches Redux using RTK as the default syntax, i.e., "this _is_ how you write Redux code", in the same way that the Apollo docs teach Apollo Boost as the default way to use Apollo.
That's kind of fine! I don't really mind using different architectures, or even different frameworks. But in large projects the inconsistencies from different approaches generate bit rot and a lot of bikeshedding.
I find RTK really good (maybe I'm biased because I've been using Immer from day 1), but that's beside the point: the best part is that it removes what are, IMO, the least productive and most boring parts of building software in large teams: bikeshedding, boilerplate, and lack of structure in large projects.
There's also nothing we can do about the 5 million existing tutorials out there on the internet, and it's not like the core APIs would be _removed_. All those are still syntactically valid. Even if we shipped all these additional APIs as part of the core, people would have to look to see that they exist.
Otherwise, I'm really not sure what that "it's not Redux" comment means, either.
edit: Oh, crap, I just made a software/car analogy. I hate those.
Vanilla Redux on its own is like Linux without a userland, so you have to implement one yourself.
Redux Toolkit is a proper Linux with GNU userland. It's ready to use, and people can even use each other's computers without reading through the source code!
It doesn't make a lot of difference for smaller projects, but the thing about Redux is that as soon as you have 20+ or 30+ people working on the same codebase you get a hodgepodge of different patterns and structures. Toolkit solves it.
As I left the ecosystem, I realised I approached writing code in a very defensive, overly structured way and have written less than half as many libraries ever since. The jury is out on whether that's a good thing or not.
As a sometimes-front-end dev, part of me quietly yearns to go back to the days of serving full html pages, especially since data transfer speeds are higher than they've ever been.
Edit: that's not a rejection of React et al. more an acknowledgement that everyone wants to do server-side rendering "because performance" which was... what we used to do. If we serverside render most of the page, we only need to enhance _some_ of the UI clientside? Yup. Exactly what we used to do.
C imposes similarly rigorous proving grounds for developers, but without pushing new developers into bad habits that would come back to bite them later. If you want people to start with more requirements for hard thinking about how to not build a deathtrap, choose C instead.
Mind you for my current application I may still go for Redux, it's going to be very large with possibly a lot of interdependent components and form fields. We'll see.
To take your point to absurdity, you could also just say "nope, write a native client instead."
That's a very one-dimensional decision-making process. You're nowhere near convincing me that all UIs must be made as websites.
Even if we restrict our view to UIs made by businesses for the purpose of making a profit, native apps make a lot of sense. Just think of any big tech company; they all have robust apps, and consumers largely prefer the apps. You can order DoorDash or an Uber in a web browser, but people almost always use the app, because a good native app is a better experience than a good web app.
I don't fully agree. Yes, some programming needs to be done at a meta level, but far from all of it. Sometimes a software engineer is just there to solve a business problem, and in that case what makes them great is efficiently solving that particular problem, and not a meta-problem.
Sometimes shorter code is easier to write, less buggy, and more maintainable than meta code.
So thinking about how others (which could even be yourself two years down the road) will use the code is still very important for such applications.
My definition of code is a language used to explain to other programmers what you are trying to do, which can also be parsed by a computer.
- I stopped worrying about best practices and just do it.
- I hardly refactor my code unless I'm using the same thing for the fifth time. I just copy-paste it instead.
- I never think about optimizing code until I really really need to.
- I write what gets the shit done quickly. I don't care if writing a Class would have been better than a function.
- I love old boring technologies and believe you can still make amazing websites with just PHP and jQuery.
- I don't come up with clever one-liners anymore. I write a 10 line for loop which I could easily replace with a clever one-line code golf.
- Instead of writing my own code, I usually search Stack Overflow first to see if I can find something to copy-paste instead.
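The "boring loop over clever one-liner" point above is easy to show side by side. Here are two equivalent ways to count word frequencies (the example itself is mine, not from the comment): both work, but only one is obvious at a glance.

```javascript
const words = ['a', 'b', 'a', 'c', 'a'];

// The "clever one-liner" version, using reduce and the comma operator:
const terse = words.reduce((m, w) => ((m[w] = (m[w] || 0) + 1), m), {});

// The boring, explicit version: more lines, zero head-scratching.
const counts = {};
for (const word of words) {
  if (counts[word] === undefined) {
    counts[word] = 0;
  }
  counts[word] += 1;
}

console.log(counts); // { a: 3, b: 1, c: 1 }
```

Both produce the same result; the loop just costs the next reader far less effort.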
Substitute any other engineering discipline into "I stopped worrying about best practices" and you will be able to see what's wrong with your approach. But then again, if it's PHP, your domain doesn't require too much care.
It's not being more productive, it just seems that you're getting things done faster because you're taking out a loan on the quality of your software (i.e. technical debt) with every line you write.
Management may be willing to spend that time and not be unhappy with what you perceive as time wasted because they have seen greater than expected ROI on the product already so it's not a loss to them.
1. Code you wrote you cannot understand a week later - no comments.
2. Code with no specifications.
3. Code that is shipped as soon as it runs and before it's beautiful.
4. Code with added features.
5. Code that is very very very fast and very obscure.
6. Code that is not beautiful.
7. Code you wrote without understanding the problem.
I do not look down on any programming language. What I meant there was that PHP is often used for web applications, not for performance-critical or security-critical server-side components or systems programming. There's also the practical reality that some kinds of coding don't require too much regulation on how we write code. I realised that bit came out wrong, but I couldn't edit it.
In fact, I'd say that until someone is able to quantify the revenue lost and directly tie it to the choices you are making, you should stay the course.
- Use long expressive variable/function/class names that tell me exactly what this variable/function/class does
I have to constantly fight the urge to have code fit in the smallest amount of space possible.
I do this to some degree - I pick and choose my best practices because, quite frankly, some "best practices" are actually bad IMHO ("anti-patterns").
This is sometimes very context-dependent, but I'm now more sceptical of adopting a common practice whose benefit I personally haven't seen.
Maybe the problem isn't YAGNI, but that you keep building useless shit. Rather than assume things aren't needed in general, maybe work on why your predictions of what you're gonna need are so poor.
Sometimes, the stuff I predict I'll need is needed after all, and I was glad I started early. Projects can live and die on how perceptive their devs are with regard to future changes; this is just a developed skill, no maxims needed.
Though I'll agree with you on two points - boring old technologies and searching Stack Overflow.
Optimizing code is something you don't need to worry about a lot of the time. Having understandable code is a lot more important in my opinion, but you don't seem to care about that.
That said, there are plenty of jobs where code doesn't need to be maintained. Most bioinformatics research just needs to produce a graph or heatmap that gets used once, so maintainability isn't so important.
The point of 'really really need to' has long been passed and you didn't notice it.
(Unless you use an entry-level $200 notebook for development, which you probably don't.)
In other words, please please eat your own dogfood. Your users aren't browsing your 'amazing websites' with a development notebook.
Also - as long as your code is written to be easily readable and refactorable, refactoring can be done when needed. It’s dense complex code that’s too tightly coupled that’s the long-term burden.
It’s often about how the code fits together rather than the individual syntax itself.
I have not used OOP or programming patterns or whatever for years and no one got hurt. In various code bases I never used inheritance (abstract classes) or method/dependency injection or whatever someone might ask me in an interview.
The code I copied from elsewhere would just get adapted to the company (Spartan) style and that's it.
Depends on the team. If it’s a team of code ninnies that hem and haw over the “correct” way to do something while deadlines are missed, this is bad. Worse even than ugly code.
Balance. There has to be balance.
Be kind on yourself, your teammates and whoever follows in your footsteps down the line and spend some time tidying up after yourself if you're not building throwaway things as personal projects.
If it is for a team, you need to take into account if other people can understand your code. Practices like SOLID and just using the type system well don't make it slower to write code but will help maintenance down the road.
Yet, I find most of the code snippets posted to Stack Overflow to be garbage. I despair if these are being copy-pasted into production systems :(
Found the Magento developer.
However, typing this out, I realise I would have said the same thing 10 years ago, and 5 years ago, and I probably will again another 5 years from now.
I think the truth is that we're always missing the forest for all the trees -- only at increasing levels of abstraction: at first, everything is foreign, then syntax becomes fluff, then APIs become fluff, then patterns, then languages, then compilers, then ... I look forward to learning what the next level of fluff will be.
> The majority of the stuff being released every day is just a rehash of the same ideas. Truly revolutionary stuff only happens every few years.
Maybe not even that often. I can recommend Old Is the New New by Kevlin Henney for a good, historical perspective on how little actually changes sometimes: https://youtube.com/watch?v=AbgsfeGvg3E
Edit: now this was more interesting than I anticipated. What is the next level of fluff? Concurrency? Advanced data structures? Phrasing problems for SMT solvers? Probabilistic modeling?
Or are the things I picture as fluff just specific techniques, because the next level of fluff is so meta I wouldn't even recognise it now?
Perhaps a slight digression, but: I think there are very few people who can really say they can 'see through' the difference between languages.
C++, Erlang, Prolog, and Haskell are very different languages, all the way from the shallow matter of syntax, through the type system, and down to the fundamental model of computation. When I hear someone say "If you learn to program in one language, it's easy to learn another," I assume that person doesn't know much about programming.
This is flat out wrong.
If you know Python, it'll be easier to learn Rust than having to learn Rust from zero. Engineering is about problem solving and formulating solutions with algorithms and data structures. If you don't know what those are yet, it'll take much longer to learn.
Learning one of anything makes learning the next one easier. Programming languages, spoken languages, musical instruments, sports cars, martial arts, sports, weapons, drawing, etc. Your brain builds a rich scaffold you can hang new concepts from.
> Learning one of anything makes learning the next one easier
Agreed, but I put easy, not easier. I suspect we really agree on all points.
If you know C#, it's easy to learn Java, as most of the concepts are shared, and even the syntax is very similar. When someone says "If you learn to program in one language, it's easy to learn another," it betrays that they don't understand that some languages are similar, but some are very different and have unavoidable learning curves.
C++ is a good example. Learning C++ is not easy, no matter what your background knowledge of other concepts and languages. It's a huge language, in which nothing is simple. Even the build model is a minefield. Having a solid understanding of imperative programming and OOP (from knowing Java, say) is certainly an important head-start. I wouldn't recommend C++ as someone's first programming language. There's still an enormous amount to learn about C++ specifically, no matter your background knowledge.
If you're building a web app, whatever language you use, it will still come down to HTML/CSS. You'll also probably be using SQL, which is also unlikely to change.
But there’s the other side to that. If you’re graphics programming, OpenGL provides the same features regardless of language. The OS APIs are broadly the same - the steps to set up a TCP listener don’t change with the language.
The APIs for all these may be subtly different between languages, but not significantly. It's a case of translation rather than relearning, and although I can't speak for anyone else, I find that translation quite painless.
After doing that 10+ times, it becomes easier to see the patterns in languages, and easier to understand how their underlying structure will affect how you have to code. There's only so many ways to write a compiler or interpreter, and if you know how they work and you know that your language is, for example, optimized for tail recursion, then you get your experience from other similar languages as a bonus.
I see your point if you're saying, on a surface level, that, "you know one, you know them all," is wrong, but if you dig deeper I think you'll see that, for the experienced (as in: someone who's been around a while) developer, you kinda can say that, with some reservations.
I think anyone can do the same thing, it just takes a decade or more to get there. I'm kinda dumb, too, so if I can do it, anyone can.
What I've found is that once you learn more than one language with different paradigms, it becomes easier to learn another one, whether its paradigm is similar or different again.
As you pointed out, there are only so many ways a language can be designed/structured, and becoming multilingual means getting familiar with the shared (and implicit) models and operations underlying all languages.
I would, however, agree with you that those who say "it's easy to learn another" in an absolute way have usually only considered languages from the same group so far.
No offense to my fork-lover friends.
If you know Java, you'll not take very long to be productive in another JVM language or in C#.
Yes, deep specialized expertise will take time. Also, switching across paradigms or levels (C -> Prolog or Python -> assembly) will take much longer. But that's not what people usually have in mind when they say switching languages is easy. It's usually the domain that takes longer to learn.
That's often the case: many people who think that don't really use the strengths of the languages they use.
On the other hand, I've written a basic lambda calculus parser for fun and gone on to write (non-distributable, if I care about people) software using it, also for fun; I've written idiomatic, well-formed code in BASIC, C, Perl, Ruby, Scheme, Standard ML, and a wide range of other things; and I've basically added features by removing code. At some point, even if you know what you're doing, there comes a point where, kinda-sorta, it becomes easy to learn others.
It just takes a lot more than learning only one language to get to that point. Whether the person who first coined that comment about learning languages meant it as a simplification of a wise concept or was a blubdev who never really figured out the significant differences between programming languages is a question I'd like to see answered definitively, based on knowledge of the individual rather than guesses.
But I wouldn't say it is "easy" after learning one language. It definitely takes two or three, with step-changes each time you learn a language in a different paradigm than you have before.
If you know Python, Ruby or PHP will be pretty easy for you to pick up. You might have some difficulties with Haskell and Lisp.
Learning some Haskell was probably the most worthwhile time I have spent learning a language because it forced me to fundamentally reevaluate how I think about programming. Now I don't use Haskell at work, and likely never will; but, years later, the techniques I learned from it I still apply daily when I write C, Python, JS or whatever. It made my code far cleaner.
Crucially, I don't just think of languages as different syntax for Java.
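One concrete example of how a Haskell habit can carry over into everyday JavaScript (my illustration, not the commenter's): preferring small pure functions and composition over in-place mutation.

```javascript
// Haskell-inspired style in plain JS: small pure functions, composed.
const compose = (...fns) => (x) => fns.reduceRight((acc, fn) => fn(acc), x);

const trim = (s) => s.trim();
const lower = (s) => s.toLowerCase();
const words = (s) => s.split(/\s+/);

// Reads right-to-left, like Haskell's (.): trim, then lower, then split.
const normalize = compose(words, lower, trim);

console.log(normalize('  Hello World  ')); // ['hello', 'world']
```

None of the individual functions are interesting; the point is that each is trivially testable on its own and the pipeline has no hidden state.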
So much of the software I work with could plausibly have been written by a machine instead.
I couldn't disagree more: grown-up programmers use a strategic approach, not a tactical one. As John Ousterhout put it in his book A Philosophy of Software Design, "The first step towards becoming a good software designer is to realize that working code isn't enough."
In that same job, the boss decided (after we switched to a pseudo-Flux, Frankenstein's monster of MVC component libs that slowed development to an excruciating crawl) to bring in a consultant to "help". He was the "crap out apparently-working code quickly" brand of "10x Programmer". Some basic functionality for new features got added in a hurry, over a couple weeks' time. The following couple weeks were dedicated to fixing everything he wrote.
Yes, I agree: "grown up" programmers think beyond what seems to work right now.
Sure, I use dirty hacks semi-regularly, but only when I'm first feeling out how something works. It doesn't last more than a couple hours, usually far less, because the point isn't to commit that code but to learn how to think about the problem so I can do it right, and a "done right" rewrite takes less time to get working than the initial dirty hack.
The "done right" rewrite also doesn't impose tremendous maintenance and continuing development costs over months to come the way the dirty hack would.
If the author means real programmers use dirty hacks to figure out what they're doing, then they make it right, that's fine. If not, that's not fine at all.
It's not about how fast you get to code to production, it's about how fast code/product can be adapted to new or unexpected requirements.
In short, by learning how a business operates, how cash flows, and how expensive things are (people, buildings, software, hardware, accountants, lawyers, ...), you'll begin to understand why an MVP is a powerful tool. You'll also appreciate Agile development practices more.
With an MVP you're writing the least amount of code possible, and skipping optimisations, whilst also providing some value to people who can tell you what direction to go in next. That's a business thing - the business needs something in the market ASAP and it needs feedback ASAP. Without this you'd write your stack for three years and then release to a world that moved on two years ago.
When you learn how a business is run, you begin to understand that getting to market with a less-than-ideal code base is actually desirable and something to be embraced. All of a sudden you'll come to understand that money burns quickly, so you should learn to develop code to get to market, not to be the fastest code it can be (yet, at least).
I find the best programmers I've worked with are those that can produce something quickly and optimise it later on.
Fred Brooks has a chapter in The Mythical Man-Month where he describes different programmers having different roles within a "surgical software team", but that never took off.
I wonder if updating that to 2020 would make more sense to have a group of people shipping features full-steam ahead, another reviewing the code for mistakes, and then having other folks refactoring the mess, behind both of them.
I would frankly enjoy being in just one of the three positions, any of them, much more than the position I'm currently in: having to be all three at the same time.
I'm still relatively new to the industry (3 years programming, 1.5 professionally), but I've found that I'm a BIG fan of refactoring. To me, it's like writing or communicating ideas in general, which I really enjoy. I love taking a complicated idea and trying to explain it in the most simple and readable of terms. It also pushes me to really understand the intent behind the code/text.
To that extent, I enjoy editing and revision much more than creation. So, taking a role as a 'refactorer' actually sounds great!
I hope I'll be able to learn that part of it too, but perhaps I am more naturally inclined towards writing the second version of their MVP instead.
Absolutely. Modeling what you're trying to do in code is a critically important task, and most surface-level grime is forgivable if you have an understandable foundation.
However, I worry that if you focus on the first half of this block, you might think the abstractions aren't important. To a lot of new programmers, abstractions and models are themselves merely "nice" -- if you can get it done without them, clearly they're non-essential, right?
I believe that most code is read more often than it's written. (You pass through a lot of code while tracing down bugs!) It pays to reduce the brainpower necessary to understand a block of code, and good abstractions and good models are key to this.
Bad abstractions and models make it much harder to understand code, though. So for junior devs, yeah, it would often be better to be very, very careful, and think very, very hard about your abstractions before creating them.
Everyone wants to turn every problem into some sort of framework loaded with interfaces and different implementations of them, versus being OK with "feature X needs to do things A, B, and D; feature Y needs to do things A, B, C, and E; let's just have them as two separate endpoints with separate biz logic calling those methods sequentially, instead of trying to coerce them together."
True! The point I'm trying to make is more that code clarity is important and worth striving for, regardless of the means you use to achieve it. Clarity is a non-negotiable part of my definition of "nice". If it's also part of others' idea of "nice" -- especially those of novice developers who look to more experienced developers for guidance -- then this paragraph might suggest that clarity can be down-played. It can't.
If you're an experienced developer, you should be able to find good abstractions for achieving clarity in-context. But novice developers can still identify code that's unclear (to them, especially).
Blindly applying abstractions rarely helps with code clarity. Some developers apply certain abstractions because "it's a best practice", rather than because it's actually more clear. That's definitely not what I'm advocating for.
Most value comes from consistency: if you make the same mistakes in the same way, it is easier to find them. If you make the same mistake in a different way each time, good luck finding it.
However, the old "learn C" and "write a compiler" advice is within meme territory, tbh. It is generally more important to understand what the compiler / interpreter / runtime of whatever language you are using is doing under the hood, since most developers will be working with a high-level language.
I've seen a lot of "how to improve as a programmer" articles, and IMO what seems to be left out is learning how to do things as simply as possible. Many of the software systems I've worked on are so over-engineered; many simple websites are huge on both the client and server side, and they usually don't warrant it.
Well, thanks a lot. You just made your code much harder to read, improve, change, and check for correctness. This is actually a self-contradiction: if the code changes often, then it's even more important to make it "look nice".
This is something that beginner programmers don't get, on several levels. Nice code:
* can be reviewed more quickly (broadcasting effect)
* can be checked for correctness much more easily
* is pleasant to work with
* can be changed more easily
* can often even be compiled into more efficient machine code
The reason why beginners (and this is not about time invested! Many programmers still are beginners after 30 years in the field) think code doesn't need to look nice is mostly because
1. They don't consider that most of the cost of code comes from maintenance. Writing it is really A FRACTION of the time people will spend with your code over its lifetime.
2. They don't consider that creating bugs is one of the most expensive parts of software development.
3. They don't consider that code is read far more often than it is being written.
4. They are simply not ABLE to produce good code and doing so would take them a lot of time.
Now, giving the advice to not focus on good/nice code is a recipe for staying a beginner for the rest of your life and making every project you work on a nightmare for everyone else involved.
Personally I dislike some of the default formats that Prettier has chosen, but if it prevents a single minute of discussion on my team about how the code should be formatted, it pays dividends over time. And eventually we, as humans, seem to get used to just about anything.
Excellent advice right there!
> Scheme is the only language to implement them, and while you will never use them in production, they will change how you think about control flow. I wrote a blog post trying to explain them.
Hmm, well, many languages compile to a continuation passing style (CPS) intermediate. And in many languages it is important to understand continuations to understand how they work. For example, laziness in Haskell is essentially all about continuations and continuation passing style.
What is CPS? Anyone who has been in callback hell knows. In CPS a continuation is just the closure you pass to some function that it will call when it's done. In fact, even in C, the act of returning from a function is very much like the act of calling a continuation, except that by unwinding the stack, the magic of Scheme continuations is lost.
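To make the "closure you pass to some function" idea concrete, here is a minimal Python sketch of direct style vs CPS. The helper names (`add_cps`, `square_cps`) are made up for illustration, not from any library:

```python
# Direct style: a function returns its result to the caller.
def add_direct(a, b):
    return a + b

# CPS: the function never "returns"; it hands its result to a
# continuation k -- the closure representing the rest of the computation.
def add_cps(a, b, k):
    k(a + b)          # "returning" = invoking the continuation

def square_cps(x, k):
    k(x * x)

# Computing (2 + 3)^2 in CPS: each step passes its result forward.
results = []
add_cps(2, 3, lambda s: square_cps(s, results.append))
# results == [25]
```

This is exactly the shape of callback hell: every step takes "what to do next" as an argument instead of returning a value.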
I'm a Haskell programmer. Continuations are part of my everyday toolbox.
It doesn't have to be fancy, I recommend using Forth or Lisp as a starting point; but write something real, do your thing.
I find that most compilers are toys and replicas of existing languages on top of that, which misses the point of designing your own tool.
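For a sense of how small the "write a Lisp" starting point can be, here is a toy sketch in Python: a tokenizer, a parser, and an evaluator for arithmetic S-expressions. It is deliberately minimal (no error handling, no special forms), and all names are illustrative:

```python
# Toy Lisp-style evaluator: tokenize -> parse -> evaluate.

def tokenize(src):
    # Pad parentheses with spaces so split() separates them.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)                      # discard the closing ")"
        return expr
    try:
        return int(tok)                    # numeric literal
    except ValueError:
        return tok                         # symbol

ENV = {"+": lambda *a: sum(a),
       "*": lambda *a: a[0] * a[1]}

def evaluate(expr, env=ENV):
    if isinstance(expr, str):              # symbol lookup
        return env[expr]
    if isinstance(expr, int):              # literal
        return expr
    fn = evaluate(expr[0], env)            # function application
    return fn(*[evaluate(arg, env) for arg in expr[1:]])

print(evaluate(parse(tokenize("(+ 1 (* 2 3))"))))  # → 7
```

From here, "doing your thing" means adding your own special forms, scoping rules, and data types, which is where the real design lessons start.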
Don’t know why, but the whole struggle of using it changed the way I thought about code and I gained the power to code at the speed of thought.
Sane defaults, know the tool and how to use it, but don’t waste time on configuration. When possible choose the standard tool for the job (IntelliJ for me).
It’s still fun to yak shave, mess with things, play with the terminal etc. but at least for me I found that the time spent doing that was the opposite of productive.
Then again, these days I just use whatever vim-mode solution there is in VSCode, IntelliJ, etc. and move on. Kind of like how I prefer a GUI for Git these days.
It would blow the mind of my younger self. I just have so many other things going on in my life and on my plate.
Personally this typically ends up in me just dropping the problem altogether
If I didn't understand a bit of code I wrote a few months ago, it was probably too complex and worth refactoring. As it was my own system, I couldn't blame anyone else's poor design decisions.
I noticed that keeping the logic as close to the database as possible was generally the way to go. Avoiding changing the database schema and making up for it at the application level was basically adding technical debt.
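A small sketch of that trade-off, using Python's built-in sqlite3 (table and column names are made up for illustration): the same aggregation done at the application level versus pushed down into the database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 10.0), ("bob", 5.0), ("alice", 7.5)])

# Application-level: ship every row out, then aggregate in Python.
totals = {}
for customer, amount in conn.execute("SELECT customer, amount FROM orders"):
    totals[customer] = totals.get(customer, 0.0) + amount

# Database-level: one query, the DB does the work where the data lives.
db_totals = dict(conn.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer"))

assert totals == db_totals  # same answer; the SQL version scales better
```

The app-level version quietly reimplements `GROUP BY`, and that duplicated logic is exactly the kind of technical debt the comment describes.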
I don't think it provides more fundamental benefits than any other Lisp, other than access to a plethora of standard Java libraries. E.g. I cannot write Java, but knowing Clojure has allowed me to maintain a large inherited Java codebase. AFAIK I could not have done that in Common Lisp (and not only because I don't know CL).
I think almost everything is harder when you are not using lisp. Also lisp tends to bring joy to its users for some reason, so there is that.
Finally, there is a huge difference between Python and Clojure data wrangling. Pandas and Spark are fundamentally table-oriented, but if you are using nested dictionaries (Clojure maps), you do not have those tools to help you. No matter what I was doing in Clojure, it always involved processing nested maps; it's just the natural approach in the language, kind of like using classes in Java.
Hope I managed to clarify something :)
Concurrency primitives and immutable data structures are part of the core language. This is probably more interesting than the Lisp part itself, which is kind of a bonus. Rich Hickey's talks stress the former aspects a lot more than the latter.
Also, Clojure was created by a fairly respected dude (AFAIK), Rich Hickey. His talks and the Clojure design-decision articles on the site are really insightful.
Also, this talk gives a pretty good overview of the different paradigms and the `Clojure way`, imo.
When devs talk about data in Python, they are referring to data science; Python has a big ecosystem to support that, and Clojure doesn't. Data-oriented in Clojure means using generic data structures like maps/vectors/sets to represent information, instead of creating classes for everything.
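The same contrast can be sketched in Python (names here are illustrative): a class per concept versus plain generic structures updated Clojure-style, by copy rather than mutation.

```python
# Class-oriented: a dedicated type for each shape of data.
class User:
    def __init__(self, name, tags):
        self.name = name
        self.tags = tags

# Data-oriented: the same information as generic dicts and sets,
# so every generic dict/set tool in the language applies to it.
user = {"name": "ada", "tags": {"admin", "dev"}}

def add_tag(u, tag):
    # Return an updated copy instead of mutating, Clojure-map style.
    return {**u, "tags": u["tags"] | {tag}}

u2 = add_tag(user, "ops")
assert user["tags"] == {"admin", "dev"}        # original untouched
assert u2["tags"] == {"admin", "dev", "ops"}   # new value with the update
```

In Clojure the copy-on-update step is cheap because maps are persistent data structures; this Python version only sketches the idiom, not the performance characteristics.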
I feel a similar way about Python too though.
The hype probably comes mostly from good concurrency tools + immutable data structures, but at least partly it comes from people who really enjoy working in it.