How I became a better programmer (2017) (jlongster.com)
423 points by _ttg 5 days ago | 146 comments

About a decade ago, when OOP was just booming in the PHP world, I wrote my own database abstraction layer because I was a bit tired of the ORMs that existed at the time (Doctrine, Propel, ZF). It taught me what makes a beautiful fluent interface and the difficulty of achieving one. It helped me think about UX at the code level, trying to build an interface that works for developers of all levels. That was an eye-opening experience.

Then I wrote an MVC framework from scratch because I thought the others were too heavy (ZF, Symfony, etc.). It helped me understand application development at a deeper level, figuring out the best strategies for code modularization, organization, and performance. It taught me that creating something that looks painlessly easy to use is a pretty difficult task (there will always be something ugly sticking out). It took me on a ride through what happens with extreme levels of abstraction, breaking code up by role, and then back to a simpler route much like the Sinatra/Express bootstrap approach.

I've found a new level of respect for those who develop code at a macro level (frameworks/libraries). There is a lot of thought and love applied to them. DHH (Rails) is up there, as is John Resig (jQuery). I'll also give a shout-out to Jeremy Evans while I'm at it, for creating Sequel, an ORM that is almost perfect. Evan You (Vue.js) also particularly deserves a lot of credit for what he is doing in the front-end world. I like React too, it can be fun, but I did not like the whole Redux experience.

I think what makes a great programmer is that they are able to think at a macro-level, thinking about how 'others' will use and apply the code, and making it look deceptively simple.

> I did not like the whole Redux experience

Out of curiosity, anything specific that concerned you?

If you haven't looked at Redux lately, a lot has changed. We have a new official Redux Toolkit package [0] that is now our recommended approach for writing Redux logic, our new React-Redux hooks API [1] is easier to work with than the classic `connect` API, and we have an updated list of recommended patterns and practices in our Style Guide docs page [2] that should result in simpler and easier-to-understand code.

[0] https://redux-toolkit.js.org

[1] https://react-redux.js.org/api/hooks

[2] https://redux.js.org/style-guide/style-guide

I’ve used Redux for years, and tried many different approaches to it. But when I look at Redux Toolkit I don’t see much of what I’ve enjoyed about Redux.

To me, Redux is just a pattern for how to handle immutable updates. Redux itself isn't even important. Every layer of abstraction over these plain, pure functions and serializable objects just obscures the simplicity of the architecture, in my opinion. Some of the recommendations expressed in the docs are reasonable, but you don't need a library for that, just the patterns.

In my experience a lot of the confusion beginners have with Redux is because they think it is a framework that will do things for them. In that light, it looks like there is an unnecessarily large number of parts to understand. I think they often miss the fact that there is no magic, just function composition. The clearer that is, the easier it will be to work with the code.
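Stripped to its essence, that composition fits in a few lines. This is an illustrative toy, not the actual Redux source:

```javascript
// A toy Redux-style store: a closure over state, a pure reducer,
// and a list of subscribers. No magic, just function calls.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch(action) {
      // The only way state changes: a pure function of (state, action).
      state = reducer(state, action);
      listeners.forEach((fn) => fn());
    },
    subscribe(fn) {
      listeners.push(fn);
    },
  };
}

// A reducer is just a pure function over serializable values.
const counter = (state, action) =>
  action.type === "increment" ? state + 1 : state;

const store = createStore(counter, 0);
store.dispatch({ type: "increment" });
store.dispatch({ type: "increment" });
console.log(store.getState()); // 2
```

Everything the real library adds (middleware, combineReducers, and so on) is layered over this one core idea.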

Before Redux was probably even an idea, when Flux was reasonably new, my boss at the time basically said this new Flux thing looked like what we needed for a project and told me to learn about what made Flux tick and start building the project on that pattern.

I dug around, I looked at Facebook's documentation and propaganda for the pattern, and I came to the conclusion that Facebook people didn't know how MVC worked and decided to invent their own thing as a replacement. Then, I started building something, and realized that Facebook's Flux dispatcher library was getting in the way.

I built my own from scratch, and suddenly I understood the elegance of the pattern. It was not at all what Facebook's marketing about it said. It was not an MVC replacement at all. It was something else entirely, and should never be treated as an MVC replacement. The explanations about how it solved problems with MVC were nonsense: it was just a different pattern, for wildly different use cases, and the MVC-replacement propaganda I read still made it seem like people at Facebook just didn't understand how to use MVC.

My dispatcher was lightweight and clean. It was one of the nicer bits of code I've written, and I'm still quite proud of it, years after leaving that job.

I was the principal dev on that project. After a couple of weeks of building a complete application with a deceptively simple architecture that did all the heavy lifting with ease, I took a day off.

It was one day. It was a Friday, which meant that for the people there a quarter of the day was taken up with meetings. Somehow, in the midst of this, disaster struck.

When I came back on Monday, I discovered that the boss and a co-worker had decided they would just replace all the guts of the application with a pile of libraries designed for MVC, but bend them into a Flux shape, kinda, sorta. The result was slower to build (q.v. JavaScript "build" crapola), slower to deploy, slower to test, and slower to operate. It was slower to debug, slower to extend, and generally miserable. It required fixing or changing something in the plumbing almost every time a feature had to get added. My work on a small, beautiful, efficient architecture was destroyed, and nothing we ever did with that application ever actually required any of what they did to it.

They told me I hadn't documented the architecture in place well enough, so they'd had to rewrite it to get anything done. But the result was half-broken, I had to finish what they started before I could continue my own work on Monday, and they never actually did anything with what they created that justified that claim.

A couple months later, I learned they hadn't actually looked at the documentation I wrote at all. They just decided that my bespoke implementation must be wrong because I didn't use existing tools. Well, shit, I didn't need existing tools. The whole dispatcher was something like twenty lines of code. The rest of the basic structure was in parts even smaller than that. The "rewrite" of the system was probably a thousand lines of library code (I'm guessing; I didn't count) and close to two hundred lines of junk around them to make things fit together, all so we could write a bunch of boilerplate for every single thing we wanted to do thereafter.

(Note that my references to all this code exclude the React tools themselves, installed directly from npm because Yarn wasn't available yet.)
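To give a sense of scale, a Flux-style dispatcher in that spirit really can be about twenty lines. This is a hypothetical reconstruction of the idea, not the code from that project:

```javascript
// A bare-bones Flux-style dispatcher: stores register callbacks, and
// every dispatched action is delivered to every registered callback.
function createDispatcher() {
  const callbacks = new Map();
  let nextId = 0;
  return {
    register(callback) {
      const id = nextId;
      nextId += 1;
      callbacks.set(id, callback);
      return id; // token for unregistering later
    },
    unregister(id) {
      callbacks.delete(id);
    },
    dispatch(action) {
      callbacks.forEach((cb) => cb(action));
    },
  };
}

const dispatcher = createDispatcher();
const received = [];
dispatcher.register((action) => received.push(action.type));
dispatcher.dispatch({ type: "todo/added" });
console.log(received); // ["todo/added"]
```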

Long story short:

I think I understand what you mean about a pattern, as opposed to a framework or toolset. Also, even if the origins of Flux (and, thus, ultimately of Redux) were in some Facebookers possibly grossly misunderstanding how to apply the MVC pattern, Flux as a pattern was an excellent approach to addressing some specific needs for application architecture.

For me it's simply the bulkiness of it:

- Babel transpilation aspect (time it takes to build, difficulties associated with ad-hoc debugging in production)

- Bundling/packaging aspect (same reasons as above; time it takes to build, debugging difficulties).

- A huge number of unnecessary dependencies, which add risks/vulnerabilities to projects.

You could argue that it's not React's fault, that it's more about the tooling, but I think it's React's stance on inlining HTML with JavaScript (JSX) that paved the way for all that bulkiness. From the beginning, the project wrongly assumed that:

- Transpilation and bundling don't have any costs.

- Dependencies (those needed to do all that fancy transpilation and essentially re-implementing the way the DOM works) don't carry any additional risk and are not a threat to application security, maintenance...

I personally loathe JSX, not because of JavaScript, but because it's a wholly unnecessary different syntax for JavaScript that just forces me to deal with SGML/XML problems in a new and more painful way, because it has crept into the application logic as well.
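For what it's worth, JSX is only sugar: each tag compiles to an ordinary function call returning a plain object. The sketch below uses a stand-in `h` function (hypothetical, in the spirit of `React.createElement`) to show that nothing about the pattern requires transpilation:

```javascript
// JSX like <ul className="todos"><li>Buy milk</li></ul> compiles to
// nested function calls. `h` is a stand-in for React.createElement,
// returning a plain serializable object (a "virtual" element).
function h(type, props, ...children) {
  return { type, props: props || {}, children };
}

const tree = h("ul", { className: "todos" },
  h("li", null, "Buy milk"),
  h("li", null, "Walk dog"));

console.log(tree.type);            // "ul"
console.log(tree.children.length); // 2
```

Writing the calls directly avoids the SGML/XML syntax entirely, at the cost of some visual resemblance to the markup being produced.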

Yeah, the toolkit does help a lot, actually; I do appreciate you guys coming up with that. But I wish it had been more like that from day 1. As others have mentioned, it was the "Redux experience", the sheer verbosity and redundancy that went along with it, for not much gain IMO. I'd have liked a lighter abstraction layer.

Yeah, a number of folks have said "we wish RTK is how Redux always was". I wish that too, but there's no way we (or more specifically Dan and Andrew) could have come up with Redux Toolkit's API at the beginning.

For background, please read my two "Tao of Redux" posts [0] [1] to understand why Redux was built this way to begin with. Specifically, it was designed to have a minimal core, and to be extensible.

In addition, given the existing Flux library APIs, _and_ that Immer didn't exist, there's no way the Redux Toolkit API would have been feasible at the time. We were only able to come up with it after years of watching how people actually used Redux in practice, and what libraries they were writing on top of Redux to solve specific problems. (People have pointed out that RTK feels kind of like Vuex, which makes sense - Vuex was inspired from Redux's API to begin with.)

Software design is an iterative process. It's based on trying to solve problems, at a specific point in time, with inspiration from existing tools and technologies, and constrained by what is actually available at the time. AngularJS, Backbone, Flux, Redux, Vuex, Immer, NgRx, Redux Toolkit... all of these tools have specific inspirations and problems they were trying to solve. Understanding _why_ tools were created is hugely important.

On that note, my post on "Redux Toolkit 1.0" [2] covers how and why we were able to design RTK's API this way.

[0] https://blog.isquaredsoftware.com/2017/05/idiomatic-redux-ta...

[1] https://blog.isquaredsoftware.com/2017/05/idiomatic-redux-ta...


I'll have to look at that the next time I need to build the kind of JavaScript app for which it's appropriate. I haven't done that kind of work in a couple years or so.

Thanks for the explanation.

Not OP but: You probably know the answer to your own question, since I'm guessing you made Toolkit to scratch your own itch! :) The "Redux Experience" is a bit overwhelming, boilerplatey, verbose and full of gotchas. Structures are all over the place. IMO that is solved by Toolkit.

The only problem with Toolkit is that it's not Redux. People don't know it yet, and I have to fight to convince people it's good enough. In an ideal world for me you'd be able to deprecate old Redux and just replace it with Toolkit.

I'm not sure what you mean by "it's not Redux".

It's still 100% Redux. You create reducers, add them to a store, dispatch actions, and read that data in your components. None of that has changed. You're just writing less code to do it.

What RTK does is eliminate the "incidental complexity" that came along with the original Redux usage patterns: writing action types and action creators by hand, writing complex nested immutable update logic by hand, making a mistake in that immutable update logic and causing accidental mutations, etc [0].
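To make that incidental complexity concrete, compare the hand-written boilerplate with a createSlice-style helper. The `makeSlice` below is a toy imitation for illustration only; the real `createSlice` also uses Immer, so case reducers can "mutate" a draft safely:

```javascript
// Hand-rolled Redux boilerplate: a type constant, an action creator,
// and a reducer with an explicit immutable update...
const TODO_ADDED = "todos/todoAdded";
const todoAdded = (text) => ({ type: TODO_ADDED, payload: text });

function todosReducer(state = [], action) {
  switch (action.type) {
    case TODO_ADDED:
      return [...state, { text: action.payload, done: false }];
    default:
      return state;
  }
}

// ...versus a createSlice-style helper that derives the types, the
// action creators, and the reducer from one definition. (Toy version;
// not RTK's real implementation.)
function makeSlice(name, initialState, reducers) {
  const actions = {};
  for (const key of Object.keys(reducers)) {
    const type = `${name}/${key}`;
    actions[key] = (payload) => ({ type, payload });
  }
  const reducer = (state = initialState, action) => {
    const [sliceName, key] = action.type.split("/");
    return sliceName === name && reducers[key]
      ? reducers[key](state, action)
      : state;
  };
  return { actions, reducer };
}

const slice = makeSlice("todos", [], {
  todoAdded: (state, action) =>
    [...state, { text: action.payload, done: false }],
});

const next = slice.reducer([], slice.actions.todoAdded("Buy milk"));
console.log(next); // [{ text: "Buy milk", done: false }]
```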

I do understand the suggestion to "replace the Redux core with RTK", and I take it as a great compliment that people suggest that. However, it can't happen, because a lot of people are already using Redux with their own customizations and abstractions. Changing the Redux core to suddenly include a bunch of other dependencies would not be what they want.

The other aspect is that the Redux core was designed from the beginning to be unopinionated, while RTK is _deliberately_ opinionated. Not all Redux users want RTK's opinions. As it is, we've had folks who didn't like the fact that RTK _is_ opinionated and requires use of Immer in our `createReducer` and `createSlice` APIs.

We had a long discussion issue about whether it made sense to add things like this to the Redux core (which was actually part of what prompted us to create RTK in the first place) [1].

FWIW, we explicitly recommend RTK as the standard way to write Redux logic [2], and now that Redux Toolkit 1.3 is out with some new APIs [3], my next task is to create a new "Quick Start" tutorial page for the Redux core docs [4] that does actually teach Redux using RTK as the default syntax, ie, "this _is_ how you write Redux code", in the same way that the Apollo docs teach Apollo-Boost as the default way to use Apollo.

[0] https://blog.isquaredsoftware.com/2019/10/redux-starter-kit-...

[1] https://github.com/reduxjs/redux/issues/3321

[2] https://redux.js.org/style-guide/style-guide#use-redux-toolk...

[3] https://github.com/reduxjs/redux-toolkit/releases/tag/v1.3.0

[4] https://github.com/reduxjs/redux/issues/3674

I mean the problem is that it's not Redux itself. People have to actively look for RTK to use it, and few do. I wish you had replaced Redux with RTK, moved the old redux to @reduxjs/core like proposed in the issue, and proclaimed "Redux is dead, long live Redux!"

Starting a project with vanilla Redux is like starting a vanilla PHP or vanilla JavaScript project: you're on your own. And most people don't really have a good grasp of the full concept and just blindly follow some rando tutorial they saw in 2017 or 2018, because that's the best they can do.

That's kind of fine! I don't really mind using different architectures, or even different frameworks. But in large projects, the inconsistencies from different approaches generate bit rot and a lot of bikeshedding.

I find RTK really good (maybe I'm biased because I've been using Immer from day 1), but that's beside the point: the best part is that it removes what are IMO the least productive and most boring parts of building software in large teams: bikeshedding, boilerplate, and lack of structure in large projects.

As I said, we're in the process of rewriting the docs (and by "we", I mean "me", because I'm not getting any assistance with it so far). The "Quick Start" page I'm about to write will show RTK as the default, and we'll emphasize that elsewhere throughout the docs as they get rewritten.

There's also nothing we can do about the 5 million existing tutorials out there on the internet, and it's not like the core APIs would be _removed_. All those are still syntactically valid. Even if we shipped all these additional APIs as part of the core, people would have to look to see that they exist.

Maybe "it's not Redux" is meant in the same way as "Toyota Camry isn't Car". It's a car, but you can have a car that isn't a Camry.

Otherwise, I'm really not sure what that "it's not Redux" comment means, either.

edit: Oh, crap, I just made a software/car analogy. I hate those.

To stay on analogies, it's more like GNU and Linux.

Vanilla Redux on its own is like Linux without a userland, so you have to implement it yourself.

Redux Toolkit is a proper Linux with GNU userland. It's ready to use, and people can even use each other's computers without reading through the source code!

It doesn't make a lot of difference for smaller projects, but the thing about Redux is that as soon as you have 20+ or 30+ people working on the same codebase you get a hodgepodge of different patterns and structures. Toolkit solves it.

I initially followed the same path... then I completely ditched PHP and its ecosystem, and never felt the need to write my own ORM/router/libs again.

Haha, me too. I think PHP was a great rite of passage as a developer back then: an inconsistent standard API, all the session and cookie access built in with all the security gotchas, the evolution from major version 4 to 5, the obsession with frameworks, the growth and development of ZF and Symfony, etc. Great experience.

As I left the ecosystem, I realised I approached writing code in a very defensive, overly structured way and have written less than half as many libraries ever since. The jury is out on whether that's a good thing or not.

As a sometimes-front-end dev, part of me quietly yearns to go back to the days of serving full html pages, especially since data transfer speeds are higher than they've ever been.

Edit: that's not a rejection of React et al. more an acknowledgement that everyone wants to do server-side rendering "because performance" which was... what we used to do. If we serverside render most of the page, we only need to enhance _some_ of the UI clientside? Yup. Exactly what we used to do.

Regarding PHP, if you want it as a rite of passage it should come after writing software in better languages with better tools. PHP-first means bad-habits-first.

C imposes similarly rigorous proving grounds for developers, but without pushing new developers into bad habits that would come back to bite them later. If you want people to start with more requirements for hard thinking about how to not build a deathtrap, choose C instead.

As for the benefits of rendering server-side, they are legion, and most important among them is the fact that pushing everything to the client-side part of the system means not really knowing what your code will do in the wild, imposing whole new classes of frustration on users, and piles of other horrors. Yes, we could benefit from falling back on far more default-server-side technologies and using JavaScript only where it's needed, I think, for the sake of the users first and foremost.

JavaScript is great for some things. It's terrible for others, and all too often trying to use JavaScript for everything means coupling the wrong tools for many jobs together with the wrong place to run code, too. Even when I enjoy JavaScript work, the simple act of checking online documentation for the JavaScript tools I'm using ruins the whole experience for me as I realize how we're punishing our users when we don't think about how we're using JavaScript from a user's perspective.

My first language was Pascal. I'd like to think I learned good habits up front. But I'll fully admit that when I first started writing PHP, I definitely treated it as a "script". My code barely used functions at first. It was a big top-down, autoexec.bat-style recipe. It took quite a while for me to come around to treating PHP like a real programming language. I think a lot of it came from the fact that I'd edit, then refresh the browser page, whereas with Turbo Pascal, I'd save my work and compile. Once I did come around to writing more application-style PHP, I was more interested in other things (Java and Ruby at the time).

autoexec.bat, those were some good times, loved hacking the config.sys file

ahh... the good ol' days of magic_quotes enabled by default... because, yeah, why would anyone want to use data for anything else than putting it in a mysql db ? mm ? :-)

As a recent convert to Redux who ended up recreating it from first principles, but in a much, much clunkier way, I highly recommend that anyone writing a UI in 2020 have a damn good reason not to use React and Redux. It's so well designed today. Well worth the time to learn.

From my experience a few years ago, Redux adds a lot of indirection to your application without adding much value. My last project was 'pure' React, and we didn't miss Redux because React now supported contexts to share state throughout the application.

Mind you for my current application I may still go for Redux, it's going to be very large with possibly a lot of interdependent components and form fields. We'll see.

> anyone writing a UI in 2020 to have a damn good reason to not use react and redux.

Avoiding JavaScript is all the reason anyone needs. UI =\= website

The most charitable interpretation of their point is any UI where you were going to use JavaScript heavily anyway, which is what React+Redux competes with.

To take your point to absurdity, you could also just say "nope, write a native client instead."

I thought the point taken to absurdity was what they meant by UI =\= website?

On edit: =\= is the Prolog version of !=. I guess they didn't want to use != in case anyone thought they used JavaScript in real life.

What is absurd about native programs? I think trying to turn a web browser into an OS is absurd.

It's orders of magnitude harder to get users to download a program than it is to just give them a link and have everything right there, no?

It's a trade-off, though: it's orders of magnitude harder to develop a web browser app, and they are slower, and there are limits to what you can do. So yes, the link is easier, but it's a trade-off, and just saying you can send a link glosses over a lot of decisions. You can just send a link to something on an app store too, and the power difference between the apps is significant. And really, how hard is it to install something from an app store?

Sure, if the only metric you care about is "how easy is it to try out my project" then making a website makes sense.

That's a very one-dimensional decision-making process. You're nowhere near convincing anyone that all UIs must be made as websites.

Even if we restrict our view to UIs made by businesses for the purpose of making a profit, native apps make a lot of sense. Just think of any big tech company; they all have robust apps, and consumers largely prefer the apps. You can order DoorDash or an Uber in a web browser, but people almost always use the app, because a good native app is a better experience than a good web app.

My point is that web applications have their place. The absurdity is the dismissal of Javascript in a conversation about building web applications with Javascript.

Indeed, the default assumption on Hacker News is a website. I feel saddened that so much developer time is spent on getting around the limitations of browsers. There are so many constraints on UI in a browser that any discussion seems moot.

The existence of Elm is a damn good reason not to use React and Redux.

> I think what makes a great programmer is that they are able to think at a macro-level, thinking about how 'others' will use and apply the code, and making it look deceptively simple.

I don't fully agree. Yes, some programming needs to be done at a meta level, but far from all of it. Sometimes a software engineer is just there to solve a business problem, and in that case what makes them great is efficiently solving that particular problem, and not a meta-problem.

Sometimes shorter code is easier to write, less buggy, and more maintainable than meta code.

I don't see these two as opposed. If you're building a framework, that means you need to do some meta thinking about its future users and their needs. If it's a special niche problem, then you need some meta thinking about the client's business needs. It's all about deeply understanding the problem that you're attempting to solve.

How you solve that particular business problem should be really easy to deduce from the code that you wrote to solve it. 2000-line functions that are one big switch/case statement don't fall into that category, for example.

So thinking about how others (that could even be yourself two years down the road) will use the code is still very important for such applications.

To me, the definition of code is: a language used to explain to other programmers what you are trying to do, that also happens to be parseable by a computer.

I would say that's the difference between making something work and making it work in an elegant way.

My comment is a little tongue-in-cheek, but here is "How I became more productive by becoming a worse programmer":

- I stopped worrying about best practices and just do it.

- I hardly refactor my code unless I'm using the same thing for the fifth time. I just copy-paste it instead.

- I never think about optimizing code until I really really need to.

- I write what gets the shit done quickly. I don't care if writing a Class would have been better than a function.

- I love old boring technologies and believe you can still make amazing websites with just PHP and jQuery.

- I don't come up with clever one-liners anymore. I write a 10 line for loop which I could easily replace with a clever one-line code golf.

- Instead of writing my own code, I usually search Stack Overflow first to see if I can copy-paste something instead.

Not to be rude or anything, but if you continue down this road, productivity might be the only thing going for you, and you'll soon realise you are the only one who can work with your code.

Substitute any other engineering discipline for software engineering in "I stopped worrying about best practices" and you will be able to see what's wrong with your approach. But then again, if it's PHP, maybe your domain doesn't require too much care.

Exactly. I had coworkers like this, and their code was impossible to understand. To make a simple change to it, it would require days of refactoring just to know you weren't going to introduce a bug.

It's not being more productive, it just seems that you're getting things done faster because you're taking out a loan on the quality of your software (i.e. technical debt) with every line you write.

> It's not being more productive, it just seems that you're getting things done faster because you're taking out a loan on the quality of your software (i.e. technical debt) with every line you write.

This. When someone does this in a hurry or because they didn't know better, it's understandable. But when someone thinks that this style of working is superior, that's a huge problem.

Paying down tech debt is like brushing your teeth, if you don't do it regularly, you'll end up with more pain and a higher bill at the end.

How old is the software, and what's the change that's needed? If the software is past its ROI and changes are happening due to scope creep, then days of refactoring isn't refactoring; it's just days of dev work against new requirements. It's just happening on an old product.

Management may be willing to spend that time and not be unhappy with what you perceive as time wasted because they have seen greater than expected ROI on the product already so it's not a loss to them.

I've run into these problems with software that wasn't even released yet. Some of these practices will likely bite you in the ass in months or even weeks, not years.

They generally leave the next person working in the code without understanding what was going through your head: and 90% of the time, that's you, three days from now.

I like Joe Armstrong's 7 deadly sins:

1. Code you wrote you cannot understand a week later - no comments.

2. Code with no specifications.

3. Code that is shipped as soon as it runs and before it's beautiful.

4. Code with added features.

5. Code that is very very very fast and very obscure.

6. Code that is not beautiful.

7. Code you wrote without understanding the problem.

I agree with your point, but I find the dig at programming in PHP devaluing.

I apologise for that; that wasn't my intention. I meant to type "Maybe it doesn't require too much care", but I was typing on a mobile and missed it.

I do not look down on any programming language. What I meant there was that PHP is often used for web applications, not for performance-critical or security-critical server-side components or systems programming. There's also the practical reality that some kinds of coding don't require too much regulation on how we write code. I realise that bit came out wrong, but I couldn't edit it.

Although I agree in general, a 10-line for loop is usually more readable than a clever one-liner.
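A hypothetical example of the trade-off: the same sum written as a chained one-liner and as the loop that spells it out step by step:

```javascript
const orders = [
  { status: "paid", total: 40 },
  { status: "pending", total: 25 },
  { status: "paid", total: 60 },
];

// The clever one-liner:
const paidTotal = orders
  .filter((o) => o.status === "paid")
  .reduce((sum, o) => sum + o.total, 0);

// The "dumb" loop that says the same thing explicitly:
let total = 0;
for (const order of orders) {
  if (order.status === "paid") {
    total += order.total;
  }
}

console.log(paidTotal, total); // 100 100
```

Which reads better depends on how familiar the team is with the higher-order-function idioms.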

Sure, but 10 lines often means more places for bugs to hide.

If it's a well-decomposed one-liner, then likely not.

What about a dumb one liner? Or, you know, using HOFs.

Everything you're doing is perfectly OK as long as the productivity gains are there. The lack of a generic function where something occurs 5 times has potential future risk only because it makes refactoring harder. But if you don't deliver a product because you're lost building generic things, then you don't get to have legacy and support problems.

In fact, I'd say that until someone is able to quantify the revenue lost and directly tie it to the choices you are making, stay the course.

+1. Add:

- Use long expressive variable/function/class names that tell me exactly what this variable/function/class does

I have to constantly fight the urge to make code fit in the smallest amount of space possible.

Variable names should be succinct. They don't need to be short, but they also shouldn't be essays. I had one coworker who would essentially write everything the function did into its name. Variable names would be 100+ chars. It was impossible. Variable names should be as long as needed, but no longer.

I use variable, function and class names as a rough guide for whether I need to refactor. If I name it accurately and the name gets too long, it's probably doing too much.

> I stopped worrying about best practices and just do it.

I do this to some degree: I pick and choose my best practices because, quite frankly, some "best practices" are actually bad IMHO ("anti-patterns").

This is sometimes very context-dependent, but I'm now more sceptical of adopting a common practice whose benefit I personally haven't seen.

e.g. YAGNI:

Maybe the problem isn't YAGNI, but that you keep building useless shit. Rather than assume things aren't needed in general, maybe work on why your predictions of what you're gonna need are so poor.

Sometimes the stuff I predict I'll need is needed after all, and I was glad I started early. Projects can live and die on how perceptive their devs are about future changes; this is just a skill to develop, no maxims needed.

Your code is probably a nightmare for the next person to maintain. Have you ever worked on the same project with your own code for more than a year or two?

Though I'll agree with you on two points - boring old technologies and searching Stack Overflow.

Optimizing code is something you don't need to worry about a lot of the time. Having understandable code is a lot more important in my opinion, but you don't seem to care about that.

That said, there are plenty of jobs where code doesn't need to be maintained. Most bioinformatics research just needs to produce a graph or heatmap and gets used once, so maintainability isn't so important.

> I never think about optimizing code until I really really need to.

The point of 'really really need to' has long been passed and you didn't notice it.

(Unless you use an entry-level $200 notebook for development, which you probably don't.)

In other words, please please eat your own dogfood. Your users aren't browsing your 'amazing websites' with a development notebook.

5 times is perhaps a bit high, but many best practices advocate for not abstracting until you re-use the code >3 times, since you may end up adding too many conditionals to handle all the slight variations.

Also, as long as your code is written to be easily readable and refactorable, refactoring can be done when needed. It's dense, complex, too-tightly-coupled code that's the long-term burden.

It’s often about how the code fits together rather than the individual syntax itself.

The so-called best practices are sometimes wrong. Or maybe they just address generic cases, in which case they are often wrong for a particular one as well. Having the same code twice can already be highly problematic if it is a longer stretch of code. It is virtually guaranteed that the two places will start diverging, and then the question is why. Is it because it actually needs to be different? Is it because a bug was fixed in one place that should have been fixed in the other as well, but was forgotten?

I must agree. If one takes a set of low-level practices (like those defined for Spartan programming, or something your company likes) and copy-pastes that kind of code, it's going to be easy and readable and refactorable.

I have not used OOP or programming patterns or whatever for years and no one got hurt. In various code bases I never used inheritance (abstract classes) or method/dependency injection or whatever someone might ask me in an interview.

The code I copied from elsewhere would just get adapted to the company (Spartan) style and that's it.

I like your honesty and the nonlinear path you took. I find my road is similar.

Only works if you are writing for yourself and not with a team.

> only

Depends on the team. If it’s a team of code ninnies that hem and haw over the “correct” way to do something while deadlines are missed, this is bad. Worse even than ugly code.

Balance. There has to be balance.

This is fine as long as you don't stop there; the adage goes "Make it work", which is what you're describing, but don't forget "Make it pretty" and optionally "Make it fast".

Be kind to yourself, your teammates, and whoever follows in your footsteps down the line, and spend some time tidying up after yourself if you're not building throwaway things as personal projects.

Most of that is good. One factor is if you are writing code for yourself to maintain or a team.

If it is for a team, you need to take into account if other people can understand your code. Practices like SOLID and just using the type system well don't make it slower to write code but will help maintenance down the road.

>Instead of writing my own code, I usually search Stackoverflow first to see if I can do some copy-paste it instead.

Yet I find most of the code snippets posted to Stack Overflow to be garbage. I despair if these are being copied and pasted into production systems :(

Ironically, even this list is a set of best practices that are sometimes specific to the use case.

“and believe you can still make amazing websites with just PHP and jQuery.”

Found the Magento developer.

congratulations you write horrible software

But do they ship, while the competitors are fretting over variable names?

Depends on the perspective doesn't it? If the software does what it advertises, and gets to market faster than the competition, then by definition it is good software. It's only in our blinkered views that we ignore these considerations.

Software is never write once and done. You may get to the market faster in the first iteration, but the follow up iterations can come to a grinding halt because modifying, extending and patching the existing code now becomes an enormous challenge.

The points on the fluff in particular resonate with me because, despite being a fairly good software engineer, I would say it was only recently I truly started seeing past the fluff.

However, typing this out, I realise I would have said the same thing 10 years ago, and 5 years ago, and I probably will again another 5 years from now.

I think the truth is that we're always missing the forest for all the trees -- only at increasing levels of abstraction: at first, everything is foreign, then syntax becomes fluff, then APIs become fluff, then patterns, then languages, then compilers, then ... I look forward to learning what the next level of fluff will be.

> The majority of the stuff being released every day is just a rehash of the same ideas. Truly revolutionary stuff only happens every few years.

Maybe not even that often. I can recommend Old Is the New New by Kevlin Henney for a good, historical perspective on how little actually changes sometimes: https://youtube.com/watch?v=AbgsfeGvg3E

Edit: now this was more interesting than I anticipated. What is the next level of fluff? Concurrency? Advanced data structures? Phrasing problems for SMT solvers? Probabilistic modeling?

Or are the things I picture as fluff just specific techniques, because the next level of fluff is so meta I wouldn't even recognise it now?

> then languages

Perhaps a slight digression, but: I think there are very few people who can really say they can 'see through' the difference between languages.

C++, Erlang, Prolog, and Haskell are very different languages, all the way from the shallow matter of syntax, through the type system, and even down to the fundamental model of computation. When I hear someone say "If you learn to program in one language, it's easy to learn another", I assume that person doesn't know much about programming.

> If you learn to program in one language, it's easy to learn another, I assume that person doesn't know much about programming.

This is flat out wrong.

If you know Python, it'll be easier to learn Rust than having to learn Rust from zero. Engineering is about problem solving and formulating solutions with algorithms and data structures. If you don't know what those are yet, it'll take much longer to learn.

Learning one of anything makes learning the next one easier. Programming languages, spoken languages, musical instruments, sports cars, martial arts, sports, weapons, drawing, etc. Your brain builds a rich scaffold you can hang new concepts from.

> If you know Python, it'll be easier to learn Rust than having to learn Rust from zero.

> Learning one of anything makes learning the next one easier

Agreed, but I put "easy", not "easier". I suspect we really agree on all points.

If you know C#, it's easy to learn Java, as most of the concepts are shared and even the syntax is very similar. When someone says "If you learn to program in one language, it's easy to learn another", it betrays that they don't understand that some languages are similar, but some are very different and have unavoidable learning curves.

C++ is a good example. Learning C++ is not easy, no matter what your background knowledge of other concepts and languages. It's a huge language, in which nothing is simple. Even the build model is a minefield. Having a solid understanding of imperative programming and OOP (from knowing Java, say) is certainly an important head-start. I wouldn't recommend C++ as someone's first programming language. There's still an enormous amount to learn about C++ specifically, no matter your background knowledge.

It’s also worth noting that projects are rarely contained within the bounds of a programming language.

If you’re building a web app, whatever language you use, it will still come down to HTML/CSS. You’ll also probably be using SQL, which is also unlikely to change.

But there’s the other side to that. If you’re graphics programming, OpenGL provides the same features regardless of language. The OS APIs are broadly the same - the steps to set up a TCP listener don’t change with the language.

The APIs for all these may be subtly different between languages, but not significantly. It’s a case of translation rather than relearning, and although I can’t speak for anyone else, I find that translation quite painless.

Learning Python before Haskell also made learning Haskell significantly harder for me, because I kept trying to write Python in Haskell

Sure, but it will still be a while before you aren't just writing Python in Rust.

It’s still easier than if you had never previously written an if statement. If there weren’t a lot of differences between languages then we wouldn’t have so many languages, but that doesn’t mean that there are not a lot of concepts that are used widely across many languages.

However the more languages you know and the more they're spread across different paradigms the shallower the learning curve will be for any language one wants to pick up.

Be fair to the original comment, which referenced people saying something about learning a language, and it making learning any others easy, not about learning many languages, and it making learning any others easier.

Writing Python in Rust, unlike, say, writing C++ in Rust, might actually be a desirable thing. Put it this way: it will be a while before you are writing Rust like Python.

I'm one of those people, but there's more nuance to it than that. I've learned about 20 different languages to some degree, from Python to brainfuck to Elixir currently. When I learn a new language, I go to the source material, so for Elixir I used the tutorials at the Elixir home page and their API documentation. I read the tutorials, writing each line of code into my editor, running each, modifying it to see how it works, etc. Then, every time the tutorial links to or mentions an API, I'll go read that documentation page in full and experiment. Then I'll make some projects with that language, look at how the more experienced Elixir (or whatever) developers code by exploring their repos, etc.

After doing that 10+ times, it becomes easier to see the patterns in languages, and easier to understand how their underlying structure will affect how you have to code. There's only so many ways to write a compiler or interpreter, and if you know how they work and you know that your language is, for example, optimized for tail recursion, then you get your experience from other similar languages as a bonus.

I see your point if you're saying, on a surface level, that, "you know one, you know them all," is wrong, but if you dig deeper I think you'll see that, for the experienced (as in: someone who's been around a while) developer, you kinda can say that, with some reservations.

I think anyone can do the same thing, it just takes a decade or more to get there. I'm kinda dumb, too, so if I can do it, anyone can.

I'm with you on this - I don't know so many languages but have studied a wide range, and am fluent in several programming and human languages.

What I've found is that once you learn more than one language with different paradigms, it becomes easier to learn another one with a similar or another different paradigm.

As you pointed out, there are only so many ways a language can be designed/structured, and becoming multilingual means getting familiar with the shared (and implicit) models and operations underlying all languages.

Your 2nd paragraph says what I meant to say, but much more succinctly. Thank you.

Well, as someone who has taught different languages as first, second and third languages to students and young pupils I would say it becomes easier. Not easy in an absolute sense. It still requires effort.

I would however agree with you that those who say "it's easy to learn another" in an absolute way have usually only considered languages from the same group so far.

I also agree with this. Some people treat languages like forks. It's just a fork, why pick another, it will do just fine, right? Well, languages are more like artists' tools, I think. Of course you can draw a perfect horse with pen/pastel/charcoal, just as you can write a really good chat server in Java or Erlang, but each of them has a different taste, feeling, community. Picking a language is pretty different from picking a fork. And those all-the-way JavaScript / Java lovers feel like 0.7/0.5 mechanical pencil users who try to draw everything with that. How about trying a brush? (Maybe I shouldn't have pushed this analogy this far)

No offense to my fork-lover friends.

Or they program different things than you do. In data science for example it's not a big deal to jump across Matlab, Python, Julia, Lua etc.

If you know Java, you'll not take very long to be productive in another JVM language or in C#.

Yes, deep specialized expertise will take time. Also, switching across paradigms or levels (C -> Prolog or Python -> assembly) will take much longer. But that's not what people usually have in mind when they say switching languages is easy. It's usually the domain that takes longer to learn.

> If you learn to program in one language, it's easy to learn another.

That's often the case: many people who think that don't really use the strengths of the languages they use.

On the other hand, I've written a basic lambda calculus parser for fun and gone on to write (non-distributable, if I care about people) software using it, also for fun; I've written idiomatic, well-formed code in BASIC, C, Perl, Ruby, Scheme, Standard ML, and a wide range of other things; and I've basically added features by removing code. At some point, even if you know what you're doing, there comes a point where, kinda-sorta, it becomes easy to learn others.

It just takes a lot more than learning only one language to get to that point. Whether the person who first coined that comment about learning languages meant it as a simplification of a wise concept or was a blubdev who never really figured out the significant differences between programming languages is a question I'd like to see answered definitively, based on knowledge of the individual rather than guesses.

They might not mean "easy" in the same way you mean. I sometimes find myself saying "easy" when what I mean is something more like "approachable" or "without many ambiguous roadblocks". I very much believe it is "easy" in that way to learn new programming languages. It is still hard work, requiring lots of reading and trial and error and plenty of failure, but it doesn't have that same experience of complete stuckness as the first time learning to program.

But I wouldn't say it is "easy" after learning one language. It definitely takes two or three, with step-changes each time you learn a language in a different paradigm than you have before.

I would expand that to "If you learn to program in one language, it's easy to learn another within the same paradigm".

If you know Python, Ruby or PHP will be pretty easy for you to pick up. You might have some difficulties with Haskell and Lisp.

I'd add that spending some time to play with the languages that are hard or different will pay dividends down the road.

Learning some Haskell was probably the most worthwhile time I have spent learning a language because it forced me to fundamentally reevaluate how I think about programming. Now I don't use Haskell at work, and likely never will; but, years later, the techniques I learned from it I still apply daily when I write C, Python, JS or whatever. It made my code far cleaner.
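As a concrete (hypothetical, Python) example of the kind of habit Haskell tends to leave behind: preferring pure functions that return new values over code that mutates shared state in place.

```python
# A Haskell-inspired habit carried into Python: build new values instead
# of mutating arguments, so callers are never surprised by side effects.

def discount_prices_inplace(prices, pct):
    # Imperative style: quietly rewrites the caller's list.
    for i, p in enumerate(prices):
        prices[i] = p * (1 - pct)

def discount_prices(prices, pct):
    # Pure style: the input is untouched; the result is a fresh list.
    return [p * (1 - pct) for p in prices]

original = [100.0, 250.0]
sale = discount_prices(original, 0.5)
print(sale)      # [50.0, 125.0]
print(original)  # [100.0, 250.0] -- unchanged
```

The function names and numbers here are made up; the design point is that the pure version is trivially testable and safe to call on data someone else still holds a reference to.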

Whenever a newish programmer has asked me what language they should learn next, I always say Prolog. I fully expect them to never touch the language afterwards, but writing Prolog was truly a mind-opening experience. More so than learning functional programming, imo.

I feel this same way about Clojure, I won't likely ever use it professionally, but it's made me a better programmer having learned it.

I routinely write in no less than 4 languages for work... comfortably in all of them. Each certainly has its own nuances, but the skills translate and made learning the others easier. At worst I (have had to and do) sometimes look up syntax or end up using a design that's less than optimal for that language, but that still works.

I absolutely agree with you. Being able to call languages fluff comes from me having tried to build real applications with C, Python, Prolog, Erlang, Haskell, Ada, POSIX shell, AWK, Common Lisp, Perl, R, and so on.

Crucially, I don't just think of languages as different syntax for Java.

I've been having this experience recently. I used to be a voracious reader of programming books. But for a long time starting, gosh, maybe over 5 years ago, I just couldn't get excited to read them anymore. Just recently I thoroughly enjoyed reading Designing Data Intensive Applications, and I have just started Streaming Systems, which I'm also enjoying a lot. I think this idea of fluff at different levels of abstraction explains it: I used to enjoy reading books about programming languages and frameworks, but at some point it just felt like fluff and I could no longer get through it. But there are books about techniques for solving particular classes of problem (in the case of my recent reading: data processing) which don't feel like fluff to me. But maybe they will eventually, and maybe you're already further on this continuum and would find these books fluffy as well. I'll definitely be thinking about this fluff at different levels of abstraction model and checking my self-education against it as I go now!

Not sure how to write this. I am hoping the next level of fluff to be removed will be implementing concrete algorithms in procedural or functional style in day-to-day coding. Instead we should be leveraging other programs (such as SMT solvers, but not limited to those) to produce suitable implementations of solutions. That is, we should try to focus more on specifications, in one form or another.

So much of the software I work with could plausibly have been written by a machine instead.

"The most experienced programmer uses hacks all the time; the important part is that you're getting stuff done."

I couldn't disagree more; the grown-up programmers use a strategic approach, not a tactical one. As John Ousterhout put it in his book A Philosophy of Software Design, "The first step towards becoming a good software designer is to realize that working code isn't enough."

Elsewhere in this discussion, I described my first Flux experience.

In that same job, the boss decided (after we switched to a pseudo-Flux, Frankenstein's monster of MVC component libs that slowed development to an excruciating crawl) to bring in a consultant to "help". He was the "crap out apparently-working code quickly" brand of "10x Programmer". Some basic functionality for new features got added in a hurry, over a couple weeks' time. The following couple weeks were dedicated to fixing everything he wrote.

Yes, I agree: "grown up" programmers think beyond what seems to work right now.

Sure, I use dirty hacks semi-regularly, but only when I'm first feeling out how something works. It doesn't last more than a couple hours, usually far less, because the point isn't to commit that code but to learn how to think about the problem so I can do it right, and a "done right" rewrite takes less time to get working than the initial dirty hack.

The "done right" rewrite also doesn't impose tremendous maintenance and continuing development costs over months to come the way the dirty hack would.

If the author means real programmers use dirty hacks to figure out what they're doing, then they make it right, that's fine. If not, that's not fine at all.

100% agree. The "hacks" and "get stuff done" approach gains a quick, short-term benefit, which probably fits a freelancer/contractor role. Maintainable and testable code should be implied.

It's not about how fast you get to code to production, it's about how fast code/product can be adapted to new or unexpected requirements.

Judging by the rest of the article, I don't think the author is advocating pushing hacks to production. But rather to use hacks to move forward, and then improve once you have a working version.

Also learn how to run a business. A weird suggestion on the surface but actually very powerful when understood.

In short, by learning how a business operates, how cash flows, and how expensive things are (people, buildings, software, hardware, accountants, lawyers, ...), you'll begin to understand why an MVP is a powerful tool. You'll also appreciate Agile development practices more too.

With an MVP you're writing the least amount of code possible, and skipping optimisations, whilst also providing some value to people who can tell you what direction to go in next. That's a business thing - the business needs something in the market ASAP and it needs feedback ASAP. Without this you'd write your stack for three years and then release to a world that moved on two years ago.

When you learn how a business is run you begin to understand that getting to market with a less-than-ideal code base is actually desirable and something to be embraced. All of a sudden you'll come to understand that money burns quickly, so you should learn to develop code to get to market and not to be the fastest code it can be (yet, at least).

I find the best programmers I've worked with are those that can produce something quickly and optimise it later on.

I've found the ability to produce something quickly is almost orthogonal to being a "good programmer" (unless you define goodness that way.) Some are good quick-and-dirty implementers, some are deep design thinkers, some are just good at reviewing and giving advice.

> Some are good quick-and-dirty implementers, some are deep design thinkers, some are just good at reviewing and giving advice.

Fred Brooks has a chapter in The Mythical Man-Month where he mentions different programmers having different roles within a "surgical team", but that never took off.

I wonder if updating that to 2020 would make more sense to have a group of people shipping features full-steam ahead, another reviewing the code for mistakes, and then having other folks refactoring the mess, behind both of them.

I would frankly enjoy being in just one of the three positions, any of them, much more than the position I'm currently in: having to be all three at the same time.

I've often thought about a similar idea.

I'm still relatively new to the industry (3 years programming, 1.5 professionally), but I've found that I'm a BIG fan of refactoring. To me, it's like writing or communicating ideas in general, which I really enjoy. I love taking a complicated idea and trying to explain it in the most simple and readable of terms. It also pushes me to really understand the intent behind the code/text.

To that extent, I enjoy editing and revision much more than creation. So, taking a role as a 'refactorer' actually sounds great!

This is an interesting idea. I've frequently found myself in the slow and methodical camp, but I admire the way in which one of the co-founders at my current job can just throw together some amazing proof of concepts in very little time. Of course, the code they write is in general littered with assumptions, missing error handling, and potential performance problems.

I hope I'll be able to learn that part of it too, but perhaps I am more naturally inclined towards writing the second version of their MVP instead.

> Here's a question I like to ask: do you spend most of your time making your code look "nice"? If so, I recommend not focusing on it so much. [...] It's better to focus hard on the core problems you're trying to solve and think hard about your layers of abstractions.

Absolutely. Modeling what you're trying to do in code is a critically important task, and most surface-level grime is forgivable if you have an understandable foundation.

However, I worry that if you focus on the first half of this block, you might think the abstractions aren't important. To a lot of new programmers, abstractions and models are themselves merely "nice" -- if you can get it done without them, clearly they're non-essential, right?

I believe that most code is read more often than it's written. (You pass through a lot of code while tracing down bugs!) It pays to reduce the brainpower necessary to understand a block of code, and good abstractions and good models are key to this.

> I believe that most code is read more often than it's written. (You pass through a lot of code while tracing down bugs!) It pays to reduce the brainpower necessary to understand a block of code, and good abstractions and good models are key to this.

Bad abstractions and models make it much harder to understand code, though. So for junior devs, yeah, it would often be better to be very, very careful, and think very, very hard about your abstractions before creating them.

Everyone wants to turn every problem into some sort of framework loaded with interfaces and different implementations of them vs being ok with "feature X needs to do thing A, B, and D; feature Y needs to do things A, B, C, and E; let's just have them as two separate endpoints with separate biz logical calling those methods sequentially instead of trying to coerce them together."
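A tiny sketch of the "just two endpoints" alternative (hypothetical Python; the step and feature names are invented): the shared steps stay plain functions, and each feature sequences exactly the ones it needs, with no plug-in interfaces.

```python
# Shared steps as plain functions; no framework, no interface hierarchy.
def step_a(data): return data + ["A"]
def step_b(data): return data + ["B"]
def step_c(data): return data + ["C"]
def step_d(data): return data + ["D"]
def step_e(data): return data + ["E"]

# Each "endpoint" calls its steps sequentially instead of configuring
# a generic pipeline that must accommodate both features at once.
def feature_x(data):
    return step_d(step_b(step_a(data)))          # needs A, B, D

def feature_y(data):
    return step_e(step_c(step_b(step_a(data))))  # needs A, B, C, E

print(feature_x([]))  # ['A', 'B', 'D']
print(feature_y([]))  # ['A', 'B', 'C', 'E']
```

The duplication between the two functions is visible and cheap; a shared framework would hide it behind configuration that is harder to read than the repetition it removes.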

> Bad abstractions and models make it much harder to understand code, though.

True! The point I'm trying to make is more that code clarity is important and worth striving for, regardless of the means you use to achieve it. Clarity is a non-negotiable part of my definition of "nice". If it's also part of others' idea of "nice" -- especially those of novice developers who look to more experienced developers for guidance -- then this paragraph might suggest that clarity can be down-played. It can't.

If you're an experienced developer, you should be able to find good abstractions for achieving clarity in-context. But novice developers can still identify code that's unclear (to them, especially).

Blindly applying abstractions rarely helps with code clarity. Some developers apply certain abstractions because "it's a best practice", rather than because it's actually more clear. That's definitely not what I'm advocating for.

The main point is to be consistent: if it is ugly in the same way, it is OK. If each time it is ugly in a different way, start thinking about making it consistent.

Most value is from consistency: if you make the same mistakes in the same way, it is easier to find them. If you make the same mistake in a different way each time, good luck finding it.

...and also: bad abstractions can be major impediments to it.

Clarity is primary; good abstractions allow you to achieve it. The fact that bad abstractions harm clarity is something we agree on very strongly.

Some of this was spot on, especially doing research. Very rarely will you be doing something completely novel, and it's always good to look at how other people have solved a problem.

However, the old "Learn C" and "Write a compiler" are within meme territory, tbh. It is generally more important to understand what the compiler / interpreter / run-time is doing under the hood in whatever language you are using, as most developers will be working with a high-level language.

I've seen a lot of "how to improve as a programmer" articles, and IMO what seems to be left out is learning how to do things as simply as possible. Many of the software systems I've worked on are so over-engineered; many simple websites are huge both client and server side, and they usually don't warrant it.

"Don't spend much time on making your code look nice"...

Well, thanks a lot. You just made your code much harder to read, improve, change, and check for correctness. This is actually a self-contradiction: if the code changes often, then it's even more important to make it "look nice". This is something that beginner programmers don't get, on several levels. Nice code

* can be reviewed more quickly (broadcasting effect)

* can be checked for correctness much more easily

* is pleasant to work with

* can be changed more easily

* can often even be compiled into more efficient machine code

The reason why beginners (and this is not about time invested! Many programmers still are beginners after 30 years in the field) think code doesn't need to look nice is mostly because

1. They don't consider that most of the cost of code comes from maintenance. Writing it is really A FRACTION of the time people will spend with your code over time.

2. They don't consider that creating bugs is one of the most expensive parts of software development.

3. They don't consider that code is read far more often than it is being written.

4. They are simply not ABLE to produce good code and doing so would take them a lot of time.

Now, giving the advice to not focus on good/nice code is a recipe for staying a beginner for the rest of your life and making every project you work on a nightmare for everyone else involved.

FYI - the author of the blog post created https://prettier.io/ which is a very opinionated (automatic) javascript code formatter. His assertion about "nice code" was that you shouldn't spend effort on making it look nice. Let a tool do that for you.

Personally I dislike some of the default formats that prettier has chosen, but if it prevents a single minute of discussion on my team about how the code should be formatted, it pays dividends over time. And eventually we, as humans, seem to get used to just about anything.

While I agree, what they actually said was "if you spend most of your time making your code look pretty", which is obviously too much time, IMO. There needs to be balance.

"I stick things on the global object until I know what I'm doing." Magical sentence that.

> Understand continuations - Continuations are a low-level control flow mechanism.

Excellent advice right there!

> Scheme is the only language to implement them, and while you will never use them in production, they will change how you think about control flow. I wrote a blog post trying to explain them.

Hmm, well, many languages compile to a continuation passing style (CPS) intermediate. And in many languages it is important to understand continuations to understand how they work. For example, laziness in Haskell is essentially all about continuations and continuation passing style.

What is CPS? Anyone who has been in callback hell knows. In CPS a continuation is just the closure you pass to some function that it will call when it's done. In fact, even in C, the act of returning from a function is very much like the act of calling a continuation, except that by unwinding the stack, the magic of Scheme continuations is lost.
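A tiny illustration of the two styles (hypothetical Python sketch): direct style hands the result back via `return`; CPS instead passes in "the rest of the computation" as a callback, which is exactly the shape of nested callbacks in JavaScript.

```python
# Direct style: the result comes back via `return`.
def add_direct(x, y):
    return x + y

# Continuation-passing style: the caller supplies "what to do next" (k),
# and the function calls it instead of returning.
def add_cps(x, y, k):
    k(x + y)

def square_cps(x, k):
    k(x * x)

# Compute (2 + 3) ** 2 with control flow threaded through continuations.
results = []
add_cps(2, 3, lambda s: square_cps(s, results.append))
print(results[0])  # 25
```

Because Python (like C) unwinds the stack on return, this only mimics the callback-hell shape; it doesn't give you the first-class, re-invokable continuations of Scheme's `call/cc`.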

"... you will never use them in production ..."

I'm a Haskell programmer. Continuations are part of my everyday toolbox.

Once you know a handful of different programming languages, write an interpreted language embedded in your preferred host language.

It doesn't have to be fancy, I recommend using Forth or Lisp as a starting point; but write something real, do your thing.

I find that most compilers are toys and replicas of existing languages on top of that, which misses the point of designing your own tool.
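As a starting point in that spirit, here is a minimal (hypothetical) Lisp-flavoured arithmetic evaluator embedded in Python: tokenize, parse into nested lists, evaluate. The "do your thing" part begins where this toy ends, with your own special forms and semantics.

```python
# A tiny embedded Lisp: s-expressions over four arithmetic operators.
import operator

ENV = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def tokenize(src):
    # Pad parens with spaces so split() separates them from atoms.
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    # Consume one expression from the token list (mutated in place).
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return expr
    try:
        return float(tok)   # a number
    except ValueError:
        return tok          # a symbol

def evaluate(expr):
    if isinstance(expr, float):
        return expr
    if isinstance(expr, str):
        return ENV[expr]    # look up a symbol
    fn = evaluate(expr[0])
    args = [evaluate(a) for a in expr[1:]]
    return fn(*args)

print(evaluate(parse(tokenize("(* (+ 1 2) 4)"))))  # 12.0
```

From here, adding `define`, conditionals, and lambdas (each as a special case in `evaluate`) is where the real design decisions of your own tool start to show up.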


Your reasoning for learning C is biased (perhaps you don't know it well, as you suggested others only know the basics)! In the end, hardware understands only values. C is a minimal, efficient, and readable language which has survived for decades although there are hundreds of other programming languages. It remains an active programming language as long as there is good compiler support.

Many of the complaints I read about C these days often boil down to a lack of understanding, I think you're right to suggest that the author doesn't know it very well. I also suspect that they haven't taken the time to find and read good C code.

I became a better programmer once I committed to mastering Vim.

Don’t know why, but the whole struggle of using it changed the way I thought about code and I gained the power to code at the speed of thought.

I think I actually became better when I stopped spending time messing around with tools and environments and focused on just the programming part.

Sane defaults, know the tool and how to use it, but don’t waste time on configuration. When possible choose the standard tool for the job (IntelliJ for me).

It’s still fun to yak shave, mess with things, play with the terminal etc. but at least for me I found that the time spent doing that was the opposite of productive.

Like playing an instrument or learning a language, learning Vim/Emacs is something you're glad younger-you did because now you certainly couldn't be bothered.

Then again, these days I just use whatever vim-mode solution there is in VSCode, IntelliJ, etc. and move on. Kind of like how I prefer a GUI for Git these days.

It would blow the mind of my younger self. I just have so many other things going on in my life and on my plate.

> But you shouldn't do that until you've done some cursory research about how people have solved it before. Spending a few days researching the topic always completely changes how I am going to solve it.

Personally, this typically ends with me just dropping the problem altogether.

How I became a bitter programmer. (1995-2020)

Doesn’t expound upon two of the most important things: (1) focus on business value over technical challenge (if anything this preaches the opposite) and (2) learn how to get as close as possible to the global optimum in the 3D space formed by time, cost, and quality, depending on the situation.

And (3) read Deming to find out that the 3D space formed by time, cost, and quality does not look like one might think it does – for anything but the quickest of prototypes, increased quality reduces total time and cost!

One thing that helped me a lot was to build a system myself and maintain it for a few years. That way you see what areas come back and cause you trouble over and over again.

If I didn't understand a bit of code I wrote a few months ago, it was probably too complex and worth refactoring. As it was my own system, I couldn't blame anyone else's poor design decisions.

I noticed that keeping the logic as close to the database as possible was generally the way to go. Avoiding changes to the database schema and making up for it at the application level was basically adding technical debt.

I've heard so many things about Clojure. I am a CS student with some experience in the most used languages (JS, Java, Python). Can someone explain in simple CS terms why Clojure is sooo hyped? What can I do with this language that would be harder in other languages? From what I've gathered it's used for data wrangling and manipulation in general, but most data-oriented tools are written in Python.

I believe it's hyped because it is a more practical/modern Lisp.

I don't think it provides more fundamental benefits than any other Lisp, other than access to a plethora of standard Java libraries. E.g. I can't write Java, but knowing Clojure has allowed me to maintain a large inherited Java codebase. Afaik I could not have done that in Common Lisp (and not only because I don't know CL).

I think almost everything is harder when you are not using lisp. Also lisp tends to bring joy to its users for some reason, so there is that.

Finally there is a huge difference between Python and clojure data wrangling. Pandas and spark are fundamentally table oriented, but if you are using nested dictionaries (clojure maps) you do not have those tools to help you. No matter what I was doing in clojure, it always involved processing nested maps, it's just the natural approach in the language, kinda like using classes in Java.
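To make that concrete (not the commenter's code, just a toy Python sketch with made-up data of what "processing nested maps" means, as opposed to reaching for a table library):

```python
# Nested dicts, the Python analogue of Clojure's nested maps.
orders = {
    "alice": {"items": {"apples": 2, "pears": 1}},
    "bob":   {"items": {"apples": 5}},
}

# Transform the structure by building new values from old ones --
# no table-oriented tool like pandas is involved or needed.
totals = {
    user: sum(info["items"].values())
    for user, info in orders.items()
}

print(totals)  # {'alice': 3, 'bob': 5}
```

In Clojure this kind of map-in, map-out transformation is the default idiom; in Python it works too, it's just not where the data-tooling ecosystem invests.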

Hope I managed to clarify something :)

> I don't think it provides more fundamental benefits than any other lisp

Concurrency primitives and immutable data structures are part of the core language. This is probably more interesting than the Lisp part itself, which is kind of a bonus. Rich Hickey's talks stress the former aspects a lot more than the latter.

Well, those are so ingrained in clojure that I forgot about them :). I think you made a great point.

Maybe more of the hype comes from the fact that it's a Lisp. To get a feeling for the "magic" of Lisp, I would suggest either watching the SICP video lectures[0] or reading the book. I started learning Lisp at almost the same time I started learning programming with Java (college). Honestly, I wondered at the time why they didn't just teach us that instead of Java.

Also, Clojure was created by a fairly well-respected dude (afaik), Rich Hickey. His talks and the Clojure design decision articles on the site are really insightful.

This talk[1] is also a pretty good overview of different paradigms and the "Clojure way", imo.

[0]https://youtu.be/-J_xL4IGhJA?list=PLE18841CABEA24090 [1]https://www.youtube.com/watch?v=vK1DazRK_a0

Clojure has a focus on immutable data. With Python, a list or dictionary is mutable and with parallel execution this can result in one function mutating a list that another is using. Or even without parallel execution one may wonder if a function somewhere in the call stack is mutating a list. Jokingly called spooky action at a distance. In general mutation makes reasoning about code more difficult. Clojure has a nice set of orthogonal functions for transforming persistent data structures. Once you wrap your head around how to program by transforming data instead of mutating data, you might enjoy that method of programming. Python has pyrsistent library, but Clojure is immutable first and in Python it is not really the norm.
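A minimal Python sketch of that "spooky action at a distance" (hypothetical function names, toy data):

```python
def normalize(scores):
    # Mutates its argument in place -- the caller's list changes too.
    scores.sort()
    return scores

def report(scores):
    # The caller may not realize scores was reordered elsewhere.
    return scores[0]

data = [3, 1, 2]
normalize(data)      # silently reorders the caller's list
print(report(data))  # 1 -- the original first element, 3, is gone

# The immutable-first style sidesteps this: return a new value instead.
def normalize_pure(scores):
    return sorted(scores)  # leaves the input untouched

data2 = (3, 1, 2)  # a tuple is immutable to begin with
print(normalize_pure(data2)[0])  # 1, and data2 is unchanged
```

With persistent data structures (as in Clojure, or pyrsistent in Python) the second style is the only one available, so this whole class of bug can't arise.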

Clojure doesn't have much over JS except for macros, and today you can achieve much the same with Babel. If you stick to writing JS in the functional style Clojure encourages, you are pretty much writing Clojure, just with a little more verbosity to keep things immutable and to compensate for JS's lack of a standard library (fixable with libs like lodash and immer).

When devs talk about data in Python they are referring to data science; it has a big ecosystem to support that, and Clojure doesn't. Data-oriented in Clojure means using generic data structures like maps/vectors/sets to represent information, instead of creating classes for everything.
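A rough Python analogy of that "data-oriented" distinction (toy example, names made up):

```python
# Class-per-concept style: every shape of data gets its own type.
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

# Data-oriented style: plain generic structures all the way down.
point = {"x": 1, "y": 2}

# Generic functions then work on any such map, no new class required.
def translate(p, dx, dy):
    # Build a new dict rather than mutating the old one.
    return {**p, "x": p["x"] + dx, "y": p["y"] + dy}

print(translate(point, 3, 0))  # {'x': 4, 'y': 2}
```

In Clojure the second style is the norm, and the large library of map/vector/set functions means most code is just generic transformations over these structures.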

Clojure is my favourite language to work in, although I only had the opportunity to use it professionally for 6 months. It gives me an overwhelming feeling of quality, forethought, and careful design. So does most of its surrounding ecosystem.

I feel a similar way about Python too though.

The hype probably comes mostly from good concurrency tools + immutable data structures, but at least partly it comes from people who really enjoy working in it.

SICP is really cool! I would follow up with Practical Common Lisp right after, for insight into a more usable Lisp dialect.

TIL Scheme is one of the few languages with first-class continuations.

I feel like learning is a continuous process; technology changes over time, so you might as well stay updated with the latest trends.
