Ask HN: What made you change your mind about a programming language/paradigm?
338 points by strangecasts 22 days ago | 391 comments
Books, courses, practical experiences or just a-ha moments that made you feel differently about a language/design pattern/etc.


Unit testing to me seemed akin to drinking 8 glasses of water every day. A lot of people talk about how important it is for your health, but it really tends to get in the way, and it doesn't seem to really be necessary. Too frequently, code would change and mocks would need to change with it, removing a good chunk of the benefit of having the code under test.

Then I started writing integration testing while working on converting a bunch of code recently, and it has been eye-opening. Instead of testing individual models and functions, I was testing the API response and DB changes, and who really cares what the code in the middle does and how it interfaces with other internal code? So long as the API and DB are in the expected state, you can go muck about with the guts of your code all you want, while having the assurance that callers of your code are getting exactly out of it what you promise.
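A minimal sketch of that style of test, with a hypothetical `create_user` endpoint and an in-memory SQLite database standing in for the real API and DB:

```python
import sqlite3

# Hypothetical "endpoint": the internals are free to change as long as
# the response and the database row come out right.
def create_user(db, name):
    db.execute("INSERT INTO users (name) VALUES (?)", (name,))
    db.commit()
    return {"status": 201, "name": name}

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# Integration-style assertions: only the API response and the DB state.
resp = create_user(db, "ada")
assert resp == {"status": 201, "name": "ada"}
assert db.execute("SELECT name FROM users").fetchall() == [("ada",)]
```

Nothing here pins how `create_user` works internally, so refactoring its guts can't break the test.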

Unit test suites would break all the time for silly reasons, like someone optimizing a function would mean a spy wouldn't get called with the same intermediary data, and you'd have to stop and go fix the test code that was now broken, even though the actual code worked as intended.
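The brittleness looks something like this sketch (names hypothetical), where the test asserts on the *intermediary* calls rather than the result:

```python
from unittest.mock import Mock

# v1 happens to emit each row individually; a later "optimization"
# could batch them into a single emit([1, 2]) call.
def send_report(rows, emit):
    for row in rows:
        emit(row)

spy = Mock()
send_report([1, 2], spy)

# Brittle: these pin the intermediary calls, not the observable result.
spy.assert_any_call(1)
assert spy.call_count == 2  # breaks the moment send_report batches
```

The code still "works as intended" after batching, but both assertions fail and someone has to go fix the test.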

Integration tests (mainly) only break when the code itself is broken and incorrect results are getting spit out. This has prevented all kinds of issues from actually reaching customers during the conversion process, and isn't nearly so brittle as our unit tests were.

I have the same experience. Integration tests are the best. They test only what really matters and allow you to keep flexibility over implementation details.

When your TDD approach revolves around integration tests, you have complete freedom to add, remove and shift around internal components. Having the flexibility to keep moving around the guts of a system to bring it closer to its intended behavior is what software engineering is all about.

This is also how evolution works; the guts and organs of every living creature were never independently tested by nature.

Nature only cares about very high-level functionality; can this specimen survive long enough to reproduce? Yes. Ok, then it works! It doesn't matter that this creature has ended up with intestines which are 1.5 meters long; that's an implementation detail. The specimen is good because it works within all imposed external/environmental constraints.

That's why there are so many different species in the world; when a system has well defined external requirements, it's possible to find many solutions (of varying complexity) which can perfectly meet those requirements.

I'd like to watch you running around trying everything when an integration test fails and you don't know which part of the codebase caused the failure, while I just run all the specs and figure out the exact unit of code that's causing the problem.

That would not happen because each of my integration test cases is properly isolated from the others. If any specific test case starts failing, I just disable all other test cases and start debugging the affected code path; it usually only takes me a few minutes to identify even the most complex problems. Also, because I use a TDD approach, I usually have a pretty good idea about what changes I might have made to the code which could have introduced the issue.

Unit tests on the other hand are useless at identifying complex issues like race conditions; they're only good at detecting issues that are already obvious to a skilled developer.

* I restart the system from scratch between each test case, but if that's too time-consuming (i.e. more than a few milliseconds), I just clear the entire system state between each test case. If the system is truly massive and clearing the mock state between each test case takes too long, then I break it up into smaller microservices, each with their own integration tests.

Apologies sir...! When I said unit tests, I meant a specific unit of code (in the TDD manner).

I misunderstood your integration tests. I understood them as something like checking the end-user contract. If so, without TDD, it won't be possible to track down a bug as easily as with TDD.

Fyi, I'm definitely stealing your analogy.

I will probably be the lone voice here defending mock testing but probably not for the reasons you'd expect. Does mock testing end up being brittle? Yes. Do you have to refactor the tests immediately after making small changes? Yes. Is there a cost to this? Yes.

Mock based testing however is the only thing I've ever encountered that forces me to think very, very clearly about what my code is doing. It makes me inspect the code and think about what dependencies are being called, how they're being called, and why they're being called.

I have found that this process is extremely valuable for creating code that is more elegant and more correct. I value mock tests not for the tests that I end up with at the end, but for the better production code that I wrote because of them.

What's stopping you from thinking clearly about the code while/after you write it? As I'm writing this sentence I'm thinking clearly about it and I'll probably reread it after I hit save. There's no reason you can't do the same w/ code.

It's not that it can't be done, it's just that sometimes it is nice to have a tool that helps you think about the code. Unit tested code isn't better in any way, it's just a style of writing code that can produce good quality code, but not the only coding style that can do that. But I'd like to add that there have been quite a few refactors that I have done that would have been way harder if it hadn't been for the unit tests. It might not work for everyone, but I'd recommend you give it a serious try, it might be useful.

The tests don't just help you think about the code, they force you to do so. For those who are tempted to think "I can just do this quick change this one time; nothing will go wrong", tests force you to actually think.

Or they don't. If the tests just continue to work, then you ask yourself whether the change should have broken them. If not, you go on your way with a fair amount of confidence. But if the change should have broken the tests, then you have to look at why it didn't...

Outside in development with mocks is awesome. Just replace the mocks when you know what you are building ;-)

My experience has been very much the opposite - integration tests are often very brittle, and finding the root cause is often almost a fool's errand, since stack traces are often no good for figuring out what went wrong, at least for web UI.

While yes, unit tests do have a maintenance burden, they are often reproducible, less flaky, give you targeted debug information, and run extremely fast.

There are heavy costs to integration and e2e testing that often get dismissed by developers who have not experienced the fast feedback loop a good fat unit suite gives you.

Although I agree with you in general, integration tests that are really focused on the contract between two services can be pretty easy to debug, ESPECIALLY if you use request ids that are passed between services, and included in every log. It’s often a matter of simply searching for the request id in your logs and quickly seeing what happened.
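A sketch of the request-id idea (all names hypothetical): generate an id at the edge, forward it to downstream services, and include it in every log line so one search reconstructs the whole request.

```python
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")

def handle(payload, request_id=None):
    # Generate an id at the edge, or reuse the one a caller forwarded.
    rid = request_id or uuid.uuid4().hex
    logging.info("rid=%s received %r", rid, payload)
    # ... call the next service here, forwarding rid in a header ...
    return {"request_id": rid, "ok": True}

resp = handle({"user": "ada"})
assert resp["ok"] and len(resp["request_id"]) == 32
```

Grepping the logs of every service for `rid=<id>` then shows exactly where the chain broke.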

E2E UI tests are definitely hard to debug though, in most cases. They still have value, but I’m a fan of the “testing pyramid” here. A small number of E2E UI tests, a decently large number of integration tests that each try to test a single contract between 2 services, and a tonne of unit tests to cover sad paths, detailed behaviour, etc.

Totally agreed - the problem is when integration tests are used in lieu of what unit tests are for, which has been an increasing trend in the frontend web world I’ve seen, in large part because of UI frameworks not providing the tools to do testing well.

Definitely. E2E UI tests are almost always brittle, flaky and hard to debug; you want a very small number of them just to test really core workflows. Drives me crazy too when I see frontend devs using them like UI unit tests, while writing no actual unit tests.

Ideally, you should have both unit and integration tests. See "Test Pyramid".

I almost only use mocks for external calls (usually external APIs). Very rarely, I use them for some internal code that is unusually time consuming or difficult to set up.

A few months ago, I did a contracting job and the team I worked with used the "mock everything" approach, without even one integration test, which to me seemed crazy (especially for a component which was called the "integration layer").

I tried hard to find the advantages in this approach, studying what the rationale and the best practices were, and questioned my previous assumptions. In the end, I had to confirm those assumptions: even if there were hundreds of tests and they all passed, many logical errors weren't caught.

Even worse, they gave a false sense of confidence to the team, and made refactoring super-slow. But the team leader was super-convinced it was the best idea since sliced bread.

People I’ve worked with have had that idea, and I’ve found they’re most frequently from the games industry or from contract shops. In both of those, the maintenance of the code over time is far less important than that it works on the ship date.

Additionally, people with large state machines with too-complex sets of possible states (games, big frontends without a top level state management system) tend to only unit test because it’s frequently too much of a pain to set up an integration runtime environment. Places with lots and lots of manual QA testers.

Yes, it's for sure a quicker way to have 100% test coverage...

> Too frequently, code would change and mocks would need to change with it, removing a good chunk of the benefit of having the code under test.

A lot of people overuse mocks when testing. In fact, using mocks enforces coupling between different methods, because a lot of people use them to assert that a method with a specific name was called with specific parameters, or they create one that asserts a method with a certain name returns a certain value. So when one wants to refactor, they not only have to change the code; they need to update all the mocks that reference it as well.

I've found that a better way to structure code is to take the result of an external dependency and pass it in as a parameter to a method that will process it. Then when I unit test that method, I just pass in what I expect from that dependency and assert on the return value of that method. I don't try to unit test the outer method that calls the dependency by creating a mock call for it.
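A sketch of that structure (names hypothetical): the processing step is a pure function tested with hand-built data, and only the thin outer layer touches the dependency.

```python
# Pure: no I/O, trivially unit-testable with whatever input you expect
# the dependency to return.
def describe_user(user):
    return f"{user['name']} <{user['email']}>"

# Thin outer layer that calls the dependency; leave this to integration tests.
def enrich(user_id, fetch_user):
    return describe_user(fetch_user(user_id))

# Unit test: no mock framework needed, just pass in the data.
assert describe_user({"name": "Ada", "email": "ada@example.com"}) == "Ada <ada@example.com>"
```

Refactoring `enrich` internals never touches the unit test, because the test only knows about the pure function's contract.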

> Then I started writing integration testing while working on converting a bunch of code recently, and it has been eye-opening. Instead of testing individual models and functions, I was testing the API response and DB changes, and who really cares what the code in the middle does and how it interfaces with other internal code?

It makes it easier to isolate the cause of the error rather than having to search through the entire call chain to find it (especially if it's a logic error that doesn't result in an exception). Plus, integration test suites take a lot longer to run and can have timing issues due to caching or other reasons, which can result in sporadic failures.

Both unit and integration tests have their place. This talk was excellent in explaining it to me (with functional cores, imperative shells). Unit test functional cores, use integration tests for imperative shells: https://www.destroyallsoftware.com/talks/boundaries

I have been a fan of Gary Bernhardt ever since seeing this talk.

At my first job I wrote C++ for a Win32 desktop app. I hated unit testing. My workflow was: write the code, compile it, trigger the scenario, step through the code, then write some unit tests after I knew it worked. It was the expectation on my team to write UTs, so I did it. Fast forward to a different team: I learned from a co-worker how UTs can influence your design. If you find yourself doing a ton of frustrating work to set up your UT scenarios, there's probably a way of fixing the design to be less coupled that would allow you to test the code better. Now I think of UTs more as a way to help me understand how the design of my code is working, and I get some extra validation out of it as well.

My trick question to TDD advocates is how to develop native apps following those principles, after all no GUI change is allowed without a test.

GUIs don't lend themselves to unit testing at all, because the requirements aren't mathematical, but are instead based on human factors.

For GUIs, the proper approach is to unit-test the functionality underneath, not the GUI itself.

Which leads to convoluted architectures just to be able to follow "only write code which there is a failing test for".

How are these convoluted?

After all, any program out there does something. If you know what it's doing, you know what it should do, and what it shouldn't. That means you can test it.

Because given how GUI frameworks are implemented, one needs to add explicit workarounds to follow "only write code for which there is a failing test".

After all, writing the test needs to be possible, to start with.

So adapters, views, commands and what have you need to exist only to fulfill such purpose, and even then, their interactions with the GUI layer don't get tested.

So one is creating them, without knowing if they are the right elements for the GUI layout.

Hence why testing, 100% behind it, TDD not so much.

In my book, the UI isn't part of unit testing - at least, the views aren't. The models and controllers (to use the MVC paradigm) might be, though.

The integration tests that you are describing behave much like I would expect unit tests to behave. Give an input, assert the results (return value or state change). The example of an optimization breaking the unit test suggests a poor test (if the function doesn't need to call that intermediary with that data, why is it being tested?). Oftentimes the spy is added to the test because there is unnecessary tight coupling between two units.

This isn't to say that integration tests aren't great, too. They all tell different stories. I like unit tests, as they force me to think about modules as units. If a unit is hard to test or brittle, it may need more attention to its design.

A good use of mocks/spies is if you need to test a side-effect. For instance: did your code call that service with the right parameters?

I had this experience as well, but then I began to realize that, as the codebase got bigger (let's say, for your metaphor, I was exercising more) that I really actually did want unit tests (that I needed more water).

As the codebase gets larger and more complex (and interesting!), I want unit tests to fail because of small changes. That's actually useful feedback, whereas the simple, brutal failure of an integration test is just not granular enough to quickly help me understand the details of the change.

This has been my experience too. On a project I'm working on now, I went majority integration tests. I have about 200 of them and they now take 2 minutes per run. VirtualBox and Jest don't help. To stay sane, I've had to run only relevant tests. They're great for a final sweep, but they do slow down the feedback loop quite a lot.

For future tests and projects, unit tests do have a place and I'll make better use of them.

A good learning experience, nonetheless.

I would disagree. Unit tests make sure your code is clean and composable. It's hard to write unit tests if you have hundred line methods etc. It usually forces you to write smaller chunks of code. Integration tests make sure it works in test/prod. You can have a complete spaghetti mess and still pass your integration tests. You can also have good unit test coverage but break prod.

They serve two different purposes.

I think most would agree that integration tests are better. The problem is they tend to be slower. Having to initialize the system appropriately for every test (e.g. writing to the database) tends to limit the number of tests you can have.

Unit tests scale a lot better. That's why most generally use a pyramid structure: lots of unit tests, a moderate number of integration tests, and a few end-to-end tests.

An approach of "almost" integration tests can be much faster, i.e. rather than spinning up a full-blown web server or database, use fake objects for those. Uncle Bob describes refactoring the architecture of a project to make integration tests faster by decoupling the web server.

Mentally, something like Clojure's Ring framework makes this easier to grasp: the abstraction it provides is dictionary-in, dictionary-out. Once you have something like this, there's no need to spin up a web server to do integration testing: you just shove a bunch of dictionaries in and make sure the output dictionaries are what you expected.
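The dictionary-in, dictionary-out idea translated into a tiny Python sketch (the handler and keys are made up, loosely echoing Ring's request/response maps):

```python
# Ring-style: a handler is just a function from request-dict to
# response-dict, so "integration" tests are plain data assertions
# with no server running.
def handler(request):
    if request["uri"] == "/ping":
        return {"status": 200, "body": "pong"}
    return {"status": 404, "body": "not found"}

assert handler({"uri": "/ping"}) == {"status": 200, "body": "pong"}
assert handler({"uri": "/nope"})["status"] == 404
```

The same handler can later be mounted on a real server; the tests don't change.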

A good approach is to use each technique where you get the most bang-for-buck: use unit tests only for pure functions, and decouple your system in a way that integration tests can be reduced to simple data-in data-out (which also makes them very fast)

Ah I forgot the link https://youtu.be/WpkDN78P884

Unit tests also tend to have fewer dependencies and are therefore more portable and robust.

If your integration tests can reasonably be set up with a couple of containers, great, but not every system is that flexible. And not every data store is that simple to provision.

With an in-memory database like Derby, I find my test cases run fast enough.

Just adding to this, for me I think the benefits of testing became really clear in situations where you end up fixing the same bug twice.

Knowing about that bug is really valuable knowledge and not adding a test for it is basically like throwing away that knowledge instead of sharing it for future people working on the project.

I completely agree. If you have limited time to write tests, then I've found that integration tests provide the most value by far. If you're happy with your suite of integration tests, then it's also great to have a set of unit tests.

> Unit test suites would break all the time for silly reasons, like someone optimizing a function would mean a spy wouldn't get called with the same intermediary data, and you'd have to stop and go fix the test code that was now broken, even though the actual code worked as intended.

Can you or others speak more about this? I was taught that verifying function calls for spies/mocks was good practice. But, I encountered this problem just the other day when I refactored some Java code for a personal project. Everything still worked perfectly, but, exactly as you said, the intermediate function calls changed so the tests would fail due to spies/mocks calling different "unexpected" functions.

I'm an intermediate programmer so can someone with more experience fill me in with what's best practice here and why? Do I update the test code to reflect the new intermediate function calls? But this whole approach now seems silly since a refactoring that doesn't affect the ultimate behavior of the function that is under test will break the test and that seems wrong. So do I instead not verify function calls when using spies/mocks? In that case, what is the use case for verifying spies/mocks?

What you want to spy on are side-effects that are part of the function's contract.

If you have a function that fetches data, you shouldn't test that it's hitting the data layer, only that the correct data is returned. This way, when you improve the function to not hit the data layer at all under some conditions, your tests will keep passing.

On the other hand, if that function is supposed to log metrics or details about its execution, you should test that, as it is part of its contract and can't be inferred from the return value.
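A sketch of the distinction (names hypothetical): the metrics call is part of the contract, so verifying it with a mock is legitimate, while the gateway's internals are not asserted on.

```python
from unittest.mock import Mock

def charge(amount, gateway, metrics):
    result = gateway(amount)
    # Contractual side effect: not inferable from the return value.
    metrics("charge.attempted", amount)
    return result

metrics = Mock()
assert charge(100, lambda a: "ok", metrics) == "ok"
metrics.assert_called_once_with("charge.attempted", 100)
```

The return value is checked directly; only the promised side effect is spied on.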

Your tests should be driven by your contracts. Is there a contract for the component being tested that says, "it calls function X if Y"? If yes, then test that. If not, then you shouldn't be testing such an implementation detail anymore so than you'd test values of local variables inside a function at some point in the middle.

The important part is to remember that not every function is a component, and not even every class by itself. Where to draw the boundary between components is itself an explicit design decision, and should be made consciously, not mechanically.

This is usually called the classical vs. mockist, or test interactions vs. test final state debate.

Instead of mocks, some people prefer to build fakes / stubs which are versions of a dependency which are "fully operative" in a sense but with a simplified internal implementation. For example a repository that keeps entities in memory. (Not the same as an in-memory database! The fake repository wouldn't use SQL at all.)

Tests would check the final state of the fakes after the interactions, or simply verify that the values returned by the tested component are correct.

The hope is that fakes, while possibly more laborious to set up, allow a style of testing that focuses less on the exact interactions between components, and is therefore less brittle.
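A minimal sketch of such a fake (names hypothetical): a repository that is "fully operative" but backed by a dict, with the test asserting on final state rather than on calls.

```python
# A fake: simplified internals (a dict, not SQL), but real behavior.
class FakeUserRepo:
    def __init__(self):
        self._users = {}

    def add(self, user_id, name):
        self._users[user_id] = name

    def get(self, user_id):
        return self._users.get(user_id)

# Code under test: registration is idempotent per user id.
def register(repo, user_id, name):
    if repo.get(user_id) is None:
        repo.add(user_id, name)

repo = FakeUserRepo()
register(repo, 1, "ada")
register(repo, 1, "grace")   # duplicate id: ignored
assert repo.get(1) == "ada"  # state-based assertion, no call verification
```

If `register` is rewritten to call the repo differently, the test still passes as long as the final state is right.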

Some links:

- Mocks aren't stubs https://martinfowler.com/articles/mocksArentStubs.html#Class...

- From interaction-based to state-based testing http://blog.ploeh.dk/2019/02/18/from-interaction-based-to-st...

What should be tested is that the function does what is promised to the caller. Which other function it uses to accomplish this task is a detail, which should generally not be part of this promise (encapsulation). So either the effects of the function must be returned by the function, or it is a side effect, which must then be observable in some other way. Example: if a function is supposed to create a new user, don't check that it calls some internal persistence or communication layer with some User info. Instead, list the users in the system before and after, and check that the correct one was added. Try to perform an action as this new user. This forces you to expose (a view of) internal state at the API surface, which makes the system more observable; usually very beneficial for debugging and filling gaps in the API.

Also, many of the assertions that are often put into unit-tests (especially at stub/mock boundaries) are better formulated as invariants, checked in the code itself. Design-by-contracts style pre/post-conditions is a sound and practical way of doing this. When this is done well, you get the localization part of low-level unit testing even when running high-level tests. Plus much better coverage, since these things are always checked (even in prod), not just in a couple of unit tests. And it is more natural when refactoring internal functions to update pre/post-conditions, since they are right there in the code. When a function disappears they also do.
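A small sketch of the pre/post-condition style using plain `assert` statements (a lightweight stand-in for a full design-by-contract library):

```python
# Invariants live in the code itself, so every test run (and prod,
# if asserts are enabled) checks them "for free".
def withdraw(balance, amount):
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: no overdraft"
    new_balance = balance - amount
    assert new_balance >= 0, "postcondition: balance never negative"
    return new_balance

assert withdraw(100, 30) == 70
try:
    withdraw(10, 30)
except AssertionError as e:
    assert "overdraft" in str(e)
```

When a high-level test drives this code, a contract violation fails right at the offending function, giving the localization that low-level unit tests normally provide.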

I don't like the term "integration" tests though, as they hint at interactions between systems being the important thing to test. Integration between services / subsystems are just as much a detail as internal function calls. If using the real system during test is too complicated or slow, maybe it should be simplified or made faster? Only when that is not feasible do I build a mock.

If it is important to the operation of the function, from an outsider perspective, the intermediate call should be tested. Oftentimes this isn't the case, though. Most often, I see intermediate calls spied/mocked when they have side effects to be avoided. This is actually a sign of tight coupling between modules, and patterns like dependency injection can help make it easier to test.

The trick for me is focusing on what the unit does from a consumer's perspective. Avoid testing implementation details (unless they are important side effects), and test the behavior that does not change. If you do that, then refactoring becomes easier, because tests will only break when the contract of the unit changes.
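The dependency-injection point above can be sketched like this (names hypothetical): the side-effecting collaborator is passed in rather than constructed inside, so the test substitutes it without any patching.

```python
# The "mailer" is injected, so tests can swap in a recording stub.
def notify(user_email, send_mail):
    if user_email.endswith("@example.com"):
        return send_mail(user_email, "Welcome!")
    return False

sent = []
assert notify("ada@example.com", lambda to, body: sent.append((to, body)) or True)
assert sent == [("ada@example.com", "Welcome!")]
```

No mocking framework, no monkey-patching of a hard-wired global mailer; the coupling problem disappears at design time.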

We buy software from a lot of different companies, sometimes we own the codebase but hire software companies to build, expand and maintain it, we also benchmark performance.

We have a lot of data that explicitly shows that automated unit-testing doesn’t work. One good example is one of our ESDH systems, which changed supplier in a bidding war, partly because we wanted higher code-quality.

It’s a two million line project, we paid for 5000 hours worth of refactoring and let the new supplier spend two years getting familiar with it and setting up their advanced testing platforms.

So far we’ve seen some nice performance increases thanks to the refactoring. It has more problems than it did before, though, even though everything is tested by unit tests now and it wasn’t before.

We have a lot of stories like this by the way. I don’t think we have a single story where unit-testing actually made the product better, but we do have tested systems where we couldn’t say because they always had unit-testing.

Ironically we still do TDD ourselves on larger projects. Not because we can prove it works, but because everyone expects it.

> We have a lot of data that explicitly shows that automated unit-testing doesn’t work.

At best you have data that shows that a poor unit testing implementation failed to deliver. A bad experience doesn't prove a whole tech strategy doesn't work when the whole world shows otherwise.

There was a comment on HN which I keep thinking about. What is the value of expensive testing when you are promptly alerted of critical problems and can deploy in under a minute? Obviously not applicable for every scenario, but at this point I think for most non critical systems a good rollback/plan B is at least as important as testing.

> What is the value of expensive testing when you are promptly alerted of critical problems and can deploy in under a minute?

Because your users should not be your testers and you must catch bugs before deploying them in production.

You may be able to deploy in 3 minutes, but it takes way longer to debug and fix your bugs.

Testing helps the project to catch bugs prior to deploying them and also provides the infrastructure to avoid regressions.

You need to write a book and then get on the conference circuit with this message. You would be rowing against the tide but there is always hope!


Write tests. Not too many. Mostly integration.

It is all about balance.

Use system/integration/functional/unit tests wisely.

For precise stateless stuff, like making sure your custom input format parser/regexp covers all edge cases, I prefer unit tests - no need to init/rollback database state.
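Stateless parsing is exactly where table-driven unit tests shine; a sketch with a hypothetical ISO-date matcher:

```python
import re

# Many edge cases, no database to init or roll back.
DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")

cases = {
    "2024-01-31": True,
    "2024-1-31": False,   # missing zero-padding
    "20240131": False,    # no separators
    "": False,
}
for text, expected in cases.items():
    assert bool(DATE.match(text)) == expected, text
```

Each new edge case someone hits in production becomes one more line in the table.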

You can also unit test in a more blackbox way, where you test your internal APIs instead of the implementation. It makes sense to have both in many cases.

As is the case for all software, unit tests should be well designed.

If a unit test fails when the code under test hasn't functionally changed, then your unit tests are brittle.

Microservices. They seemed really cool until I worked on a few large projects using them. Disaster so epic I watched most of engineering Management walk the plank. TLDR: The tooling available is not good enough.

The biggest cause lies in inter-service communication. You push transaction boundaries outside the database between services. At the same time, you lose whatever promise your language offers that compilation will "usually" fail if interdependent endpoints get changed.

Another big issue is the service explosion itself. Keeping 30 backend applications up to date and playing nice with each other is a full time job. CI pipelines for all of them, failover, backups.

The last was lack of promised benefits. Velocity was great until we got big enough that all the services needed to talk to each other. Then everything ground to a halt. Most of our work was eventually just keeping everything speaking the same language. It's also extremely hard to design something that works when "anything" can fail. When you have just a few services, it's easy to reason about and handle failures of one of them. When you have a monolith, it's really unlikely that some database calls will fail and others succeed. Unlikely enough that you can ignore it in practice. When you have 30+ services it becomes very likely that you will have calls randomly fail. The state explosion from dealing with this is real and deadly

> "all the services needed to talk to each other"

I'm not an expert by any means, but I'm pretty sure that statement indicates a problem.

Yeah, if you are going for a microservices architecture, you need at least one person or dedicated team in an oversight / architecture role that keeps the design and growth in check. Primarily that means saying "no" when someone wants to create a new service or open up a new line of communication. It's an exercise in limiting dependencies.

And the easiest way to do that is to not build a microservices architecture; instead (and I hope I'm preaching to the choir here) build a monolith (or "a regular application") and only if you have good numbers and actual problems with scaling and the like do you start considering splitting off a section of your application. If you iterate on that long enough, MAYBE you'll end up with a microservices architecture.

Yes, then you're just doing bad OOP with sockets instead of a language that was designed for it.

Heh, definitely some truth to this.

What saved us before, was our forest of code could depend on the database to maintain some sanity. And we leaned on it heavily. Hold a transaction open while 10,000 lines of code and a few N+1 queries do their business? Eh, okay, I guess.

Maybe we didn't have the discipline to make microservices work. But IMO our engineering team was pretty good compared to others I've seen. All our "traditional" apps chugged along fine during the same period.

Really, whenever someone blames "discipline", be suspicious.

Not even the army has perfect discipline, even with hard training. They have cross-checks, piles of processes, and they move slowly for the most part.

(Software development shouldn't aim to be like the army though)

I don't think so. This kind of thing comes up constantly in RDBMS. New requirement means we need to join thneeds and widgets data together. In a regular database, even NoSql, this isn't a hard problem.

When the services have their own datastores, well, now they need to talk to each other.

Not necessarily, if you design a decoupled, event-driven approach where every service can just subscribe to the data it needs.

We actually tried this as well. It never made it out of testing. We ended up with copies of data in many places, which was annoying. We duplicated a lot of work for consuming the same events across multiple services and making sure they updated the "projection" the same way.

However a much larger problem was overall bad tooling. Specifically the data storage requirements for an event stream eclipsed our wildest projections. We're talking many terabytes just on our local test nodes.

We tried to remedy this by "compressing" past events into snapshots but the tooling for this doesn't really exist. It was far too common for a few bad events to get into the stream and cause massive chaos. We couldn't find a reasonable solution to rewind and fix past events, and replays took far too long without reliable snapshots.

In the end I was convinced that the whole event driven approach was just a way of building your own "projection" databases on top of a "commit log" which was the event stream.

Keeping a record of past events also wasn't nearly as useful as we originally believed. We couldn't think of a single worthwhile use for our past event data that we couldn't just duplicate with an "audit" table and some triggers for the data we cared about in a traditional db.

Ironically we ended up tailing the commit log of a traditional db to build our projections. Around that time we all decided it was time to go back to normal RPC between services.
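For anyone who hasn't built one: a "projection" in this sense is just a fold over the event stream. A toy sketch of the idea (the event names and shapes here are invented for illustration, not from any real system):

```python
from dataclasses import dataclass


@dataclass
class Event:
    kind: str        # e.g. "user_created", "user_renamed"
    user_id: int
    payload: dict


def apply_event(projection: dict, event: Event) -> dict:
    """Fold a single event into the read-side projection (a plain dict here;
    in practice this would be a table in the projection database)."""
    if event.kind == "user_created":
        projection[event.user_id] = {"name": event.payload["name"]}
    elif event.kind == "user_renamed":
        projection[event.user_id]["name"] = event.payload["name"]
    return projection


def replay(events: list) -> dict:
    """Rebuild the projection from scratch by replaying every event --
    the slow path that snapshots are supposed to let you avoid."""
    projection: dict = {}
    for e in events:
        projection = apply_event(projection, e)
    return projection
```

The pain points described above all live around this loop: every consuming service duplicates its own `apply_event`, a bad event corrupts every downstream projection, and without reliable snapshots `replay` is the only recovery path.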

I appreciate you sharing this. I'm considering embarking on this approach with my team, and everything you are mentioning is what I was worried about when I first started reading up on the microservices architecture.

Now I'm seriously considering a somewhat hybrid approach: Collect all of my domain data in one giant normalized operational data store (using a fairly traditional ETL approach for this piece), and then having separate schemas for my services. The service schemas would have denormalized objects that are designed for the functional needs of the service, and would be implemented either as materialized views built off the upstream data store, or possibly with an additional "data pump" approach where activity in the upstream data store would trigger some sort of asynchronous process to copy the data into the service schemas. That way my services would be logically decoupled in the sense that if I wanted I could separate the entire schema for a given service into its own separate database later if needed. But by keeping it all in one database for now, it should make reconciliation and data quality checks easier. Note that I don't have a huge amount of data to worry about (~1-2TB) which could make this feasible.

There are two main approaches to handling "events": event sourcing vs. direct RPC. After our disaster I highly recommend Google's approach: a structured gRPC layer between services with blocking calls. You might think you don't have much data, we didn't either, but when Kafka is firehosing updates to LoginStatus 24/7, data cost gets out of control fast.

I'm going against the Martin Fowler grain hard here, but Event Sourcing in practice is largely a failure. It's mostly bad tooling, as I mentioned, but please stay away. It's so bad.

"every service can just subscribe to the data it needs."

Doesn't that imply that each service then has to store any data it receives in these events - potentially leading to a lot of duplication and all of the problems that can come with that (e.g. data stores getting out of sync).

Yes, that's exactly what it implies. Like I said at the top of this thread, I'm not an expert on this approach (I've done my reading, but haven't yet spent time in the trenches), but my understanding is that you would embrace the duplication and eventual consistency. I do wonder how well it works in practice though, and how much time you would spend running cross-service reconciliation checks to make sure your independent data stores are still in sync.

A microservice should not depend on another microservice! I see the same mistake in plugin and module design patterns. When you make one service dependent on another service, you add complexity. Some complexity is necessary, but everything (scaling, redundancy, resilience, replacing, rewriting, removing, etc.) will be easier without it.

Microservices doesn't necessarily mean you have a ton of services. A single microservice can encompass millions of lines of code.

Honestly, it's a bad name for the architecture.

Service Oriented Architecture is a much more fitting name. ;)

Microservices are just SOA done right; services should have a single responsibility, be replaceable, have a schematic etc.

The problem is that your run-of-the-mill buzzword-driven microservice is basically a collection of FaaS (logout function, login function, ping function, get user function, update user function) behind an API Gateway. What constitutes a single responsibility is up for careful consideration, but IMHO microservices as perpetuated by the mainstream buzzword cowboys high on cloud are very ill-informed and only suitable for very, very large teams with extreme loads.

Services having a single responsibility sounds like good advice, but how do you turn a number of services with a single responsibility into a working application? Any process that touches multiple systems will become a lot more complicated. Single-responsibility services are good advice, but it's too easy for short-sighted developers to obsess over that instead of the bigger picture. Yes, it makes it easier to carve out your segment of an application, and yes, that codebase will be easier to maintain and reason about, but someone has to keep the bigger picture in mind. That's often lacking.

> Keeping 30 backend applications up to date and playing nice with each other is a full time job. CI pipelines for all of them, failover, backups.

This isn't normal.

You should just have a single CI pipeline, failover and backup approach that is parameterised for each microservice.

It's not that easy in my experience. They use different databases. Different versions of frameworks. Some written in different languages. We tried to have a "one size fits all" CI pipeline but that fragmented over time.

The overhead was huge compared to "traditional" apps. Just updating a docker base image was a weeks long process.

Doesn’t that mean you have to use the same language, libraries, and DB for every service?

Is that really so bad? At edX all of our services were Django. After the third service was created we built templates in Ansible and cookiecutter to create future services and standardize existing ones. We created Python libraries with common functionality (e.g. auth).

We were a Django shop. Switching to SOA didn’t mean switching languages and frameworks.

If your services were all setup the same, what was the big advantage to have them separate? Wouldn't you get the same scalability from running 10x of the monolith in parallel with a lot less work?

The primary advantage was time to market. When I started five years ago edX had a monolith that was deployed weekly...after a weekend of manual testing. The organization was not ready to improve that process, so we opted for SOA. By the time the monolith had an improved process—2 years later—we had built about three separate services, all of which could be deployed multiple times per day.

Haha, I see you haven't worked with edX. Basically, a lot of services just go down, and the main reason to have them separated is so they don't ALL go down. The insights/metrics service is infamously hard to get up and keep steady.

While acknowledging the problem mentioned, I still believe in microservices, but in my opinion, it needs to be done with simpler tools. For example, next time, I will use firejail instead of docker.

Sorry for your pain, but you have some studying up to do. There are already best practices for these. Check out 12factor.

I think that the issue is that microservices _require_ good practices and discipline.

These are attributes that 80% of projects and teams lack so when they decide to jump onto the microservices bandwagon the shit hits the fan pretty quickly.

Almost every time I experienced a shift like this in my thinking it was due to experiencing a problem I hadn't experienced until that point.

I discovered the value of compile time type checks when I worked on large codebases in dynamic languages where every change was stressful. In comparison having the compiler tell you that you missed a spot was life changing.

I discovered the value of immutable objects when I worked on my first codebase with lots of threading. Being able to know that this value most definitely didn't change out from under me made debugging certain problems much easier.

I discovered the value of lazy evaluation the first time I had to work with files that wouldn't fit entirely in memory. Stream processing was the only way you could reasonably solve that problem.
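In Python terms, that shift looks something like this (a generic sketch, not from any particular codebase):

```python
def matching(lines, needle):
    """Lazily yield lines containing `needle`. Nothing is read until the
    consumer asks, so memory use is independent of the input's size."""
    for line in lines:
        if needle in line:
            yield line


# A file object is itself a lazy iterator over lines, so this streams a
# multi-gigabyte log while holding only one line in memory at a time:
#
#   with open("huge.log") as f:
#       for hit in matching(f, "ERROR"):
#           print(hit)
```

The eager version (`[l for l in f.readlines() if ...]`) works identically on small inputs and falls over on big ones, which is exactly the kind of problem you don't notice until you hit it.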

Pretty much every paradigm shift or opinion change I've had was caused by encountering a problem I hadn't yet run into and finding a tool that made solving that problem practical instead of impractical.

I might even go farther. I wonder if most of the techniques (and languages) that we think are stupid are instead aimed at problems that we don't have. (Of course, those techniques become stupid when people try to apply them on the wrong problems...)

I suspect you are correct. I've definitely been guilty of drinking a particular brand of kool-aid and applying it inappropriately before.

That in itself can turn into a learning experience if you stick around for the aftermath.

>I discovered the value of compile time type checks when I worked on large codebases in dynamic languages where every change was stressful. In comparison having the compiler tell you that you missed a spot was life changing.

Sounds like me in reverse: I discovered that value when I had to do work in a dynamic language after working only in C and C++. It's like that old saying about not knowing the value of something until you lose it.

I used to hate C and thought it was primitive, ugly, dangerous, tedious to write in and annoying to read. While writing a big project in it (https://github.com/RedisLabsModules/RediSearch/) , I've discovered the zen of C I guess. Instead of primitive I started seeing it as minimalist; I've found beauty in it; and of course the great power that comes with the great responsibility of managing your own memory. And working in and around the Redis codebase, I also learned to enjoy reading C. While I wouldn't choose it for most projects, I really enjoy C now.

I also used to hate C and thought it was primitive, ugly, dangerous, tedious to write in and annoying to read. Later, I too, discovered the actual point of C, the beauty and minimalism of it. Then I became a better coder, and once again saw the nature of C as primitive, ugly, dangerous, tedious to write in and annoying to read. I suppose that's what enlightenment feels like.

"Before I learned the art, a punch was just a punch, and a kick, just a kick. After I learned the art, a punch was no longer a punch, a kick, no longer a kick. Now that I understand the art, a punch is just a punch and a kick is just a kick." -- Bruce Lee

Wait until you go back again. Then you'll really be enlightened.

Same, except I realized this after starting my first (and current) job as a C developer.

I now get way more annoyed by the in-house build system than the huge C codebase.

I made the decision to do that project in C, in part to be better at it (I had done a bunch of small things with it, but nothing serious). Since I had to interact with Redis' C API, the choices were basically C or C++. I hated C++ much more than C, having worked with it a lot, so C it was. I can't say I didn't miss things like having shared_ptr and, you know, having destructors, but all in all it was a good experience. (Side note: I now work a lot in C++ and sort of like the newer stuff, lambdas for example.)

I don't miss `shared_ptr`. It is a disaster for anything other than sharing a thing between threads (in which case, it is a way of managing the disaster you already have).

Now `unique_ptr` is worth missing. And destructors are good too -- you couldn't have `unique_ptr` without them. But in its own way, C has both: it's just that you have to remember to call the destructor yourself, every single time.

Doesn't have to be threads, different contexts.

That's exactly the path to hell.

You start using `shared_ptr` because you are too confused about the code to know which of the two "contexts" is supposed to own the thing. So shared_ptr (might) fix your memory freeing problem, but it maintains (and sometimes worsens) the problems caused by having two different contexts that might or might not be alive at any given time.

With two different threads however, such problems are often unremovable, which is why shared_ptr is the right mitigation.

90% of my dislike for C comes from a) it being too tedious to work with strings, and b) the nonexistent standardized module/build system.

The C language itself is _beautiful_ but I am missing a beautiful standard library! Things which are trivial one-liners in other languages are sometimes 10-20 lines of brittle boilerplate code in C. If the standard library had a bit more batteries included, it would make trivial tasks easier and I could concentrate on actually getting work done. Opening a file and doing some text processing usually takes a few lines in Python/PHP, but in C you have three screens of funky code which will explode if something unforeseen happens.
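For the sake of comparison, here's the kind of "few lines in Python" task I mean (the function name and task are invented for illustration):

```python
from collections import Counter


def top_words(text: str, n: int = 3):
    """Lowercase, split on whitespace, count, rank -- the whole
    'text processing' job the batteries handle for you. The C equivalent
    needs manual tokenizing, a hand-rolled hash table, and explicit cleanup."""
    return Counter(text.lower().split()).most_common(n)


# Reading the file is one more line:
#   with open(path, encoding="utf-8") as f:
#       print(top_words(f.read()))
```

And error handling (missing file, bad encoding) is one `except` clause away, not screens of manual unwinding.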

And working with additional libraries is also a nightmare compared to composer/cargo. Adding a new library (and keeping all your libraries up to date) is dead simple in basically any language besides C/C++.

tldr: I love the language itself but the tooling around it sucks.

You might like zig, it's looking to compete in the C space while fixing lots of these kinds of issues...

What’s hard about adding a library? You include the header, tell the linker about it, and update the library/include paths if it’s not already on them. Job done.

That's relying on distribution package management to save you. Here is what happens when you add libraries in any other context:

The library has a dependency on another library, and when you go look at the dependency, it tells you that it needs to use a specific build system. So you have to build the library and its dependency, and then you might be able to link it.

But then, turns out, the dependency also has dependencies. And you have to build them too. Each dependency comes with its own unique fussiness about the build environment, leading to extensive time spent on configuration.

Hours to days later, you have wrangled a way of making it work. But you have another library to add, and then the same thing happens.

In comparison, dependencies in most any language with a notion of modules are a matter of "installation and import" - once the compiler is aware of where the module is, it can do the rest. When the dependencies require C code, as they often do, you may still have to build or borrow binaries, but the promise of Rust, D, Zig et al. is that this is only going to become less troublesome.

I guess... though I’ve always found C++ libraries to be light years easier to manage than python.

To be honest, the per-language package management seems wasteful and chaotic. I must have a dozen (mutually compatible) copies of numpy scattered around. And why is pip responsible for building FORTRAN anyway?

You think dealing with C++ libraries is easier than "pip install <library>"?

The happy path (apt-get install, or even ./configure && make && make install) is about the same.

When things go sideways, I find troubleshooting pip a little trickier. Some of this might be the tooling, but there's a cultural part too: C++ libraries seem fairly self-contained, with fewer dependencies, whereas a lot of python code pulls in everything under the sun.

Yes, definitely. In C or C++, I use my distro's package manager to manage the libraries, not some other tool.

The only thing at the moment that comes close to filling this cross-language gap is the nix package manager I think.

Cadillac languages with a bunch of stuff in the std lib take away a lot of fussy bike shedding problems. I like not having to worry about library selection in my first through fifth revisions.

I’m willing to put up with a lot more using a built in. It’s built in, a junior dev can be expected to cope with that. Adding dependencies has a cost. Usually the cost is bigger than realized at the time.

It was the Postgresql codebase that did this for me, but C is a mixed bag. Because it gives you so much space, it's more on the folks managing a project to establish a specific design mindset (design patterns, documentation, naming conventions, etc.) and ensure that it's enforced throughout the project. Codebases where this is done right are a joy to work with. Others, less so.

What would you pick instead of C?

Depends on the project.

Using it as my main language: Python (2.7). How the hell did this thing become so popular?

I've used it for all sorts of less complex stuff earlier: scripts, devops, ETL... But then I got into a company that is using it for some quite serious stuff, a large codebase. Holy smokes, this thing does not scale well (in terms of development efficiency and quality). I swear at least 70% of our bugs are because of the language, and half of the abstractions are there just to lessen the chance of some stupid human mistake.

Sorry, but I will not pick it ever again for even a side project.

I started using python 3 with mypy, which provides (optional) static typing, and my gosh has it reduced the time I spend looking for stupid problems by orders of magnitude.

I got a somewhat direct comparison when I mypy-ified a small program where I used a lot of async and await (basically implementing my own event loops and schedulers; it was interfacing with very custom hardware that handled very different but interacting streams at once).

So basically I did it because I was tired of not noticing mixing up "foo()" and "await foo()", but then the static typing continued to catch a myriad of other, unrelated problems that would have ruined my day way too late (and often obscurely) at runtime.

For small ("scripty") to moderate sized things, mypy absolutely recovered my faith in it.
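A minimal sketch of the await case (function names are hypothetical):

```python
import asyncio


async def fetch_status() -> int:
    await asyncio.sleep(0)      # stand-in for real I/O against the hardware
    return 200


async def main() -> int:
    status = await fetch_status()
    # Dropping the `await` above would make `status` a coroutine object
    # rather than an int. With the annotations in place, mypy reports the
    # incompatible return type at check time; without them, the bug only
    # surfaces (obscurely) at runtime.
    return status


assert asyncio.run(main()) == 200
```

The nice part is that the annotations cost almost nothing here: the `-> int` on `fetch_status` is the whole price of admission.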

I also completely switched to python 3 after being in python 2 unicode hell just once. There are very good reasons for python 3.

Agreed. As part of the transition process from Python 2 to 3, I pushed for typing on all of the critical components. I had to fight the management a little on the commitment, but in the long run it's saved us a ton of time identifying and preventing bugs, and it's made the entire codebase much more cohesive.

does the management agree with that assessment? these kinds of changes are often hard to evaluate because it is not easy to compare. did development really go faster because of types, or do we just think it did?

i agree that typing saves time but i am struggling to produce evidence for that

It's not very hard to find the evidence before you move to static typing.

Just think about this not uncommon scenario: "The program crashed two hours into testing because apparently, we somewhere set this element in this deep structure to an integer instead of a list, and are now trying to iterate over that integer. But we cannot easily fix it, because we don't know yet where and why we set it to an integer."

The compiler would have immediately given you an error instead.

So, collect all the errors that are, e.g.:

- Addressing the wrong element in a tuple,

- any problems arising from changing the type of something; not just the fundamental highest level type ("list" or "integer"), but small changes in its deeper structure as well, e.g. changing an integer deep in a complex structure to be another tuple instead,

- "missed spots" when changing what parameters a function accepts; this overlaps with the former point if it still accepts the same arguments on the surface, but their types change (in obvious, or subtle "deep" ways),

- any problems arising from nesting of promises and non-promise values,

and many, many other problems where you can trivially conclude that the compiler would have spit out an error immediately, and explain to management how various multi-hour debugging sessions could have been resolved before even running your thing.
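A toy mypy-checkable sketch of that first scenario (the names are invented):

```python
from typing import Dict, List

# Annotating the structure up front means the "set an int where a list
# belongs" bug is caught at the assignment site, not two hours into a
# test run when iteration blows up.
groups: Dict[str, List[int]] = {"a": [1, 2]}


def group_total(key: str) -> int:
    # sum() over an int raises TypeError at runtime -- the crash the
    # annotation prevents from ever being reachable.
    return sum(groups[key])


groups["b"] = [3]       # fine
# groups["c"] = 3       # rejected by mypy: int is not compatible with
#                       # List[int] -- the error points at the cause,
#                       # not the distant crash site
```

This is the key difference the parent describes: the runtime error tells you where you *used* the bad value, while the type checker tells you where you *created* it.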

as a serious question, why even use python if you have to go through hoops to make it work even half as well as other languages? are you reliant upon some python only library?

My experience when switching to other languages is "why do you have to go through so many hoops to make this work even half as well as Python"...

In my experience, Python is usually used because of a low barrier-to-entry and the availability of a lot of libraries. It's great for slapping something together quickly to do something useful. However, if you want something that's high-performance and well-engineered, it just isn't the right tool for the job usually.

> go through hoops

mypy and types are only "hoops" in comparison to non-annotated Python (in terms of added syntax / coding effort). Compared to explicitly typed languages where you have to declare types, it's just the standard thing you have to do, so there is no extra effort relative to those other languages (and the judgment that it only works "half as well" is controversial, or at least needs qualification).

i prefer ml dialects like f# or ocaml or something like racket that gives you both static and dynamic languages that can interoperate. in f# and ocaml, the type inference handles most things, although you do need to manually declare types sometimes. it is often a good idea anyway.

and what i mean by hoops is that python is not designed to have static types. thus, any type system added is tacked on by definition and will lead to "hoops". in something like f#, at no point will the existence of its type system and inference be a surprise. the language is built around and with the type system.

I suppose you are right. But with a huge code base, migration is not an overnight process...

Couldn't agree enough. I have seen Perl (with automated PerlTidy on check-in) scale to levels I would never even attempt with Python.

Having worked on Quartz, probably one of the handful of biggest Python codebases on the planet, that's just not my experience at all. It scaled very well and was a fantastic environment to work in. It certainly wasn't ideal for every use case, but then what is?

I think dynamically typed languages are on their way out. There aren't many good arguments remaining in favor of them.

I don’t think that’s the problem. I wouldn’t use Python (especially 2.x), but I’d have no hesitation with using Common Lisp. It has lots of great features for programming in-the-large that Python simply lacks.

Genuinely asking: could you provide some specific examples? What does CL have that Python doesn't? (Disclaimer: I never worked with either.)

If I had to point to something, I would single out CL's powerful support for multiple dispatch. I'd hesitate to recommend CL to undisciplined programmers, because it is too easy to write code that works but you don't understand a week later.

(I hammered out that answer right before I had to run on stage. A couple other big items I forgot:)

A proper numeric tower. Python has complex numbers, but they don't seem well integrated (why is math.cos(1+2j) a TypeError?). Fractions are frequently very useful, too, and Python has them, in a library, but "import fractions; fractions.Fraction(1,2)" is so much more verbose than "1/2" that nobody seems to ever use them.
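For the curious, the fractions behavior in Python today:

```python
from fractions import Fraction

# Exact rational arithmetic avoids float drift entirely...
assert 0.1 + 0.2 != 0.3                                      # binary floats drift
assert Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10)  # rationals are exact
assert Fraction(1, 2) + Fraction(1, 3) == Fraction(5, 6)

# ...and they mix fine with ints:
assert Fraction(1, 2) * 2 == 1
```

So the machinery is all there; the complaint is purely ergonomic. `Fraction(1, 2)` after an import is enough ceremony that people reach for floats instead, whereas in CL the literal `1/2` is already a rational.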

Conditions! Lisp's debugger capabilities are amazing. And JWZ was right: sometimes you want to signal without throwing. Once you've used conditions, you'll have trouble going back to exceptions. They feel restrictive.

(I've come to accept that in a language with immutable data types, like Clojure, exceptions make sense. Exceptions feel out of place, though, in a language with mutability.)

Other big wins: keywords, multiple return values, FORMAT (printf on steroids), compile-time evaluation, a native compiler (and disassembler) with optional type declarations.

Lisp is unique among the languages I've used in that it has lots of features that seem designed to make writing large programs easier, and the features are all (for the most part) incredibly cohesive.

Conditions, yes.

If you have multiple dispatch, then building/supporting a numeric tower is natural.

In your other reply you pointed out macros. They are a mixed blessing, easily misused. Other languages have them but use them more sparingly, making it harder to overlook their special status, which leads to better "code smell" in my opinion.

Do take a look at Julia. It has learned deeply from CL and innovated further.

I can imagine how multiple dispatch could make a numeric tower a little easier to implement, but the limitations I see in Python and other languages don't appear to stem from that. You can already take the math.cos of most types of numbers (int, float, even fractions.Fraction, ...) just fine, or add a complex and a Fraction. Python has long dealt with two types of strings, several types of numbers, etc., with the same interfaces. This isn't a difficult problem to solve with single dispatch.
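For reference, Python's stdlib version of this is `functools.singledispatch`, which dispatches on the first argument's type only (unlike CL's defmethod, which dispatches on all of them). A toy sketch with invented names:

```python
from fractions import Fraction
from functools import singledispatch


@singledispatch
def describe(x) -> str:
    # fallback for any type without a registered implementation
    return f"number: {x}"


@describe.register
def _(x: Fraction) -> str:
    return f"rational {x.numerator}/{x.denominator}"


@describe.register
def _(x: complex) -> str:
    return f"complex, real part {x.real}"
```

This covers the "format each numeric type differently" cases fine; where it can't follow CL is something like `add(a, b)` choosing an implementation based on the types of *both* operands.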

Macros are easily misused, true, but so can any language feature. I can go on r/programminghorror and see misuses of if-statements. It's the classic argument against macros, and I hear it a lot, but I can't say I've seen it happen.

25 years ago, conventional wisdom said that closures were too complex for the average programmer, and today they're a necessary part of virtually every language. Could we be reaching the point where syntactic abstraction is simply a concept that every programmer needs to be able to understand?

I think "macros are easy to misuse" comes from viewing macros as an island. In some languages (like C), they are: they don't really integrate with any other part of the language. In Lisp, they're a relatively small feature that builds on the overall design of the language. Omitting macros would be like a periodic table with one box missing. It'd mean we get language expressions as native data types, and can manipulate them, and control the time of evaluation, but we just can't manipulate them at compile time.

Multiple dispatch is a big one.

Also, macros, which are what (the view layer of) CLOS is built with. In most other object oriented languages, the language isn’t powerful enough to implement its own object system.

Having the full power of function calls in all cases (named or anonymous) is incredibly helpful. I know the Python party line is “just use def” but adding a line here and a line there adds up fast. A syntax for function objects is also great.

I’ll also call out the ability to use dynamic or lexical scoping on a per-variable basis. That has saved me hundreds of lines of work, and made an O(1) change actually an O(1) patch.

I think the dynamic/static typing dichotomy is on its way out (finally!). In dynamic languages optional/gradual typing is getting adopted (ex: Typescript/Flow for JS, Mypy for Python). In static languages type inference is getting standard (ex: Rust/Scala/Kotlin, auto in C++11).

I have a very high opinion of Julia's dynamic type system. Some static type systems are not very expressive, e.g., Elm, which encourages hacky workarounds. Julia's type system encourages specificity, which exposes problems early.

That's because it's a strong type system (as opposed to python's weak one where objects can change their shape anytime). Even calling it dynamic is a sort of lie since the compiler is always able to reason about the types it is given due to the way Julia's JIT compiler works. It's "dynamic" in the same way passing around `void*` (or `object`) is "static".

That being said Julia's type system is definitely the way of the future in my opinion.

Python is strongly typed.

See here: https://wiki.python.org/moin/Why%20is%20Python%20a%20dynamic...

"objects can change their shape anytime" is a function of dynamic typing and is orthogonal to strong or weak typing.

In a dynamic language "strong" vs "weak" typing really depends on the standard library. All the examples in the given link - for example - focus on the addition function in the standard library. So here are some counter examples:

Everything can be used as a bool. This was often used to check for None, but had some issues when used with, for example, datetimes, which evaluated midnight as false. In part due to the fact that the integer 0 evaluates to false.

Changing type unexpectedly is the key example given in your link (`"foo" + 3` is `"foo3"`). Meanwhile in python `foo.method()` can change the type of the variable foo. Which is a level of fuckery commonly found in javascript.

Let alone the fact that dynamic duck typing encourages weak typing over performing explicit conversions. This is embodied by "easier to ask forgiveness than permission" which says it's better to catch the type conversion exception than check if the type is an integer ahead of time. Which then leads to implementing javascript-esque add functions anyway.
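For concreteness, EAFP in this sense looks like the following (a hypothetical helper, not from any library):

```python
def to_int(value, default: int = 0) -> int:
    """EAFP style: attempt the conversion and ask forgiveness on failure,
    rather than checking isinstance(value, ...) up front (LBYL)."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return default
```

Whether this counts as "weak typing" is the debate: nothing is implicitly coerced (that's the strong part), but the idiom encourages accepting anything and sorting it out at the call site.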

I'll grant you python is stronger than some other dynamic languages, but it is still at least half an order of magnitude weaker than Julia, which is strong in ways approaching Haskell.

I agree, because type inference in some (TypeScript) is good enough that there's almost no overhead. Back when types took twice as many lines, dynamic typing made sense. But the overhead in TypeScript is maybe 10%; is avoiding that still worth it? I don't think so.

I think we have a ways to go. Typescript's type system is very feature complete (which is a good thing, but it also means there's a lot to learn, and I feel like I have to think about type definitions more than I ever did with other languages), but also, fantastic libraries like Ramda have a lot of trouble with expressing their types with Typescript. It's a tough problem to solve, and I still use both, but I'm just saying we're not there yet.

I agree that the type system is very complex. I've spent a lot of time wrapping my head around it and still get confused sometimes.

Unfortunately Typescript's type system is largely driven by the need to represent anything you can do in JS, for library compatibility reasons.

The main designer also wrote C# and Delphi. In a lot of ways, C#'s type system is better (less complex), but Typescript has the huge advantage of working with existing JS

Hah, I had the same thought when I first started using Python for some personal stuff many years ago. I enjoyed using it and it was quick to get some code out but I thought to myself "surely if you build anything large with this language it will be a massive pain to maintain".

Fortunately(?), I never worked for a company that used it for a large codebase so I never found out if my assumption was correct or not.

You need a QA strategy to match the technology in use. No static type checking means many mistakes will need to be caught in another way. For instance with automated tests, or design-by-contracts. Which might also catch some things that static typing would not cover.

In a non-trivial system with solid engineering, QA concerns quickly go beyond details like which programming language is used. Like how to QA entire systems, sub-system interactions, ensure low time from bug discovered in production to fix deployed, eliminating recurring sources of issues etc.

But if the culture that caused the choice of a dynamic language is "oh its just so much easier and productive to not have to write types all the time!", then you are going to be in for some serious mess and pain in a larger system. That is not a technical problem with dynamic typing/language though :)

That's a very good point: different languages require different QA strategies (do statically typed languages require fewer unit tests?).

My initial thought when working with Python wasn't from a bugs/QA point of view but merely looking at the productivity of a developer working on a large code-base. Things like accurate auto-complete, code discovery, architectural understanding, knowing what 'kind' of object a function returns and so on become more important once the codebase and the amount of engineers working on it increases.

As another example, consider unsafe/safe like C/C++ versus Java/C#/Rust. Serious systems are built in C (cars, airplanes, medical devices) and it can be done OK - but it requires serious investment in QA, targeted at weaknesses of the language.

I've mostly done Java in my career and I tend to stick to it (or Kotlin now). I've always said the power of java isn't in solving programming problems, it's in solving organizational problems. The killer feature that vaulted it to the top and still hasn't been beat is javadoc.

The killer feature that vaulted Java to the top was cross-platform compatibility (regardless of the flaws), including AWT and Swing.

Source: Have been coding in Java since 1.0.

Not javadoc the standard, but javadocs for the standard library. It explains behavior in much more detail than other languages' docs, so I'm rarely surprised.

Why do you think javadoc is better than docstrings? There are docstring-based languages that encourage good practice and can produce what I regard as excellent integrated docs. E.g., Julia's Documenter:


Is there a language that has a better (i.e. more user friendly and decently performant) implementation of iterators and stream processing?

Clojure's stream processing and sequence functions are worlds better than Python's. Clojure's sequences can be lazily evaluated, which allows for much more performant computation. And for stream processing you can't beat composable reducer functions.

The user-friendliness comes both from how uncomplicated they are to write and from how easy they are to process in parallel (a nightmare in Python).
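For readers coming from Python: generators give a rough feel for the lazy-sequence part (though, as the comment notes, Clojure's composable reducers and parallelism story go further). An illustrative sketch, not Clojure itself:

```python
import itertools

# a rough Python analogue of lazy sequences: generators evaluate on
# demand, so even an infinite pipeline only computes what is consumed
naturals = itertools.count(1)            # infinite, lazy
squares = (n * n for n in naturals)      # still lazy, nothing computed yet
first_five = list(itertools.islice(squares, 5))

print(first_five)  # [1, 4, 9, 16, 25]
```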

I've used it in a few jobs. One place, mostly a Java shop, used it almost exclusively in a system that had grown to the point that it'd be hard to replace. The most insightful comment I got about it was that it's not that Python doesn't scale in terms of service instances or performance--it doesn't scale with codebase size and onboarding new developers.

I quit around 2.2, never looked back. Python was a breakthrough scripting language ... in the 90s.

Realistically, how could you know if it's still any good if you haven't used it since 2.2? It's basically a different language now.

The same way everyone else does: fixing other people's broken code. It actually hasn't changed much since 2.2.

It's changed fundamentally since 2.2...

Also if you're fixing broken Python code, you're using Python, so no that doesn't really track.

You know, sometimes people have to fix trashfires; that doesn't mean they'd start one.

What fundamental changes do you see since 2.2? If you're talking about the object model; objects in python were garbage before and after 2.2, and as a paradigm, it's mostly useless bureaucracy. Bleeding edge 90s ideas.

The addition of type annotations alone makes it a hugely different language.
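For anyone who left before Python 3, a small taste of what the annotations look like (mypy is assumed as the external checker; the interpreter itself ignores them at runtime):

```python
# optional type annotations: ignored by the interpreter at runtime,
# but checkable by external tools such as mypy
def greet(name: str, times: int = 1) -> str:
    return ", ".join([f"hello {name}"] * times)

print(greet("ada", 2))        # hello ada, hello ada
print(greet.__annotations__)  # maps each parameter (and the return) to its type
```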

The annotations that nobody actually uses? Call me back in 2030, when they become halfway popular.

Depends on the size of the codebase. They're used extensively at my company (Facebook).


Whoa—you crossed into getting personal here and got personally nasty soon after. That's bannable territory. No more of that, please.


You very obviously didn't use pre-2.2 Python. There were objects. They were shit! They were shit afterwards!

I'm guessing, based on your incredulity, that I'm about 15 or 20 years older than you.

Our age difference explains a lot of your recalcitrance towards new (to you) things.

It amazes me that you dismiss someone with 20 years more experience than you as somehow knowing less.

Python isn't new to me: it's old, and it's crap, and its "evolution" is towards a dead end. Stuff like nodejs will eventually supplant it if it hasn't already, and with good reason (not that I am a huge fan). Python was a novel design and a great choice ... back in the 90s. I mean, use it if you like it; use Forth or Lua or whatever you like. I think it's terrible and should be abandoned wherever possible.

You didn't die. Congrats, but that doesn't give you knowledge I don't have.

Python should be new to you, or at least newer than when you first encountered it. The fact that it's not means you have no clue what changed, so you have no clue if it's any better.

The only thing that needs to be abandoned is this fake idea that older people carry knowledge that can't be expressed except in the form of trust. If you've got reasons, let's hear 'em, but "I'm old" isn't a reason to do anything.

I already gave you reasons; the fact that you can't understand them because you're a language zealot isn't my fault.

Mind boggling really.

"Because it's garbage" isn't a reason.

what do you use now?

Can you give an example of one such abstraction?

You are using Python 2.7 in 2019. There is your problem.

I understand that if you inherited or are maintaining a legacy application you may not have had a choice about 2.7. But if it's a significantly large project and you are not making any attempt to use Python 3 (and the many improvements that come with it, including optional typing, as one commenter mentioned), then don't blame the language; you're on a version that is pretty much in "maintenance mode, please use the new version" mode.

When I looked at Python, it looked interesting. Then I found out that -white space is important-. I thought about what it might be like to worry about that, and decided -not for me-.

When CPUs ran at 1 MHz, I wasn't so sure about FORTH with its RPN. But it ran a lot faster than interpreted BASIC, and was a lot faster to write than assembler. Once the world discovers the source of all its woes and goes back to wide-open, 8-bit systems, I'll go back to FORTH.

> Then I found out that -white space is important-.

I hear this a lot and I think it's a misunderstood statement. Python does not care if you do not have a space in assignments or arithmetic or between commas or parentheses.

What Python does care about is the indentation of the source code. The indentation is what guides the structure - which is already what we are doing with most languages that don't care about indentation!

What I really mean to say is there are plenty of valid complaints with Python, but white space just is not one of them. If you are writing good code in a language with C-syntax you are doing just as much indentation.

>which is already what we are doing with most languages that don't care about indentation!

No that's not what the other languages are doing. They have explicit structure defined in the code (with Lisps being at the extreme end), which allows the development environment to automatically present the code in a way that's easy to read. This frees the developer from the job of manually formatting their code like some sort of caveman.

As someone with 8+ years of experience programming in Python for a job, I've seen countless bugs spawned by incorrectly indented code, which is incredibly difficult to spot. I've seen people far more experienced than me make these bugs because they didn't notice something being misindented. To me, the fact that we have to deal with this is laughable. Especially considering it's so easy to fix Python the language so that indentation becomes unambiguous: add an "end" statement for each deindent (aka closing brace).

> No that's not what the other languages are doing.

Is this a claim that people do not indent in other languages and leave it to the dev environment? Yes, those languages don't rely on indentation but people do still manually indent or rely on their environment to indent it for them to make the code remotely readable. I for one cannot read Java without it also being indented correctly.

> This frees the developer from the job of manually formatting their code like some sort of caveman.

I honestly don't understand this part. What tools are you using that don't do indentation for you? Emacs and vim extensions, vscode, Pycharm, atom...all of them have very intuitive indentation for when you type. The most you have to do is hit backspace after finishing a block.

> Especially considering it's so easy to fix Python the language so that indentation becomes unambiguous: add an "end" statement for each deindent (aka closing brace).

As someone with a lot of Ruby experience, the "end" is absolutely not more clear than indentation. There's a reason environments highlight do-end pairs together: because it's hard to know which ones match which.

>people do still manually indent or rely on their environment to indent it for them

Nobody manually indents their code. Almost any language other than Python is unambiguous to indent, so the computer does it for you.

>I for one cannot read Java without it also being indented correctly.

That's easy, just copy paste Java code into an editor and press a button to indent everything. Voila! Good luck if you're dealing with Python code which got misindented somehow (e.g. copying from some social network website which uses markup that doesn't preserve whitespace, which is most of them).

>What tools are you using that don't do indentation for you?

Python code cannot be re-indented unambiguously. So if you copy paste a chunk of Python code from one place to another, you can't just press a button to reindent everything. You have to painstakingly move it to the right level and hope that it still works. In Common Lisp I just press Ctrl-Alt-\ and everything becomes indented correctly.

>The most you have to do is hit backspace after finishing a block.

That works if you only write new code and never have to change existing code.

>There's a reason environments highlight do-end pairs together: because it's hard to know which ones match which.

No, the open/close brackets allow the IDE to highlight them so that the programmer can clearly see the scope of the code block. This is a useful feature of the language. In Python it's almost impossible to see which deindent matches what if the function is long enough/deep enough.

>but white space just is not one of them.

This position is hard to maintain after you've spent an hour trying to debug a nonsensical error just to realize you opened the Python file in an editor that used a different tab/space setting than the file was created with. Significant whitespace is one of the biggest misfeatures in programming history.

I've been writing Python for over 20 years, and I don't think this has happened to me one single time. I'm not some kind of super programmer; I make as many mistakes as anybody else and spend too long debugging stupid mistakes occasionally.

I've mixed spaces and tabs before on a handful of occasions, and it's always told me straight away what the problem is.

Here is an example of mixed spaces and tabs for indentation:
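A sketch of the kind of file involved, compiled directly so the failure is visible (the file name and contents are illustrative):

```python
# hypothetical file contents mixing a tab (second line) with spaces (third)
mixed_src = (
    "def f():\n"
    "\tx = 1\n"        # indented with a tab
    "        y = 2\n"  # indented with eight spaces
)

try:
    compile(mixed_src, "mixed.py", "exec")
except TabError as e:
    print(e.msg)  # inconsistent use of tabs and spaces in indentation
```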


You'll see that it raises an error:

    TabError: inconsistent use of tabs and spaces in indentation
…and points you to the exact line with a problem.

Here's an example of an incomprehensible error related to mixing whitespace, where it's not obvious it's a mixed whitespace issue:
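One way such an error can arise is a dedent that lands at a level which was never established (sketched here with plain spaces; the source lines are illustrative):

```python
# four-space indent, then a dedent to two spaces that matches no open block
dedent_src = (
    "def f():\n"
    "    x = 1\n"
    "  y = 2\n"
)

try:
    compile(dedent_src, "oops.py", "exec")
except IndentationError as e:
    print(e.msg)  # unindent does not match any outer indentation level
```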


Are we seeing the same thing?

    IndentationError: unindent does not match any outer indentation level
The very first thing I would look at is the indentation if I got an error like that. You hardly need to "spend an hour trying to debug a nonsensical error" when you see an error like that.

Does "indentation error" immediately scream check for a whitespace mismatch to you? Because it doesn't to me.

"Indentation error" screams "fix the indentation", and code is indented with whitespace, so yeah.

When something tells you "indentation error", where's the first place you would look, if not the indentation? When you know you have a problem with the indentation, what do you think you have to change, if not the whitespace? There isn't a lot of opportunity to go in the wrong direction here.

Don't get me wrong, Python definitely has unexpected, difficult to debug behaviour for newbies (e.g. mutable default arguments). But this in particular isn't one of them. This is 2+2=4 level stuff.
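For reference, the mutable-default-argument gotcha mentioned above (hypothetical function names):

```python
# the default list is created once, when the function is defined,
# and then shared across every call that omits the argument
def append_item(item, bucket=[]):
    bucket.append(item)
    return bucket

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] (the same list, not a fresh one)
```

The usual fix is a `None` default with `bucket = [] if bucket is None else bucket` inside the function.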

This is a strange response. The only thing I can say is that "indentation issues" does not imply "mixed whitespace issues". Your responses are implicitly conflating the two.

>what do you think you have to change, if not the whitespace?

Note that the same error shows up when you have an inconsistent number of spaces to indent a block.

> > What do you think you have to change, if not the whitespace?

> Note that the same error shows up when you have an inconsistent number of spaces to indent a block.

And what would the solution to that be, if not changing the whitespace?

Every single piece of information available is strongly pointing in the same direction here.

I swear its like we're speaking two different languages. I really don't see the utility in further effort trying to find common ground.

It's nice when it does offer that specific error, but I've burned many hours over the years when it fails to recognize that as the issue.

Anecdotally, from several sources teaching in tech, it’s the significant white space that makes python much more approachable to non-nerds. For some reason matching nested braces isn’t palatable to them. I attribute Python’s wide adoption in the non-nerd world in part to this (the other part being the ecosystem).

Surely that could only happen if you used a mixture of tabs and spaces for indentation.

The point is that if you open a file with a different editor than it was created with, it's very easy to mix tabs and spaces without any kind of indication that that's what is going on.

But doesn't the whitespace issue gimp Python's lambda syntax? AFAIK you're limited to one expression or function call, to get around the fact that you can't just inline a completely new function (with its associated control flow structure).

And what about one-liners, a la Perl? Stack Overflow answers seem to imply either embedding newlines in your string, or using semi-colons (as a Python newbie... you can use semi-colons?!)

> AFAIK you're limited to one expression or function call, to get around the fact that you can't just inline a completely new function (with its associated control flow structure)

On the other hand, if your function needs a flow structure that's more complicated than a single line, should you really be inlining it?

And I think any such use case that you really want to inline can probably be accomplished with list comprehensions
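The two points above side by side (illustrative names):

```python
# a lambda body is limited to a single expression
double = lambda x: x * 2

# multi-step inline logic usually fits a comprehension instead
evens_doubled = [x * 2 for x in range(10) if x % 2 == 0]

# and yes, semicolons are legal for quick one-liners
x = 1; y = 2

print(double(5), evens_doubled, x + y)  # 10 [0, 4, 8, 12, 16] 3
```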

Why should code in a REPL be indented correctly? What if you copy from your own terminal (which may only support spaces) to your codebase, which may be in tabs? It doesn't make sense for a language exclaiming the principle of we're all adults here to not allow you to schedule irrelevant formatting fixes yourself.

> When I looked at Python, it looked interesting. Then I found out that -white space is important-. I thought about what it might be like to worry about that, and decided -not for me-.

I did the same thing. In fact, this is why I came to comment on this thread: my initial reaction was to turn my nose up at the significant indentation. Then I gave it a try, and I got over it in about five minutes. It just wasn't the big deal I thought it was. I've heard a lot of other people say the same thing.

> Then I found out that -white space is important-.

The only time this should ever be an issue is when you're copy/pasting code from a web page or other source that doesn't preserve white space when you copy.

Otherwise, I'll never understand why this is so hard for people. No matter what language you're using, you should be properly indenting code blocks 100% of the time, and if you're doing that, Python's white-space-as-syntax will never be a problem.

My first big Erlang project made me completely rethink my acceptance of object oriented programming in C++, Java, Python, etc. I realized that I had blindly accepted OO because it was taught as part of my college curriculum. After several years in industry I had concluded that programming was just hard in general. It wasn't until my first project in Erlang, where an entire team of OO devs were ramping up on functional programming, that I discovered that the purported benefits of OO were lies. I also realized that the idea that concurrency and parallelism must be hard is untrue. OO simply makes it hard.

Now I see OO as something I have to deal with, like a tiger in my living room. Thankfully, so many new languages have come out recently; Go, Rust, and Elixir being the ones that I use regularly, that have called out OO for what it is and have gone in more compelling directions.

Hopefully one day they will teach OO alongside other schools of thought, as a relatively small faction of programming paradigms.

Erlang. Same here. Reading the first few pages of a book describing the principles of OTP (processes, share-nothing, messages, etc.) was mind-blowing. The company I worked for at the time (and still do) decided to switch from Java to Erlang in the middleware area. This decision seemed like a mixture of insanity and enlightenment. Do you switch from one of the most popular languages in the world to something most developers have never heard of? Exciting, surely, but will it work? How do we hire new staff?

After our R&D confirmed it was promising, a couple of other developers and I were tasked with rewriting quite an expensive piece of middleware software that was unfortunately reaching its maximum capacity. We had no knowledge of how the software worked; we just knew its API. We were given time to learn Erlang, so we did. We all switched from Eclipse to vim (some to emacs). After a bit of playing around with Erlang we did our job in just 3 months. The new app was much smaller and easily capable of handling many more messages than the previous one. And it was written by Erlang newbies!

Many more Erlang apps followed, and it turned out to be a really good choice. Also, the level of introspection you get out of the box with Erlang is just amazing. I have never seen anything like it before.

Now I can compare Erlang to Java and it is really baffling how the heck Java took over the world. To do Erlang I just need an editor with some plugins, an ssh connection to Linux with OTP installed, and of course rebar3. To do Java I need 4GB of RAM just to run an IDE with a gazillion plugins, Maven to cater for thousands of dependencies for the simplest app, and I need to know Spring, Hibernate, AOP, MVC and quite a chunk of the other 26^3 three-letter abbreviations. No thanks.

I already asked about this in the parent post that refers to Erlang, but do you happen to have a write up by any chance, where you go into more details. I’m super interested! It would be really appreciated. (This is not the first time I hear people praising Erlang in comparison to popular OOP languages)

May I ask what was the book? Do you recommend it?

There are two Erlang books: "OTP in Action" and "Learn You Some Erlang". I highly recommend both of them.

Ironically, Erlang’s message passing makes it more like Alan Kay’s original definition of OOP than the bowdlerised view of OO perpetrated by Java and C++.

Yep. That is why I can not ever interpret the term "OOP" anymore.

Same reaction but different type of project and language. Immutable data with persistent data structures was a game changer for me. There is a place for OO, but it does feel like for the last 20 years our profession has been suffering from collective insanity.

Try coding a large GUI project without OO.

As the other commenter already pointed out, functional reactive programming (React) wiped the floor with OO-style approaches to GUI design. It turns out thinking about interfaces is made considerably easier with one-way data flow.

If you’ll open https://reactjs.org/ you’ll read right on their main page:

> Build encapsulated components that manage their own state, then compose them to make complex UIs.

Components managing their own state is a textbook definition of OOP. They even use inheritance in their example on the main page:

> class HelloMessage extends React.Component

React isn't really object-oriented. Components rarely pass messages to each other. Instead, the way that data flows is through function/constructor arguments. You can directly invoke a method on a component, but that's only really used as an escape hatch. It's inconvenient, and IMO, a code smell.

For the React that I write, class components are used only when there's some trivial local state that I don't want to put into Redux (e.g. button hovering that can't be done in CSS), or when I need to use component lifecycle methods.

And yes, class components do inherit from React.Component, but they specifically discourage creating your own component base classes.

Calling a function of another component is a way of passing a message to another object, no matter what that message is, be it the data flows you mentioned, or anything else.

> React isn't really object-oriented.

I don't do web development but I've read react API docs and user guides.

Objects calling other objects is optional for OOP, I never saw a definition that requires them to do. OOP is about code and data organization.

Objects and methods are everywhere in react. Some are very complex.

Just because it uses a few lambdas doesn't mean it's not OOP.

For reference, here's how a non-OOP GUI library may look: http://behindthepixels.io/IMGUI/ As you can see, not only is it hard to use, it doesn't scale.

Like it or not, OOP is the only way to deal with complex state invented so far. Even in functional languages: https://medium.com/@gaperton/let-me-start-from-the-less-obvi... And modern rich GUIs have very complex state.

> Objects calling other objects is optional for OOP, I never saw a definition that requires them to do. OOP is about code and data organization.

Smalltalk, which is the prototypical OO language, does the exact opposite of everything you said (all computations happen by message passing and all members are public).

> does the exact opposite of everything you said

No it doesn’t.

> all computations happen by message passing

I did not say message passing is required to be not present, I said it’s optional.

> and all members are public

I did not say anything about encapsulation. I said OO is about organization of code and data. If you have classes with properties and methods, it’s OOP.

The model is a data structure and the view is a series of functions on it. The rest is convenience and interface state (render cache, undo stack, etc).
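That claim can be made concrete with a tiny sketch (hypothetical names; any GUI toolkit would sit on top of this):

```python
# model: plain data; view: a pure function from model to rendered output
model = {"todos": ["write tests", "ship"], "done": 1}

def render(m):
    return f"{m['done']}/{len(m['todos'])} done: " + ", ".join(m["todos"])

print(render(model))  # 1/2 done: write tests, ship
```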

Have a look at functional reactive programming.

Facebook is written in React. React is (almost) without OO. There are web and mobile GUIs written in React. Native ones aren't, yet, but that's mostly because it's young and there are no libraries for that.

This is a reply to @lallysingh (sorry, I'm replying here because your post is too old to reply to directly :)

Re "The model is a data structure and the view is a series of functions on it." This is exactly how I see my OO-based GUI programs. The object I define is firstly a data structure. Then I want some operations/views on it? define methods on it.

This sounds fascinating. Do you have a write-up where you go into more details? A couple of examples with before and after perhaps? Seriously that would be amazing.

Static Type Checking.

I used to think Node.js was the greatest thing ever. I won't bother explaining the benefits but suffice it to say I much prefer writing a server in Java compared to node.

I think it takes getting burned at least once for new developers to understand why a lot of seasoned developers like types.

Over time I've realized that there's a simple principle that applies to a lot of stuff in software and engineering in general:

"The bigger something is, the more structure it needs"

Writing a quick script or small application? Sure use Python, use Node, it doesn't matter, but as size increases, structure needs to increase to stop complexity from exploding.

It doesn't just apply to typing either. The bigger a project is, the more you'll want frameworks, abstractions, tests, etc.

If you look around this principle applies to a lot of things in life too, for example the bigger a company is, the more structure is added (stuff like HR, job roles, etc...).

As a corollary, the inverse of the principle is:

"Don't add too much structure to a small thing"

Yes! How much time have I wasted chasing some stupid error that "just worked" in perl which C or Java would have blocked at the get go. Just say NO to languages that try to guess what you were probably trying to do.

You could use Typescript with node.

Good insights. Modern platforms are neither fully dynamic nor rigidly static, but gradual, https://en.wikipedia.org/wiki/Gradual_typing. Start with a dynamic script, add typing as you go to strengthen the system. Notable mentions: TypeScript, Python + mypy, C#, Dart.

This doesn’t encompass all modern platforms, just those that decide to go that route.

Fair enough. 'Quite a few modern platforms...'

And Groovy!

you missed a big one: racket

Immutability. I didn't really understand the benefits of having immutable data structures until I tried building a service with no mutations whatsoever... and noticed I didn't get any weird, head-scratching bugs that took hours to reproduce and debug. That led me to go down the Functional Programming rabbit hole - thus changing my entire view on what code is/could be.

[edit: spelling]
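In Python terms, one cheap way to get that no-mutations discipline is a frozen dataclass: "updates" return new values instead of changing old ones (illustrative names):

```python
from dataclasses import dataclass, replace

# immutable record: assigning to a field raises FrozenInstanceError
@dataclass(frozen=True)
class Account:
    owner: str
    balance: int

# an "update" builds a new value; the original is untouched
def deposit(acct: Account, amount: int) -> Account:
    return replace(acct, balance=acct.balance + amount)

before = Account("alice", 100)
after = deposit(before, 25)
print(before.balance, after.balance)  # 100 125
```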

To add to this: functional programming. Getting rid of state in objects was a DREAM for me.

I used to think: come on you better than though hipsters... this shit looks ridiculous, and it isn't intuitive... There's no way it's worth it to learn. It's just the new fad.

Oh, how I was so so wrong.

Tell us what was your starting point and references used to learn FP

First -- over a few years -- I had slowly started writing functional-ish code in Ruby on the backend and React/Redux on the front-end.

Ruby is kind of nice in that there's not an easy way to iterate over a list without functional code. You start mapping and reducing pretty regularly, and then discover the power of higher-order array functions, and how it lends itself nicely to functional programming.

React/Redux is nice in that it pretty much forces you to wrap your head around the way functional programming works.

React/Redux was definitely a step up from Spaghetti-jQuery for me, but I'd stop short of calling it an enjoyable experience. It wasn't until I started playing around with Elixir that I really fell in love with functional code.

In a lot of ways, Elixir is really similar to Ruby, which makes it pretty easy to dive in (for a Ruby-ist). But in subtle ways, its functional nature really shines. The |> is perhaps my favorite programming concept I've come across. It's so simple, but it forces you to think about -- and at the same time makes it natural to comprehend -- the way data flows through your application.
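For the curious, a rough Python analogue of what |> does (the `pipe` helper is hypothetical, not a standard function): thread a value through a series of functions left to right.

```python
from functools import reduce

# thread `value` through each function in turn, like Elixir's |>
def pipe(value, *fns):
    return reduce(lambda acc, fn: fn(acc), fns, value)

print(pipe("  Hello, World  ", str.strip, str.lower))  # hello, world
```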

Don't get me wrong, Elixir is still very much a functional language. Its allure, in that it looks like an imperative language and has a lot of similarity to Ruby, is misleading.

The learning curve might not be as steep as, say, Lisp's, but it's still quite steep. And I think it'd take around the same time to be meaningfully proficient in either.

Glad I got my Elixir/BEAM rave out for the day.

A "me too" for Elixir. Coming from a Ruby background, picking up Elixir wasn't hard at all (disclaimer: I have done some FP in the past). What I found really good about Elixir is the solid documentation[1], easy comparison of libraries[2], and the mix[3] tool that made starting a project really simple.

But what really blew me away were doctests[4]. Basically I ended up writing my unit tests where my code was. That was my documentation for the code, so there was no need to maintain unit tests and documentation separately.

[1]: https://elixir-lang.org & https://hexdocs.pm/elixir/Kernel.html [2]: https://elixir.libhunt.com [3]: https://elixir-lang.org/getting-started/mix-otp/introduction... [4]: https://elixir-lang.org/getting-started/mix-otp/docs-tests-a...
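Python's standard library has the same idea in its `doctest` module: examples embedded in a docstring double as tests. A minimal sketch:

```python
import doctest

def add(a, b):
    """Add two numbers.

    >>> add(2, 3)
    5
    """
    return a + b

# run the examples embedded in add's docstring
finder = doctest.DocTestFinder()
runner = doctest.DocTestRunner()
for test in finder.find(add, "add"):
    runner.run(test)
print(runner.tries, runner.failures)  # 1 0
```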

My own intro to FP happened via Ruby and Clojure.

I wrote a rinky-dink Web application while teaching myself Ruby and Ruby on Rails. It turned out most of the pages were constructed from a database query returning a batch of results which then needed filtering, sorting and various kinds of massaging. I ended up really enjoying doing this work by applying a series of higher order functions like for() and map() to the data set. This got me started on thinking functionally.

Years later, I decided I wanted to do more with Lisp while continuing to work with the Java ecosystem. Clojure is a Lisp that runs on the JVM and is rather solidly functional. It's possible but very awkward to do mutation. If you don't want to run with shackles, you need to embrace immutability and FP. I found myself fighting FP until eventually I saw the light and was able to productively embrace it.

Here's a "me too:" I'm working on a small-ish Java application with a couple of former FORTRAN developers. Last Thursday we got a bug report from our integration testers saying that some data from earlier messages was showing up in later, unrelated messages.

Sure enough, the data buffers are objects being re-used with mutation rather than being built from scratch per new message. Immutable objects, or a kind of "FP light", would have made this particular problem impossible.

I feel like Rust has shown that mutability is ok, as long as you don't have both mutability and aliasing. Having a mutable object accessible from many places at the same time definitely is a recipe for bugs.

How do you mitigate overhead from making copies all the time with immutability? By using moves and changing ownership a lot?

Some languages deal with this better than others through structural sharing, which reduces overhead significantly. The performance hit is usually unnoticeable in those languages and only becomes a problem in very specific cases, e.g. processing large strings, appending items to long lists, etc. Some of those will cause you to rethink the way you do certain things (that is part of the FP journey).

In languages like JS or Ruby though, you might need to compromise. Generally I start with the immutable approach and refactor if performance becomes an issue.
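Structural sharing in miniature, sketched as a persistent linked list (the `cons` helper is illustrative): "prepending" builds only a new head node and shares the existing tail, so nothing is copied.

```python
# persistent list sketch: each node is (head, tail); prepending shares the tail
def cons(head, tail=None):
    return (head, tail)

base = cons(2, cons(3))
a = cons(1, base)   # conceptually [1, 2, 3]
b = cons(0, base)   # conceptually [0, 2, 3]

print(a[1] is b[1])  # True: both versions share `base`
```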

Our introductory programming course at university used ML and I didn't like it or get it. I already knew some C++, BASIC and Java and was mostly interested in real time graphics programming and the kind of examples used in the ML course were not interesting to me and I didn't see how it would help me tackle the kinds of programming tasks I was interested in.

I found recursion pretty unintuitive and didn't find the way it was taught in that course worked for me. At the time it mostly seemed like the approach was to point at the recursive implementation and say "look how intuitive it is!" while I completely failed to get it.

Many years later, after extensive experience with C++ in the games industry, I discovered F#. Now, with an appreciation of some of the problems caused by mutable state, particularly for parallel and concurrent code, I was better prepared to appreciate the advantages of an ML-style language. Years of dealing with large, complex and often verbose production code also made me really appreciate the terseness and lack of ceremony of F#. And experience with both statically typed C++ code and dynamically typed Python made me appreciate F#'s combination of type safety with type inference, giving the benefits of static typing without the verbosity (C++ has since got better about this).

I still struggle to think recursively and my brain naturally goes to non recursive solutions first but I can appreciate the elegance of recursive solutions now.

I think it's unfortunate that the first (and often only) exposure people get to FP is a build-up-from-the-foundations approach that emphasizes recursion so much; I think it leaves most students with a poor understanding of why functional paradigms and practices are practical and useful.

I've written primarily in a functional language (OCaml) for a long time now, and it's very rare I write a recursive function. Definitely less than once a month.

In most domains, almost every high-level programming task involves operating on a collection. In the same way that you generally don't do that with a while loop in an imperative language, you generally don't do it with recursion in a functional one, because it's an overly-powerful and overly-clunky tool for the problem.

For me the real key to starting to be comfortable and productive working in a functional language was realizing that they do actually all have for-each loops all over the place: the "fold" function.

(Although actually it turns out you don't end up writing "fold"s all that often either, because most functional languages have lots of helper functions for common cases--map, filter, partition, etc. If you're solving a simpler problem, the simpler tool is more concise and easier to think about.)
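
To make that concrete, here's a minimal Python sketch of the same idea, with `functools.reduce` standing in for fold:

```python
from functools import reduce

xs = [3, 1, 4, 1, 5]

# "fold" is the functional for-each: it threads an accumulator
# through the collection. Summing with an explicit fold:
total = reduce(lambda acc, x: acc + x, xs, 0)

# But for the common patterns, the specialized helpers are simpler
# to write and to reason about than a raw fold:
doubled = list(map(lambda x: 2 * x, xs))       # map
odds = list(filter(lambda x: x % 2 == 1, xs))  # filter

print(total, doubled, odds)  # 14 [6, 2, 8, 2, 10] [3, 1, 1, 5]
```

Neither version needs an explicit recursive call, which is the point: the recursion lives inside the library function, once.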

Agreed. Point-free code is a better hallmark of good FP than recursion.
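
For what it's worth, a small Python sketch of the pointful vs. point-free contrast (the `compose` helper is made up for the example; Python has no built-in composition operator):

```python
from functools import partial, reduce

def compose(*fns):
    """Right-to-left composition: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fns, lambda x: x)

# Pointful: the argument `words` is named and threaded through by hand.
def longest_pointful(words):
    return max(map(len, words))

# Point-free: the pipeline is defined purely by composing functions;
# the data argument is never named.
longest = compose(max, partial(map, len))

print(longest(["foo", "quux", "ab"]))  # 4
```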

The lack of secure composability in almost all existing languages. You cannot import untrusted code and run it in a sandbox. Unless you are completely functional, you cannot restrict the effects of your own code either.

The solution to this seems to be the object-capability (key) security paradigm, where you combine authority and designation (say, the right to open a specific file: a path combined with an access right). There are only immutable globals. Sandboxing thus becomes only a matter of supplying the absolutely needed keys. This also enables the receiver to keep permissions apart like variables, thus preventing the confused deputy problem (https://en.wikipedia.org/wiki/Confused_deputy_problem); there is no ambient authority.
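
A toy Python sketch of the idea (illustrative only: `ReadCap` is a made-up class, and Python offers no real confinement, which is exactly the language gap being described; untrusted code could still reach `open` directly):

```python
import os
import tempfile

class ReadCap:
    """Hypothetical capability: designation (one specific path) fused
    with authority (read-only access). Holding the object IS the
    permission; there is nothing ambient to abuse."""
    def __init__(self, path):
        self._path = path
    def read(self):
        with open(self._path) as f:
            return f.read()

def untrusted_word_count(cap):
    # This function is handed exactly the authority it needs:
    # it can read one file, and nothing else.
    return len(cap.read().split())

fd, path = tempfile.mkstemp()
os.write(fd, b"hello capability world")
os.close(fd)

print(untrusted_word_count(ReadCap(path)))  # 3
os.unlink(path)
```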

Even with languages that have security policies (Java, Tcl, ?), control is not fine grained, and other modes of interference are still possible: resource exhaustion for example. Most VMs/languages do not keep track of execution time, number of instructions or memory use. Those that do enable fascinating use cases: https://stackless.readthedocs.io/en/latest/library/stackless...

All of this seems to become extremely relevant, because sufficient security is a fundamental requirement for cooperation in a distributed world.

I recently had a lecture about something related to this in a course on advanced functional programming. Basically, we were shown how you can implement a monad in Haskell that lets you protect sensitive data even when it is handled by untrusted code. Together with the SafeHaskell language extension, which disallows libraries from using operations that could potentially break the invariants, this seems like a very cool concept!

Functional pearl: http://www.cse.chalmers.se/~russo/publications_files/pearl-r... Slides: https://1drv.ms/p/s!Ahd2uwlk3jmIlCZr0spYc_I-OveR Source: https://bitbucket.org/russo/mac-lib

Spectre is another mode of interference, and perhaps one which means we're going to have to give up altogether on running untrusted code without at least a process boundary.

The JavaScript Realms API adds secure object capabilities https://github.com/tc39/proposal-realms/blob/master/README.m...

Tcl has the concept of "safe interpreters" which can be spawned as slaves of the main interpreter. There is a default safe interpreter policy, which is fully scriptable for customization.

Among the available customizations in a safe interpreter are: restriction of use of individual commands, ability to set a time limit for execution of code, a limit to number of commands executed, and depth of recursion.

Memory usage can't be directly limited, but Tcl has trace features that allow examination of a variable value when set, so one could write custom code that prevents total contents of variables from exceeding a specified limit.

Well put. I'd add that this is a case where the decoupling of OS and language design is bad: capabilities are much more valuable if the OS can provide a sufficiently expressive basis for them.

I'm watching Zircon with great interest.


Rich Hickey's "Value of Values"[0] is what finally sold me on the benefits of pure functional programming and immutable data structures. (It remains horrifying to continue working with MySQL in my day job, knowing that every UPDATE is potentially a destructive action with no history and no undo.)

[0]: https://www.youtube.com/watch?v=-6BsiVyC1kM
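
A tiny Python illustration of the update-as-new-value idea, with frozen dataclasses standing in for Clojure's persistent structures:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Account:
    owner: str
    balance: int

v1 = Account("ada", 100)
v2 = replace(v1, balance=150)  # an "update" yields a NEW value

# Nothing was destroyed: both versions coexist, so history and
# undo are trivial, unlike an in-place SQL UPDATE.
history = [v1, v2]
print(v1.balance, v2.balance)  # 100 150
```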

Oftentimes MySQL is set up with autocommit enabled, where every DML statement (like UPDATE) is wrapped in an implicit BEGIN and COMMIT. It doesn't have to be that way, though: you can manage the transaction yourself, you don't have to COMMIT if you don't want to, and you can undo (ROLLBACK) if necessary.
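
The same transaction semantics can be demonstrated with Python's stdlib `sqlite3` (not MySQL, but ROLLBACK behaves the same way):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100)")
conn.commit()  # the INSERT is now permanent

# Manage the transaction yourself: the UPDATE is not durable
# until COMMIT, and ROLLBACK undoes it.
conn.execute("UPDATE accounts SET balance = 0 WHERE id = 1")
conn.rollback()

balance = conn.execute(
    "SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(balance)  # 100
```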

It's true, transactions in MySQL work great. But once the change is committed, the previous value is overwritten permanently. If the user wants to undo five minutes later, or I want to audit based on the value a month ago, we're hosed unless we've done backflips to bake versioning into the schema.

I think Hickey’s comparison to git is apt: we don’t stand for that in version control for our code, why should we find that acceptable for user data?

Because there is vastly more state in the world than there is code, and frankly, most state is not that important. Just wait until you work with a sufficiently large immutable system; they are an operational nightmare.

You should opt-in to immutability when the state calculations are very complex and very expensive to get wrong.

I do wish mainstream languages had better tools for safe, opt-in immutability. Something like a "Pure" attribute you assign to a function: it can only call other pure functions, and the compiler can verify that it has no state changes in its own code.

That feature's "coming": it's part of SQL:2011, called system versioning, currently supported in DB2 and SQL Server, and available in MariaDB 10.3 Beta according to this talk:


I thought the MySQL binlog was the history and mysqlbinlog the tool to help with the undo.

Well put - I came here to say the same :)

This is probably an unpopular opinion: Haskell. I used to think it must be cool (and useful) since people go on about it so much. I spent quite a bit of time learning it, and IMHO the usefulness to practicing programmers is marginal at best. It does present some useful techniques that are making their way into other languages (e.g. Swift optionals), but in general it didn't live up to the hype for me. I feel a lot of the things they go on about are overly complicated to impress.

Haskell has some problems that mean it is not suited to a lot of tasks, esp. the difficulty of predicting space/time performance and integration with OS-level facilities. But it is a serious contender in many cases where correctness is essential: Haskell's type system is expressive and the language makes valuable promises about how code behaves.

I tinkered with it for years before finding the aha moments that really made it way more productive than my day-job languages.

Now I feel I could race a team of programmers in those languages and be far in front.

In what sort of application would you say it's better?

Every one I've tried so far. Command line, server side, and some light web. I haven't tried mobile.

Could you comment on these “aha” moments?

Basically, you start to internalize how to use the tools available there for problems that you'd solve differently in other systems. Then you start looking for ways to get rid of the awkward parts of your code.

The old list of official ahas is: monads, applicative functors, lenses. But really it's about spending time learning them well enough to use them naturally.
