
To me it looks like lock-in. They chose a language good for prototyping and quick iteration, and then their codebase gets stuck with a permanent performance problem.

You see the same problem in Python with regards to correctness - it's hard to refactor Python or change a large codebase and have it keep working correctly, so huge Python projects tend to ossify.

It may be a rational solution in the short term only, but it's still an objectively bad solution overall.




The performance problems arrive very late; you can go very far with Python.

So it makes sense to start with something that lets you be as agile as you can while you don't have money, and then, once you are filthy rich, spend the resources you now have to solve your new problem.

Starting with something that is less productive to avoid a problem you might have later if you are rich doesn't seem practical to me.


> Starting with something that is less productive to avoid a problem you might have later if you are rich doesn't seem practical to me.

This is purely an engineer qualification issue: the hired engineers weren't productive in safer and more performant languages, so they had to fall back to an inferior toolchain to seem like a productive team.


Simple tasks like fiddling with an API's JSON response are always going to take longer with Rust, Haskell, or OCaml than with Python.

Now, doing it well will take as much time in any language.

But a huge amount of time is spent not doing things well:

- you try out ideas

- you explore data shape, results and systems behavior

- you demo things to get a sense of the solutions or to sell something to management or clients

Eventually, you will have a very well defined problem with an optimal solution. But this is the last mile, unless you work for NASA.

E.g.: currently one of my clients is a big financial institution, and they are creating a calculation engine. They have 3 experts on site, but the experts don't have a full understanding of the problem they want to solve, and they disagree on the parts they do understand. The result? We have to rewrite a lot of code, and write even more code in an exploratory manner. This will likely go on for the next few years. You can wish for a better situation, or you can use a tool that makes you productive in that situation.


> Simple tasks like fiddling with an API's JSON response are always going to take longer with Rust, Haskell, or OCaml than with Python.

This isn't true; you are generalising from your own perception of tools you're not as familiar with as you are with Python. For instance, here's how simple JSON API fiddling looks in Haskell, written in a quick-and-dirty prototyping way:

    -- requires packages: aeson, wreq, lens
    {-# LANGUAGE OverloadedStrings #-}
    import Network.Wreq (get, responseBody)
    import Data.Aeson.Lens (key)
    import Control.Lens

    main = do
        result <- get "http://httpbin.org/get"
        print (result ^. responseBody)
        print "---"
        print (result ^? responseBody . key "url")
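
For comparison, here is a rough Python equivalent of the same quick-and-dirty fiddling (a sketch assuming the third-party requests package):

    # requires package: requests
    import requests

    result = requests.get("http://httpbin.org/get")
    print(result.text)                # raw response body
    print("---")
    print(result.json().get("url"))   # pull a single field out of the parsed JSON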

Now, how exactly is it taking longer than with Python?


While I disagree with your "qualification issue", even that can be caused by money. Surely, hiring "more qualified" engineers costs more money, which you may not have in a humble beginning.


As a counter-example: I once had a short conversation with Reddit co-founder Alexis Ohanian about him and Steve dropping Common Lisp in favor of Python. They loved Common Lisp, but I think the expression he used was something like "the Common Lisp code kept falling down", and he said that moving to the Python ecosystem stabilized things.

I share this experience as a Common Lisp developer (for 40 years!). While I feel Common Lisp is a fantastic language for many reasons, I also understand that in many situations the Python ecosystem is a better choice.


That needs expanding, though. A good Common Lisp compiler, e.g. SBCL, does perform a bunch of checks for correctness during compilation, which can be further improved upon by adding static type declarations. The (C)Python interpreter, on the other hand, verifies only the syntax, and in the most shallow way. Type hints there can be used by a linter (and perhaps by an alternative or future implementation of the Python interpreter), but I don't think such type hints were available at the time Reddit chose Python.
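
To illustrate the kind of check a linter or type checker (e.g. mypy) can make from those hints, a minimal sketch with hypothetical names:

    def total_karma(scores: list[int]) -> int:
        return sum(scores)

    # a checker like mypy flags this call ahead of time;
    # the CPython interpreter only fails when the line is actually executed
    total_karma("not a list")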

Python's significant indentation might make the source code more (human) readable (and that's really its magic power), but it poses a challenge during refactoring.

Without question you'll find competent Python programmers more easily than Common Lisp programmers, but that Common Lisp would lead to more fragile code isn't as obvious.


> and that's really its magic power

A lot of discussion here is missing this absolutely crucial aspect.

Code is meant for humans first; expressing what you want to do cleanly is orders of magnitude more important than maximum performance.


> it's hard to refactor Python or change a large codebase and have it keep working correctly, so huge Python projects tend to ossify.

Although I don't work in Python, in general, projects written in dynamic languages can be refactored if they've been developed in a disciplined fashion, that is, with substantial test coverage (I guess that in modern projects, with type checking, the need is significantly reduced). This is not unrealistic (I'm currently working on a large-but-not-huge project, and it's perfectly possible to refactor), although of course, whether a dynamic language with lots of testing is still more efficient (in terms of engineering resources) than a static language with less testing is another story.
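
As a minimal sketch of what that discipline looks like in practice (hypothetical names, pytest-style), the tests pin the behaviour down so the implementation underneath can be reshaped freely:

    def parse_amount(raw):
        # implementation is free to change as long as the tests below keep passing
        return int(raw.replace(",", ""))

    def test_parse_amount():
        assert parse_amount("1,200") == 1200
        assert parse_amount("7") == 7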


I was working on a codebase where testing was inconvenient (re-striping a large RAID array took hours), and I can think of scenarios where testing is impossible (or at least very costly). Test-driven development isn't always the right approach.


Having substantial test coverage is a different concept from TDD (which is an approach to software development).

It's not clear how restriping an array is related to a test suite, not least because if an API is slow to test (e.g. restriping), it's slow with either static or dynamic languages.

In general, if a certain API is impossible to test automatically (this is a common case in both static and dynamic languages, for example with web services), one stubs the API; with dynamic languages one will then be certain that the tested codepaths up to the API are correct. With this strategy, the difference from a static language is minimized (or nonexistent).
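
A rough sketch of that stubbing strategy in Python, using the standard library's unittest.mock (the service wrapper and function names are hypothetical):

    from unittest.mock import Mock

    class BillingClient:
        # hypothetical wrapper around an external web service we can't hit in tests
        def charge(self, customer_id, amount):
            raise RuntimeError("talks to the real service")

    def charge_customer(client, customer_id, amount):
        receipt = client.charge(customer_id, amount)
        return {"customer": customer_id, "receipt": receipt}

    def test_charge_customer():
        stub = Mock(spec=BillingClient)          # stands in for the real API
        stub.charge.return_value = "r-123"
        result = charge_customer(stub, "c-1", 50)
        stub.charge.assert_called_once_with("c-1", 50)
        assert result == {"customer": "c-1", "receipt": "r-123"}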



