I'm in awe of the audacity of slipping unlimited power to construct roads into a bill about parks, while planning to become the person who would have that power. The authority in question was the ability to construct parks anywhere, along with the right to build access roads for them. So he created a series of long, narrow parks and built big fat access roads down the middle. That is why those roads got named "parkways"!
But the Interstates weren't specifically designed to give an advantage to the government in street fighting. Haussmann's boulevards were designed for easy access for artillery (and cavalry?), and to make barricade-construction impossible, as vonnik noted -- not to facilitate movement and redeployment like the Interstates. (I've heard the rumor that the Interstates were meant to facilitate suburbia from the start -- on the theory that if the country's population and economic activity spread out into the suburbs, we'd be harder to ruin with a small number of nuclear weapons.)
That said, Robert Moses was certainly a tyrant of urban planning on the same level as Haussmann. White ethnic populations of the Northeast still remember him, and have not yet forgiven him for his habit of running freeways through their neighborhoods -- ruining the neighborhoods in the process, and in many cases driving their residents out to the ethnically homogeneous blandness of the suburbs.
I'm also reminded of Le Corbusier. Architecture can certainly attract autocrats...
Once upon a time there was a man who built stuff in Berlin for Albert Speer. Philip Johnson was his name, and he was a wonderful artist and a moral monster. He said he went to work building buildings for the Nazis because they had all the best graphics. And he meant it, because he was an artist, as Mr. Jobs was an artist. But artistry is no guarantee of morality.
Your explanation is correct. That is the semantics of Unix stdio buffering as implemented by glibc. So across a wide variety of languages, you'll see a significant performance difference if you are doing lots of writes to the terminal.
On the other hand, if you didn't do this, then interactive terminal programs would be entirely unusable.
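The effect is easy to see without touching glibc itself. Here's a minimal Python sketch (names like `CountingRaw` are my own, purely illustrative) that mimics the two policies: a "terminal-like" stream where each line goes straight through, versus a fully buffered stream where writes get coalesced the way glibc buffers output to a pipe or file:

```python
import io

class CountingRaw(io.RawIOBase):
    """Raw byte sink that counts how many write() calls reach it,
    standing in for the write(2) syscalls a real stream would make."""
    def __init__(self):
        super().__init__()
        self.calls = 0
    def writable(self):
        return True
    def write(self, b):
        self.calls += 1
        return len(b)

# "Terminal-like" behaviour: every line is pushed through immediately.
tty_like = CountingRaw()
for _ in range(1000):
    tty_like.write(b"x\n")

# "Pipe/file-like" behaviour: full buffering coalesces the small writes.
sink = CountingRaw()
buffered = io.BufferedWriter(sink, buffer_size=8192)
for _ in range(1000):
    buffered.write(b"x\n")
buffered.flush()

print(tty_like.calls)  # 1000 underlying writes
print(sink.calls)      # a handful at most
```

A thousand tiny writes become one or two underlying ones, which is exactly where the cross-language performance gap on terminal output comes from.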
The problem with Node.js and concurrency is that everything depends on trusting code to be perfectly written, and that perfectly written code must be written in a naturally confusing callback style that is really easy to screw up.
If you do it perfectly, then you get great scalability. But one bonehead mistake will ruin your concurrency. By contrast with a pre-emptive model you get decent scalability really easily, but now you've got a million and one possible race conditions that are hard to reason about.
This is not a new design tradeoff. Go back to the days of Windows 3.1 or the classic Mac OS versions before OS X. They all used cooperative multi-tasking, just like Node.js. Today what do we have? Pre-emptive multi-tasking in Windows, OS X, *nix and iOS.
Web development has actually gone back to a model that operating systems abandoned long ago. As long as your app is small, there is a chance that it will work. But as your app grows? Good luck! (That is why operating systems uniformly wound up choosing hard-to-debug race conditions over the predictable, impossible-to-solve latency problems of cooperative multi-tasking.)
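The "one bonehead mistake ruins your concurrency" failure mode is easy to demonstrate with Python's asyncio, which is cooperatively scheduled just like Node's event loop (the coroutine names here are made up for illustration). One synchronous call in one task freezes every other task:

```python
import asyncio
import time

async def heartbeat(ticks):
    """Simulates other requests that need regular turns on the event loop."""
    for _ in range(3):
        ticks.append(time.perf_counter())
        await asyncio.sleep(0.05)

async def well_behaved():
    await asyncio.sleep(0.2)   # yields while "working": cooperative

async def bonehead():
    time.sleep(0.2)            # one synchronous call starves the whole loop

async def run_with(worker):
    ticks = []
    await asyncio.gather(heartbeat(ticks), worker())
    return ticks

good = asyncio.run(run_with(well_behaved))
bad = asyncio.run(run_with(bonehead))

good_gap = max(b - a for a, b in zip(good, good[1:]))
bad_gap = max(b - a for a, b in zip(bad, bad[1:]))
# good_gap stays near the 0.05s heartbeat interval; bad_gap balloons to
# ~0.2s because the blocking call froze every other "concurrent" task.
```

A pre-emptive scheduler would have interrupted `bonehead` and kept the heartbeat on schedule; in the cooperative model, nothing can run until the offender yields.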
The reason why people don't want you to ship your version of Python is that it takes a lot of space. Lua makes that hurdle much easier.
For example my local Python venv is about 130 MB. It symlinks in another 52 MB of standard Python libraries. And I know it won't run without some unknown amount of additional dependencies in C-libraries. If I wanted to send that to someone I'm starting at 200 MB. After compression that might be 40 MB or so. You're going to have to work to strip that down.
By comparison their download including Lua, code, and libraries is under 6 MB.
I have a minimal running Python application that was started recently and has done nothing. It takes close to 90 MB of RAM just to start.
Theirs is generally under 10 MB while it is running.
So at every step Python requires 5-10 times as much data. If you're trying to get other people to put you in their containers, this can matter a lot.
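If you want to check your own footprint before arguing about it, a rough measurement is a few lines of Python (the function name is my own; this is a sketch, not how Distelli measured anything):

```python
import os

def tree_size_mb(path):
    """Rough deploy-footprint check: total file bytes under a directory,
    in MB. Skips symlinks so a venv's linked-in stdlib isn't counted."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            if not os.path.islink(fp):
                total += os.path.getsize(fp)
    return total / (1024 * 1024)

# e.g. tree_size_mb(".venv") to see roughly what you'd be shipping
```

Note this only counts what's in the tree; C library dependencies pulled in at runtime won't show up, which is part of why the real shipped size is hard to pin down.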
The point of the Distelli Agent is to deploy software. Think of it as the "bootstrapper". Serious customers would rather use that 200MB for their own software, not for the "bootstrapper" :). Here at Distelli we optimize for the customer, not for our own internal developer productivity.
Good soundbite, but if using 200MB of extra space would let you lower your prices while offering the same service (by increasing developer productivity) then I bet many customers would take that.
And did you try spending a person-week or so of dev time trying to minimize the size of the Python version? That would be a much smaller one-off cost than the costs of migrating everything to Lua, and I could easily imagine you'd get it down by one or two orders of magnitude. (You could do the same for Lua too I'm sure, but if we're talking about say 5 MB vs 200 KB then that's "who cares?" size for most customers).
Nowhere in this article does it really state that they're less productive in Lua, though. In fact, the main engineer on it is well versed in it. Your 5% may actually be negative in this case.
From where do you get this impression that Python is the only high productivity language in the world? There are plenty of languages that can compete with Python in terms of productivity and a lot of them offer better abstractions for it. Lua is known for being one of the best escape hatches for scripting.
If you're more productive in Lua than in Python then I support using Lua 100%. It's switching languages to gain a measly 200 MB of disk space that I'm objecting to. For 99% of projects 200 MB is just an irrelevant consideration compared to how much difference language productivity makes.
For general cases, I agree that 200 MBs isn't enough to consider very important nowadays, but I still don't get how we're automatically talking about language productivity being a trade-off. Is the assumption that Python is a more productive language regardless of what it stands against?
btilly was talking about the disk space as if it were important.
My opinion is that Python is more productive than Lua, but if you disagree by all means use Lua. But if you're talking about saving 200 MB of disk space as though it makes the difference, then I think you're using the wrong criteria for choosing your language.
The problem is how to debug sporadic production problems. It doesn't matter how good your tooling is to debug a problem in a dev environment if you have no idea how to reproduce it there.
You need some level of decent logging in the production environment (with optional extra logging that can be turned on) to capture WTF went wrong. THEN you can try to reproduce it. When a logged system goes boom, you need it to come out of production and remain around until those logs are saved somewhere.
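As a minimal sketch of that "decent logging with an optional extra knob" idea in Python's stdlib `logging` (the handler class and messages here are invented for illustration): keep recent records somewhere durable, run at INFO normally, and flip to DEBUG when you're hunting something.

```python
import logging

class KeptRecords(logging.Handler):
    """Handler that keeps formatted records in memory so they can be
    saved off the box after a crash (the 'remain around' requirement).
    In real life you'd write to a file or a log shipper instead."""
    def __init__(self):
        super().__init__()
        self.records = []
    def emit(self, record):
        self.records.append(self.format(record))

logger = logging.getLogger("prod")
logger.propagate = False
kept = KeptRecords()
kept.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
logger.addHandler(kept)
logger.setLevel(logging.INFO)      # normal production verbosity

logger.debug("dropped: too chatty for day-to-day production")
logger.info("request 1234 handled")

logger.setLevel(logging.DEBUG)     # the 'optional extra logging' knob
logger.debug("now captured for the post-mortem")

print(kept.records)
```

The point is that the knob is flipped at runtime on the live system, without a redeploy, which is exactly what you need when the bug only shows up in production.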
You're right. Coming from the Erlang world, I almost implicitly assume good logging (and extremely useful task-granular crashdumps.) If you're implementing your app on a custom unikernel "platform" that isn't basically-an-OS, the first and most important step is logging the hell out of everything.
There's an alternate approach that I was mentally contrasting with. You tend to see it with e.g. Ruby apps on Heroku: you can't "log into" a live app, but you can launch an IRB REPL—with all the app's dependencies loaded—in the context of the live app.
Having this ability to "futz around in prod" frequently obviates the need for prescient levels of logging. You can poke at the problem until it crashes for you.
Ruby folks, don't let not being in Heroku stop you from taking this approach, because it's the best. pry-remote (are you using irb instead of pry? please stop) will give you the same fantastic behavior. It's part of every persistent Ruby service I deploy (guarded for internal access only, of course, don't be crazy).
I haven't tried deploying a unikernel in production yet - I've been mostly using/debugging only OCaml code on Linux in production - but it should be possible to implement the kind of logging you describe.
For example I've seen a project that would collect and dump its log ring-buffer when an exception was encountered in a unikernel, and one that collects very detailed profiling information from a unikernel.
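The ring-buffer-dumped-on-exception pattern is language-agnostic; here's a minimal Python sketch of the idea (class and event names are made up, not from any of the projects mentioned). Logging is cheap in the steady state because old entries are silently discarded, and the buffer is only flushed when something blows up:

```python
from collections import deque

class RingLog:
    """Fixed-size in-memory log: constant memory in the steady state,
    dumped only when an exception surfaces."""
    def __init__(self, capacity=1000):
        self.buf = deque(maxlen=capacity)  # old entries fall off the front
    def log(self, msg):
        self.buf.append(msg)
    def dump(self):
        return list(self.buf)

ring = RingLog(capacity=3)
for i in range(10):
    ring.log(f"event {i}")

crash_context = None
try:
    raise RuntimeError("boom")
except RuntimeError:
    crash_context = ring.dump()  # the last N events survive for the post-mortem

print(crash_context)  # ['event 7', 'event 8', 'event 9']
```

In a unikernel this would live in the image as a library, per the point below about debugging-as-a-library versus debugging-as-an-OS-tool.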
It would be nice to have some kind of a "shell" to inspect details about the application when something goes wrong, but that applies equally to non-unikernel applications.
The difference is that with unikernels the debugging tool would be provided as a library, whereas with traditional applications debugging is provided as a language-independent OS tool.
2. Salaries are not being kept down in SV. The whole topic here is that they are out of control. Have you actually tried to hire anyone? I have. Even people that telecommute from low cost of living areas want SV salaries. I've had to go outside the US for my last 2 hires.
I think you're getting downvoted because you seem to have misunderstood the parent commenter. They were saying the same thing as what you're saying here, i.e. that salaries are high because "that was deemed illegal and they've stopped".