
Both of those scenarios are more probable. But neither is anywhere near as probable as human error or human malice.

In the case of human error, it could be as simple as a bad cut-and-paste. In that case the number likely has a half-dozen or so factors, some of which should be found relatively easily.
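
To make "found relatively easily" concrete, here is a minimal trial-division sketch in Python (my own illustration, not anything from the original discussion): it pulls out the small prime factors in well under a second, even when the leftover cofactor is far too big to factor.

    def small_factors(n, limit=10**6):
        """Strip out all prime factors of n below `limit` by trial division."""
        factors = []
        d = 2
        while d <= limit and d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        return factors, n  # n is the unfactored cofactor that remains

    # Example: small_factors(720) returns ([2, 2, 2, 2, 3, 3], 5).
    # A number produced by a bad cut-and-paste would typically show a
    # handful of small factors like this, plus one large cofactor.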

Malice is, of course, impossible to verify.


Huge public outlay for wide roads, originally designed for military use and now used heavily by civilians. Where have I heard that before?

Oh right. The US Interstate System! :-)

Incidentally for a similarly influential person in US history you have to look no farther than https://en.wikipedia.org/wiki/Robert_Moses.

I'm in awe of the audacity of slipping unlimited power to construct roads into a bill about parks, while planning to become the person who would have that power. The authority in question was the ability to construct parks anywhere, and the right to build access roads for them. So he created a series of long narrow parks, and built big fat access roads down the middle. That is why those roads got named "parkways"!


But the Interstates weren't specifically designed to give an advantage to the government in street fighting. Haussmann's boulevards were designed for easy access for artillery (and cavalry?), and to make barricade-construction impossible, as vonnik noted -- not to facilitate movement and redeployment like the Interstates. (I've heard the rumor that the Interstates were meant to facilitate suburbia from the start -- on the theory that if the country's population and economic activity spread out into the suburbs, we'd be harder to ruin with a small number of nuclear weapons.)

That said, Robert Moses was certainly a tyrant of urban planning on the same level as Haussmann. White ethnic populations of the Northeast still remember him, and have not yet forgiven him for his habit of running freeways through their neighborhoods -- ruining the neighborhoods in the process, and in many cases driving them out to the ethnically homogeneous blandness of the suburbs.

I'm also reminded of Le Corbusier. Architecture can certainly attract autocrats...


Once upon a time, there was a man here who built stuff in Berlin for Albert Speer. His name was Philip Johnson, and he was a wonderful artist and a moral monster. And he said he went to work building buildings for the Nazis because they had all the best graphics. And he meant it, because he was an artist, as Mr Jobs was an artist. But artistry is no guarantee of morality.

https://benjamin.sonntag.fr/Moglen-at-Re-Publica-Freedom-of-...


How soon I forget the big one(s). Indeed, Hitler himself was more of an architect than anything else (talked about in a digression at http://www.leesandlin.com/articles/LosingTheWar.htm); but I didn't want to Godwin's Law the matter straight out of the gate.

The German Autobahn. https://en.m.wikipedia.org/wiki/Autobahn#History

You will notice this sort of thing more and more as time goes on.

For me the "ah hah" moment was when it hit me that "Where were you when you heard about Challenger?" was my generation's version of, "Where were you when JFK was assassinated?"

Another easily observed one is music. Very few people are aware of much that happened in popular music after they hit 25.


I'm turning 25 in two weeks. So this is it? 2015 was my last year of music? Damn...it went by so fast!

Quick, hurry.

You can listen through the roughly 65 years of pop music made so far. Then, if you live to 80, you only lose about 45.8% of all the pop music you could have heard during your lifetime: the 55 years recorded after you turn 25, out of 65 + 55 = 120 years' worth in total. That's not half bad.

If you include some blues, jazz and classical, that percentage goes down significantly.

Regards, a 29-year-old with 4-year-long amnesia about music.


Your explanation is correct. That is the semantics of Unix stdio buffering as implemented by glibc. So across a wide variety of languages, you'll see a significant performance difference between writing to the terminal and writing to a pipe or file, if you are doing lots of small writes.

On the other hand if you didn't do this, then interactive terminal programs would be entirely unusable.
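
A minimal way to see this for yourself (Python 3 shown here, which follows the same policy as C stdio: line-buffered on a terminal, block-buffered when redirected):

    import sys, time

    start = time.perf_counter()
    for i in range(100_000):
        sys.stdout.write("line %d\n" % i)
    sys.stdout.flush()
    elapsed = time.perf_counter() - start

    # Report on stderr so the timing isn't lost when stdout is redirected.
    print("wrote 100k lines in %.2fs (stdout isatty=%s)"
          % (elapsed, sys.stdout.isatty()), file=sys.stderr)

Run it once in a terminal and once with stdout redirected to /dev/null; the redirected run is typically much faster, since stdout is block-buffered there rather than flushed line by line (terminal rendering adds overhead of its own, too).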


You can use SMP just fine in Node.js today with https://nodejs.org/api/cluster.html.

The problem with Node.js and concurrency is that everything depends on trusting code to be perfectly written, and that perfectly written code must be written in a naturally confusing callback style that is really easy to screw up.

If you do it perfectly, then you get great scalability. But one bonehead mistake will ruin your concurrency. By contrast with a pre-emptive model you get decent scalability really easily, but now you've got a million and one possible race conditions that are hard to reason about.
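
A minimal sketch of that failure mode, using Python's asyncio instead of Node.js (the cooperative model and the pitfall are the same; the names here are mine):

    import asyncio, time

    async def well_behaved(name):
        for _ in range(3):
            print(name, "tick")
            await asyncio.sleep(0.5)   # yields control back to the event loop

    async def bonehead():
        time.sleep(2)                  # blocking call: the whole loop stalls
        print("bonehead done hogging the loop")

    async def main():
        await asyncio.gather(well_behaved("a"), well_behaved("b"), bonehead())

    asyncio.run(main())

While bonehead() sits in its blocking sleep, neither of the well-behaved tasks gets to run, even though their timers have expired. One such call anywhere in the codebase and your concurrency is gone, which is exactly the property that pre-emptive scheduling takes off the table.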

This is not a new design tradeoff. Go back to the days of Windows 3.1 or classic Mac OS, before OS X. They all used cooperative multi-tasking, just like Node.js. Today what do we have? Pre-emptive multi-tasking in Windows, OS X, *nix and iOS.

Web development has actually gone back to a model that operating systems abandoned long ago. As long as your app is small, there is a chance that it will work. But as your app grows? Good luck! (That is why operating systems uniformly wound up choosing hard-to-debug race conditions over the predictably impossible-to-solve latency problems of cooperative multi-tasking.)


The reason people don't want you to ship your own version of Python is that it takes a lot of space. Lua makes that hurdle much lower.

For example my local Python venv is about 130 MB. It symlinks in another 52 MB of standard Python libraries. And I know it won't run without some unknown number of additional C-library dependencies. If I wanted to send that to someone, I'd be starting at around 200 MB. After compression that might be 40 MB or so. You're going to have to work to strip that down.

By comparison their download including Lua, code, and libraries is under 6 MB.

I have a minimal running Python application that was started recently and has done nothing. It takes close to 90 MB of RAM just to start.

Theirs is generally under 10 MB while it is running.

So at every step Python requires 5-10 times as much data. If you're trying to get other people to put you in their containers, this can matter a lot.


PyInstaller will generate a ~minimal installation. I've written an app with a full Python runtime, images, and dozens of Python files and modules in 6 MB.

If we're going minimal, the base Lua install is still measured in hundreds of kilobytes, not megabytes.

Minimal means that it will bring in only those modules that are really used by your application. And that includes lib*.so.

> It takes close to 90 MB of RAM just to start.

A minimal Python app on my machine takes 6MB RAM.
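
If anyone wants to check numbers like these on their own machine, a rough way to do it from inside the interpreter (Unix-only; Linux reports ru_maxrss in kilobytes, macOS in bytes):

    import resource, sys

    # Peak resident set size of this Python process so far.
    peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    print("Python %s, peak RSS about %.1f MB"
          % (sys.version.split()[0], peak_kb / 1024.0))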


A minimal, freshly created lua_State on my machine takes about 4 kilobytes of RAM.

So what? It does more. Are you running it on a pocket calculator?

Brilliant.

OK, so the original post doesn't say that, which kind of misses the whole point of the section that explains why Python was a bad choice.

Or it can not matter at all. To the kind of enterprise that is still running RHEL 6, maybe shipping a 200mb install on a CD will show them that you're a serious business.

There are use cases where 200mb of disk space matters, but for the overwhelming majority of applications, a 5% (say) increase in developer productivity would be well worth adding 200mb to the install.


The point of the Distelli Agent is to deploy software. Think of it as the "bootstrapper". Serious customers would rather use that 200MB for their own software, not for the "bootstrapper" :). Here at Distelli we optimize for the customer, not for our own internal developer productivity.

Good soundbite, but if using 200MB of extra space would let you lower your prices while offering the same service (by increasing developer productivity) then I bet many customers would take that.

And did you try spending a person-week or so of dev time trying to minimize the size of the Python version? That would be a much smaller one-off cost than the costs of migrating everything to Lua, and I could easily imagine you'd get it down by one or two orders of magnitude. (You could do the same for Lua too I'm sure, but if we're talking about say 5mb vs 200k then that's "who cares?" size for most customers).


Nowhere in this article does it really state that they're less productive in Lua, though. In fact, the main engineer on it is well versed in it. Your 5% may actually be negative in this case.

From where do you get this impression that Python is the only high productivity language in the world? There are plenty of languages that can compete with Python in terms of productivity and a lot of them offer better abstractions for it. Lua is known for being one of the best escape hatches for scripting.


If you're more productive in Lua than in Python then I support using Lua 100%. It's switching languages to gain a measly 200MB of disk space that I'm objecting to. For 99% of projects 200MB is just an irrelevant consideration compared to how much difference language productivity makes.

For general cases, I agree that 200 MBs isn't enough to consider very important nowadays, but I still don't get how we're automatically talking about language productivity being a trade-off. Is the assumption that Python is a more productive language regardless of what it stands against?

btilly was talking about the disk space as if it were important.

My opinion is that Python is more productive than Lua, but if you disagree by all means use Lua. But if you're talking about saving 200mb of disk space as though it makes the difference, then I think you're using the wrong criteria for choosing your language.


He found it through a computer search that not only proved that axiom worked, but also proved that no smaller axiom could.

See https://en.wikipedia.org/wiki/Wolfram_axiom for more.


The most important modification being the editorializing, which is most of the value to be found in reading this version. :-)

The problem is how to debug sporadic production problems. It doesn't matter how good your tooling is to debug a problem in a dev environment if you have no idea how to reproduce it there.

You need some level of decent logging in the production environment (with optional extra logging that can be turned on) to capture WTF went wrong. THEN you can try to reproduce it. When a logged system goes boom, you need it to come out of production and remain around until those logs are saved somewhere.
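
As a rough sketch of what I mean, in Python's standard logging module (the APP_DEBUG variable name is just an example I made up):

    import logging, logging.handlers, os

    file_handler = logging.FileHandler("app.log")

    # Hold verbose records in memory; they only hit disk when an ERROR-level
    # record arrives (or the buffer fills), so normal operation stays cheap.
    buffered = logging.handlers.MemoryHandler(
        capacity=10_000,
        flushLevel=logging.ERROR,
        target=file_handler,
    )

    root = logging.getLogger()
    root.setLevel(logging.DEBUG if os.environ.get("APP_DEBUG") else logging.INFO)
    root.addHandler(buffered)

    root.debug("detailed state worth having when things go boom")
    root.error("boom")   # flushes everything buffered above it to app.log

The point being: capture a lot, persist it only when the system actually misbehaves, and make the extra-verbose mode a switch you can flip in production.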

It is old, but http://lkml.iu.edu/hypermail/linux/kernel/0009.1/1307.html is an interesting data point about IBM's experience. They implemented exactly the kind of logging that I advocate in all of their systems until OS/2. And the fact that they didn't have it in OS/2 was a big problem.



You're right. Coming from the Erlang world, I almost implicitly assume good logging (and extremely useful task-granular crashdumps). If you're implementing your app on a custom unikernel "platform" that isn't basically an OS, the first and most important step is logging the hell out of everything.



But that's the case for every system of sufficient complexity (that I've seen).



There's an alternate approach that I was mentally contrasting with. You tend to see it with e.g. Ruby apps on Heroku: you can't "log into" a live app, but you can launch an IRB REPL—with all the app's dependencies loaded—in the context of the live app.

Having this ability to "futz around in prod" frequently obviates the need for prescient levels of logging. You can poke at the problem until it crashes for you.



Ruby folks, don't let not being in Heroku stop you from taking this approach, because it's the best. pry-remote (are you using irb instead of pry? please stop) will give you the same fantastic behavior. It's part of every persistent Ruby service I deploy (guarded for internal access only, of course, don't be crazy).

https://github.com/Mon-Ouie/pry-remote



I haven't tried deploying a unikernel in production yet - I've been mostly using/debugging only OCaml code on Linux in production - but it should be possible to implement the kind of logging you describe. For example, I've seen a project that would collect and dump its log ring-buffer when an exception was encountered in a unikernel, and another that collects very detailed profiling information from a unikernel.

It would be nice to have some kind of a "shell" to inspect details about the application when something goes wrong, but that applies equally to non-unikernel applications. The difference is that with unikernels the debugging tool would be provided as a library, whereas with traditional applications debugging is provided as a language-independent OS tool.



Just to add some links for those (I assume these are the ones you mean):

Dumping the debug-level log ring-buffer on exception:

http://lists.xenproject.org/archives/html/mirageos-devel/201...

Detailed profiling (interactive JavaScript viewer for thread interactions):

https://mirage.io/wiki/profiling



I use Go on App Engine and have never been able to SSH or GDB those machines. Nevertheless, I am still able to debug issues in my app.

I admit that debugging is easier on platforms where you get more control.



A good chunk of that is not the bubble. It is due to the end of the anti-poaching collusion among the top tech companies, which had been keeping salaries down.



This is wrong on two levels:

1. That was deemed illegal and they've stopped.

2. Salaries are not being kept down in SV. The whole topic here is that they are out of control. Have you actually tried to hire anyone? I have. Even people that telecommute from low cost of living areas want SV salaries. I've had to go outside the US for my last 2 hires.



You misunderstood me. I'm saying that the fact that they stopped means that we should expect salaries to go up. Because they were artificially lowering salaries before, and now they stopped that.



I highly doubt they have stopped. They just got smarter about how they do it.



Right. Thanks for the clarification.



I think you're getting downvoted because you seem to have misunderstood the parent commenter. They were saying the same thing as what you're saying here, i.e. that salaries are high because "that was deemed illegal and they've stopped".

