ekidd's comments | Hacker News

Looks interesting! How is ECS working out so far for you?

And what's your minimum per-application cost for a single-container Docker web app with SSL?

ECS is working out great for us. We host all of our own internal infrastructure on Convox and have been very happy.

The minimum per-application cost would be for an ELB. Counting the slice of the runtime cluster needed to run it, I'd estimate around $20-25 per app as a minimum.

But I wonder how much resistance you would get from the military, veterans, military families, and so on who make the argument that for every robot we make a human soldier doesn't have to be put at risk.

On the other hand, do soldiers really want to defend themselves against flying, high-speed IEDs with target-recognition software? I mean, I've seen malfunctioning drones move so fast that I lose sight of them. Does anybody really want to see one of these things come over a compound wall carrying a payload of high explosives, and software for identifying groups of human targets and dodging defensive fire?

Once you start an arms race, and once several big powers do the R&D, this would not be an easily controlled technology.

> On the other hand, do soldiers really want to defend themselves against flying, high-speed IEDs with target-recognition software? I mean, I've seen malfunctioning drones move so fast that I lose sight of them. Does anybody really want to see one of these things come over a compound wall carrying a payload of high explosives, and software for identifying groups of human targets and dodging defensive fire?

You basically just described a fire-and-forget missile, which is a technology that has been on the battlefield for over three decades.

A fire-and-forget missile is a single-direction device with minor corrections for targeting. It can't hover, back up, select its own target, avoid return fire, etc. So, no, we haven't had this tech for three decades.

> A fire-and-forget missile is a single directional device with minor corrections for targeting.

No, it isn't. A fire-and-forget missile is a missile capable of dealing with every issue between the launching platform and the target. This is much more complex than "minor corrections for targeting".

> It can't hover, back up

These are a function of a particular propulsion system, not guidance system. The vast majority of non-rotorcraft cannot hover or back up.

> select its own target

That is exactly what a fire-and-forget weapon does. The firing platform directs the weapon at a particular target to start, but the weapon makes the decision about what to hit. If it loses lock, it tries to reacquire. It does not necessarily reacquire the same target. In fact, you could blindfire most FF weapons and let the seeker pick a target in its path of travel, if you really wanted to. Rules of engagement typically prohibit this, but it is technically feasible.

> avoid return fire

Evasion is certainly something current weapons are theoretically capable of. It is not typically in the package, though, because it adds cost, size, and weight. Once these systems get to the point that they can be added to drones in a cost-effective manner they will likely be added to single-use weapon systems as well.

> So, no we haven't had this tech for three decades.

It has been a constant march of progress, but yes, we have had weapons that can make targeting decisions for themselves for over three decades. The Mk-48 torpedo[1] has been in service since 1972, and has had the ability since then to travel a predetermined search pattern, looking for targets and automatically attacking whatever it finds. The Mk-60 CAPTOR mine[2] has a similar capability to discriminate and engage targets. The RGM-84 Harpoon[3] is launched by providing one or more "legs", then activating the missile's seeker to find and acquire a target; it is not actually fired "at" a particular ship in the conventional sense of the word.

[1] https://en.wikipedia.org/wiki/Mark_48_torpedo

[2] https://en.wikipedia.org/wiki/Mark_60_CAPTOR

[3] https://en.wikipedia.org/wiki/Harpoon_(missile)

From the article: "When was the last time you looked at /etc and thought, 'I honestly know what every single file in here is'? Or, for example, had the thought, 'Each file in here is a configuration change that I made'?"

I first started administering Linux servers and workstations in 1998, and /etc/ was already full of scores of config files, including such horrors as XF86Config and M4-generated sendmail.cf files. (And sendmail.cf itself dates from the 80s.) There were conf.d directories and mysterious networking configuration files, nearly all of which were supplied by the OS.

In other words, this frog was thoroughly boiled 20 years ago. If the world you're looking for ever existed, it would have been back when VAXes still roamed the earth.


Ditto. My name's likely in your Linux distro, but I also started in '98, and even then you would never expect a given package to work without default config files.


I recently discovered that I can do this on Linux, too!

After over two decades of using Unix and Linux, I ran into jq, a tool for querying and transforming structured JSON pipelines: http://stedolan.github.io/jq/

This can be used to do many of the things you demonstrate with PowerShell. Here's an example from the docs:

    curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | \
      jq '.[] | {message: .commit.message, name: .commit.committer.name}'
This outputs:

    {
      "message": "Add --tab and -indent n options",
      "name": "Nicolas Williams"
    }
    ... more JSON blobs ...
You can output a series of individual JSON documents, or convert the output to a regular JSON array. You can also output raw lines of text to interoperate with existing tools. And this works especially well with commands like 'docker inspect' that produce JSON output.
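A quick sketch of those two modes, using jq's real `-s`/`--slurp` and `-r`/`--raw-output` flags (the sample JSON here is invented for illustration; `-c` just compacts the output):

```shell
# Collect a stream of separate JSON documents into one JSON array (-s):
printf '{"n":1}\n{"n":2}\n' | jq -sc 'map(.n)'
# prints: [1,2]

# Emit raw text instead of JSON strings (-r), so the result can be
# piped straight into grep, sort, xargs, and other line-oriented tools:
echo '{"name":"web","image":"nginx"}' | jq -r '.name'
# prints: web
```

The raw-output mode is what makes jq a good bridge between JSON-producing commands and the traditional Unix toolbox.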

I think that PowerShell and jq get this right: More command-line tools should produce explicitly-structured output, and we should have powerful query and transformation tools for working with that output. For day-to-day use, I do prefer jq's use of JSON over a binary format.


I've written a tool inspired by jq that builds on top of ramda and LiveScript. The motivation behind it was to be able to write pipelines with tools that already exist, instead of having to learn new syntax that can't be used outside that domain.


The example from before would look like this:

    curl https://api.github.com/repos/stedolan/jq/commits\?per_page\=5 |\
      R 'map -> message: it.commit.message, name: it.commit.committer.name'


That's cool, though PowerShell deals in actual objects, which are more powerful than better-structured input and output. For example, the objects returned by ps have methods for interacting with them.


I would argue the object approach limits the universality of the PowerShell way as a general-purpose computer interface, because it binds it to a particular flavor of object system (e.g., .NET) and to mutable data, which is venom to general-purpose pipe-and-filter systems.

jq looks very interesting, though note that it builds on an underlying notion of text to serialize JSON.


You can parse streams of text in PS too, so it's not like cmdlets are making the pipeline any less powerful.

As for binding to .NET, I don't think that's very limiting. A surprising amount of stuff is already easily hosted in .NET.

I would argue that all PS needs to be more competitive is: Ports to Linux/*BSD/OSX and a terminal implementation for Windows that doesn't suck.

Cmd.exe is a piece of shit that needs to die:

- Command editing. Is support for at least ^A, ^E, ^W, and friends too much to ask?

- Completion. Who wants to cycle through 4253 possibilities one at a time?

- Copy/paste. Programs actually STOP executing when you select text, as if you'd hit ^Z in Unix. Even with QuickEdit enabled (so you don't have to ALT-SPACE-E-ENTER, <select stuff>, ENTER), the fastest way to paste is 4 keys: ALT-SPACE-E-P.

- No SSH. Microsoft is addressing this. It's borderline criminal that Windows doesn't already ship with an OpenSSH server.

- No screen/tmux. I can't even talk about how it deals with resizing without using a lot of profanity.

- Lack of echo by default is seriously annoying.

In short, make the terminal feel like Putty and the editing/completion features feel like bash and I think PS could give all existing shells a run for their money.


When CS people use category theory, they're often looking for a framework which allows them to build analogies between the lambda calculus (via a cartesian closed category) and various categories for things like mathematical logic, posets or probability. And category theory is a very natural framework for thinking about these connections.

So it would be useful to have category textbooks for CS folks which spent less time on topology, and more time on familiar categories.

(Also, I've never quite figured out the motivation for adjoint functors. I can understand the definitions, but I don't understand why adjoints are useful. The motivating examples almost always involve unfamiliar categories.)


> So it would be useful to have category textbooks for CS folks which spent less time on topology, and more time on familiar categories.

Agreed, and I should have been clearer in my post above that I was just airing some thoughts on category theory rather than meaning to imply anything negative about the importance and usefulness of X-flavoured viewpoints on category theory for various values of X (including "computer science"). With that said, don't knock topology! See Baez and Stay's 'Rosetta stone' article http://arxiv.org/abs/0903.0340 for a plethora of connections between such seemingly abstruse concepts as topology and computer science.

> (Also, I've never quite figured out the motivation for adjoint functors. I can understand the definitions, but I don't understand why adjoints are useful. The motivating examples almost always involve unfamiliar categories.)

It may be helpful to think first of Galois connexions (https://en.wikipedia.org/wiki/Galois_connection and http://ncatlab.org/nlab/show/Galois+connection). I can't find any relevant blog posts now, but I know I have seen them discussed in a CS context that may seem more natural than that of 'abstract' adjoint functors. Here is a non-free article: http://link.springer.com/chapter/10.1007%2F3-540-17162-2_130 .
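To make that concrete, here is the standard floor/ceiling Galois connection (a textbook example, not taken from the linked articles), written out as an adjunction between posets viewed as categories:

```latex
% A poset is a category: there is an arrow x -> y exactly when x <= y.
% A monotone map f : C -> D is left adjoint to g : D -> C precisely when
%     f(x) <= y  <=>  x <= g(y)   for all x, y   (a Galois connection).
%
% Example: the inclusion \iota : \mathbb{Z} \to \mathbb{R} has both adjoints:
\[
  \lceil x \rceil \le n \iff x \le \iota(n)
  \qquad\text{and}\qquad
  \iota(n) \le x \iff n \le \lfloor x \rfloor ,
\]
% so ceiling is left adjoint to the inclusion and floor is right adjoint:
%     \lceil\,-\,\rceil \;\dashv\; \iota \;\dashv\; \lfloor\,-\,\rfloor .
```

The useful intuition: a left adjoint is the "best over-approximation" and a right adjoint the "best under-approximation" of an inverse that doesn't exist on the nose.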


IIRC, Gershom Bazerman's talk https://vimeo.com/72870861 is about precisely that (Galois connections, adjunctions).


They just show up everywhere, adjoints. Often you'll then be able to use their uniqueness and limit-preservation properties. Typically CS-style category theory is a little bereft of categories to make maximal use of their appearance, but they're there.

I gave a talk at LambdaConf this year mentioning that free/forgetful adjunctions are a great way to understand free structures. There's also the fact that forall/exists arise as adjoints, and you can use this to draw immediate conclusions like product preservation.


This is an amazing way to improve your knowledge of a foreign language. When I started reading French novels, I could mostly understand maybe 60% and guess another 30%. But after 10,000 pages, I was only running into unknown words every few pages, and my understanding of idioms was vastly better.

The same thing works for listening comprehension: Find a DVD box set of easy TV series, one where you can understand maybe 40% of the dialog, and just start watching. It's OK to use a good dub of a series you've already watched. Repeat this with, say, 5 series and you'll see an amazing boost in listening comprehension. TV series are usually better than films for this exercise, because they'll give you 50+ hours of mostly the same people speaking about a limited set of topics, which helps a lot in the beginning.

It seems like the brain is very good at upgrading partial comprehension to nearly complete comprehension. But the trick is getting to an enjoyable level of partial comprehension.


There is a technique called "Listening-Reading" that's fairly well known within the language-learning community.

The idea is straightforward:

Get a novel-length text in your target language, a high-quality recording of that text, and a literal translation of the text in your native language. Alternate between reading the original text and the translation while simultaneously listening to the recording.

The method is supposedly fantastic, but it's incredibly difficult to gather suitable materials. While widely-translated books like Harry Potter seem ideal, the translations are not literal and occasionally the audiobooks do not match the texts exactly.

Further info:




They have interlinear translations of lots of religious texts (and surely also audiobook recordings), but religious texts are not everyone's favourite reading material.


Thanks for the links! I'm going to try this out with French.


A Hungarian friend learned German with Star Trek The Next Generation this way.


Yeah, probably a third of my (arguably limited) German vocabulary comes from watching Star Trek on Sat.1.


I liked to watch subtitled shows for this reason. I seem to pick up words as I go. Nouns seem to be the easiest to figure out, followed by adjectives and verbs, but this might vary depending on just how those words get morphed by the grammar in some languages.


I'm unconvinced that accepting large amounts of technical debt allows startups to react faster.

I've definitely seen startups that have almost found a good idea, but can't tweak it, scale it, or make it stable, because they're overwhelmed by technical debt and even simple changes have become engineering death marches.

Letting your back-end code turn into a complicated, disgusting mess means that adding critical features may become a multi-week nightmare. Which is then followed by a series of additional multi-week nightmares for each successive feature.

Now, this isn't to say that obsessive code-polishing is a good idea, either. But startups require rapid iteration, and you can't iterate especially rapidly when your code is complicated, badly organized, and poorly tested.


One of the most valuable startup skills is knowing which kind of technical debt has a low interest rate and which kind has a high interest rate. Certain things - lack of tests, a poor build system, a poor deploy system, very poorly structured code - have a very high interest rate. Every time you create bugs, you waste time tracking them down and fixing them. Every time you check something in that breaks the dev environment, you waste the rest of the team's time. On the other hand, messy code that is isolated to one system, or an architecture with a bit of copy and paste, is usually not too big a deal. Going on some deep dive to create a "generalized framework solution" is almost always an error.


Some people talk about Reversible Decisions, and I think what you're saying plays into that sentiment.

Unfortunately some people don't realize that in many situations not making a decision is itself a decision, and so they don't always notice it when it happens. That can be anything from the ones you mentioned, like testing and tools, to authentication, auditing, localization, robust error handling, resource/memory leaks, or monolithic designs that prevent scaling.


> I'm unconvinced that accepting large amounts of technical debt allows startups to react faster.

Accepting large amounts of technical debt enables you to deliver faster if your initial assumptions (both about what you want and how to achieve it) are correct, but slows you down if any of those turn out to be wrong.

OTOH, it might offset that slowness in whole or in part by speeding up your ability to determine that they are wrong.


As Keynes famously observed, "Markets can remain irrational longer than you can remain solvent" (while betting against them). Markets can look bubbly for many years before the bubble bursts.

Having lived through both the first dot-com bubble and the recent real-estate bubble, I've noticed a pattern:

1. Natural cynics think things are looking a bit bubbly.

2. Years pass.

3. Society as a whole becomes heavily invested in the bubble. I can't go to a cocktail party or family reunion without people buttonholing me to talk about tech stocks or real estate.

4. An ideology is invented to explain why assets of category X will undergo very long-term exponential growth. During the dotcom boom, it was "Dow 36,000". During the real estate boom, it was "If you don't buy a house now, you'll never be able to afford one." (If you're cynical, you might assume this kind of delusional hype is an effort to find one last fool to buy in before the party's over.)

5. The whole thing explodes messily.

So for me, those are the warning signs: Society-wide buy-in, and new ideologies that explain why "X will always go up." Oddly, this means I'm not too worried by a near-term tech bubble, despite the hype and money in San Francisco: I'm not seeing the widespread public buy-in and rampant wishful thinking I've seen before. Of course, this may be because few companies are undergoing IPOs, and the bubble is limited strictly to private capital.


> I'm not seeing the widespread public buy-in and rampant wishful thinking I've seen before.

I am... there is way too much funny money flowing into companies that have virtually no chance of ever becoming profitable. There's also a ton of institutional money being funneled into late-stage companies (in much the same way that it funneled into real estate in 2007), trying to replicate the Yuri Milner/Facebook model -- but how many of those companies will become Facebook? If you need a laugh, check out the latest interview with the Slack CEO:

It’s pretty straightforward. I’ve been in this industry for 20 years. This is the best time to raise money ever. It might be the best time for any kind of business in any industry to raise money for all of history, like since the time of the ancient Egyptians. It’s certainly the best time for late-stage start-ups to raise money from venture capitalists since this dynamic has been around.

And as a board member and a C.E.O., I have a responsibility to our employees, to our customers. And as a fiduciary, I think it would be almost imprudent for me not to accept $160 million bucks for 5-ish percent of the company when it’s offered on favorable terms.

We don’t have an immediate use for that money. But it increases the value of our stock and can allow potential employees to take our offers, and it reinforces the perception for our larger customers that we’ll be around for the long haul. All of that stuff.


That last sentence is rather valid. The US alone has been pumping trillions after trillions into the US and global economy, which has not resulted in general official inflation. But that money went somewhere: it went into corporate coffers, financial houses, and the private pockets of the wealthy, who are all "investing" it like it's not their money, which it isn't, really.

I am not sure, but it does not strike me that there is another example of this type of scenario in all of humanity's history. It's not like the lead-up to the 1929 market crash, because there are no margin calls; the government just gave away free money and even paid the wealthy and their shell organizations, aka corporations, to steal even more money. It's just unimaginable amounts of money sloshing around in the global market without anything productive to apply itself to. It doesn't want to construct residential housing in the US, because that would depress prices and rents; it keeps being pumped into land and real assets, maintaining artificial housing and rent prices, ....

Although I think there is a great store of inflation just barely being held back by a policy dam, if that dam doesn't break, I think the effect will be a social one. Desperate regimes of the wealthy, paranoid about losing the wealth they looted, will resort to ever-increasing repression to assure their Precious remains with them and their heirs for generation after generation.

Progress is made when there are constraints, problems need solving, and things need improving. But what happens when we are all essentially trust-fund babies with no parameters and no limitations, losses are meaningless because it's not money we earned, and "success" is easily claimed as such because it is measured by state rather than delta? By all current measures, if you have a successful business or economy, no one really cares that you essentially bought the facade of success with your undeserved trust-fund / made-up money. We can see the effects on the poor, where the impact of our policies is being felt in silent suffering. I suspect the result of at least the last 10 years of policy will be, in the short term, an era of impoverishment not seen in 100 years in the USA, and, in the longer term, another era of stagnation and eventual oppression as humanity turns in on itself, if the unearned and unwarranted wealth is not recalled. Ultimately, what wealth, i.e., money, is, is a relative measure of power, and the US government, especially, gave the crooks that defrauded us and almost destroyed America the keys to the vault.


I'm hardly the expert to comment on this, but I'll make the response just for future historical reference (maybe I am completely wrong). Certainly there is a bubble in asset prices now, presuming interest rates do not remain zero or negative forever.

The great irony is that at some point extra money can deflate prices rather than inflate them, depending on where that money is applied. E.g., build enough solar-panel factories in China, and eventually everyone gets to flood the market with cheap solar panels. So it is not clear that more cheap money will increase prices for a very long time. The second irony is that if the Fed and other central banks hit their goal of increasing prices, the wealthy won't feel anything, but it will hit the poor and middle classes very, very hard. As in choosing-when-to-eat-meals hard.

The thing most analysts miss is that there is all of this 'new' money, but we still have huge future liability holes: both debt, which requires future cashflows to pay off, and future promises which may not be able to be met. This is very apparent in underfunded pension funds. Despite many different metrics pointing to all-time highs in US stock market valuations (there are many ways to cut this pie), we still have big pension funds that are 50% funded or less (Illinois and Chicago, for one), and that is with expected 7%+ returns year after year into the future. If their future gains are a lot lower, it's even worse. Pension fund problems are visible; insurance, annuities, etc. are opaque. There could be huge problems there still. Banks still aren't that well capitalized, either.

We could imagine quantitative easing, money printing, and cheap money as something that could be shoveled into huge holes of "negative" money. As long as those huge holes exist, the stimulus is not going to perform as expected. If the stimulus comes through more debt, it's plausible these holes are staying the same size or getting larger. Certainly it is possible that all of this money has already done a great deal, including preventing widespread bank failures.

My general thesis is that we are in a period where technology and monetary policy are in direct and absolute conflict with each other. Monetary policy is totally reliant on inflation to pay off future debt and claims. Technology, on the other hand, is about delivering more at a much cheaper price. We want a market for $1000 iPhones, not $1 billion iPhones. We want software that delivers 1000x more value to a customer, not software that costs 1000x more. Fuck, HN wouldn't be here, and we would all be living in a bizarre universe, if things were the other way around. Maybe there would be a global market for 10 supercomputers, or whatever was once predicted.


> presuming interest rates do not remain zero or negative forever

Slightly off topic, but I don't believe US interest rates can go up without creating a massive issue for the US economy. From the link below: "a 5% increase in interest payments for the federal government would cause the level of federal debt to rise to $85 trillion over the next 20 years". So should the Federal Reserve return interest rates to a historically normal level before the government significantly reduces its debt (which seems unlikely), they would likely crash the US economy.



> "a 5% increase in interest payments for the federal government would cause the level of federal debt to rise to $85 trillion over the next 20 years"

So if interest rates suddenly rise to 5% and the US didn't make a single debt payment for twenty years, total debt would explode? What insight! Then again, if nobody cares that no debt payment was made for 20 years, who cares what the total debt is!

The article says at 5% the debt payment is $900B annually, which is 5% of GDP. High, but hardly debilitating. First you need to explain what purpose would be served by the Fed raising rates to 5% tomorrow, though.
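For the record, that arithmetic checks out against rough 2015-era numbers (federal debt and GDP of roughly $18 trillion each; both figures are my assumptions, not the article's):

```shell
# Back-of-envelope check, all figures in $billions (rough 2015 assumptions):
DEBT=18000   # ~US federal debt
GDP=18000    # ~US GDP
echo $(( DEBT * 5 / 100 ))               # annual interest at 5%: prints 900
echo $(( DEBT * 5 / 100 * 100 / GDP ))   # as a share of GDP: prints 5 (percent)
```

So the $900B / 5%-of-GDP figures are internally consistent; the real question is whether debt keeps growing while rates rise.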


You must be totally horrified that the EU is doing their own QE.


I've been using several Docker components heavily, on a real system. This was the state of play just prior to the Docker 1.6 announcement:

- Docker registry, the old pre-2.0 version: I hate it. It's incredibly slow, and it raises lots of errors.

- Docker 1.5: Mostly stable and usable if you're on the right kernel, occasionally does something weird.

- docker-machine (from git): Very nice for provisioning basic docker hosts locally or on AWS. Nice feature: it's capable of automatically regenerating TLS certificates when Elastic IP addresses get remapped.

- docker-compose 1.1.0: Kind of a toy, but a fun toy, and it generally did what it advertised.

- docker swarm: With docker-compose 1.1.0 and docker 1.5, it was pretty much unusable. Simply running "docker-compose up -d" twice in a row was enough to make it fail with random errors.

I'm going to re-evaluate swarm with docker 1.6 and the new docker-compose.


I totally support Heroku's right to charge whatever they want, of course. They're expensive, but they provide good value for the money.

Historically, I have a bunch of personal free-tier apps which sleep most of the time, and a couple which are up most of the time. My consulting clients, on the other hand, have paid Heroku thousands of dollars per month at various points. I also maintain a buildpack.

But at this point, it's time I get off my backside, set up a docker host on EC2, and containerize the stuff I care about. I can probably pack everything onto a pretty small instance, and I've been meaning to deploy some non-HTTP services. Besides, it's closer to what I'm using in production.

Thank you, Heroku, for some very enjoyable years, for top-notch paid service, and for the free hosting!


