As a non-native English speaker, I find the omission of "the" very prominent and interesting in SpaceX speech. "shortly before first stage shutdown", "resulting in loss of mission", "139 seconds into flight", "some period of time following separation", "data to determine root cause" -- is this a general theme in engineering or journalism? I wonder what linguists have to say about this.
In these cases the phrases are being used as proper nouns for preexisting events on the timeline. For example, I wouldn't say that I went for a walk "five minutes before the noon," I'd say "five minutes before noon." They're speaking in a jargon that treats these events the same way that you or I would talk about Monday or midnight and which also gives them precise meanings. If the main engine shut down on its own at an unexpected time then you could say "shortly before the main engine shut down." This lets you say things like "The main engine lost power 30 seconds before main engine shutdown" and be only slightly confusing rather than nonsensical.
It's standard in military debriefings (aerospace at least), especially for flight test debriefings. The flight test pilots I've worked with, mostly former U.S. Navy test pilots, all spoke like this during their debriefings and wrote in the same way for their flight logs, report-outs, etc.
"First stage shutdown". "Loss of mission". "Flight". I'm not sure what the term for it is, but these seem to refer to generalized phases/states of flight that the mission can be in. The use of "the" would refer to specific instances of the noun that the speaker has reason to believe the listener would know about. 
They almost feel like "uncountable" nouns, but I'm not sure. "Flight" can be countable, but as a generalized state/phase, I think it wouldn't be. It doesn't sound strange to me (a native speaker).
It would not be wrong to use "the" in many or all of these cases, but it would change the meaning somewhat.
I see it like I see git commit messages. You need to convey as much information as possible in as few words as possible. Dropping articles that don't add anything to the context is a way to keep it short, sweet, and punchy, I guess.
Not a linguist but a writer here who loves language.
I'm seeing a lot of folks dropping articles. It's also becoming quite common to eliminate pronouns at the beginning of sentences, e.g., instead of "I went down to the store" you write "Went to the store."
I use this purposefully to jar the reader. I have no idea what the underlying linguistic reasons are.
Implied pronouns, perhaps a trend from Facebook usage? On Facebook and Twitter you have an implied "I" that makes saying you did something redundant. This could transfer over to other formats of conversation.
The mathematics generalizes to 3D quite easily, but the way the WebGL pipeline is used in this case doesn't. It'd be easier, I think, to start from scratch. I recommend reading through the GPU Gems article on the topic, which includes code samples.
Most points raised by the author are criticisms of specific Cucumber step implementations. I understand that this is intended as criticism not of Cucumber-the-tool but of Cucumber-the-process, but of course, if you're using your tools wrong, nobody's going to just fix your process for you.
The awkward step naming and the routing issue all come from the default steps generated by an integration like Webrat or Capybara. In recent projects (and I think this is now the default for newer versions of Capybara), no steps are automatically generated for you; you have to choose the level of detail you want to operate at yourself. Comparing the character count of a method call with that of a Gherkin step is also a rather useless metric.
The "doesn't share code with my test env" issue is a trivial fix: move the code shared between your test and Cucumber environments out of test_helper.rb into another file, then require it from both Cucumber's env.rb and test_helper.rb.
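A minimal sketch of that layout (the file name shared_helper.rb is hypothetical; put whatever both environments need there -- factories, custom matchers, login helpers, and so on):

```ruby
# test/shared_helper.rb  (hypothetical name)
# Code needed by both the test suite and Cucumber lives here.

# test/test_helper.rb
require_relative 'shared_helper'

# features/support/env.rb
# Cucumber auto-loads everything under features/support/.
require_relative '../../test/shared_helper'
```

After this, neither environment duplicates setup code, and a change to a shared helper is picked up by both.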
Personally, I write Cucumber features exactly because they mean I don't have to think about routing, or paths, or syntax. I try to put myself in the role of the user and write down what I want to accomplish, and how. A key indicator for reasonably abstracted feature files might be the equivalence "I changed something in a .feature file" IFF "I have to communicate a change to my users."
But if you want to shoot yourself in the foot, you really have to do it yourself.
The entire idea is that, while keybase stores the pubkey, you don't have to trust them to deliver the right key. They have basically rolled their own type of digital certificate that's stored across a variety of social services. For example, on Twitter you tweet something like "I'm <fingerprint of your key> on keybase.io!". The keybase server then tells the keybase CLI: "Bob's pubkey is <key>, and I'm right because https://twitter.com/bob/tweets/1234 says so." The CLI verifies that the tweet URL really contains the right fingerprint. This extends the trust root to the Twitter user (and your local HTTPS CA store). Repeat for a variety of services similar to Twitter, and the trust root becomes the union of all the social site accounts of the keybase user. Whether you choose to trust those is (as always with trust roots) entirely up to you.
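A toy sketch of that client-side check, with the "network" simulated by a hash of URL-to-page-text (all names and the fingerprint scheme here are made up for illustration; the real client fetches proofs over HTTPS and uses PGP fingerprints):

```ruby
require 'digest'

# Toy fingerprint: a hex digest of the key material. Real PGP
# fingerprints are derived differently; this is just a stand-in.
def fingerprint(pubkey)
  Digest::SHA256.hexdigest(pubkey)[0, 16]
end

BOB_KEY = "bob-public-key-material"

# Simulated network: proof URL => page text. In reality the client
# fetches these over HTTPS, so the trust root includes your CA store.
PROOF_PAGES = {
  "https://twitter.com/bob/status/1234" =>
    "I'm #{fingerprint(BOB_KEY)} on keybase.io!",
}

# What the (untrusted) keybase server claims about a user.
ServerClaim = Struct.new(:username, :pubkey, :proof_urls)

# Client-side check: believe the key only if every claimed proof page
# really contains that key's fingerprint.
def verify_claim(claim, pages)
  fp = fingerprint(claim.pubkey)
  !claim.proof_urls.empty? &&
    claim.proof_urls.all? { |url| pages.fetch(url, "").include?(fp) }
end

honest = ServerClaim.new("bob", BOB_KEY,
                         ["https://twitter.com/bob/status/1234"])
forged = ServerClaim.new("bob", "attacker-key-material",
                         ["https://twitter.com/bob/status/1234"])
```

The point is that `verify_claim` never has to trust the server's word: a forged key fails because the real tweet carries the honest key's fingerprint.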
There's something in-between, though it doesn't actually break the paradox, just makes some different tradeoffs:
In cryptography, a ring signature is a type of digital signature that can be performed by any member of a group of users that each have keys. Therefore, a message signed with a ring signature is endorsed by someone in a particular group of people. One of the security properties of a ring signature is that it should be difficult to determine which of the group members' keys was used to produce the signature.
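To make that concrete, here's a toy Schnorr-style (AOS) ring signature over a tiny prime-order group. The parameters are demonstration-sized and completely insecure; real schemes use elliptic-curve groups. The structure is the interesting part: anyone can verify that the ring of challenges closes, but only someone holding one member's secret could have closed it.

```ruby
require 'digest'

P = 2039   # safe prime, P = 2*Q + 1  (toy size, NOT secure)
Q = 1019   # prime order of the subgroup generated by G
G = 4      # generator of that order-Q subgroup mod P

def h(msg, point)
  Digest::SHA256.hexdigest("#{msg}|#{point}").to_i(16) % Q
end

def keygen(secret)
  G.pow(secret, P)   # public key y = G^x mod P
end

def ring_sign(msg, pubkeys, s, secret)
  n = pubkeys.size
  c = Array.new(n)
  t = Array.new(n)
  u = rand(1...Q)
  c[(s + 1) % n] = h(msg, G.pow(u, P))
  # Walk the ring from s+1 around to s-1, choosing random t's.
  (1...n).each do |k|
    i = (s + k) % n
    t[i] = rand(1...Q)
    point = (G.pow(t[i], P) * pubkeys[i].pow(c[i], P)) % P
    c[(i + 1) % n] = h(msg, point)
  end
  # Only the real signer can solve for t[s], using their secret key.
  t[s] = (u - secret * c[s]) % Q
  [c[0], t]
end

def ring_verify(msg, pubkeys, sig)
  c0, t = sig
  c = c0
  pubkeys.each_with_index do |y, i|
    c = h(msg, (G.pow(t[i], P) * y.pow(c, P)) % P)
  end
  c == c0   # the chain of challenges must close
end
```

Note that a verified signature looks identical regardless of which ring member produced it, which is exactly the anonymity property described above.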
That's pretty cool. I was under the impression that it was just validating that the twitter/github/etc. accounts are owned by the same person, but using twitter/github/etc. to validate that the keybase key is correct is much cooler.
However, it's not foolproof. If your attacker (e.g. the NSA) can compromise some of the proof sources, they can return only the compromised proof sources to the client, so the client never sees anything that contradicts the malicious key. And if the user doesn't know offhand which twitter/github accounts belong to the person they're contacting, the attacker can simply substitute their own sources that claim to be the same person, without requiring any compromise of the proof site at all.
For example, if I don't know already that maria's Twitter is maria_h20, then the attacker who has compromised keybase can instead return maria_g20, a Twitter account under their control that has posted a proof of the malicious key.
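A small simulation of that substitution attack (handles, URLs, and the fingerprint scheme are all invented for illustration). The attacker genuinely controls maria_g20 and posts a perfectly valid proof of their own key there; a naive client that trusts whatever handle the server names is fooled, while a client that pins the handle it learned out of band is not:

```ruby
require 'digest'

def fingerprint(key)
  Digest::SHA256.hexdigest(key)[0, 16]
end

ATTACKER_KEY = "attacker-key-material"

# The attacker really controls maria_g20 and posts a valid proof there.
PAGES = {
  "https://twitter.com/maria_g20/status/99" =>
    "I'm #{fingerprint(ATTACKER_KEY)} on keybase.io!",
}

# Naive client: checks only that the server-named account proves the key.
def naive_verify(pubkey, proof_url, pages)
  pages.fetch(proof_url, "").include?(fingerprint(pubkey))
end

# Pinning client: additionally insists the proof come from a handle the
# user already knows out of band (e.g. from a business card).
def pinned_verify(pubkey, proof_url, pages, known_handle)
  proof_url.include?("/#{known_handle}/") &&
    naive_verify(pubkey, proof_url, pages)
end

url = "https://twitter.com/maria_g20/status/99"
```

The proof itself is cryptographically fine in both cases; the difference is purely whether the client knows which account to expect.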
I get that tracking a user will mitigate this, but that only helps if I've already interacted with the user before and chosen to track them.
What if you don't use Twitter, Facebook, GitHub, etc.? Do they have a way of establishing trust that does not involve signing up for various identity-harvesting social networking services? Maybe a certificate that you can host on your own web server or something?
but more types are in the works. Though proving your identity on more than just your personal website has its advantages. For one, I don't know anything about your hosting situation or your security practices.
This seems very sensible. For example, at a key-signing party, as I understand it, ideally everyone knows someone there, or can authenticate via two separate government IDs. The benefit here is that keybase.io has removed "government" from the equation.
The client verifies the key by checking that the signed tweet, gists, etc., all exist and were signed by the private key that matches that public key. So to get the server to successfully lie, one would need to coordinate lies from twitter, github, etc., all at the same time.
And it would be really hard to get all those services to lie without the recipient being able to tell: they could see that their tweet contains a different fingerprint than they expected. You'd have to get all those services to lie only to the sender, not the recipient, which is very hard to do without getting caught.
The Byzantine trust issue is really just about all agreeing on a value. A handle/name and public key pair is such a value. We can distribute public keys in a blockchain. This pushes MITM attacks down to the last mile between a blockchain server and client, and to the identifier exchange. The user can run their own blockchain server and authenticate via sneakernet if need be, and we can at least be assured that you are communicating with the person whose handle matches the key (even if it might not be the person we think it is).
A blockchain makes a good backend for a web of trust because it lets us use the above to link an identifier with a public key and server. One of the primary scaling issues with web-of-trust-based authentication is that names get long fast: PersonA.PersonB.PersonC.Otherguy. A blockchain allows for a universal naming scheme and a central place to store high-confidence links, which is faster than querying the peers I trust and asking if they have a link to an arbitrary node.
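A minimal sketch of such a registry: an append-only hash chain mapping handles to public keys, where first registration wins and anyone can re-derive the chain to detect tampering. (This is only the data structure; a real system also needs consensus, signed updates, and rules for name transfers.)

```ruby
require 'digest'

# One registry entry, linked to its predecessor by hash.
Block = Struct.new(:handle, :pubkey, :prev_hash) do
  def block_hash
    Digest::SHA256.hexdigest("#{handle}|#{pubkey}|#{prev_hash}")
  end
end

class Registry
  def initialize
    @chain = []
  end

  def register(handle, pubkey)
    prev = @chain.empty? ? "genesis" : @chain.last.block_hash
    @chain << Block.new(handle, pubkey, prev)
  end

  # First registration wins; later blocks cannot silently replace it.
  def lookup(handle)
    block = @chain.find { |b| b.handle == handle }
    block && block.pubkey
  end

  # Anyone can re-derive the hashes and detect a rewritten entry.
  def valid?
    prev = "genesis"
    @chain.all? do |b|
      ok = (b.prev_hash == prev)
      prev = b.block_hash
      ok
    end
  end

  attr_reader :chain
end
```

Rewriting an old (handle, key) pair changes that block's hash and breaks every link after it, which is what pushes the attack surface down to the initial identifier exchange.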