For Google, ART seems bigger than just Android. Consider the degree to which their infrastructure depends on the Oracle JVM and the associated strategic risk. As one datapoint, recall the Oracle vs. Google Java lawsuit. How much additional ART development effort is required for correct execution of non-AWT (Abstract Windowing Toolkit) Java applications (essentially, headless server processes)? I know the Java/JVM ecosystem well, but I have not done any Android development. Surely, Google wants control over the destiny of its core software stack.
The article doesn't mention one seemingly huge benefit of JIT compilation: profile-guided optimization.
Perhaps the baby has been thrown out with the bathwater?
Little is mentioned about how ART compares to the JVM. For example, does ART perform escape analysis? Not all object allocations are equally bad. The HotSpot JVM can figure out which objects never escape their allocating method and keep them off the heap entirely (via scalar replacement), an optimization which reduces the burden placed on the garbage collector because such objects effectively live and die with the stack frame. [Please fact-check me as I'm merely a long-time Java developer vs. an expert on JVM internals]
I learned that panhandlers, at least in Chicago, interpret polite negative responses as opportunity and will continue to bother me. So, I have taken to saying flatly, "Not happening." It's not so rude or demeaning that I inadvertently pick a fight, but it's blunt enough to let them know that they're just wasting their time with me.
When I'm in a place where these folks will follow you around, I turn as robotic as possible and answer once (or maybe twice to make things perfectly clear) with a simple "no". I try to leave no room for interpreting playfulness or aggressiveness.
To add to the collection of work-arounds posted here, most hotels seem to have reasonably modern/common printers. Often, they are connected to the untrustworthy hotel PC by a USB cable. It seems faster to unplug the printer from the hotel PC and install drivers on one's own laptop than it is to figure out how to gain access to the hotel's crappy computer. Hotel printers connected to hotel computers via ethernet/WiFi also likely have working USB ports, so one simply could bring his own cable with a "B" plug. I'm sure there are ways a malicious person could install rogue printer firmware, etc., but the likelihood of such threats existing in the wild is 1/1000th that of the sum total likelihood of evil existing on hotel PCs.
I suppose the relevance of my entire comment hinges on the presumption that anyone reading HN only uses hotel PCs for printing stuff. Valid?
I was thinking about the sorts of fraud categories AirBnB likely experiences. Most fraudsters want cash or cash equivalents, and the use of lodging on a particular night is nearly as illiquid as stolen fine art. So, those seeking stuff to resell will choose to defraud one of the zillion online merchants who ship stuff to doorsteps. A buyer who actually used the space he reserved could initiate a chargeback later claiming that the service promised via AirBnB wasn't provided -- couldn't access apartment, wasn't as described, etc. However, space providers likely will cooperate with AirBnB and provide evidence in their defense. Better to attempt a chargeback elsewhere if one is short on money. It seems that using AirBnB as a platform for crimes between buyer and space provider is possible, and there certainly has been at least one heavily publicized case, but we would hear a lot more about these events if they were happening often.
So, what's left? Collusion between buyer and space provider -- in all likelihood, they are one and the same, or identities have been stolen. For example, I list my condo on AirBnB for $100/night. Someone books it for the weekend, and then doesn't show up. AirBnB owes me $200 -- after all, I gave up other options to profit from its use. An honest buyer pays up. But, maybe the buyer is dishonest -- he used a stolen credit card, etc. In this case, AirBnB eats the loss and pays me as the space provider. Now, wouldn't it be convenient if I was also the buyer? Cash from stolen credit cards, funneled through AirBnB (much akin to the way online poker sites were used to transfer stolen money via bad heads-up play). This would work until AirBnB noticed that my listing seems to have a suspicious propensity to attract fraudulent buyers. Then, they'll shut me down. So, I'll pop up elsewhere. After all, no need to actually have a space because no one I accept will ever show up!
I bet the usage patterns of the party/parties involved in this fraud are drastically different from those of legitimate market participants. Someone with a fraudulent listing could out himself by rejecting a bunch of legitimate AirBnB buyers, and this behavior would stand out as it's the opposite of the behavior expected of an honest seller. So, he must protect against this risk by making his listing unappealing (high price, bad photos/description, unpopular location, etc.). The behavior of users browsing AirBnB when viewing this property could identify its relative undesirability (few clicks, etc.), and price outliers could be identified by comparing similar offerings by date/location/type. The click stream of the "buyer" likely is most revealing. Someone selecting an unappealing property without doing much comparison shopping likely isn't a legit buyer.
What other stuff might predict fraud? Vague descriptions might indicate a fraudulent listing. Most space providers love to tell buyers what's special about their offering. Could some scoring of a listing's prose prove a strong predictor? I've never listed with AirBnB. What do they do to verify listings? As a buyer, they verified my identity. Could this serve multiple purposes? Certainly, I'd feel better listing my guest room if I know that AirBnB will know the identity of the guy who rented the room and then stabbed me at 3AM. But, in addition, does identifying market participants in strong ways help keep fraudsters from repeating their crimes by setting up multiple accounts? Obviously, newer market participants are more risky than established ones, especially those who have interacted with known legit, long-time users. The social graph comes to the rescue here. Even astroturfing ought to show up as a small, disconnected graph unless legit users' identities are stolen.
Of course, this comment is all just conjecture. Obviously, AirBnB can't tell the public about specific fraud methods or how they identify suspicious activity. However, I like the concreteness of considering actual fraud scenarios, so I decided to put forth some ideas for discussion.
Most of the comments here presume that it's unacceptable for a drone ship to break down in the middle of the ocean and be without crew to repair it. What if these ships were designed so that nothing too awful would happen if they floated around in the middle of the ocean for a while awaiting another ship with a human crew to perform repairs? Or, maybe another drone ship or two to tow a broken one back to land?
My understanding of container shipping is that customers make SLA choices much akin to us Americans choosing between UPS Ground/2nd Day Air/Next Day, etc. UPS uses these varied SLAs to smooth out its use of fleet capacity and for price discrimination. Shippers operate transshipment ports as part of distribution networks much like the hub & spoke designs of the major airlines. These ports have a bunch of shipping containers sitting around awaiting capacity.
Consider the needs of companies that must transport low-value, high weight/bulk cargo. These companies likely already choose the "UPS Ground" equivalent for container shipping. Due to low product value, inventory costs are low (in-transit goods are inventory), so it's probably less expensive to have buffers of goods in the supply chain than it is to pay for tight shipping SLAs. Why should these companies care if the variance they experience in shipping duration is due to capacity constraints of manned ships or that it took an extra two weeks to fix the ship upon which their cargo was in transit?
I've always seen local governments as the root cause of "last mile" providers' ability to turn their customers into the product. Why haven't elected officials made more stringent demands upon the companies given monopoly (or at best duopoly) rights to convey bits to and from my home?
I have a condo in Chicago and was delighted to learn that a company was offering our building last mile connectivity via microwave along with SLAs for not only bandwidth but latency as well (to what level I don't recall)! Sadly, the condo board didn't seem so enthralled. Unlike the suburbs, city folk have more options apparently.
In my experience, nothing stops a facilities provider from serving whomever they like. Video franchises--which don't apply to data service in most states--are almost always non-exclusive. For example, in Seattle, the dominant provider (Comcast) has a franchise that specifically states it is non-exclusive. Another company, Wave Broadband, also has a franchise to serve the city but only serves specific portions of it. Meanwhile, companies like Condo Internet are free to provide gigabit service to whomever they like, so they serve only the buildings that are the most profitable.
The biggest problem is that last-mile is a very, very, very expensive proposition. Ask Verizon. Their former CEO, Ivan Seidenberg, learned that the hard way. Digging up streets, attaching to poles, and plopping down equipment all costs a lot of money. It also, in the case of cabinet-sized telco equipment, can really irk the neighbors.
Ultimately, if we want real competition for more than just high-end condominiums, we need to start treating physical plant connections like roads, power lines, and water pipes: a central, neutral authority builds them for the benefit of a specific geographical area and then all comers are allowed to use them. (No, HOAs, building your own coax network and then having some cut-rate third-tier ISP become the exclusive owner of those wires doesn't count. That's not competition, either.)
With regard to Netflix, it's not always the last mile that's the problem.
Specifically, Netflix's ISP has an agreement with your ISP, which says essentially "You know what? We send and receive a lot of data. How about I won't charge you for your data travelling across my network, and you don't charge me for my data on your network." But then Netflix's ISP sends enormous amounts of traffic, and your ISP doesn't really get to take equal advantage of that reciprocal deal.
It's actually about peering agreements between ISPs. Your ISP wants Netflix's ISP to foot the bill because the supposedly reciprocal agreement is, in terms of traffic, heavily one-sided.
The customers of the ISPs are the ones who obtain value and demand fast delivery of Netflix. The bandwidth from Netflix is something that can't really be avoided, so it is advantageous for both the ISP and Netflix for the ISPs to cache the content using Netflix's Open Connect CDN or whatever it's called.
If the ISPs don't deliver and instead throttle Netflix's bandwidth, and it's noticeable to the customer, then hopefully the customer has the option of going to another ISP.
A friend of mine lives in a large apartment building next to the Loop in Chicago. The building was unable to negotiate a competitive plan so they went with satellite. Clearly the local market has failed when satellite offers a better package.
What's notable is that no one has yet posted here saying, "Go stunk for me. Perf and reliability were both awful. I went back to blahblah and threw away all my Go code." It's usually easy to find detractors of any technology.
I actually have seen complaints about Go's performance relative to other compiled languages, like C++, D and Java. I don't think it's coincidental that most of the Go converts come from a dynamic-language background. If you're porting from Ruby, 2-3x slower than Java still seems blazing fast.
For me, Go misses the sweet spot entirely. A year ago I was curious about both Go and Clojure, my interest piqued by the strong concurrency support each has. I picked Clojure and have grown more and more confident in that choice.
Why not Go? In a language with nil pointers (Tony Hoare's "billion-dollar mistake"), no generics, and sub-JVM performance, static typing just doesn't seem worth it. Also, poor interop with other languages means Go can only draw from its own small community for libraries (contrast Clojure which makes using mature, well-tested Java libs trivial).
When Rust is more mature, it may be what I hoped Go would be: the safe language, with lightweight threads, that makes me never have to write C or C++ again.
Go is interesting and I really enjoyed channels, but the lack of generic programming really disappointed me. I guess supporting the generic paradigm is pretty tricky (C++ templates and Java generics aren't dear to my heart), but not being able to implement map/filter/reduce functions...
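To illustrate the complaint above, here's a minimal sketch of what a "generic" Map looks like in Go without parametric types: everything collapses to interface{}, and the caller pays with manual boxing and a type assertion on every element. (Map is a hypothetical helper written for this example, not anything from the standard library.)

```go
package main

import "fmt"

// Map applies f to each element of xs. Without generics, the only
// reusable version loses static typing: inputs and outputs are all
// interface{}, so the compiler can no longer catch type mistakes.
func Map(xs []interface{}, f func(interface{}) interface{}) []interface{} {
	out := make([]interface{}, len(xs))
	for i, x := range xs {
		out[i] = f(x)
	}
	return out
}

func main() {
	// Even building the input slice requires boxing each int by hand.
	xs := []interface{}{1, 2, 3}
	doubled := Map(xs, func(x interface{}) interface{} {
		return x.(int) * 2 // type assertion on every element
	})
	fmt.Println(doubled) // [2 4 6]
}
```

The alternative in 2013-era Go is writing a separate, fully typed loop for each element type, which is exactly the boilerplate map/filter/reduce are supposed to remove.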
I just did a prototype in Go. Python is my normal go-to language for prototyping. I concluded that for prototyping, I'm going back to Python.
My main objections were neither performance nor reliability (I don't care about either while prototyping), but developer productivity. I find there are just so many programming conveniences in Python that I wish I had in Go - list comprehensions, negative slice indexes, for- and with-statements over user-defined data types, heterogeneous dictionaries, easy duck-typing. When I just want to get some ideas working to prove a concept, it's really annoying to flesh out all your types and error handling in full detail.
I do think Go would probably make a pretty nice production language, but I'm not at liberty to convert the large C++ and Java systems I work with to Go. Even if I were, I think I would prefer C or C++ because of their easy Python bindings.
Except that you do. You've just said it yourself! What the for statement does in Python is essentially running two communicating coroutines, passing control between them. Yield-using generators actually make it very explicit, by creating special resumable stack frames. The only difference is that in Go, you're not limited to a single stack frame in the producer, nor do you have to use a loop or the new PEP-380 "yield from" syntax (for example, if you're iterating over a tree). The blocking channel takes care of the execution sequencing. (Also, in Go, the channel makes it possible for the consumer to use multiple stack frames, if you're re-building another data structure instead with the data you receive from the producer, but that seems to be a rarer case. It's a nice symmetry, though.)
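A minimal sketch of the pattern described above, using a toy binary-tree type made up for this example: the producer goroutine recurses freely across as many stack frames as it likes, while the consumer is a plain for-range, much like Python's `for x in gen()`.

```go
package main

import "fmt"

// node is a toy binary tree used only to illustrate the pattern.
type node struct {
	val         int
	left, right *node
}

// walk traverses the tree in order, recursively, sending each value.
// This is the multi-stack-frame producer a single Python generator
// frame can't express without "yield from".
func walk(n *node, ch chan<- int) {
	if n == nil {
		return
	}
	walk(n.left, ch)
	ch <- n.val
	walk(n.right, ch)
}

// inorder launches the producer goroutine and returns the channel;
// closing it ends the consumer's range loop, like StopIteration.
func inorder(root *node) <-chan int {
	ch := make(chan int)
	go func() {
		walk(root, ch)
		close(ch)
	}()
	return ch
}

func main() {
	tree := &node{2, &node{1, nil, nil}, &node{3, nil, nil}}
	for v := range inorder(tree) { // consumer: an ordinary for-range
		fmt.Println(v) // prints 1, 2, 3
	}
}
```

The unbuffered channel blocks the producer until the consumer is ready, which is exactly the control hand-off that yield performs in Python.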
The Python equivalent of two communicating goroutines is a bidirectional generator (PEP 342), not a for-loop. I try to use the language construct with the least power.
What I wanted this for: I have a user-defined type that is basically a list of some other user-defined type, plus some extra stuff. In Python it's trivially easy to implement __iter__ and delegate, and then I could manipulate these objects the way I thought of them, as containers for other objects. In Go, I wonder if I could've used type embedding (making the first member of A be B), but I don't think the built-in language constructs respect embedding when figuring out what's a legal expression. I ended up biting the bullet and explicitly looping over the slice member, but the point of prototyping is being able to think of your code in the terms of your problem domain.
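For what it's worth, a sketch of the embedding question, with hypothetical types standing in for the real ones: embedding a named slice type does promote its methods and indexing through the field, but the built-in range clause still won't accept the outer struct, so you end up naming the embedded field anyway.

```go
package main

import "fmt"

// Hypothetical stand-ins for "a list of some user-defined type,
// plus some extra stuff".
type Item struct{ Name string }

type ItemList []Item

type Catalog struct {
	ItemList        // embedded named slice type
	Owner    string // the "extra stuff"
}

func main() {
	c := Catalog{ItemList{{"a"}, {"b"}}, "me"}

	// for _, it := range c { ... }  // compile error: cannot range over c;
	// range does not look through embedding the way method calls do.

	for _, it := range c.ItemList { // must name the embedded field
		fmt.Println(it.Name)
	}
}
```

So the commenter's suspicion appears right: embedding promotes methods and fields, but the language's built-in constructs (range, indexing syntax on the struct itself) don't treat the outer type as iterable.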
I think that's because Go is reasonably good for, say, 90% of situations, and if you're in the 10% where it is not a good fit then you already know that and you know better than to try using Go in that situation, and thus few people find Go stinking.
On the other side are things like Python, which are reasonably good for a much lower fraction of things (e.g. 30%), but if you're in the 70% you very well might not realise it does not suit your situation and thus experience Python stinking.
The thing is, it is not a bad language. It's also written by great engineers, who deserve their place in history. My problem with it (and this is the critique you see a lot outside HN circles, where Go is less of a hype) is that it doesn't offer much over the state of the art of twenty years ago (Java), which wasn't exactly state of the art either ;).
So, even though you hated your previous job, you obviously did well at it -- at least well enough to save up a nice nest egg. Let's presume that you will progress over time from crappy programmer to a solid one. Anyone like you will get at least that far and likely much farther. The question at hand is simply what you should do to facilitate this progression. Have you considered solving some small but pesky problem which you know is common in your industry? A calculator for compliance with Governmental Rule XYZ? A converter between two 90's era (or worse) file formats still in use within your previous industry? I have no clue what you did before, and I probably wouldn't know what projects to suggest even if I did. My point is only that you can make up for being a mediocre programmer with deep knowledge of a specific industry. If Patrick can make $30K+ selling bingo card software to teachers, you ought to be able to do similarly in the niche you know well. To avoid having to be a good salesman, maybe you put up a website with the file converter thingy and then sell premium advertising space to vendors in that niche.
I have the same number of feet but one more knee than this guy. From what I understand about above-the-knee prosthetics, walking up stairs with a "normal" gait is an impressive feat. Clearly, the technology is near-miraculous.
With all this said, I'd much rather see advances in bionic attachment techniques. If I could have a metal rod extending from the distal end of my tibia through skin, being without a foot would be much less annoying, and my physical abilities would improve significantly. I could just clamp on a prosthetic in the form of a carbon fiber spring -- the same sort I have now. Presuming the rod required little maintenance, I would require far fewer trips to the prosthetist for construction of new sockets as the shape of my residual limb (the politically correct term for "stump") changes over time. No risk of skin issues preventing me from using my prosthetic leg. No risk of catching my prosthetic foot on something while walking and pulling it off my body. Current socket-based attachment techniques create what effectively is an extra joint with very limited range of motion. Oddly, this is useful for subtle manipulation of a gas pedal (I'm missing my right foot), but it is mechanically inefficient, reduces my perception of stability, and keeps me from feeling like the prosthetic foot is "mine". Because of this extra joint, heavy shoes feel really heavy. Lots of effort has gone into making prosthetic feet light -- a much less valuable attribute if direct body attachment were possible. Relaxing the premium on lightweight prosthetics would allow for all sorts of innovation.
My understanding of the current state of affairs is that, while it's quite easy to stick a metal rod into the distal end of a bone, it's quite difficult to allow it to protrude through skin without risking infection. My general take when reading yet another article about some amazing $100K prosthetic device is similar to my thoughts when hearing fuel cell folks talking up the technology in the early 2000's -- They all showed up at tech events talking about how fuel cells were going to change the world, how their own novel technology was going to make them more efficient, lighter, whatever. My question to them was always, "When am I going to be able to replace my laptop battery with a fuel cell so I can take a cross-country flight without worrying about my battery running low?" They always gave some vague answer and then went on talking about the improvements they were making to a technology which was not at all available to me. It's 2013, and I haven't yet owned a fuel cell. However, I'm writing this on a Mac with much better battery life than was available a decade ago even though the fundamental technology used in its battery is unchanged.