Hacker News | kdeldycke's favorites

I was afraid this would happen.

Boston Dynamics did good work, but it was all DoD funded at a very high price point. It cost about $125 million and ten years to get to BigDog. That's all R&D; there's no marketing cost in that. DARPA is a patient customer. Google, from the parent article, is not.

BigDog is a great achievement, but it was developed to DoD specs, and it's just too big, heavy, and noisy. The militarized version, the Legged Squad Support System, was bigger and stronger, but the USMC decided it just wasn't useful militarily. Atlas is really an evolved BigDog that stands upright, with similar hydraulic actuators. A hydraulic humanoid is just too bulky.

BD has some problems that aren't well known. BD's CEO, Marc Raibert, is around 67, and due for retirement. He and his girlfriend owned the company, so they're the only ones who benefited from the Google buy. I doubt that the employees got anything. Also, the brains behind BigDog was Dr. Martin Buehler, who previously ran the Ambulatory Robotics Lab at McGill, where his group built the first good running quadruped robot. (Using my anti-slip algorithm, as I found from reading a thesis that cites my work.) He was the chief engineer on BigDog, and quit right after BigDog was publicly demoed.[1] He's now at Disney.[2]

Raibert seemed to like hydraulic systems; his name is on patents for BigDog's rather clever hydraulic actuators. But that approach seems to be too heavy for a humanoid robot. Atlas weighs 330 pounds. Schaft, a University of Tokyo spinoff which Google also bought, uses water-cooled electric motors, like Tesla, to get the power-to-weight ratio needed. You need huge torque only a small fraction of the time, so overloading motors is fine if you can keep them cool. I'd expected that Google would get Boston Dynamics and Schaft to work together, and that from that would come a new, lighter humanoid with good balance. But the Bloomberg article said that BD didn't play well with Google's other robotics companies. BD is near Boston, Schaft is near Tokyo, and Google never tried to get them under one roof.
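The overload point comes down to thermal time constants: winding temperature, not rated power, is the real limit. A first-order lumped thermal model makes it concrete. This is a toy sketch with made-up parameters, not figures from Schaft or any real actuator:

```python
# First-order lumped thermal model of a motor winding.
# dT/dt = (P_in - (T - T_amb) / R_th) / C_th
# All parameters below are illustrative, not from any real motor.
def peak_winding_temp(p_in_w, burst_s, r_th_k_per_w, c_th_j_per_k,
                      t_ambient_c=25.0, dt=0.01):
    """Euler-integrate winding temperature during a torque burst."""
    t = t_ambient_c
    for _ in range(int(burst_s / dt)):
        cooling_w = (t - t_ambient_c) / r_th_k_per_w
        t += (p_in_w - cooling_w) / c_th_j_per_k * dt
    return t

T_LIMIT_C = 155.0  # typical class-F insulation limit

# Same 2 kW, 5 s burst; only the thermal resistance to coolant differs.
air = peak_winding_temp(2000, 5.0, r_th_k_per_w=1.0, c_th_j_per_k=50)
water = peak_winding_temp(2000, 5.0, r_th_k_per_w=0.05, c_th_j_per_k=50)
print(f"air-cooled: {air:.0f} C ({'over' if air > T_LIMIT_C else 'within'} limit)")
print(f"water-cooled: {water:.0f} C ({'over' if water > T_LIMIT_C else 'within'} limit)")
```

With the low thermal resistance of a water jacket, the same burst that cooks an air-cooled winding stays well inside the insulation limit, which is why short-duty overloading works.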

Whatever happened to Schaft, anyway? They built one very nice humanoid robot before Google bought them, and haven't been heard from since. Google wouldn't let them enter the DARPA Robotics Challenge.

Computationally, BigDog/Atlas are not that compute-intensive. The balance and locomotion algorithms for BigDog ran on a Pentium 4 running QNX, with the servovalve control loop at 1 kHz and the balance control loop at 100 Hz. Google's expertise isn't in that kind of hard real-time work. You need that stuff down at the bottom to keep from falling down. BD didn't do much work at the higher levels of control; they were mostly building teleoperators with, in the later versions, automatic foot placement.
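That two-rate structure (fast inner servo loop, slower outer balance loop) is a standard pattern in robot controllers. A toy scheduler sketch, with placeholder controller callbacks standing in for the real servo and balance code:

```python
import time

SERVO_HZ = 1000   # inner servo loop rate, as on BigDog
BALANCE_DIV = 10  # outer loop runs every 10th tick -> 100 Hz

def run_control(ticks, servo_step, balance_step):
    """Fixed-rate two-level scheduler: servo_step runs every tick,
    balance_step every BALANCE_DIV ticks and feeds the servo loop
    a fresh setpoint. A real system would use an RTOS timer, not
    time.sleep(), to bound jitter."""
    period = 1.0 / SERVO_HZ
    next_t = time.monotonic()
    setpoint = 0.0
    for i in range(ticks):
        if i % BALANCE_DIV == 0:
            setpoint = balance_step(i)   # slow loop: whole-body balance
        servo_step(i, setpoint)          # fast loop: joint/valve servo
        next_t += period
        delay = next_t - time.monotonic()
        if delay > 0:
            time.sleep(delay)
```

The hard part isn't the arithmetic; it's guaranteeing the 1 ms deadline every single cycle, which is why BD ran on QNX rather than a general-purpose OS.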

From the article: "In a private all-hands meeting around that time, Astro Teller, the head of Google X, told Replicant employees that if robotics aren’t the practical solution to problems that Google was trying to solve, they would be reassigned to work on other things." (Probably related to maximizing ad revenue.) That's a great way to lose the robotics expertise you overpaid for.

I used to work in the legged locomotion area. But I could never see a path to a profitable product. Toys were at too low a price point (even Sony gave up), and a legged working robot for maintenance tasks was a long way off. We're getting closer now; a useful robot that costs about as much as a car seems well within reach on the hardware side.

[1] http://www.robotics.org/content-detail.cfm/Industrial-Roboti...

[2] https://www.linkedin.com/in/martinbuehler


The AWS "Region" which the super secret stuff runs in isn't plain GovCloud. It's Amazon C2S. It's not talked about much (at all). From my understanding, most people who work on this cloud do not need a clearance but do need heavy background checks.

Some people work directly with the intelligence agencies (and work on their campuses as part of their duties for AWS). These people often handle sensitive data and therefore need clearances. Amazon often hires people located near the intelligence agencies to make this easier.


It's hard to take this seriously: storage is an excruciatingly hard problem, yet this cheerful description of a nascent and aspirational effort seems blissfully unaware of how difficult it is to even just reliably get bits to and from stable storage, let alone string that into a distributed system that must make CAP tradeoffs. There is not so much as a whisper as to what the data path actually looks like other than "the design includes the ability to support [...] Reed-Solomon error correction in the near future" -- and the fact that such an empty system hails itself as pioneering an unsolved problem in storage is galling in its ignorance of prior work (much of it open source).

Take it from someone who has been involved in both highly durable local filesystems[1] and highly available object storage systems[2][3]: this is such a hard, nasty problem with so many dark, hidden and dire failure modes, that it takes years of running in production to get these systems to the level of reliability and operability that the data path demands. Given that (according to the repo, though not the breathless blog entry) its creators "do not recommend its use in production", Torus is -- in the famous words of Wolfgang Pauli -- not even wrong.

[1] http://dtrace.org/blogs/bmc/2008/11/10/fishworks-now-it-can-...

[2] http://dtrace.org/blogs/bmc/2013/06/25/manta-from-revelation...

[3] http://dtrace.org/blogs/dap/2013/07/03/fault-tolerance-in-ma...

