Alphabet's Waymo Alleges Uber Stole Self-Driving Secrets (bloomberg.com)
907 points by coloneltcb on Feb 23, 2017 | 348 comments



From another source to provide some colour:

> According to a lawsuit filed today in federal court in California, Waymo accuses Anthony Levandowski, an engineer who left Google to found Otto and now serves as a top-ranking Uber executive, of stealing 14,000 highly confidential documents from Google before departing to start his own company. Among the documents were schematics of a circuit board and details about radar and LIDAR technology, Waymo says.

> The lawsuit claims that a team of ex-Google engineers used critical technology, including the Lidar laser sensors, in the autonomous trucking startup they founded, and which Uber later acquired

I was confused as to what stealing a patent actually meant :)

Waymo has also posted this....

https://medium.com/@waymo/a-note-on-our-lawsuit-against-otto...

From this post...

> Recently, we received an unexpected email. One of our suppliers specializing in LiDAR components sent us an attachment (apparently inadvertently) of machine drawings of what was purported to be Uber’s LiDAR circuit board — except its design bore a striking resemblance to Waymo’s unique LiDAR design.

> We found that six weeks before his resignation this former employee, Anthony Levandowski, downloaded over 14,000 highly confidential and proprietary design files for Waymo’s various hardware systems, including designs of Waymo’s LiDAR and circuit board. To gain access to Waymo’s design server, Mr. Levandowski searched for and installed specialized software onto his company-issued laptop. Once inside, he downloaded 9.7 GB of Waymo’s highly confidential files and trade secrets, including blueprints, design files and testing documentation. Then he connected an external drive to the laptop. Mr. Levandowski then wiped and reformatted the laptop in an attempt to erase forensic fingerprints.

Ooops, that does sound bad after a first read.


Yeah, that's incredibly bad if that's what he did. In general for EE designs, the schematics/layouts are not that hard to work out yourself if you are in contact with the company that makes the LIDAR device (the unique and difficult part). You basically just ask them: "yo, roughly how should I interface with this thing, what's your recommendation on a number of the components, and can you give me a reference schematic?" So copying the files means the guy probably had no idea how it actually worked and planned to just hand it off to somebody else who did understand it -- so definitely super bad.


Many many years ago, I was at a software company, and a competitor popped up making very similar software. They made a presentation of their software at a conference, and had a slide showing a screenshot. Our typos were clearly visible.

Fun times, fun times. We got a few people fired, but that was about it.


I used to work at a small company that made a SaaS product for a niche HR function. One of our largest customers, a bank whose ATMs you likely see every day if you're in the US, decided to drop our service in favor of a solution built in-house. We were disappointed, but had to let them go.

Then, about 2 years after they'd canceled, our support department got a strange email from a user who was having problems. At first, the support rep didn't understand the issue and asked the user for screenshots of the problem. Sure enough, the screenshots looked exactly like our old product, which had been decommissioned over a year prior. It even had our support email address and phone number on it. It turns out the version of the software they developed in-house started by scraping our site, renaming all the .html files to .aspx, and then making the dynamic parts data-driven.

When we called them on it, they basically threatened to use lawyers to put us out of business if we pursued the issue, so I guess our management decided to drop it. We did make them change the support email and phone number.


> When we called them on it, they basically threatened to use lawyers to put us out of business if we pursued the issue, so I guess our management decided to drop it. We did make them change the support email and phone number.

This is why we can't have nice things. Darned lawyers. Can't live with them, can't live without them


To be fair, saying you'd use lawyers to put someone out of business makes no sense unless the company had done something that the lawyers could go after. Hating on lawyers for being aimed at a company is like hating the court system for being available. The bigco's lawyers aren't doing this of their own motivation. I'm sure there was other stuff going on that made the OP's company back down.


Yet another believer in the Just world hypothesis...

As Cardinal Richelieu said[1] about 400 years ago: Give me six lines written by the most honest man in the world, and I will find enough in them to hang him

1. the attribution of this quote is disputed


In this case, I believe the threat was twofold. They'd drag out proceedings so that we couldn't have afforded to make it to a judgment. But they'd also threaten to cancel a deal with one of our other large customers if they didn't cancel with us. Knowing basically our customer list and how much those customers paid for the software, it wasn't much work to infer the company's somewhat tenuous financial situation. They guessed, correctly, that we'd drop it rather than risk the company's future on a successful lawsuit that, win or lose, would likely mean the loss of a significant customer.

There was nothing we'd done as a company that was actionable.


I wouldn't assume so: a larger company can afford to sink a lot more money into litigation, so they could effectively bleed the small company dry defending even a frivolous lawsuit.

Patent trolls are one example of this - they may deliberately select weaker defendants (instead of going after the bigco "infringers") to ensure they hold the upper hand in the financial power imbalance.


I was only kidding. You can say that of any profession.

Some day people will say the same of software engineers.


"This is why we can't have nice things. Darned lawyers. Can't live with them, can't live without them"

A lawyer can screw you badly, but if you want to screw a lawyer you need another lawyer. If taken as a category, they never lose.


>can't live without them

Really?


Without lawyers, Google would need to actually secure their data, rather than counting on lawyers to clean up the mess they made here.


You could have put their IP in a firewall or rate limited their access to mess with them.


One does not simply "mess with" an international bank.



It seems criminal. I remember an engineer from Goldman Sachs went to prison for doing something similar.


Based on the story from Flash Boys, that Goldman engineer was taking something like a couple of megs worth of his own work that was related to various open source projects. But he became the poster boy for stealing trade secrets and an example to be made of.

His mistake was that he used the company internet connection to upload said source files to something like his personal github or dropbox of the sort, so there were proxy logs of him stealing corporate secrets.

Most people would've gotten their personal-ish data off a corporate laptop with a USB key/hard drive, which probably wouldn't have triggered any alarms.


> Most people would've gotten their personal-ish data off a corporate laptop with a USB key/hard drive, which probably wouldn't have triggered any alarms.

Most top banks use Citrix technology (or similar virtual desktops) to stream workstations to employees, accessed both locally and remotely, so the only way you can get data out is by using your phone to take photos.


The Flash Boys presentation is not accurate: https://news.ycombinator.com/item?id=9047068. (Though I don't think you should be sent to prison for a crime like that.)


Banks and similar financial companies are extremely paranoid about data security. Working at one, all USB drives were mounted read-only unless they were specifically provided by the company and encrypted using provided software. Anywhere you could upload files was blocked as well.


Companies that care about data security typically disable write access on USB ports.


What's to stop someone from sending sensitive data as an attachment via email using an encrypted email service? I don't think a proxy can detect much besides data size, correct? I suppose a virtual desktop could log which files are uploaded.


Most email servers reject that (I tried to send something to a Deloitte consultant and gave up after password-protecting a ZIP file).

While most such companies actively MITM their HTTPS connections (I wonder whether Google does that - theoretically they are a CA, so it would be extra easy for them), you could probably get away with uploading said file to any service that is small enough to be unknown to corporate firewall providers (Symantec, etc.).


It also seems like the least interesting thing to steal. The hard part is probably the software and access to real world driving data. It is like holding up a bank and walking out with just the coins. The risk reward profile is terrible.


I thought waymo's lidar sensors were much, much cheaper than other sensors: https://arstechnica.com/cars/2017/01/googles-waymo-invests-i...

We're talking about reducing a ~$25000 part to ~$2500.


A factor of 10 and at least 3-4 years of engineering work? Yeah, that's no small potatoes.


That was a result, though, of Google collaborating with Velodyne. I suspect Velodyne would have reduced pricing to everyone as a result. And Google's not the only entity working towards less expensive LiDAR. There's Delphi/Quanergy, etc.

Just seems like a huge risk to save maybe $10k per vehicle (assuming price drops for everyone) for some relatively short time period.


Levandowski allegedly downloaded 14,000 files. The lidar circuit designs, while arguably trivial, are what Waymo can prove have been appropriated.


Discovery will likely find more things, no?


I imagine there are more software people capable of programming their way through the problem than electrical engineers that can work their way through it.

An electrical engineer is also going to see the Uber name and expect Valley idiocy, while being a secondary concern for not being softwary enough.


That would be Sergey Aleynikov [0]. I'd recommend reading Flash Boys by Michael Lewis [1], who covers this story in detail. Sergey's case seems much more nuanced, and if I remember, Lewis takes his side on a number of issues.

[0] https://en.wikipedia.org/wiki/Sergey_Aleynikov [1] https://www.goodreads.com/book/show/24724602-flash-boys


...and then after that, do also read Flash Boys: Not So Fast by Peter Kovac. Not necessarily for Aleynikov's case in particular, but because it's healthy to see both sides of the greater narrative that Lewis unfolds.


It's definitely criminal.


There however could be custom-ordered parts on the BOM, documentation of which might be under NDA.


Yes, good point. That would make it super obvious that they stole it then.. "yea, could I order the BA33525 part that you custom built for somebody else.. yea I know it's undocumented, but I know you have it somehow...."


You do that sort of thing all the time though with electronics: I'll open it up, and find some undocumented chip variant and see if I can source it elsewhere, I'll also search+email around to see if I can find a datasheet anywhere. If it's not printed on the chip you'll probably be able to get it out of JTAG.


Sure, when you have physical access to the device, which wouldn't seem to apply at all here.


From the medium post "downloaded additional highly confidential information pertaining to our custom-built LiDAR including supplier lists, manufacturing details and statements of work with highly technical information."


Levandowski does have some specialized technical knowledge of Lidar: he started 510 Systems, a company focused on making Lidar for robots, which was quietly bought by Google in 2011.

http://spectrum.ieee.org/robotics/artificial-intelligence/th...


He is a great fit as an Uber executive... another greedy pig!

This one broke many laws and needs to go to jail!


It sounds worse when you consider the potential for treble damages due to having knowledge of the infringement. Uber's negotiating position is doubly weakened by the fact that they are dealing with a reputation crisis at the same time as this claim has been brought forward.

If Google has the information they claim they do, they have persuasive arguments to anchor their damages calculations on the basis of Otto's acquisition value. Times three.

This is a sword of Damocles hanging above Uber.
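For scale, a back-of-envelope sketch of the exposure the parent describes. The ~$680M figure is a widely reported press number for the Otto acquisition, not something from the filing, so treat it as an assumption:

```python
# Back-of-envelope only: Otto's reported acquisition price was roughly
# $680M (press figure, not from the lawsuit). Anchoring damages there
# and trebling gives a sense of Uber's worst-case exposure.
otto_value = 680_000_000
exposure = otto_value * 3
print(f"${exposure:,}")  # $2,040,000,000
```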


I don't think I've ever seen Google go on the offensive like this, so this must be pretty serious.


Eh, Google doesn't use patent litigation as an offensive strategy, but they _do_ pursue leaks. They treat leaks (that is, exfiltrating data while employed) very seriously, and offensively.

https://www.reddit.com/r/google/comments/4emtua/nest_ceo_ton...


I think Alphabet probably views Waymo as their next big thing, and Waymo appears to have a sizable lead over everyone else in the self driving space which they want to protect. Remember, Alphabet is a major investor in Uber through Google Ventures, and their stake is likely worth billions. They wouldn't pursue this unless Waymo was of critical importance to them.


I get that a lot of people on HN are highly skeptical, but Google Ventures claims to be completely firewalled from Google corporate (this issue is only made more confusing by the fact that Google Capital is not firewalled, and by the fact that Google corporate itself also invests in companies).

I do believe them: many media firms have the same set up between the journalism and advertising departments.


GV may make decisions and operate completely independently from the rest of Alphabet, but that doesn't necessarily mean that Alphabet does not take into account GV's portfolio position in Uber, which is its largest investment. Waymo even hints as much in its Medium post: "Our parent company Alphabet has long worked with Uber in many areas, and we didn’t make this decision lightly."


In practice, when it comes to outside investments, I don't think the artificial separation between Google Ventures (GV), Google Capital (Capital G) and Google Corporate matters that much.

FWIW, David Drummond, longtime Chief Legal Officer took a board seat after GV's $250m investment in Uber in 2013 but eventually stepped down last August [0].

[0] https://www.nytimes.com/2017/02/23/technology/google-self-dr...


And this "over 14,000 highly confidential and proprietary design files for Waymo’s various hardware systems" is what, an engineering repo? (Edit: My money's on repo. It says 'searched for and installed specialized software', I'm translating that as 'installed git.') Or did he just take a copy of his mailbox?

Sounds like a rookie move to be downloading this stuff on his work laptop either way.


Stealing is a rookie move. But I don't think there was a choice about using his work laptop. You can't connect an arbitrary device to Google's network.

https://research.google.com/pubs/pub43231.html


If I were to put my evil hat on, I'd arrange for an innocent and not-very-well acquainted coworker to have their laptop outside away from cameras while I used it to siphon data off onto a USB drive.

But I wouldn't, because I'm not evil.


Nice link, this is a great read for anyone interested in endpoint security research.


... all which are easily compromised by a commodity phone.

"Ah you know, I forgot my charger! Dang! I'll just plug this into the USB port of this sooper secure machine connected to this sooper secure network."


Computer says no.


That's... an impossible task. All ways to identify a machine in a network are dependent on the machine itself — if properly configured, I can create a machine that cannot be distinguished from another machine.

Just extract the device certificate from one device and store it on another. Problem solved.

Extracting data from the TPM or equivalent stores in ARM devices is also not impossible, as the DRM-breaking community has shown by extracting keys from TPM-based DRM.


How about a whitelist of hardware fingerprints, similar to what Microsoft does with Windows installations? Also, Google is known to have custom-manufactured hardware, specifically with security in mind. I don't think the idea that Google is able to secure their own network against foreign devices is really that far-fetched.


None of that would solve the issue.

Google can only verify what hardware you're running by sending a packet via ethernet to your device. You control all software running on your device, and can send a spoofed result.

If you were crazy, you could even just emulate Google's hardware entirely and proxy all requests to that emulated hardware.

Nonetheless, while this guy certainly wouldn't be able to do it, many Google employees would.


Couldn't they do a chip and pin style hardware solution where a security chip generates a response using a unique secret algorithm?
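A minimal sketch of the chip-and-pin style scheme the parent suggests (not any vendor's actual design): the network sends a random challenge, the security chip answers with a keyed hash of it, and the secret never leaves the chip.

```python
import hashlib
import hmac
import os

# Assumption: a per-device secret is burned into the chip at manufacture,
# and the access server holds its own copy for verification.
DEVICE_SECRET = os.urandom(32)

def chip_respond(challenge: bytes) -> bytes:
    # Runs inside the tamper-resistant chip; only the response leaves it.
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

def network_verify(challenge: bytes, response: bytes) -> bool:
    # The access server recomputes the expected response and compares
    # in constant time.
    expected = hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)
assert network_verify(challenge, chip_respond(challenge))
assert not network_verify(challenge, b"\x00" * 32)
```

The whole scheme rests on DEVICE_SECRET staying inside the chip, which is exactly what decapping attacks go after.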


That would work — until someone decaps a few of these chips.

They're already doing something similar, after all.


But that's really not the case here. Making secure computing elements like TPMs or HSMs, or Apple's Secure Enclave or the plethora of other devices out there is a solved problem. You can decap it and try to get that data out of it but at the very least with the current state of the art you can make this very unlikely to succeed and extremely expensive to even attempt.

Handwaving away all of this as just a minor nuisance is silly. An attacker would have to find some unknown side channel or try to physically modify the TPM to get at the data, either approach means the attacker has significant resources, certainly well beyond the means of our hypothetical attacker. Heck, it's been speculated that even the NSA couldn't get data out of something like Apple's Secure Enclave without risking destroying it in an attempt.


Wouldn't you need to decap every individual chip you want to compromise?


Please don't dismiss something from your armchair. I've just mentioned that Google relies on this technology to secure their network.


The problem is that it’s basically a fancy version of using MAC addresses for auth.

I’m more surprised a Google engineer couldn’t circumvent it.


If you read the "Beyond Corp" doc, there are also certificates involved.


Yes — I've described above that simply extracting them solves that issue. Overall it's fancy asymmetric DRM. (And anything like that can be broken.)


>And anything like that can be broken

Yes, of course it can be broken; it only takes 2^256 attempts to do so.


Leaving TPM breaking aside, is the dude who did all these things on the work network the same dude who's going to break a TPM?


I think you're conceptually close, but wrong. My money's on something like this:

https://techdocs.altium.com/display/DMAN/Design+Data+Managem...

Edit: much better link than the above: http://www.altium.com/documentation/3.0/display/VAULT/Altium...


Yes. They would not use Git for huge binary documents if they were in their right mind.


Altium Vault uses SVN for version control (or at least did the last time I used it).

Is that better than git for binaries?


Yes, svn handles large binaries just fine. Most last-generation version control systems do. It's probably the main thing keeping some companies from switching to git.


I also took 'searched for and installed specialized software' to mean 'downloaded the client for Google's repo server from their internal IT site'.


On the other hand, we know that public statements about lawsuits are always a pack of lies as everyone tries to look as wronged as possible: remember YC's bluster about the Cruise co-founder lawsuit, right before they had to settle for dozens/hundreds of millions? or remember the Oculus lawsuit where they alleged almost the exact same thing (Carmack stealing thousands of sooper-sekrit VR documents/files) and wound up only winning on some non-compete stuff and getting only a fixed award which was a fraction of what they thought they'd get?


Being ultimately required by circumstance to make a substantial payment to settle a lawsuit does not imply that the payer was lying, or even incorrect, in their defense.


I think losing a lawsuit is definitely evidence for claims being false, just as winning correlates with telling the truth about not doing wrong things. And if it is true that there is no correlation whatsoever between losing a lawsuit and making false claims, then the legal system has failed utterly (and incidentally, you have given me a great idea for a business model).

Given the adversarial nature of the system and the many past comments on HN to take claims by litigants with very large grains of salt when it came to the Oracle or Facebook or Oculus lawsuits, I'm surprised at the apparent credulity on display. Google will huff and puff just as much as anyone else when it comes to lawsuits.


It's evidence, but it's not dispositive, especially in cases like Cruise, where the technicality that determines the outcome might give the defense a 60/40 edge after a court case that will itself cost many millions of dollars.


What he said. Particularly in patent and other IP cases, it's often cheaper to settle, even for a lot, than to defend the lawsuit. You could say our system has failed utterly. You wouldn't be wrong in that case.


I've never been in a situation anything like Cruise, but I've been a founder in an acquisition that went to legal over absent cofounders --- actually, come to think of it, twice, once with me the absent cofounder and once not --- and: I can't imagine any of these cases ever not settling.

Even with no tenable claim at all, the absent cofounder has a gun to your head: a proposed liquidation that's pending a civil suit simply isn't going to close. There are time limits on all of this stuff, and the acquirer is simply going to say "fuck it" and walk if they can't predict when the deal will close. A detail I think people who've never sold a company don't realize is that the legal costs for both sides of a deal that closes uneventfully can get close to 7 figures.

I guess this is valuable information for startup founders. It's also a reason you should run, not walk, from any early-stage business partner that wants to negotiate or complexify vesting. 1 year cliff, 4+ year vest, the way everyone does it, or go start a different company.


The Cruise lawsuit never went to trial. They settled. Neither side had their claims tried in a court of law - which, as you state, should approximate finding the truth.


remember YC's bluster about the Cruise co-founder lawsuit, right before they had to settle for dozens/hundreds of millions?

Is this a guess?

http://www.businessinsider.com/car-startup-cruise-settles-le...

A representative for Guillory declined to discuss the settlement amount, but said the terms were "mutually agreeable." As part of the settlement, the parties have both agreed to dismiss their lawsuits.


It is an informed guess based on the fact that the settlement involved acknowledging, as the original YC paperwork said, he was a cofounder who owned 50% of a $1b exit. I haven't seen anyone in a hurry to leak claims that the settlement was for a trivial amount, nor did anyone at the time offer any good reasons for thinking that.


The oculus settlement I remember was for half a billion http://www.polygon.com/2017/2/1/14474198/oculus-lawsuit-verd...


> One of our suppliers ... sent us an attachment (apparently inadvertently) of machine drawings of what was purported to be Uber’s ...

Ouch, this doesn't play well for the supplier either!

Presumably if Uber wins, it'd have a solid case against the supplier. If it loses, or while Waymo v Uber is ongoing, might there be a case anyway?


This happens more than you would think. The mistake is probably compounded by two designs being almost the same and potentially the same engineers involved.

I'll bet the supplier searched for the engineer's name in his inbox and responded to a big email thread about the project with many people involved. The Otto engineer's old colleagues got the email instead of him.


They'll most likely have a case, but it's also unlikely that the supplier would be big enough to cover any damages in the high 9-10 figures, they'd just throw in the towel and go bankrupt.

Also an argument that "we stole stuff but you're the one who leaked it so you should be punished" won't play well with a jury.


>Presumably if Uber wins, it'd have a solid case against the supplier.

Can they really claim that the supplier damaged them by leaking trade secrets if the trade secrets aren't theirs?


> > Presumably if Uber wins, it'd have a solid case against the supplier.

> Can they really claim that the supplier damaged them by leaking trade secrets if the trade secrets aren't theirs?

If Uber wins, they are Uber's secrets.


> I was confused as to what stealing a patent actually meant

Indeed. The current title of the article is

> Alphabet's Waymo Alleges Uber Stole Self-Driving Secrets

Secrets, not patents. If "patents" was in the original title, perhaps the author was confused.



Patent infringement, or patent theft?

How do you steal a patent? If you steal (to be clear, illegally take) information and then use that to get a patent, is that stealing a patent? Is there case law on something like that?


Infringement. The cause of action for infringement begins on Para. 88.


How does Google build such forensics? Do they have spyware monitoring all their company laptops?


Yes! It's open source: https://github.com/google/grr

Google probably has the largest team of internal forensics software developers.


Yeah, they definitely have some really specific accusations about what went on:

>>> To gain access to Waymo’s design server, Mr. Levandowski searched for and installed specialized software onto his company-issued laptop. Once inside, he downloaded 9.7 GB of Waymo’s highly confidential files and trade secrets, including blueprints, design files and testing documentation. Then he connected an external drive to the laptop. Mr. Levandowski then wiped and reformatted the laptop in an attempt to erase forensic fingerprints.

The bit about connecting the external drive is interesting, but I guess there's probably a ghost of that action somewhere on the drive (assuming you could more or less restore it to its pre-wipe state).


My guess is he still had a network connection up when he did the copy. Tools like JAMF, osquery and so on upload what you do on your computer on a fairly frequent basis. Past a certain company size, almost everyone runs something like this, and most don't spell out the amount of spying they do on employees' work devices and office space.

Or he didn't do a secure wipe when he reformatted the drive, and Google inspected the computer when he returned it.

OR Google modifies their laptops and has separate chips logging this stuff, which would be fairly impressive!

Or he didn't actually do any of this alleged stuff and it's all innocent.

We will find out in the court case either way!


I would imagine there are IDS systems running, given Google's state level adversaries.

Accessing large amounts of files you haven't previously accessed and shortly thereafter attaching an external drive should trip a competent IDS.
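A toy version of that tripwire (nothing like Google's real IDS, and the threshold is an arbitrary assumption): compare today's file-access count against a user's recent baseline.

```python
from statistics import mean

def is_anomalous(daily_counts, today, factor=10):
    """Flag a user whose file accesses today dwarf their recent baseline.

    daily_counts: recent per-day file-access counts for this user.
    factor: how many times the baseline counts as suspicious (assumption).
    """
    baseline = mean(daily_counts) if daily_counts else 0
    return today > factor * max(baseline, 1)

# A slightly busy day doesn't trip it; pulling 14,000 files does.
assert not is_anomalous([30, 40, 35, 55], 60)
assert is_anomalous([30, 40, 35, 55], 14000)
```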


Unless it's changed since the last time I played with it, osquery doesn't upload anything on its own – it's just a local tool/agent that you can use to gather data. Carbon Black is a good example of something like that.

JAMF, I believe, gathers application usage data but nothing as in-depth as what's being discussed. It's also comically handicapped: it somehow manages to do a poor job of everything it tries to do, so "the world's largest online Mac administrator community" is forum post after forum post of half-understood franken-scripts. I used to think it was merely milquetoast, but after sitting through their sales reps crapping all over open source software (despite extensive use of OSS libs in their products) and seeing it fail to do the most basic stuff out of the box, my opinion is that it's overpriced crap.


Or, you know, syslog.


It could well be that they have comprehensive debugging enabled for all their employees on work supplied machines. I know they beta test through employees, so that would make sense, to some degree. If the debug log is remotely synced, as an added benefit they get to review actions after the fact, which might be what happened here.


Company-issued laptop probably tracks what software is installed anyway to mitigate malware. Services are probably designed not to accept connections from untrusted devices. Hardware 2FA makes it difficult to impersonate someone without their knowledge.

I can see the temptation—I've always missed having access to IP after leaving a company.


You can't restore SSDs. I would assume Googlers are rolling SATA SSDs, if not M.2 PCIe ones.

What Google could be doing is remote logging on the laptops - logs uploaded to ze cloud every time you connect to the mothership. Plugging in a USB drive leaves a trace with the USB ID, volume information, etc. Windows also logs this and more: http://www.forensicswiki.org/wiki/USB_History_Viewing

Protip: to exfiltrate data with minimal trace, your best bet is taking out the drive and reading it in another computer (using a write blocker for best effect). This can still be traced if someone is logging SMART written/read data (I am, but I'm paranoid); not all HDD/SSD vendors provide this info. Second best is booting from a USB drive so the original OS never sees the plug/unplug event in the first place; I have no idea about the current state of UEFI/AMT logging though.

Disclaimer: I used to do forensics.
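To illustrate the kind of trace the above describes, here's a hypothetical sketch: the log lines are invented, but Linux kernel logs record USB attach events in roughly this shape, and the Windows USBSTOR registry keys mentioned earlier store similar vendor/serial data.

```python
# Invented sample of Linux-style kernel log lines around a USB plug event.
LOG = """\
Feb 23 09:14:02 laptop kernel: usb 1-2: new high-speed USB device number 4
Feb 23 09:14:02 laptop kernel: usb 1-2: SerialNumber: 4C530001230
Feb 23 09:14:03 laptop kernel: usb-storage 1-2:1.0: USB Mass Storage device detected
"""

def usb_storage_events(log: str):
    # Pull out just the mass-storage attach lines an examiner cares about.
    return [line.split("kernel: ", 1)[1]
            for line in log.splitlines()
            if "USB Mass Storage device detected" in line]

print(usb_storage_events(LOG))
# ['usb-storage 1-2:1.0: USB Mass Storage device detected']
```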


Really interesting! It's sobering to think about the ways that even in a system not set up for logging you can trace back through these actions.

I was asked to figure out what had happened on a system where some data had changed and 2 parties were blaming each other. After about an hour of digging around, I managed to piece together a picture of how Person X had got up on a Monday morning, discovered (on their mobile, home wifi) that they had made a mistake on Friday, then logged in on their desktop to fix it from home (the first time they logged in at home), then went to work and blamed someone else.

What was remarkable was how many different sources there were to pick up bits and pieces from. In isolation there wasn't much to go on, but once you start connecting the parts, it's really incriminating.


This was the noteworthy part for me as well.

They had to know that he:

1. modified the software on his laptop

2. logged into an area he should not have had access to (this is probably standard)

3. attached an external drive (possible, but standard?)

4. and they got all this info after he deleted the drive, which means they either went in and found remaining data on the drive or else they captured the info in real time.

I suppose if the drive is clean now, and they know he downloaded data, they can infer that he wiped it.

I suppose that if they know he accessed it, and there was software on his computer preventing him from doing so, they can infer that he downloaded something to overcome it.

But knowing that he connected to an external drive implies active monitoring. That's the part I am most curious about.


Just to be clear "his" laptop is "the laptop issued to him by Google".

I'd think that standard antivirus software detects and alerts on external drives being attached.


1) makes sense if Levandowski did it over the company internet connection. There could be a record of his unusual requests (software download, software update).

> they can infer that he wiped it

For 4), Levandowski reformatted the hard drive before returning it, so there's no inference there.


It could be a lot simpler than that.

Perhaps the size of the downloaded repository is larger than the physical size of the drive on the laptop?


Since he was working on self-driving cars at Google, he was probably authorized to access the software and hardware involved.


Maybe video footage?


This is all information a rudimentary desktop auditing tool can gather and store on a server. Most collect both hardware (which would include connected devices) and software inventory. Anyone SHOULD be auditing company PCs on a relatively regular basis. It wouldn't surprise me if Google was auditing much more frequently than the average and could catch something like this in the act.
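For a rough idea of what such an agent ships home, here's a sketch of a single inventory record. All the field names are made up, and real products collect far more (and hook device-attach events in real time rather than polling):

```python
# Minimal sketch of the kind of inventory record a desktop auditing agent
# might periodically send to a server. Field names are hypothetical; real
# endpoint tools collect far more and hook device-attach events live.
import json
import platform
import socket
from datetime import datetime, timezone

def inventory_snapshot(attached_devices, installed_software):
    """Build one audit record; device/software lists come from OS-specific APIs."""
    return {
        "host": socket.gethostname(),
        "os": platform.platform(),
        "taken_at": datetime.now(timezone.utc).isoformat(),
        "attached_devices": attached_devices,      # e.g. from USB enumeration
        "installed_software": installed_software,  # e.g. from package manager
    }

snap = inventory_snapshot(
    attached_devices=["External HDD (USB, 2TB)"],
    installed_software=["Chrome 56.0", "UnapprovedSyncTool 1.2"],
)
print(json.dumps(snap, indent=2))
```

Diffing consecutive snapshots server-side is enough to surface "new software appeared" and "external drive attached" events like the ones alleged here.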


Yup, lots of firms using HIDS that gather all sorts of system data. OSSEC is what I've seen rather frequently for this.


What? Most version control systems have authentication systems, I'm pretty sure Google has good reason to keep audit logs of employee's access to schematics.


Even if they do, it is very unlikely they were built with provable nonrepudiation requirements. From personal experience designing the security of a PKI CA that passed government security certifications, the audit subsystem is the most challenging part to do right. I could probably consult for the defense in tearing down the evidence :)


I'll accept that it's hard, but why do you think Google didn't do it right?


It would require a prohibitive amount of engineering resources to be done right, i.e. a chain of guarantees that, from creation time to the moment they are inspected, the logs provably cannot have been tampered with by nonauthorized users. There are other requirements, e.g. separation of roles, that are expected of audit subsystems. I am positive it would not pass an adversarial expert analysis.
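To illustrate just one of those guarantees: a common building block for tamper-evident logs is a hash chain, where each entry commits to the previous one, so any in-place edit breaks verification from that point on. This sketch shows only that piece; real nonrepudiation also needs signing keys, trusted time, and role separation:

```python
# Sketch of a hash-chained audit log: each entry includes the hash of the
# previous entry, so modifying or deleting a past record invalidates every
# hash after it. Real systems add signatures, timestamps, role separation.
import hashlib
import json

def append_entry(log, event: str) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify(log) -> bool:
    prev_hash = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev": entry["prev"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != recomputed:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "user=alice action=download file=design.pdf")
append_entry(log, "user=alice action=attach_device id=usb-123")
print(verify(log))  # True
log[0]["event"] = "nothing to see here"
print(verify(log))  # False: the chain no longer verifies
```

Note this only proves tampering happened after the fact; preventing a privileged insider from regenerating the whole chain is exactly the hard part I'm talking about.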


Google's threat models include nation-state adversaries: I suspect the effort that seems "prohibitive" to you was seen as necessary after the infamous smiley on that PRISM slide. Security is existential: if users don't trust Google, Google fails.

Google also has an internal PKI CA - I think they meet and exceed that security baseline for rigor.


Yes, for purposes of issuing certificates I'm sure they are OK wrt auditing (I was just establishing my "credentials" with the CA comment).

The threat modeling of malicious anti-Google actions obviously worked, since they have traces of the Otto guy's activities. What I am asserting is that the forensic logs they use as evidence can be attacked in court as not being sufficiently protected from tampering by an internal Google party interested in fabricating evidence.


I would assume that the servers that host big trade secrets would log everything happening. There wouldn't be a need to log the laptop itself.


Typical garden-variety endpoint security/IT management solutions such as Crowdstrike or Casper with a kernel module are capable of all of that.


yeah...dude is going to jail.


I wonder what this "specialized software" is. It sounds like it could be some haxxor hacka-thing, but likely it's just an FTP client or similar.


I'm answering my own question, well, sort of. The complaint doesn't state this, just "specialized software". That doesn't make the complaint any weaker, of course.

There are all sorts of other tidbits in the complaint that further strengthen the case in favor of Waymo. It's an interesting read, and I'm surprised how readable it is, even for a non-lawyer.


> [...] Mr. Levandowski searched for [...]

Did he actually "search" on "his company-issued laptop" using company network or did he google it from his own machines at home?

Anthony is likely a very smart person, so if the allegations are true, I would think the latter is more likely.

How does Alphabet know what he searched for? /G


He could have wiped it with a cloth or something. It's kind of scary how they could figure out what he did after the wipe.


I'm guessing they saw it through remote access logs? If they see he downloaded gigs of data he wasn't supposed to be using for his job then that's very suspicious. I wonder how they found out that he copied it to an external hard drive though. I'll want to watch this case as it develops.


Patent infringement is the least of the worries.

Trade secret misappropriation can lead to criminal charges.


It's kind of amazing that Google keeps reiterating how their incredible security protects your data, and that even employees are extremely carefully monitored. Yet their confidential trade secrets were taken by a dude who seemingly didn't have access to them as part of his job (since he had to go install the software) and a portable hard drive.

The post also says he talked with colleagues about replicating their technology at a competitor months before he actually stole the data. Generally if you work on something confidential, and you start talking about taking it elsewhere, someone reports it or something.

Sidebar question if there are any armchair lawyers around: While I expect Uber to lose this lawsuit based on the type of evidence being claimed here, is it also possible for Uber to sue the supplier for leaking their confidential data back to Google? Because that seems like an incredible lapse of confidentiality in itself. Or will the notion that it wasn't legitimately their confidential data in the end, make Uber's own claim void?


There's a number of tradeoffs to be made between internal secrecy/silos and trust/openness in any company, and google tends to lean heavily towards trusting their employees when it comes to corp data + resources, designs, strategy etc.

User data is an entirely different matter, and is appropriately treated as such.


It doesn't need to be about "secrecy", it's about "security". Even when you ignore the employee trust issue, and assume everyone at your company would never betray you, "least privilege" is still not just best practice for security, it's common sense.

People should not be able to access data they don't need. If they need it, you can grant it. But the assumption should be that someone who doesn't use the design server shouldn't have access to the design server.

I don't want to know about or see information I don't need to have at work. It's not that I'm not trustworthy. It's not that I would abuse it. I just don't need the liability that it gets out through me.

Because your account credentials could get stolen. Your laptop could get stolen. Your laptop could get hacked into. Your laptop could get malware. Reducing the list of people who have access to a resource insulates against all of these things... automatically. And sure, all of those risks have other ways to mitigate them as well. But layers of security is key. And hey, it also stops employees from sneaking off with your data too.
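The grant-on-request model is trivial to sketch. The names here are illustrative only; a real system would layer in directory groups, expiry, and audit logging:

```python
# Illustrative least-privilege check: access is denied by default and must
# be explicitly granted per resource. Names are hypothetical; production
# systems layer this with groups, expiration, and audit trails.
class AccessControl:
    def __init__(self):
        self._grants = {}  # resource -> set of users with access

    def grant(self, user: str, resource: str) -> None:
        self._grants.setdefault(resource, set()).add(user)

    def can_access(self, user: str, resource: str) -> bool:
        # Default deny: no entry means no access.
        return user in self._grants.get(resource, set())

acl = AccessControl()
acl.grant("lidar-engineer", "design-server")
print(acl.can_access("lidar-engineer", "design-server"))        # True
print(acl.can_access("sre-on-unrelated-team", "design-server")) # False
```

The point isn't the mechanism, it's the default: nobody can read the design server until someone deliberately adds them, which also creates a record of who asked and when.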


Stealing a Googler's laptop or account credentials doesn't get you access. At best, you might get temporary access to my Gmail in the browser. I could give you my username and password and you could accomplish nothing.

Googlers all have to use 2-factor access via hardware tokens, so you'll need to steal their laptop and steal their token, and murder the employee before they report it to security.

Google laptops only permit the installation of software from Google, they are locked down and don't allow arbitrary installation of software, much like an iPhone. Those using Chromebooks are even safer.

Having to ask permission for everything not only adds huge overhead, it inhibits global code gardening and technical debt reduction, and it inhibits learning, because you don't even know if you need to ask permission for something until you see it. You don't know what you don't know. If I want to learn about Google Translate because it might benefit my project, asking permission is bureaucracy: I don't even know whether what they have will help until I see it, and if I had to write a long justification for access rights, I probably either don't have a clue why I really need it, or might just not bother because of the hassle and seek out other open resources.

I feel sorry for you if you work for a company that operates internally like North Korea. One of the rewarding things about working at Google is the constant learning experience of exploring other people's stuff.


Having access to code and data you don't technically "need" is often how people learn new skills and advance their careers without leaving the company. It's an important part of the culture.


Why is "the culture" always used in Silicon Valley to justify incredibly bad decisions?


Again, "bad" is your opinion. People aren't machines: how much trust you place in them affects your relationship with them. Culture is about your team's interconnected relationships.

For example: Smaller companies often have the luxury of more trust / transparency. As companies such as Google grow, they have to resort to things like sending internal notice of big announcements very close to when the actual announcements come out (because leakers). If you have 10 people, you can usually trust the whole team.

Google tries very hard to allow as much transparency and openness as is allowable for its size. It fosters a culture of trust. Logging access rather than restricting it is one of those culture moves. It makes employees feel trusted, and puts responsibility on them to behave ethically. When they don't, the other end is that Google can still take legal action.

Culture isn't a fake floofy thing, it's real.


Their model was "trust the employee, and use the courts to enforce punishment if that trust was misplaced."

We are in phase two of that right now.


I'd like to add to what other people are saying here. I've worked at a company where people were given, basically, the minimum level of trust. All the horizontal projects were dead, teams competed with each other for the same clients, and we chose meeting rooms to avoid being overheard by other teams when we were discussing things.

It sucked. I left.


This is entirely not related to what we're talking about here. It's not about trust, it's about good practice. If someone needs access to something, they request it, you grant it. Simple. But there should be common sense controls on data access, for everyone's benefit.

You're talking about a hostile work environment. I'm talking about a secure one.


That's more than just an oversimplification.

Security is inherently hostile, I don't see any other way of putting things. We tolerate a certain amount of hostility in order to reap the benefits that security gives us, and we tolerate a certain amount of vulnerability in order to reap the benefits that laxness gives us.

> If someone needs access to something, they request it, you grant it. Simple.

The saying goes that for every complex problem, like security, there is a simple solution, like that one, and that solution is wrong. The cost of such a process is just too high for most companies. You have to request access, explain why you need access, someone has to review it, then grant it. That's how it worked at the company I was complaining about. Processes that should take minutes took hours, those that should take hours took days.

Security, like so many things, is subject to cost-benefit analysis. Better security systems use the full triad: prevention, detection, and response. From the article, it sounds like these are working as intended. The security team detected the exfiltration and responded with a lawsuit. Trying to rely only on prevention will just lead to paralysis.

I might also be jaded, because when I hear phrases like "good practice" my instinct is that it means "omitting the cost-benefit analysis."


The problem is, it doesn't seem like prevention or detection was working here at all. Even though they clearly collected enough data to reconstruct what happened, they failed to prevent this by not having even common-sense security protections, and they failed to detect it when it occurred, only finding out because a supplier ratted out another client.

These are not high cost processes, these are basic, common sense practices we're talking about, that nobody really has any excuse to not have in place.

You are continuing to let your experience with a single, hostile work environment cloud your openness to something that... isn't even controversial. If you aren't locking down your files to only those who need them, you aren't equipped to be in business. If this is somehow uncommon among Silicon Valley, it explains why so many "they stole our trade secrets" lawsuits are going on right now.


> You are continuing to let your experience with a single, hostile work environment cloud your openness to something that... isn't even controversial.

I'm flattered that you want to talk about me, but really, I'm not the subject of the discussion here, and it's inappropriate to talk about what's going through my head or to try and psychoanalyze me.

> These are not high cost processes, these are basic, common sense practices we're talking about, that nobody really has any excuse to not have in place.

I've worked at a few different places on this spectrum in my career. Three of them have been fairly open, internally, like the way Google apparently operates. Maybe there are some high-value IP repositories you don't have access to, but you mostly have access to any source code you want to look at without getting access reviewed first. These companies were very open about the risks that this entailed, and openly discussed the fact that leaks were possible. The benefits became rather clear the longer I worked at each place. Whenever a system I worked on interacted with another system, I could follow what the other system was doing and even submit patches to other systems if necessary.

Saying that restrictive security is "common sense" or "not even controversial" is begging the question and argumentum ad populum, respectively. My argument here is that there are benefits to open access to most company IP, and that these benefits are important enough that the decision should be made on a company-by-company basis.

The access controls that would have prevented this particular case from happening would have to be rather draconian indeed. Anthony Levandowski's work was basically the genesis of autonomous vehicles at Google. Google purchased Levandowski's autonomous driving startup, 510 systems, in 2011. I don't know what kind of access controls you'd need to prevent a startup founder from accessing the technology built on top of his company's IP.


So, the head of my department doesn't have access to... most of what I do. That isn't to say she isn't in charge, or doesn't have every right to see that information. But it isn't her job, and she doesn't need that access, so she doesn't have it. If she needed it, she'd get it. There's no trust issue here, she is completely trustworthy. But her not having the access protects her just as much as it protects the rest of the organization. Because she doesn't have to worry about any risks to that access through her credentials.

In this case, Waymo has a design server, and Anthony clearly didn't work on its contents, because he didn't already have the software to access that server on his computer. Therefore, regardless of the source of the IP (which isn't his; he sold it), he really shouldn't have ever been given access to it. When the server was first spun up, access should've been given to... the people who would be using it, and nobody else.

Of course, if at some point he did need to access those files, he could ask, and be granted that access. And that doesn't need to be a difficult process (granting access to things takes an IT person a minute or two), but there is now an additional person that knows that user has been recently added to access. Even informally, this is a pretty good security measure, because in most cases, it should be fairly obvious why someone needs something. And if it's not obvious, and maybe that employee has been, as the article says, talking about leaving the company and replicating the technology elsewhere... suddenly that IT person maybe has a reason to mention the issue up the chain.


Where I work, I can make changes to what I'm working on that break things far away, from time to time. In well designed systems this doesn't happen too often, but you might be surprised sometimes how a seemingly insignificant change can make a system fail somewhere else because someone made an assumption that is no longer true.

So I can make a change, see that it breaks some test somewhere else (failed CI test), and peer into the diffs on the opposite side of the code base to decide what to do about that. It's proven quite useful, from time to time. I've seen weird problems like hitting pessimal access patterns for software developed halfway around the world.

> Anthony clearly didn't work on its contents, because he didn't already have the software to access that server on his computer.

That doesn't follow. I work on a source code repository every day, but I'd need special software to exfiltrate a copy. Same with the various design documents and things I work with—all stored in a private cloud. If I wanted to exfiltrate it I'd get a script to do it automatically.

Remember, this wasn't just the guy who started the autonomous driver project. He wasn't just the "department head". He's an industrial engineer who founded a Lidar startup. The idea that he should be denied access to Lidar design documents is patently absurd.


Apple implements very restrictive internal secrecy, yet leaks haven't been stopped. So what's the point? Live in a police state, still get leaks, or live with freedom and benefits, and get leaks. I'll take the latter.


There's a huge difference between security of user data and security of company secrets. Having access to the latter is not unreasonable; access to the former should be heavily restricted and closely monitored.


And it is.


There's a huge difference between employees being able to see our source repository, and employees being able to see user data or secrets. Generally, Google values its openness internally, and most employees can see the source code of most projects. In fact, that's how large scale refactorings and bug fixes work: I find a bug, I might have to fix the library in question, but also find all callers in the entire code base and fix them as well.

In general, when I develop, I virtually have all of Google's code base 'checked out' and can edit any of millions of files in my snapshot of the world. I don't need to check out multiple silo'ed repositories or beg for access, diving into any code in the universe has almost zero transactional overhead, it's all mapped into one giant filesystem. (https://plus.google.com/+MattUebel/posts/4dQBDF5CmdX)

On the other hand, production systems are heavily walled off from the corporate network. For all intents and purposes, the corp network your desktop is plugged into is "untrusted".

Nice try though.


I'd like to see you enter a Google data center and try to steal data.


> is it also possible for Uber to sue the supplier for leaking their confidential data back to Google?

My first reaction to the Waymo announcement was also about this. "Apparently inadvertently" sounds like a blatant lie.


Chances are Google shares patent royalties with companies that help them build the technology they use, so the supplier was likely ensuring their continued royalties by informing on the infringement.


Is it just me or is the font on medium.com super blurry?

Also, how did google know he downloaded all the stuff through some software?


It shouldn't be hard. On Box, you can have it notify you every time someone downloads any file in a given directory or even specific files. I assume they have at least comparable infosec, if not explicit tripwires/honeypots in files that should never be downloaded.
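A crude version of such a tripwire fits in a few lines. The threshold and the event hook are hypothetical; hosted services like Box surface this via notifications/webhooks rather than code you write:

```python
# Sketch of a download tripwire: flag any account that pulls an unusual
# volume of files from a sensitive directory. The threshold and the event
# source are hypothetical; hosted services expose this via webhooks.
from collections import defaultdict

ALERT_BYTES = 1 * 1024**3  # flag anyone pulling more than ~1 GiB

class Tripwire:
    def __init__(self, threshold: int = ALERT_BYTES):
        self.threshold = threshold
        self.totals = defaultdict(int)  # user -> total bytes downloaded
        self.alerts = []

    def on_download(self, user: str, path: str, size: int) -> None:
        self.totals[user] += size
        if self.totals[user] > self.threshold:
            self.alerts.append((user, self.totals[user]))

tw = Tripwire()
for _ in range(10):  # someone bulk-downloading gigabytes of design files
    tw.on_download("suspect", "/designs/lidar/board.brd", 1024**3)
print(tw.alerts[0][0])  # prints: suspect
```

Per-file notifications on honeypot documents work the same way, just with a threshold of one access.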


I'm sure it doesn't sit well with Google that a group of guys left and started a self-driving company that was very quickly acquired for a ton of money. Most likely Google wants a piece of the action, since they practically deserve it if their IP is being used to help the company build its technology.


Why would it? Imagine being a Google/Alphabet shareholder.


Sure, that's what Google's lawyers claim.


> stole 14,000 highly confidential documents from Google before departing to start his own company

Did they lock down access or were they all on google docs?


If you read the article:

  We found that six weeks before his resignation this former 
  employee, Anthony Levandowski, downloaded over 14,000 
  highly confidential and proprietary design files for 
  Waymo’s various hardware systems, including designs of 
  Waymo’s LiDAR and circuit board. To gain access to Waymo’s 
  design server, Mr. Levandowski searched for and installed 
  specialized software onto his company-issued laptop. Once 
  inside, he downloaded 9.7 GB of Waymo’s highly 
  confidential files and trade secrets, including 
  blueprints, design files and testing documentation. Then 
  he connected an external drive to the laptop. Mr. 
  Levandowski then wiped and reformatted the laptop in an 
  attempt to erase forensic fingerprints.


Yep, I should have read all the way to the end. And now I can't delete it, so my idiocy is there for all to see.


Doesn't sound like anyone stole anything.

Waymo/Google still had the documents, and copying isn't theft.

Right?


In fact, copying can be theft.


Copying illicit data constitutes copyright infringement at best, or industrial espionage at worst.

Theft is a different type of crime involving actual things.


Exactly! Btw, the specific term of art for "actual things" that are subject to theft is "rivalrous" (adj): a good whose consumption by one consumer prevents simultaneous consumption by other consumers.

You can commit theft of a rivalrous good, like an apple or a computer. You cannot, however, do the same to a file or a song or an idea. I wish more people understood this difference. Society would be better for it.


I'm not sure how society would benefit from a better understanding of a technical difference between these two crimes.


I take it as axiomatic that understanding the truth is better than believing in a falsehood. The latter almost always leads to gaps that can be exploited for potentially nefarious purposes.

In this particular case, the reason that theft is wrong (because it deprives another person) is very different than the reason that copyright infringement is illegal (because the founders wanted to encourage invention by granting limited monopolies). If people believe that copyright infringement is theft, however, then powerful corporations like Disney can convince them that copyrights should last indefinitely.


The word "theft" can have different meanings depending on context. Most people think in ethical terms rather than legal.


Well then I guess there exists a difference in ethics. Theft involves deprivation of the specific thing taken. So deleting the original data after copying it might constitute theft.


> Theft involves deprivation

Not necessarily. Theft is simply taking something you shouldn't. That includes taking by duplication. Or perhaps you'd like to think of it as the deprivation of potential profits.


>which makes the theft or misappropriation of a trade secret a federal crime.

https://en.wikipedia.org/wiki/Trade_secret


Wrong.

There's no way in hell an employee contract would ever let someone copy documents and use them after leaving the company. The intellectual property belongs to Google, not the individual creators and certainly not to any other employee.


I had an interview there where the manager asked me to leave my laptop behind and go for a walk. I was hesitant after hearing stories of Uber conducting electronic espionage against its competitors. They could easily bypass Macbook security with a USB device (I had heard of that on HN too) so I was very nervous to leave my laptop behind and noted its exact orientation and position on the table. Sure enough when I returned my laptop had changed both position and orientation, but only enough to tell if you had specifically memorized it. I could be paranoid. They could have simply moved things on the desk. But anyway, people who are paranoid like me are advised not to take their laptops into Uber interviews. They are capable of just about anything, or so thinks my now paranoid self.


> I could be paranoid. They could have simply moved things on the desk. But anyway, people who are paranoid like me are advised not to take their laptops into Uber interviews. They are capable of just about anything, or so thinks my now paranoid self.

I would be cautious about discrediting intuition as paranoia.


1. Why bring your own laptop to an interview?

2. Isn't this scenario incredibly suspicious?


I have been asked to bring my laptop to an interview so that I could code in a development environment I was already comfortable with. I not only found it reasonable but also appreciated it.


Ah, that makes sense. They could still use a cloud-based coding pad like most companies do, though.


I'm just gonna say: the fact that they told you to go for a walk and leave your laptop is pretty creepy.


Sounds more normal than asking you to take your laptop on a walk.


As a security-minded person, I would be suspicious under any circumstance where someone unfamiliar distanced me from my devices.


This smells like FUD.

> They could have simply moved things on the desk.

Is there any evidence suggesting something else happened?


Did you check the logs to see whether it had been opened in the meantime?


No, how do you do that?


Well, logs are quite easily accessible on any OS.

Memory forensics would have 100% given you evidence that it was accessed and exactly what was done. There are plenty of guides on how to forensically dump memory but that depends on the OS.

But I'm also in the "this is FUD" camp. I highly doubt any company would do something as risky as mirroring your machine's data merely for an interview. That would be highly illegal and would damage the company's reputation for a relatively minor benefit.

Especially when interviewing a software dev, where any infosec person would assume they're dealing with a sophisticated target and a high chance of being detected, meaning they would have to use very careful cloning techniques.

It would be much easier to get you on staff on a company machine and temporarily monitor you closely after the fact. There are plenty of ways to thoroughly vet someone beforehand without taking such a risk.


I think you might be right, but I had reason to worry given Uber's history with competitors, esp. industrial espionage. They were actively competing with the company I worked for at the time. The laptop was my own, but I have no reason to assume they knew that. I did have some material of my own that I considered valuable to their business, so I was concerned primarily about that (I had done things that pushed the envelope quite a bit in a very specialized area they were just getting into). All of those factors made me paranoid. If Uber were a good corporate citizen with no history of actively and illegally spying on competitors, I would not have been so freaked out after that episode. Call me paranoid... what's new.


Thanks for the info. I'll know not to leave my laptop behind in any interviews.


A better strategy is just to not interview at uber.


I've never felt the need to bring my own laptop to a job interview. If the company asked me to bring my own, I'd consider that strange or even suspicious.


It'd be really weird for them to request you to bring a laptop, but sometimes when a company flies you out for an onsite and your return flight is that night, your options are either 1) leave your laptop with the hotel as luggage (pretty bad) or 2) bring your laptop to the interview. There's no real reason for them to know you have your laptop with you however.


they did not offer to provide me with a laptop, and the interview format was such that I could google things and share some of my personal projects, etc.


That is weird. What kind of tech company doesn't have an old laptop or a Chromebook for such interview format? What if you don't have a personal laptop? I'm certainly not bringing my work laptop to a job interview...


The kind that's also too cheap to buy jackets for all their SREs, apparently.

This sort of penny-wise-but-pound-foolish shit is depressing, but not nearly as rare as I wish it was.


You didn't ask why?


as a candidate you're trying to please and build trust, so naturally I wouldn't raise questions that would indicate a lack of trust


Piece of unsolicited advice for the future: don't let anyone pressure you into doing anything you don't feel comfortable with. Ever.

If a place is worth working for they're not going to judge you for feeling more comfortable keeping your stuff with you.


What about the TSA asking for your cell phone at the airport? Or the whole border-crossing incidents?


Recruiter A to recruiter B: "Well, I hoped this one would be sharp enough. Guess we need to search harder for people who won't be careless with our sensitive company stuff simply because someone told them to leave their laptop behind and take a walk..."


it wasn't a company laptop


> They could easily bypass Macbook security with a USB device

Isn't this only possible if the laptop is unlocked?


It's not possible unless you have zero-days against the USB drivers or firmware on your laptop, in which case being logged in or not doesn't really matter.


Proof that I'm paranoid. My greater point, however, is that they have done some really shady stuff, and stealing competitors' IP is part of their culture, so their behavior itself promotes and justifies paranoia on my part and on the part of anyone looking to work for or do business with Uber.

Reputation is everything.


It's pretty stupid to get paranoid over this. As GP pointed out, if you were logged out then it's very unlikely that they got access.


I think it's pretty stupid to consider any consumer device secure enough. I did hear on HN, some time before that interview, that some USB device could be used to bypass the lock screen, which was the basis for my worrying. Now, some in this thread are saying it is possible (or at least was at the time) while others say it is not (and was not). Even an educated sample of tech folks can't make up their minds, so there is (or at least was) room for justified concern... no?


I think the consensus is that it's possible, but expensive. So like, nation-state espionage yes, corporate espionage no. But anyone who actually knows anything won't be talking about it on HN ;)


I must've missed that story, when did Uber steal competitor IP?

(Except this one, of course. To be fair it was Otto before the acquisition)


If the machine doesn't have full hard drive encryption, you can just boot from USB/CD/FireWire and have access to everything.


FDE has been the default for ages.


I can install a hardware backdoor in a MacBook in under 3 minutes (with prep, obviously).

Even you could do it: https://www.youtube.com/watch?v=qGPGOoJn54E Hint: there are internal USB buses in the MacBook; you could hijack one for something like a Rubber Ducky plus management circuitry that triggers only when the laptop has been powered on for a long period but not touched (no IMU/keyboard/touch events, dimmed screen).


So, anyone up for getting an Uber interview and using their laptop as a honeypot?


Why didn't you just shut it off first? You had FDE turned on, right?



That's pretty cool but won't do much against macOS.


A really critical thing that hasn't got much attention is that shortly before leaving Waymo, Levandowski had a meeting with senior Uber execs(!). The day after the meeting, he formed 280 Systems which became Otto.

The implication in the filing is that Uber planned this with Levandowski, and he only created Otto as a plausible corporate vehicle for developing the LiDAR technology before Uber acquired them. Given what we know about Uber and the assertions in the complaint, this sounds entirely plausible, maybe even likely.

https://drive.google.com/file/d/0B7dzPLynxaXuQjY3dkllZ2ZKb0k...


Paragraph 48 if anyone is wondering.


In related news, Tesla is accusing ex-autopilot director Sterling Anderson of stealing code from Tesla before starting up Aurora with Chris Urmson (the former CTO of Alphabet's self driving car program):

https://techcrunch.com/2017/01/26/tesla-sues-ex-autopilot-di...


> searched for and installed specialized software onto his company-issued laptop

That could mean he downloaded an SFTP client like Cyberduck. He could have searched the internet for a client and then installed it. It doesn't say he did not have auth.

Imagine a Google security engineer being deposed for this lawsuit.

Lawyer: "Show me on the MacBook how he downloaded the files"

Engineer: "Well, he used Cyberduck"

Lawyer: "Is that part of the Mac?"

Engineer: "No, he'd have to download it separately"

Lawyer: "So, he searched for and installed specialized software onto his company-issued laptop?"

Engineer: "Um, sure"

Lawyer: "Thank you, that's all the questions I had"


> That could mean he downloaded an SFTP client like Cyberduck. He could have searched the internet for a client and then installed it. It doesn't say he did not have auth.

They weren't trying to claim he hacked in. They're making the case he went out of his way to get his hands on these documents, and building a timeline that suggests why he went to that trouble.


It could also be an e-discovery tool like Nuix. If you want to find all documents containing X among millions of docs, that's the sort of tool I would use (I work in computer forensics). Edit: and 9.7 GB of data, assuming it's documents and not "just" a lot of CAD, is a lot of docs.


This is true, but it doesn't negate the issue. If you need some sort of client to gain access to the design server, and he didn't already have it, it very likely means that he didn't work on those files normally, and hence, really probably shouldn't have had access to them.

He may not have installed "hacking tools" or anything like that, but he did specifically take action to access files he didn't normally use as part of his job. Which is, I think, all that this post is claiming.


Interesting. I vividly remember a commenter here on a thread about Uber's acquisition of Otto. The user said based on the timeline and filings, it seemed like Otto hadn't really accomplished anything yet, and was probably founded purely to be acquired by Uber. I wonder if there's even more here...


Does anyone else remember this New Yorker profile [1] of Anthony Levandowski and self driving cars? Way back from 2013, when this tech was still novel. Google let Levandowski run the show for this piece -- his name is mentioned 57 times in the article. Goes to show how important and trusted he was in Google's universe.

[1] http://www.newyorker.com/magazine/2013/11/25/auto-correct


Sure. I met him when he was still a student at UC Berkeley. He was the one who built the self-driving, self-balancing motorcycle for the 2005 DARPA Grand Challenge. It didn't navigate that well or get all that far, but it was really cool.


Anthony Levandowski was targeted back in 2014 by protesters against Google's military work with Boston Dynamics.

https://arstechnica.com/business/2014/01/protestors-show-up-...

http://www.berkeleyside.com/2014/01/22/activists-target-goog...


Maybe I have a selective memory as a former Zynga employee, but these "stolen documents" lawsuits at high-profile tech companies have generally turned out to be pretty factual. Easy to prove, and hard to fake.


And lawsuits like this generally don't get filed unless it's near slam-dunk considering the burden of proof is high on stolen electronic documents.


Consider that even with logging off, journaling file systems, UserAssist keys, device connection logs, and prefetch (or your OS's equivalent) are all huge tranches of data if you're looking at a system shortly after an event. Ask me 6 months later, probably not. Give me a system that hasn't even rebooted, has a heap of RAM, and hasn't been used much since, and 6 weeks is fine: not ideal, but doable.
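As a toy illustration of one of those artifact sources, here's a hedged sketch that pulls USB mass-storage attach events out of a Linux-style syslog. The log format and function names are assumptions for illustration; real forensic tooling parses far more sources (registry hives, prefetch, journals, etc.):

```python
import re

# Matches Linux kernel log lines like:
#   Feb 23 10:01:02 host kernel: usb-storage 2-1:1.0: USB Mass Storage device detected
USB_STORAGE = re.compile(
    r"(?P<ts>\w{3}\s+\d+\s+[\d:]+).*usb-storage.*USB Mass Storage device detected")

def storage_events(log_lines):
    """Yield the timestamp of each USB mass-storage attach event."""
    for line in log_lines:
        m = USB_STORAGE.search(line)
        if m:
            yield m.group("ts")

sample = [
    "Feb 23 10:01:02 host kernel: usb 2-1: new high-speed USB device number 4",
    "Feb 23 10:01:02 host kernel: usb-storage 2-1:1.0: USB Mass Storage device detected",
]
print(list(storage_events(sample)))  # ['Feb 23 10:01:02']
```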


Journaling filesystems don't "journal" all the activity in perpetuity. They typically just journal the changes until they're committed to disk, usually for less than a second.

See http://www.nobius.org/~dbg/practical-file-system-design.pdf, section 7.2 "How Does Journaling Work?".


I always was incredibly surprised at how quickly Uber had working self-driving cars (with the required, highly specialized hardware). Guess this explains it.


Because Uber hired experienced people from CMU (who probably also brought gigabytes of files with them).


Uber hired an entire experienced department from CMU.


Uber acquired Otto around 08/2016. I don't think this explains it.


The lawsuit from Waymo specifically calls out good LiDAR as a component that Uber had not successfully developed prior to acquiring Otto.


Who knows, maybe they had somebody else from the Google self-driving car project steal self-driving car secrets earlier. Based on what this Levandowski guy did, the industrial espionage may have gone unnoticed. I'm wondering if Waymo will require Uber to reveal the schematics of their self-driving car project as part of the lawsuit.


I have doubts that what was stolen was actually any of the secret sauce. An interface board for a LiDAR unit is probably one of the simplest things on the list.

The actual self driving software, and more importantly, all of the collected data from the waymo fleet would have been the key.


Not so much an interface board as a whole new tested design for a LIDAR unit, including a unique patented optics setup and laser driver circuit, according to the complaint. Also testing, manufacturing, and characterization procedures and results, and information on suppliers for the required parts. The PCB was just the component whose accidental disclosure led them to conclude that Uber and Otto were using the stolen design. Since the PCB apparently dictates the position and orientation of the laser diodes and sensors, presumably it would only be useful if they copied the whole thing.


Interesting. Is the specific Lidar unit really that big a differentiator? I understand they aren't cheap or simple, but it seems odd that each self driving car company would want to design their own. I would guess you would rather have some healthy ecosystem of suppliers...Velodyne, etc.


> it seems odd that each self driving car company would want to design their own

They don't want to, so if they do, it's because they had to.

Practically all of the current sensor suites are expensive, bulky, and power hungry. If you want them on lots of cars, you need to reduce all 3 of those characteristics dramatically.


Frankly, I wish this "patented" (does that word mean anything at all these days?) tech would make its way into China and give all of us hackers cheap LiDARs to play with.


Uber launched their beta in Pittsburgh just before Otto acquisition. This doesn't explain it. It explains how they quickly wanted to build their own lidar.


Oh come on. CMU had its own DARPA contenders, and where on earth is this grand institution, you ask? "Pittsburgh"!


Yes. What does this have to do with Levandowski?


What kind of employee would download 14K files to a personal drive right before quitting? It is trivially easy to watch what files get copied over to external drives.

I think you can follow the money trail here and find some answers for sure. Now if Uber/Otto has a clause that prohibits employees from bringing in confidential data from previous companies, how can they be held liable? Does Google have to prove that those stolen documents were actually used in Uber designs?


We call them "bad leavers" in the forensics industry. There are enough of them to keep us in business.


A supplier of Google's received the file from Uber, and that supplier forwarded it to Google. This means Uber sent the file out to a supplier to try to get parts made. I think that's proof enough.

Btw, that's one very sharp-eyed engineer, whoever that is...


>Btw that's 1 very sharp eyed engineer, whoever that is...

It sounds like this was entirely accidental on the supplier's part.


Lots of juice here.

Complaint: https://drive.google.com/file/d/0B7dzPLynxaXuQjY3dkllZ2ZKb0k...

>Waymo was recently – and apparently inadvertently – copied on an email from one of its LiDAR component vendors.

Is this going to be a legal test of that annoying lawyer email footer language?

>This message contains information from xxxxxx that may be confidential and privileged. If you are not an intended recipient, please refrain from any disclosure, copying, distribution, or use of this information and note that such actions are prohibited. If you have received this information in error, please notify the sender immediately by telephone or by replying to this transmission.

Ha! More legalese BS that never holds up.

> Otto launched publicly in May 2016, and was quickly acquired by Uber in August 2016 for $680 million.

The fact pattern here is going to be absolutely brutal for Uber. A non-technical judge is going to see the allegation: ex-google employee downloads technical documents in December 2015, launches a company 5 months later in May 2016, and is bought for $680M (later speculated to be $1B+) for all its technical accomplishments. How much fundamental research did they do in the 3 months between May-16 and August-16?!?!? Or was it just to buy the stolen IP that google had developed over 7 years?!? Brutal for Uber!

--

A public company recently settled a similar lawsuit (competitor hires exec, exec is proven to have downloaded documents) for $130M on much smaller numbers. And the defendant was run through the legal wringer first.

http://www.geekwire.com/2016/zillow-realtor-com-operator-mov...

Expect Uber spankage, bigly.

> shortly after Mr. Levandowski received his final multi-million dollar payment from Google

Funny because of all the recent press that Google paid autonomous driving talent too much that they left!

>Infringement of Patent No. 9,368,936 (Against All Defendants)

Real nasty. If a trade secrets lawsuit is an arrow, throwing in a patent infringement claim too, is poison tipped and barbed!

This is some good "old skool Google" where they used to show broad competence across many domains; in this case legal.


Presumably you're in Arizona at the moment, Mr. Levandowski, it's close to the border, run for it!

We'll take a moment to remember the salad days, when you were just a crazy college kid who showed up at the Darpa Grand Challenge with a self driving motorcycle:

https://youtu.be/XOgkNh_IPjU


This is going to be interesting to watch. Alphabet just:

- went nuclear on Uber/Otto

- revealed what they track internally to all their employees


"- revealed what they track internally to all their employees "

Googlers already know this. Like it's not even close to being close to being close to being secret.


When your company stores very private info on billions of people, and is actively attacked (sometimes successfully) by the top intelligence agencies of the world[1][2], you have to be extremely careful, and monitor everything.

[1] https://en.wikipedia.org/wiki/Operation_Aurora

[2] https://www.newyorker.com/news/amy-davidson/tech-companies-s...


Some people might see an irony in your comment.

Economist Joseph Stiglitz wrote in 2009 "...banks that are too big to fail are too big to exist..."

My theory is that "too big to exist" is now true of basically all the tech giants. Generally, everyone who knows the kind of tracking these companies do (internal and external) agrees this is true, except those who benefit from the companies' continued existence, e.g. employees, investors, shareholders.


You conveniently forgot consumers.


On the other hand, imagine if the data collection never stops and one of the big companies gets hacked, faces a serious competitive threat that makes it more likely to sell its data, starts going out of business, needs to share its data in return for government favors, or needs to share data to get access to foreign markets, etc. I have a feeling this venerable "consumer" is going to learn a painful lesson one of these days.


I think it's reasonable to be suspicious but what they described sounds mostly feasible without extra steps of tracking.

What evidence did they present and how could it be tracked?

- downloaded 9.7 GB of Waymo data -> server logs of which files were accessed and downloaded by which user

- searched for special software -> he used Google while logged into a work account, so they just looked up that work account's search history

- connected an external hard drive and wiped data -> short of automatic backups or something, this suggests there is software explicitly for tracking when data is copied, where, and how much

Most of this, besides the external-hard-drive part, can be done by any employer who owns your work Gmail account. What really should be alarming is how easy it is for your employer to get lots of data on you, even if they aren't some tech giant.
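The server-log side of that list can be sketched in a few lines. This is purely illustrative; the event schema and threshold are assumptions, not anything from the complaint:

```python
from collections import defaultdict

def flag_bulk_downloaders(events, threshold_bytes=1 * 10**9):
    """events: iterable of (user, bytes_downloaded) tuples.
    Returns users whose total download volume meets the threshold."""
    totals = defaultdict(int)
    for user, nbytes in events:
        totals[user] += nbytes
    return {u: b for u, b in totals.items() if b >= threshold_bytes}

events = [
    ("alice", 40_000_000),
    ("mallory", 5_200_000_000),  # bulk pull from the design server
    ("mallory", 4_500_000_000),  # ~9.7 GB total
    ("bob", 12_000_000),
]
print(flag_bulk_downloaders(events))  # {'mallory': 9700000000}
```

In practice you'd window this by time and baseline it per user rather than using a fixed cutoff, but the idea is the same: bulk exfiltration stands out starkly in access logs.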


> Most of this, besides the external-hard-drive part, can be done by any employer who owns your work Gmail account.

Actually, I think that would be the easiest, as I would assume any external USB device connected to a computer would automatically send an alert to the security team, given how easily such devices can infect your computer with malware. I'm not sure my company has something like that, but we have posters everywhere telling people never to plug external USB devices into our computers, so I would not be surprised.

>What really should be alarming is how easy it is for your employer to get lots of data on you even if they aren't some tech giant.

Right? So much for the Principle of least privilege.


Google employees are probably aware they are being tracked. Keeping such a thing a secret would likely result in more leaks, not fewer.


Not only is it very well known internally, Google has even open sourced some of the tools that are used for that purpose: https://github.com/google/grr


Still, how can they work in a basically zero-trust environment? They can't hope anyone reasonable will come up with a great idea and willingly share it with them.

