Frankly I feel that for this kind of investigation (car crashes and accidents) the manufacturer should be required to hand over all data relevant to the investigation. On-device logs are probably the most relevant data you could get in these cases.
That this lab decided to try to decipher the data, and found troves of it that weren't known to exist before, speaks to the state of the relationship.
They didn't trust Tesla to hand everything over.
In my humble opinion, any region should force all manufacturers of cars that will be sold in that region to hand over data (or methods for extracting the data) in cases like these, lest their license to sell their product be rescinded.
I’d go further, and say that the data should be exportable in a predefined format (ideally an open standard), so that investigators can use 3rd party tools to investigate any make of ‘smart car’.
Then you obtain a warrant for that data, as intended by due process of law. If the owner still refuses to hand it over, you have them on obstruction of justice.
I'd go further still, the telemetry data should be standardised the same way OBD-II interfaces are. Open source the tech, or at least disclose it to researchers upon request.
So, you think it’s OK if the car you own spies on you, using electricity you pay for?
I’m rephrasing it that way to show that, depending on details this might be close to what Apple intended to do w.r.t. child pornography, something ‘the internet’ wasn’t a fan of, to say it mildly.
There’s a difference here in that there’s an ‘investigation’, but you’re also talking of ‘car crashes and accidents’, so this could get reasonably close (we already have cars that call 911 after a crash; should they also report it when they hear accident-like sounds that co-occur with unexpected car movements?). Who’s going to decide whether an accident is serious enough to warrant this? Should car owners have the right to disable this kind of recording?
The car is already spying on you. I think the question is: if data is available to the manufacturer, should it be available to investigators?
I think it is reasonable. Tesla uses this data to protect itself (for example by showing that Autopilot wasn't in use during an accident). It is not really about user privacy anymore; it is about corporate interests, and I think road safety is more important. That's why we have all these regulations.
The car is already spying on you. I think the question is: It shouldn't be spying on you.
The data belongs to the car owner. If the owner wants to disable data collection, so be it. If anyone else wants that data, they can ask the owner for the decryption key and access to their car to make a copy.
I don't think it's even remotely reasonable for devices that we own to betray us. My car should protect me, not the manufacturer.
This is not what apple suggested.
Apple suggested preemptively notifying the government of "illegal" material on a user's device.
This is about facilitating an ongoing specific investigation after a car accident.
> this might be close to what Apple intended to do w.r.t. child pornography, something ‘the internet’ wasn’t a fan of, to say it mildly.
Not to completely disregard your point, but it's not the same. After seeing a few brain bits splattered across windows and chairs/dashboards, stray limbs... road accidents are in another league entirely. We sort of need accidents to happen, so that regulation of technology gets written with blood as its ink/toner.
> In my humble opinion, any region should force all manufacturer of cars that will be sold in that region to hand over data (or methods by extracting the data)
This is precisely what is required from Tesla in China.
I just read some Dutch news sources to get some more context.
Some things to know:
Tesla hands over all requested data in case of an investigation. But only the requested data. Because we now know what data is stored, more data can and will be requested in the future.
The crash was caused by the driver because he thought lane assistance was turned on but only adaptive cruise control was turned on. This caused the vehicle to move to another lane.
So the question about this accident is: how obvious is it for the driver what is going on and what assistance to expect.
> So the question about this accident is: how obvious is it for the driver what is going on and what assistance to expect.
- you have to deliberately turn the full assistance on/off
- there are audio cues when it turns on, and whilst it’s on, and when it turns off
- the visuals on the screen are different when it’s on vs. off
- the steering wheel feels very different when it’s on
- when it’s on, there are regular cues to generate torque through the wheel to show you’re paying attention
- the overall concept is that even when it’s on, you should keep your hands on the wheel and pay attention - though this is obviously subject to human failings
TBF, I’ve done something similar once or twice (only turned on cruise control when I thought I’d turned on full assist), but if you’ve got your hands on the wheel it’s pretty natural and quick to correct and then realise your mistake.
In the 2.5 years I have been driving my Model 3, it has happened to me several times that Autosteer got disabled (likely due to steering wheel resistance) and I only noticed it after a while. The visual cues are extremely subtle, the auditory cues easily get lost in road noise or music, and when you are driving on straight stretches of smooth highway you don’t constantly feel feedback through the steering wheel. Of course the person in the driving seat always remains responsible, but it’s not hard to imagine how someone might drift out of their lane like this.
Not being facetious, but aren’t audio, visual (=the dashboard), and touch (=control forces) the only ways a human is able to interact with a car?
They could make the engagement or not of FSD more obvious in the UI, and maybe have spoken alerts? (Although at the moment there are so many audio alerts in a Tesla, I think it’s approaching saturation - the point where if everything is an alarm, then nothing is an alarm…)
The problem with the Model 3 and Y is the lack of a screen in front of the driver. All of the ACC, lane assist, and self-parking (can’t be engaged from the steering wheel) indicators in my Jeep are displayed right in front of me. I don’t have to look away to see what’s turned on. My wife’s BMW is similar, and it also has a HUD that displays everything. The driver never has to turn away from the road to figure out what’s happening. Both cars also have audio cues associated with the various capabilities.
From similar discussions, this seems to be binary - compared to a 'normal' dashboard (which while central isn't in the driver's eye-line whilst watching the road, is often dark or lacking in contrast, and can be obscured by the wheel depending on the driver's size and seat settings) people either love or hate the M3/Y's screen location. I get on fine with it (and flicking my eyes there is arguably better than my other cars' 'traditional' dash location) but others don't.
Agree that a well-designed HUD might be the best of all worlds; interesting how very few cars implement this, despite it having been technologically straightforward for decades.
Audio cues in the form of a one-shot bell are completely useless for important information. A driver is naturally in a noisy environment and too busy paying attention to other things - and when they have time to worry about what the sound was, it's gone. The same applies to light UI changes in a display the driver rarely looks at - especially when it isn't even in front of them!
That's why you usually use pretty strong color-coded indicators in conventional dashboards to convey mode - hard to miss unlike vague UI differences in an already low-contrast UI, and they can be seen any time unlike an audio cue. Audio cues in such cars usually just mean "recheck the dashboard, a new lamp came on".
The only better thing I can think of (outside a continuous warning beep) would be a HUD display like other high-end cars.
It wasn't a request for "all and any data". From the article:
> Tesla however only supplies a specific subset of signals, only the ones requested, for a specific timeframe, whereas the log files contain all the recorded signals
Tesla provided exactly what was requested and nothing more. Now that the government knows what is more in the data they can request that too, even if it's irrelevant to the investigations.
I am not sure. Maybe the law states you can only investigate what you need to investigate. Climate control and music volume settings might be irrelevant.
It's not a person requesting data about themselves under GDPR; it's the Dutch prosecutors and courts requesting data from Tesla for a criminal investigation. "They" is not the driver but the Dutch authorities. GDPR simply is not relevant at all for this case (e.g. GDPR article 2.d explicitly states that GDPR does not apply to activities like these); general rules of criminal process and court orders apply.
Yes, that’s true. I was more thinking about the general research they were doing into establishing what data various Tesla models record. i.e. purchase a vehicle, drive it around (perhaps crash it if the research grant can stretch to that), submit a SAR. Any discrepancy between what personal data Tesla provide and what they found when they decrypted it themselves feels like the main story here to me.
would driving data/telemetry be categorised as personal data under GDPR? I assume cars could have multiple drivers so unless there's a clear way of deriving the person from the data itself it might be classed as anonymous and therefore not in scope
Yes it likely would be, if it’s timestamped log data. Even if there are multiple drivers, the data recorded still relates “to an identified or identifiable individual”. Simply knowing who was driving the car when, which shouldn’t be difficult if it’s your car, makes the specific individual identifiable.
I wish there were more information about how they were able to "decrypt" the data. I put that in quotes because oftentimes journalists or laypeople don't differentiate between a file format that is just compressed or obfuscated in some way, and true encryption.
After all, it would be trivial for Tesla to provide essentially unbreakable encryption: just encode all the stored data with a public key where only Tesla has the corresponding private key. But if NFI was able to decrypt, they obviously don't do that, so I'm curious what manner of "encryption" is used in the car.
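To illustrate the distinction the comment is drawing, here is a minimal sketch (not Tesla's actual format; the log fields are made up): data that is merely compressed or encoded looks opaque, but anyone who identifies the format can read it back with no secret at all, whereas properly encrypted data is useless without the key.

```python
import base64
import zlib


def store_obfuscated(record: str) -> bytes:
    """Compress and encode a record. This looks unreadable,
    but it is NOT encryption: there is no key involved."""
    return base64.b85encode(zlib.compress(record.encode()))


def read_obfuscated(blob: bytes) -> str:
    """Recover the record. No key needed -- only knowledge
    of the format (here: base85 over zlib)."""
    return zlib.decompress(base64.b85decode(blob)).decode()


# Hypothetical log line, loosely modeled on the signals the
# article says Tesla records (speed, pedal position, steering angle).
log = "speed=87;accel_pedal=0.31;steering_angle=-2.4"
blob = store_obfuscated(log)
assert blob != log.encode()          # looks opaque on disk...
assert read_obfuscated(blob) == log  # ...but trivially recoverable
```

If the NFI could recover the data without Tesla's cooperation, it was in this recoverable category; under the public-key scheme the parent describes, no amount of format reverse-engineering would have helped.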
The word “decrypt” means “make (a coded or unclear message) intelligible”, which is more broad than the IT-specific jargon of using cipher algorithms. It’s perfectly fine to use “decrypt” in the general sense in an article meant for general people to read.
The problem with this, then, is it confuses topics that are important for lay people to differentiate.
Take the current idiocy coming from the governor of Missouri, trying to prosecute a journalist for "hacking" for the crime of ... reading HTML. While the governor used the word "decoding", not decrypting, he accused the journalist of going through some complicated intentional procedure to access private data, which is of course bullshit.
Even in the general sense, "decrypting" something strongly implies the original author had the intention of keeping it private and secure. Was this the case with Tesla's data? It seems like not, because it would have been trivial for them to implement unbreakable encryption. Or was this just a case where someone else figured out their file format?
I’m assuming Tesla don’t give you the source code to their cars. Does anyone? I’d love to have a car that can be taken apart and put back together — the engine and the UI. Not because I have the skill or shop needed to do maintenance, but because at least I stand a chance of paying someone else to.
Right now it feels like I’m dealing with a bunch of pirates. I either pay $800 to hack an HDMI cable into my Audi’s virtual cockpit just so I can use a map that’s not 6 years out of date, or $500 to the dealership to install the latest map update pack. Argh!
I don't see an issue with source code, or intellectual property, or similar; it is all about recorded data. The whole point is that Tesla, requested according to the law to provide the recorded data (decrypted), provided only a (small) subset of it, and the guys from the NFI managed to decrypt the whole lot, finding out that Tesla had omitted data useful for establishing the causes of the crash.
It also seems that, coincidentally, the omitted data leads one to believe that the root cause was Tesla's Autopilot making the car tailgate the one in front of it.
Imagine that an airplane crashes and the decoding of the flight recorder is carried out by the airplane manufacturer, which omits the part where (say) the tail rudder didn't work as it should have.
No, sadly there is no open-source car available (I think some designs exist, coming from universities, but nothing you could buy anywhere). I suppose consumers, tuners and repair shops would love it. But I do not see any big company ever going in that direction.
It would be interesting to see how it could avoid the patent madness. I assume EVERYTHING with cars is patented.
I've actually had the intention of building a company focused on manufacturing and selling open-source household appliances, like washing machines, fridges etc. Cars would be the endgame for a company like this.
Still in the early planning stages, just creating a good toaster that's repairable, open etc. is an interesting undertaking if you want everything above board.
Which, interestingly enough, was forced on IBM in the form of a right to repair/interface provision in an anti-trust consent decree targeting mainframes.
> The NFI said the decrypted data showed Tesla vehicles store information about the operation of its driver assistance system, known as Autopilot. The vehicles also record speed, accelerator pedal position, steering wheel angle and brake usage
Aren’t these the same things twitter.com/greentheonly publishes very often?
A lot of questions about responsibility come from this case.
If autopilot was following the car too close would the driver be responsible for not taking action? How much trust is a driver allowed to put in autopilot?
I also wonder how much data other cars collect. As far as I know most cars collect a lot of data you don't know about.
"According to Tesla, each follow distance setting corresponds directly to a time-based distance representing the time it takes the Model 3 from its current location to the location for the car in front of the driver.
For cars manufactured after April 27, 2021, Tesla set a minimum follow distance of 3 or greater. But with the 2021.4.21.3 update, this number has been reduced to 2, bringing Autopilot ‘Pure Vision’ close to parity with its previous radar-based counterpart."
Note they say "time-based" and not seconds, so a setting of 2 might correspond to something like a second.
It would be interesting to know if the issue here is that the driver selected 1 (or 2 or 3) car-distances, and the Tesla followed at 0.25 or 0.7 or something.
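For a rough sense of what a time-based gap means in meters, here is a sketch under the assumption that the setting maps directly to a time gap in seconds (Tesla does not publicly document the actual mapping of its follow-distance settings):

```python
def follow_distance_m(speed_kmh: float, time_gap_s: float) -> float:
    """Distance in meters covered in `time_gap_s` seconds at `speed_kmh`.

    Assumes the follow setting is a plain time gap; the real mapping
    of Tesla's settings to seconds is not publicly documented.
    """
    speed_mps = speed_kmh / 3.6  # km/h -> m/s
    return speed_mps * time_gap_s


# At highway speed the difference between gap settings is large:
print(round(follow_distance_m(100, 2.0), 1))  # 55.6 m at a 2 s gap
print(round(follow_distance_m(100, 0.7), 1))  # 19.4 m at a 0.7 s gap
```

So if the car really followed at 0.7 s instead of the 2-plus seconds the driver expected, that is a difference of tens of meters of reaction room at highway speed.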
What if not all sensor data is accurate or reliable? For example their new cars have no radar and therefore the distance is just an assumption of the AI model and can be wrong. It would be pretty bad to rely on unreliable data for the justice system. If they want to use all data I assume some form of interpretation should be done by the Tesla to avoid false positives or false negatives. Especially in crashes where FSD was driving.
That comment doesn't make sense at all. If anything, you want to know what the Tesla AI was thinking so you can compare it to reality (like traffic cameras etc.), which might find fault with the driver-assist system. Letting Tesla "interpret" the data before handing it over is completely backwards, as they are an involved party in the accident.
Radar data is also an assumption, way weaker than cameras.
Car radar sensors are not the F-22's; they are very simple and limited, and the road is not the sky, where metallic objects are foreign and there is empty space everywhere.
The world at the earth's surface is full of metallic objects with very complex shapes and reflections everywhere. (And military radars come with safety standards; my uncle, an engineer who worked close to military radars, got cancer and died. He could not prove the connection, but if thousands of people start getting cancer after these devices become common, it will be much easier to prove.)
With cameras you get an array of millions of points for cheap, which lets you differentiate things much better than simple radars do.
Imagine that you integrated all the colors that go into a camera down to a single color. That is what you have with radar sensors.
Did you mean a single pixel? Even with a single color, a camera provides a wealth of information. Radar is not quite a single pixel - there is speed, phase, and timing information - but it's much closer to a single pixel than to a monochrome camera. The driving radar, I mean. The kit used in aerospace sweeps out an area, so it's more like a camera in its actual output.
Side note: sorry about your uncle, but I do wish to point out that radar is non-ionizing and extremely unlikely to cause mutation. What is more likely carcinogenic is stuff like degreasers, paints, fluids, and the open burn pits the military likes to use.
Question for Tesla owners: is it possible to opt out of such telemetry? I like the cars but find the idea of being closely tracked and monitored very creepy.
While the software might have failed here, so did the human driver who failed to stop the unsafely operating autopilot. And this very problem could be fixed with the next software upgrade.
The more appropriate question is whether the autopilot is worth it if it has to be carefully watched by humans.
The DMCA is just a letter format for removal of content per U.S. law. Any country that observes copyright (as a member of the WTO) is expected to comply with other member nations' copyrights. All developed countries have also adopted the Berne Convention.
It’s actually exactly the other way around: countries don’t comply with other countries’ copyrights, but they must offer the same level of protection to authors of works created abroad as they give to authors of works created locally (I.e. non-discrimination of foreign works/authors). And they must also offer a minimum level of copyright protection as specified in the convention. The egregious content removal remedies offered by the DMCA in the US are luckily not part of those minimum protections, so most sane countries have a more balanced procedure for content removal remedies.
If you think that the US is (relatively) insane when it comes to this, you have never witnessed the German equivalent (which is much more complainant-friendly).
The whole point of the DMCA is that it's a relatively recent extension/addition to the copyright law the USA had before, which matched the Berne Convention.
The non-DMCA parts of copyright are pretty much the same in the USA as across the whole world, as they have been harmonized with international treaties. The extra requirements of the DMCA (e.g. restrictions on circumventing technical measures of copyright protection), however, are country-specific and not widespread elsewhere.
That is a pretty big simplification, both partly true and false. There is international cooperation but copyright has significant differences between countries and having a copyright claim in one country does not always mean you have it in another.
And which of these treaties exactly has a problem with the reverse engineering work done here? Nobody is claiming that Tesla doesn't have copyright on their software, if that copyright allows them to restrict this work is an entirely different kettle of fish, and depending on local laws.
You're right, I don't think there was ever a DMCA claim actually made, and one probably would not be valid. I was just replying to the misleading statement that copyright from other countries has no standing.
They didn't claim that copyright from other countries has no standing, but correctly pointed out that claims under other countries copyright laws are irrelevant.
To answer the point that I think you were actually making, reverse engineering in the EU is covered by EU Directive 2009/24. This allows reverse-engineering for interoperability purposes, and (possibly in a more limited way?) also for the purposes of understanding the operation of a program.
Governments are bound by law, at least in democracies. There are thousands of cases in court brought by citizens and companies trying to, for example, overturn some decision (e.g. building permits, asylum, road construction, really anything), and they are frequently successful.
Well, more precisely, they didn't trust that the private company was providing everything, presumably:
> The NFI found that Tesla had complied with data requests from the Dutch authorities, but left out a lot of data that could have proven useful.
They asked for data, were given data, when they looked further they found it was incomplete.
I don't think there's anything particularly weird about this; if you're investigating something, you probably shouldn't uncritically trust the object of your investigation.
No, but they do have the GDPR, which affords data subjects the right of access to their own personal data.
Seeing as “the vehicles also record speed, accelerator pedal position, steering wheel angle and brake usage” and that personal data is "information that relates to an identified or identifiable individual", that sounds in scope of GDPR to me. I’d like to know what lawful basis they are using for this. I can see no other valid basis other than consent.
Tesla being less than forthcoming about what personal data they are processing is likely also problematic when it comes to the right to be informed too.