Now, it's "later", their software demos are about where Google was in 2010, and Tesla has a big problem. This is a really hard problem to do with cameras alone. Deep learning is useful, but it's not magic, and it's not strong AI. No wonder their head of Autopilot quit. Karpathy may bail in a few months, once he realizes he's joined a death march.
If anything, Tesla should have learned by now that you don't want to need to recognize objects to avoid them. The Mobileye system works that way, being very focused on identifying moving cars, pedestrians, and bicycles. It's led to at least four high speed crashes with stationary objects it didn't identify as obstacles. This is pathetic. We had avoidance of big stationary objects working in the DARPA Grand Challenge back in 2005.
With a good LIDAR, you get a point cloud. This tells you where there's something. Maybe you can identify some of the "somethings", but if there's an unidentified object out there, you know it's there. The planner can plot a course that stays on the road surface and doesn't hit anything. Object recognition is mostly for identifying other road users and trying to predict their behavior.
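The point about not needing recognition can be sketched in a few lines. This is a toy 2D illustration under my own assumptions (a flattened point cloud, hand-picked clearance and coordinates), not any real planner:

```python
import numpy as np

def path_is_clear(points, path, clearance=1.0):
    """Toy 2D check: does a planned path stay clear of LIDAR returns?

    points: (N, 2) array of (x, y) returns from the point cloud
    path:   (M, 2) array of (x, y) waypoints the planner wants to follow

    The returns never need to be identified: any "something" within
    `clearance` metres of a waypoint blocks the path.
    """
    for wx, wy in path:
        if np.hypot(points[:, 0] - wx, points[:, 1] - wy).min() < clearance:
            return False
    return True

# One unidentified return sitting 0.2 m off the lane centre line.
obstacles = np.array([[5.0, 0.2]])
lane = np.array([[x, 0.0] for x in np.arange(0.0, 10.0, 0.5)])
print(path_is_clear(obstacles, lane))  # False: something is there, whatever it is
```

The planner never asks what the obstacle is; the point cloud alone vetoes the path.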
Compare Chris Urmson's talk and videos at SXSW 2016 with Tesla's demo videos from last month.
Notice how aware the Google/Waymo vehicle is of what other road users are doing, and how it has a comprehensive overview of the situation. See Urmson show how it handled encountering unusual situations such as someone in a powered wheelchair chasing a duck with a broom. Note Urmson's detailed analysis of how a Google car scraped the side of a bus at 2MPH while maneuvering around sandbags placed in the parking lane.
Now watch Tesla's sped-up video, slowed down to normal speed. (1/4 speed is about right for viewing.)
Tesla wouldn't detect small sandbags; they don't even see traffic cones. Note how few roadside objects they mark. If it's outside the lines, they just don't care. There's not enough info to take evasive action in an emergency, or even to avoid a pothole.
Prediction: 2020 will be the year the big players have self-driving. It will use LIDAR, cameras, and radars. Continental will have a good low-cost LIDAR using the technology from Advanced Scientific Concepts.
Tesla will try to ship a self-driving system before that while trying to avoid financial responsibility for crashes. People will die because of this.
I think that depends on what you mean by self-driving. My prediction is that in 2020 we will have slightly better driver assistance systems. Maybe a lane assist system which won't kill me if I don't babysit it on a road with ongoing construction and multiple lane markings, or if corners get too tight. Maybe some limited self-driving, e.g. only available in dedicated areas like highway/autobahn.
Keep in mind that 2020 is 3 years away, which is less than the development cycle of a car. Or in other words: if something is to be released in 2020, you would already see it driving around on the roads for first tests now.
My personal prediction is that we will see reliable and advanced self-driving technology in mass-production cars maybe two full car generations from now, which is 2030.
If all sensors indicate that there is no emergency obstacle requiring sudden swerving or braking or what not, the car should be able to safely decelerate and stay in a lane etc... but this might be too complex a problem. (I was thinking of a driver having a seizure or some other episode, though if a driver is under duress this may be a bad thing.)
- Volvo 
- Chrysler 
- Ford  (2021)
- GM  (2018, first live test fleet)
- Toyota  (although they just started with self-driving)
Or if someone else makes a mistake.
That's another scenario that can have very unpredictable and immediate consequences ranging from 'nothing' to 'two car accident with fatalities'. Even in relatively placid (when it comes to driving) NL I see this kind of situation at least once per year.
Then there are blow-outs and other instant changes of the situation. I do believe that especially in those cases it should not take long before computers are better than humans because of their superior reaction speed.
Your viewpoint is how self-driving cars will come to be accepted into the mainstream. It will essentially sneak up on the average driver.
I would be extremely nervous to let a fully autonomous vehicle drive me given the current state of the art.
The aviation people hit this problem in the 1990s, as autopilots got more comprehensive. Watch "Children of the magenta", a talk by American Airlines' chief pilot on this.
The automated parts would kick in when necessary and they would be increasingly intrusive in their warnings and modification of driving until the human is only needed for identifying source and destination.
Are you sure? Your calendar says that Bobby needs to be picked up from practice.
Why don't I go ahead and set the destination for you?
It means a car that doesn't have a steering wheel, brake and accelerator. That kind of car is still quite far off.
And that's for the high end. The low end cars are probably 50-60 years away, IMO.
Really cheap cars don't even have automatic gearboxes (or they are bought without them 90% of the time, outside of the US).
By "cheap car" I mean something cheaper than $20,000 and "really cheap" would be below $12,000-15,000.
Here in the US, the sensors and fly-by-wire controls that Lexus used a few years ago have largely trickled down to the Corolla and the like as standard features. The differences between making autonomy work on low-end and high-end cars will be purely a software problem. That's not the kind of additional work that takes thirty years.
It's a price problem since you need to install a bazillion sensors, motors and other thingies which are basically used only for one purpose.
And regarding really cheap cars: you're on Hacker News, ergo less likely to ever encounter them. But it's usually cheap models from cheaper brands such as Dacia, Hyundai, Kia, Tata, and several of the Chinese brands. I doubt most people you know actually own one :)
That's an overstatement. You need to install sensors, many of which you'd install anyway for the modern lane-keeping and crash avoidance features. The sensors are not ruinously expensive and economy of scale incentivizes an automaker that makes both cheap and expensive cars to get these features out to the low end very quickly. As Animats pointed out, you really want to order this stuff by the million, not the thousand.
I guess I wasn't arguing with the crux of what you're saying - having a feature is generally going to be more expensive than not having it. The overstatement was very strong, especially in terms of timeline for reducing the costs of this functionality.
While I'm sure there are probably new (or relatively new) vehicle out there sub-$15k, they probably aren't much to write home about. Certainly nothing I want (that's just my personal wants - for instance, if it ain't 4wd and/or it can't be lifted, I don't want it).
As far as the transmission is concerned, here in the US most vehicles can't be bought without an automatic. Manual transmissions are becoming the exception; most car models don't even have them as an option. The few that do (mostly trucks and sports cars) have seen declining sales of the option (and I wouldn't be surprised if it actually costs more to get it!).
(Whether they also had a steering wheel is of course irrelevant; what the parent means is that they can drive without anybody needing to use the steering wheel.)
I can confirm that, at least in a sense, this is false. There are plenty of series cars with LIDARs, just not the scanning kind you are thinking of, but a simpler sort of lidar tech. I know that is not what you were talking about, but I thought it was worth pointing out other, existing, alternative approaches.
Waymo cars are really expensive because of that, and they can't scale because Velodyne can't make LIDARs that quickly.
You also have to consider the fact that LIDARs are literally beams of light being emitted and reflections measured. If every car has a LIDAR, you get interference, and it's no longer the gold standard of measurement.
Tesla conquering the problem with algorithms is the right approach. Remember, our brains use algorithms and two cameras to drive too, so it's technically possible.
Nvidia keeps pumping out faster GPUs, cameras keep getting better, and Tesla is getting more and more data. They just need better algos while they wait for cheaper sensors that scale.
That's a very wise business move while everyone else waits for a magic bullet.
People working on these things have told me that interference is not an issue. Take this with a grain of salt, but I would love to see some technical analysis before dismissing LIDAR as susceptible to interference, and of how much impact that interference has on the inference of positioning and spatial estimation.
I've found this: http://sci-hub.io/10.1109/ivs.2015.7225724 but it seems that while interference exists, from what I'm reading in the paper, it is not critical, in the sense that it can be worked around.
And if that is so, and if you believe LIDAR is going to dominate the field, it's even more important to work with LIDAR early on so you have the know-how to fix the issues that might arise.
Most automotive projects look like this: you want to release a new car model in year X (which is typically around now() + 3-5 years). Then you start a development project exactly for that car, which involves creating roadmaps for the car, creating the architecture, sourcing the components, packaging everything together, and testing everything. Most components (including infotainment systems, driver assistance systems, etc.) are contracted to sub-suppliers, which develop them specifically for that car model (or maybe a range of models from one OEM). At the end of the development cycle you have exactly one car model which (hopefully) has everything that was planned for it and which will get sold. In parallel, the development cycle for the next model begins, where there might be only minimal reuse from the last one. E.g. it might be decided that one critical driver assistance component is sourced from another supplier, now works completely differently, and also requires changes to the remaining components.
So if you do not intend to upgrade something or reuse it, it just doesn't make sense to include additional hardware for it. We will see the required hardware in cars which also will make use of their functionality.
For Tesla it will be quite interesting if they will really deliver huge autonomous functions on that hardware, or whether we will see a new generation Model S (with overhauled hardware) before anyway. I'm personally pretty sure that we will see new model generations before the software will be on a "fully autonomous" level.
Not from what I saw at some manufacturers. A lot of software and parts are reused across multiple models.
The trick is, they'd have to advance the state of the art in software quite far, to derive "full self-driving capabilities" from this hardware.
So what if they can add it to future production? Their talk promises or implies they have all they need, yet a cursory review of random YouTube videos will show you how limited their system still is.
This may be another "war" where they lead the charge but falter in securing the win. You can play the car marketing game in ways similar to the technology market, but in the area of safety there is no compromise. Instead of acting like a tech company pushing a new tablet they should have acted like SpaceX.
For the love of god, please do not comment in public if you do not understand the subject. Take a look at the Chrysler + Waymo minivans, Volvo + Uber SUVs, and Mercedes self-driving test cars. They all have lidars.
See how condescending this is?
I think that's good advice, very often applicable, even on HN.
Lidar is AMAZING for giving press demos on sunny days. For the real world with rain, snow, leaves, plastic bags, etc? Useless.
The future is radar + cameras + a LOT of software blood, sweat and tears.
In fact, the reason we have crashes is NOT our eyes' lack of distance detection through laser return timing; having two eyes is enough for depth perception. We have crashes because of attention deficits instead.
At this point, there is no reason to believe that a machine can't achieve and outperform a human on a driving task given the same inputs. Sure, human eyes have 5 million cone cells and 1080p feeds only have 2 million pixels, but 4K has 9 million, and more importantly, that level of precision is unnecessary for regular driving.
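A quick sanity check of the numbers in the comment above (the cone-cell count is the commonly cited rough figure, not a measurement):

```python
# Rough pixel arithmetic behind the "1080p vs 4K vs the retina" comparison.
hd = 1920 * 1080       # 1080p frame: ~2.1 million pixels
uhd = 3840 * 2160      # 4K frame: ~8.3 million pixels
cones = 5_000_000      # ballpark count of cone cells in a human retina

print(hd, uhd, uhd > cones)  # 2073600 8294400 True
```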
And Tesla doesn’t even bet just on the visible spectrum; it also relies on radar.
So sure, theoretically cameras would be enough. But we're not yet there with software, we can't use the camera input well enough. So if you can side-step the need for not-yet-invented ML methods by simply adding a LIDAR to a sensor suite, then it's an obvious way to go.
Compare with powered flight: we didn't get very far by trying to copy the way birds do it. The trick is in the super-light materials birds are made of, and the energy efficiency of their organisms. We only succeeded at powered flight when we brute-forced it by strapping a gasoline engine onto a bunch of wooden planks.
That in particular is what makes the hiring fascinating. This problem is Andrej Karpathy’s expertise. His CNN/RNN designs have reached comfortable results, in particular showcasing the ability to identify elements of a source image, and the relationship between different parts of the image.
The speed at which those techniques improve is also stunning. I didn’t expect CNNs to solve Go and image captioning so fast, but here we are!
I think the principles are already there; a few tweaks and a careful design is all it takes to beat the average driver.
But I think first we'll see cars utilizing tech as described in this paper:
...and variations of it to handle other modeling and vision tasks.
Self-driving vehicle systems are amazingly complex; it won't ultimately be any single system or sensor, or piece of software or algorithm, that solves the problem. It's going to be a complex mesh of all of them working in concert.
And even then, there will be mistakes, injuries, and deaths unfortunately.
Now imagine walking on top of a skyscraper in pitch darkness. Yes, your eyes work in light, but in this case you will likely fall to your death.
In fact even in daylight you drive a lot as a leap of faith. When the traffic light is green for you at a crossing and you see a car arriving on the side, you assume that you have the priority, that the car will stop and you go ahead without adjusting your speed to the coming car. This is a leap of faith in the fact that all other cars will follow the rules.
A car like a Tesla also has highest-quality maps and GPS sensors; these alone are way better than what you get in your smartphone and are enough to keep the car from going over a cliff.
Darkness is to eyesight as inclement weather and obstacles are to LIDAR.
I am not sure what the software does with noise from a lidar sensor, but I have seen data from other noisy sensors, and it is often useless.
In darkness it helps if you can hear the vehicles coming close.
Bingo. When we drive a vehicle, we use so much more than just our eyes to sense the environment, and hearing plays a very large part.
I believe that it is something that warrants research for self-driving vehicle usage; I don't know if anyone has done such research, but I haven't seen any papers on it yet. If not, it seems like an underappreciated sensor aspect that could potentially greatly augment self-driving vehicle capabilities, and would be a very simple and cheap sensor to add to a vehicle as well.
EDIT: Found this recent article...
While it seems to be focused mainly on diagnosing issues with vehicles before they become larger problems, there are hints about it being used for self-driving tasks as well.
Studies have been highly mixed:
I would be comfortable saying that the advantages a deaf, many eyed, always alert self driving system would far outweigh the safety of a hearing, two eyed and sometimes alert human driver.
If a technology only helps with some of the cases (e.g. fair weather) and does not work for the others, then there are two cases:
(a) A single replacement technology will be found that works in 100% of cases.
(b) The technology will only be used on the cases it works well, and the other cases will be handled by some alternative technology equally only suited to them.
In the case of (a), Lidar is indeed useless (or at best, only used as a supplementary technology in favourable conditions).
And I fail to see how (b) can be the case -- that is, how there can be another technology that will solve the rain/snow/night driving problem, but which cannot also outperform/replace Lidar for fair weather driving.
Isn't it interesting that we have five senses, when we could just have one that works in 100% of the cases? A third option is a system based on multi-sensory inputs. Several inputs that are just marginal on their own can provide good performance when combined.
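The "several marginal inputs combine into a good estimate" point has a textbook form: inverse-variance weighting of independent estimates. A toy sketch, with illustrative numbers of my own (real sensor fusion pipelines use Kalman filters and the like, not this two-liner):

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of independent sensor estimates.

    estimates: list of (value, variance) pairs, e.g. a distance to the
    car ahead as reported by camera, radar, and (if fitted) LIDAR.
    The fused variance is smaller than any individual sensor's variance.
    """
    weights = [1.0 / var for _, var in estimates]
    value = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    variance = 1.0 / sum(weights)
    return value, variance

# Two mediocre sensors (variance 4.0 each) beat either one alone:
v, var = fuse([(10.2, 4.0), (9.8, 4.0)])
print(v, var)  # 10.0 2.0
```

Two marginal sensors already halve the uncertainty; three heterogeneous ones do better still, which is the multi-sensory argument in a nutshell.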
... all of which Waymo's solution also has, in addition to LIDAR.
Until then it's just an attempt to make something that breaks at the next unanticipated exception.
Just to get our car out of the garage, I had to plead and negotiate with N vegetable vendors with makeshift stores on the road.
Also: bicycles, motorbikes, rickshaws (in human-pulled, CNG, and electric varieties!), and pedestrians mixed into traffic everywhere.
Not to take away from anyone's work in this area, but I have no idea how long it'll take to go from "works in America" to "works in India". In many countries the safest option (to evade disaster) can occasionally be "floor it and break the speed limit" to get away from x dangerous thing. I'm not sure if that's something that Google is willing to write into an AI.
This is a very practical test case for a car on a road. Not just in India but anywhere in the world.
Instead of N vegetable vendors you could have N traffic cops. How do you manage the human interaction part in the self driving car?
bike < car < van < truck
which makes sense because if you're the one who's going to come off worse in an impact then you really want to give way - especially if you have a massive painted tipper truck hurtling towards you!
It will be very interesting to see how self driving systems can cope with these local unwritten bylaws.
That doesn't make much sense, because the main (and most common) question would still be between vehicles of the same class: car vs car, and this doesn't solve it.
This is why self driving AI will require Hard AI.
India is a perfect test bed for these people to test their algorithms. And for heaven's sake, why would you test it in some place like the US? Cars in the US are pretty much trains on roads anyway.
But even in those cases it makes sense to test in India. Why? Sooner or later you will have some situation in the US which resembles daily traffic conditions in India. Imagine a law-and-order situation where people are running around without regard to traffic laws, or some other situation where traffic is being rerouted the wrong way; in the US, maybe as an exception, traffic is routed through the left lane (it being a right-lane-drive country), etc.
For all these situations you will very soon need a test environment that provides you with all situations to test.
Which won't be India.
Because you obviously don't want to be testing unprecedented conditions with live subjects in actual traffic.
If anything, that will only happen after tons of simulations of such conditions in fake environments.
It's not going to rain or snow today, and if it does, I can take the wheel myself.
I thought I remembered Nvidia presenting some additional stuff about it in their Tech demo recently.
Now the guy is tackling another hard problem and everybody knows better.
I mean, I think bullshit can be sold to even 'techies' aka HN crowd, if it is wrapped just the right way...
But in spite of that, technically, Musk has already built a subscale Hyperloop nearly a mile long at the SpaceX campus in Hawthorne, including an electric sled used for student competitions:
Can we just let the man try without being annoying the whole way?
Or do something that helps?
Or do something at all to understand that it's hard enough and that you really don't need a thousand voices enumerating all the reasons you might fail?
Musk is adamant that lidar isn't necessary. Many disagree and are voicing their opinions on that.
I also think his view that strong ai is around the corner is detrimental to the industry. He is overhyping things and potentially creating an expectation whose investments reality won't support.
So, when I spend time pointing out he is not an expert in the ai space, it is to soften his outlandish predictions in the space, and help bring a more realistic perspective on who is talking and who knows what they're talking about.
I think Musk will contribute something to the self driving and AI space, just not in the way he claims.
Constructive criticism would weigh pros and cons, explain a point of view without condemnation, and not make grandiose predictions.
Animats gave plenty of detailed constructive criticism, as did I. Neither comment could be simplified as you allege without leaving out important context.
If you'd like to put money where your mouth is, I'd happily do a wager with you. Tesla will be the first to have a fully autonomous vehicle - care to bet against that?
Yes, I would. Google is way ahead in the tech. Never mind that they don't have a product. If we're talking strictly about the tech, Tesla is and always has been far behind.
Feel free to message me if your side of the bet comes true. But, I really doubt it will. If Waymo forms a partnership with any major manufacturer they'll pretty much have it.
Nobody has built automotive LIDAR units in volume yet. That's why they're so expensive. It's not an inherently expensive technology once someone is ready to order a million units. It does take custom silicon. Tesla, at 25K units per quarter, may not be big enough to start that market.
Continental, which is a very large auto parts maker in Germany, has demo units of their flash LIDAR. They plan to ship in quantity in 2020. Custom ASICs have to be designed and fabbed to get the price down.
Engineers at Takata and in GM's ignition key department made one choice, Waymo seems to be making the other.
The whole reason the car project got spun out into Waymo was to fast-track commercialization. They do not in fact have an infinite amount of money.
It takes longer for a driver to react to a problem in that mode than to react without it. There have been full-motion car simulator and test track studies on this. Even with test subjects who are expecting an obstacle to appear, it takes about 3 seconds to react and start to take over vehicle control from lane keeping automation. Full recovery into manual, where control quality is comparable to normal driving, takes 15 to 40 seconds.
There are now many studies on this, but too many of them are paywalled.
People fall asleep even while actively driving the car. How can they be expected to maintain vigilance with something like this?
But I guess Tesla is content with ending their responsibility at "informing the user that they should be vigilant at all times, even when the car is driving itself", without considering how feasible that is.
Another funny thing about it is that, earlier, with regular cars, you only had to watch the errors from the other drivers on the road. Now you have to watch other cars and also mistakes made by your own car's AI...
What could go wrong?
As far as I know they've never been in the front of the pack.
For example, I'd be surprised if you took one of those competing systems to "fail road" and it started to veer the way Tesla's system does instead of disengaging
On the other hand pushing the envelope on self driving technology using cheap sensors will probably help reduce the world's 1m annual auto deaths earlier than otherwise. Thousands of people will not die because of this.
Doesn't matter, only the results matter. Planes work and work well; they crush birds in every performance metric, and sometimes literally. Can a self-driving car be made safe without lidar? I suspect so, but I am not certain, and I am no expert.
Most automotive LIDARs just report the time of the first return, but it's possible to do more processing. Airborne LIDAR surveys often record "first and last"; the first return is the canopy of trees or plants; the last is from ground level.
It's also possible to use range gating in fog, smoke, and dust conditions. Returns from outside the range gate are ignored. You can move through depth ranges in slices until something interesting shows up. This seems to be in use for military purposes, but hasn't reached the civilian market yet.
Range gated LIDAR imagers have been around for at least 15 years. By now, it should be possible to obtain a full list of returns for each pixel for several frames in succession, crunch on that, and automatically filter out noise such as rain, snow, and dust. It's a lot of data per frame, but not more than GPUs already handle. Some recent work in China seems to be working to make range-gated imaging more automatic in bad conditions.
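The gating idea above can be sketched in a few lines. Real gated imagers do this in hardware per pixel; the numbers here are made up for illustration:

```python
def range_gate(returns, near, far):
    """Keep only LIDAR returns whose range falls inside the gate.

    returns: list of (range_m, intensity) returns for one pixel.
    Early returns from rain or dust near the sensor, and late ones
    from the background, are simply dropped.
    """
    return [(r, i) for r, i in returns if near <= r <= far]

# Per-pixel returns: 1.5 m (raindrop), 42.0 m (car), 80.0 m (background).
pixel = [(1.5, 0.9), (42.0, 0.6), (80.0, 0.2)]
print(range_gate(pixel, 30.0, 60.0))  # [(42.0, 0.6)] -- only the car survives
```

Sweeping the `(near, far)` window through depth slices, as described above, is then just calling this with successive gate positions.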
"Well, there's your problem right there, let's just slap on some strong sensors and you should be good to go!"
You know what has a weak sensor system? Any car without any sensors.
On one hand, not pulling in potential safety improvements because they only work in good weather seems wrong, but on the other hand...that might be what needs to happen from a cost/marketing/legal perspective.
This is a pretty strong statement. Would you sign up for a slightly more specific version of your claim?
"I believe the Tesla self-driving system that ships by the end of 2020 will be statistically less safe than unassisted human drivers."
And not just drivers of Teslas.
Humans can avoid potholes with one eye. I don't know why you assume LiDAR is a requirement for this.
With self-driving AI you are programming the car to do a specific thing. Sooner or later you will run into a situation in which the algorithm will panic and can't do much.
But the problem of making a self-driving car without LIDAR or something equivalent is awesomely challenging! And I bet Andrej Karpathy will really enjoy working on it.
And the tech resulting from this line of work will surely find its way into other things. (I guess the military has wet dreams about this stuff... I mean, even "unsafeness" can become a "feature" here: "uhm, look, that school we blew up by, uhm, mistake... was an AI error... like... this stuff happens, you know, even Tesla's cars have an accident from time to time, that's life". Well, those dreams could also be nightmares: basically any "self-driving thingie" is a potential guided missile, and dirt-cheap-because-lidar-less stuff has the potential of becoming ubiquitous, and unmaintained/unupdated/unsecured/hackable, leading to nightmarish urban warfare scenarios...)
And: "People will die because of this."... Uhm, yeah, they will, but if people ain't dying it means research is not moving fast enough, and competition will overtake you. I'd be more worried about when this stuff will be deployed on buses with tens of people, but hopefully public transport would stay a safe decade behind bleeding-edge stuff :)
And about Tesla: however this plays out, Elon Musk has made quite a lot of what would technically be considered "bad business decisions" and things have turned out OK so far... so I wouldn't feel sorry for them or short their stock ;)
Could you help me understand this further? It feels quite insensitive to me.
One rarely hears that Dr. John Doe from Florida State University (or insert non-Stanford university here), who works in distributed systems, has moved from Microsoft Research to NetApp. These are arbitrary names. The point is you rarely hear about people from areas of CS outside of machine learning, or from universities outside of Stanford, moving from one company to another. The field of CS is vast, and there are a multitude of practical and theoretical problems outside of machine learning that are worth looking into (ones that aren't currently considered hip or cool by the public).
AI is hot. So therefore there is a huge spotlight on all angles there. You can argue whether that is actually fair (personally, I do think AI is a high beta field). Topics that don't fall under this are regarded as inside baseball.
But obviously there's more to drawing people's attention than individual skill or a comparable position at a different company. As you mentioned, this top-of-HN ranking is driven by joining a trendy company, leading a hugely hyped product team (the Tesla automation stuff was on the front page yesterday), and a person with a really hot skillset.
Combined of course with the usual luck and good timing.
Is it really hard to see why this is much more interesting than someone joining a relatively standard branch of Microsoft?
The comment up this thread holds true: human attention is not evenly distributed. That doesn't mean, however, that there's an imperative to "network" or build a "personal brand" – plenty of people gain a deep satisfaction from excelling at their craft.
If "make it big" just means a giant pile of money, there are plenty of millionaire pure technologists at Silicon Valley companies whose names are never told; the thousand or so that were created when Google IPO'd are basically unknown. Forbes had a recent article about Craigslist competitors, and reading between the lines, Craigslist has minted some millionaires of its own, but they're entirely nameless among the wider population. If that's your definition of "making it big", then it's possible, but if you want broader recognition, I don't know that it's possible.
Maybe I'm being unimaginative, but outside of Steve Wozniak I can't think of any pure-technologists with household name recognition. The closest that comes to mind is Elon Musk, but unfortunately for you, there's plenty of marketing going on. I'd bet a large number of readers even here won't even recognize the name Vint Cerf.
Maybe you feel marketing is about lying, maybe selling yourself feels icky. However they're skills like any other; refusing to learn and use them would be like refusing to learn or use multiplication.
Read Sam Altman's praise of Greg (gdb) (http://blog.samaltman.com/greg), who is quite the gifted technologist, but the praise is for his dedication, both technical and non-technical.
EDIT: Not saying these celebrities don't also earn their keep through their skills. It's just disappointing how much of a factor self-promotion is.
In a valley of smart and motivated people, discoverability will always be a challenge...though I don't doubt it's much more competitive now than ever.
There are millions of quiet, confident, competent people across industries. People who reliably turn out high-quality products and are well-paid for their work. They get on well with their colleagues and progress with their careers at a decent pace. You just don't hear about this much. Doesn't this count as making it?
I'd say that the reason you hear more about people who are well-known is essentially just because they are well-known :)
It probably wouldn't work this way if people could wrap their heads around it.
The real question is: should we (as the thoughtful human beings we consider ourselves to be) bother to question the trend (which you are surprised that people are doing)? Or silently accept it?
That would be nice.
> it would appear that previous questioning has failed to have an impact.
Just because something does not cause a change does not mean it has no impact. Maybe the impact is that it keeps things from getting worse... In this case, questions like these may help balance the influence of "trends" and help us maintain perspective...
If the alternative is phony politeness and masking reality under a guise of "everything went great", then I prefer this approach.
This is a great aspect of the software industry IMO. And if you want to get lots of job offers then build some great OSS projects like this guy. It's a great way to demonstrate your skill and attract a following which guarantees you job offers.
But I hadn't seen that Tesla released a similar statement, and that changes the context quite a bit to match what you describe.
That's not "begging the question".
When Scott Aaronson wrote about moving to Austin, that post made the rounds here too.
Personally, I am aware of the two you listed + experimental, condensed matter, and astrophysics. There is some overlap between physics and EE, so I may be aware of others.
Chris's response: "Turns out that Tesla isn't a good fit for me after all. I'm interested to hear about interesting roles for a seasoned engineering leader!"
I don't think I've ever seen a tech company throw an employee under the bus so publicly. I wonder what Lattner did to warrant such a public separation?
A players don't look at a company's reputation; they look at things that actually matter.
They don't worry about their reputation either.
If you worry about any of that, you're not an A player :)
"In the end, Elon and I agreed that he and I did not work well together and that I should leave, so I did."
This part was removed after one day.
In any case, judging by the reactions to his tweet it looks like he can pick and choose his next job.
In that case, I have a bridge I'd like to sell you.
Also, it is slow compared to almost everything, even C++. It has gotten better over the last three years, but most of that came from pumping heuristics into the system.
I think many want rewrites of fine, working software to show a language's worth. That's startup or single-developer stuff; large professional companies rarely rewrite things on a whim.
> After all, they also use Typescript developed by MS.
Of course; it's the best language for the purpose.
> This is startup or single developer stuff, large professional companies rarely rewrite stuff on whim.
That's not correct. https://martinfowler.com/bliki/SacrificialArchitecture.html#...
Failing to make AP2 work well with cameras alone would be my guess. Tesla is hitting a glass ceiling with its sensor hardware, and the future isn't going to be pretty. Expect more changes in engineering leadership until Musk realizes he needs better data (sensors) for his neural nets.
"VP Autopilot Software
January 30 - June 20, 2017
When I joined Tesla, it was in the midst of a hardware transition from "Hardware 1" Autopilot (based primarily on MobileEye for vision processing) to "Hardware 2", which uses an in-house designed TeslaVision stack. The team was facing many tough challenges given the nature of the transition. My primary contributions over these fast five months were:
We evolved Autopilot for HW2 from its first early release (which had few features and was limited to 45mph on highways) to effectively parity with HW1, and surpassing it in some ways (e.g. silky smooth control).
This required building and shipping numerous features for HW2, including: support for local roads, Parallel Autopark, High Speed Autosteer, Summon, Lane Departure Warning, Automatic Lane Change, Low Speed AEB, Full Speed Autosteer, Pedal Misapplication Mitigation, Auto High Beams, Side Collision Avoidance, Full Speed AEB, Perpendicular Parking, and 'silky smooth' performance.
This was done by shipping a total of 7 major feature releases, as well as numerous minor releases to support factory, service, and other narrow markets.
One of Tesla's huge advantages in the autonomous driving space is that it has tens of thousands of cars already on the road. We built infrastructure to take advantage of this, allowing the collection of image and video data from this fleet, as well as building big data infrastructure in the cloud to process and use it.
I defined and drove the feature roadmap, drove the technical architecture for future features, and managed the implementation for the next exciting features to come.
I advocated for and drove a major rewrite of the deep net architecture in the vision stack, leading to significantly better precision, recall, and inference performance.
I ended up growing the Autopilot Software team by over 50%. I personally interviewed most of the accepted candidates.
I made massive improvements to internal infrastructure and processes that I cannot go into detail about.
I was closely involved with others in the broader Autopilot program, including future hardware support, legal, homologation, regulatory, marketing, etc.
Overall I learned a lot, worked my butt off, met a lot of great people, and had a lot of fun. I'm still a firm believer in Tesla, its mission, and the exceptional Autopilot team: I wish them well."
The first draft ended "In the end, Elon and I agreed that he and I did not work well together and that I should leave, so I did."
Or what Tesla did. Why are you assuming it's Lattner's fault?
"Naively one might have expected some machine learning expert to take over the reins at Tesla."
Here we are.
Compilers give a good base for transferring to things like databases, operating systems and IO heavy systems with lots of transforms / filters etc. They also ingrain a way of thinking that isn't native to most devs - writing code that generates code. Monads and other approaches to dynamically composing a computation - they come easy.
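A minimal sketch (hypothetical, not drawn from any real compiler) of both ideas at once: an expression tree whose source text is generated by code, a constant-folding pass in the compiler style, and a `compose` helper that dynamically chains passes into one computation, which is the monad-ish flavor mentioned above.

```python
from dataclasses import dataclass

# Tiny expression tree: numbers and additions.
@dataclass
class Num:
    value: int

@dataclass
class Add:
    left: object
    right: object

def emit(node):
    """Code that generates code: produce C-like source from the tree."""
    if isinstance(node, Num):
        return str(node.value)
    return f"({emit(node.left)} + {emit(node.right)})"

def fold_constants(node):
    """A compiler-style pass: collapse Add(Num, Num) into a single Num."""
    if isinstance(node, Add):
        l, r = fold_constants(node.left), fold_constants(node.right)
        if isinstance(l, Num) and isinstance(r, Num):
            return Num(l.value + r.value)
        return Add(l, r)
    return node

def compose(*passes):
    """Dynamically compose passes into one pipeline, built at runtime."""
    def pipeline(node):
        for p in passes:
            node = p(node)
        return node
    return pipeline

optimize = compose(fold_constants)
tree = Add(Num(1), Add(Num(2), Num(3)))
print(emit(optimize(tree)))  # the whole tree folds away, printing "6"
```

The point is less the toy arithmetic than the habit: treating programs as data you transform and regenerate, which transfers directly to query planners, ORMs, and stream-processing pipelines.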
All rank speculation, of course, but maybe he didn't like what was "under the hood" of this feature and how it was being developed and marketed?
Just spectator curiosity, but interesting to ponder nonetheless.
I wish him the best, though. Hopefully some of Tesla's algorithms will be open source someday, and those of us who can't afford a Tesla will be able to use them as well.
It's like Musk said with SpaceX: publishing patents is just like putting out a recipe book for China.
I guess that's a win-win for the employees and for Musk. Not sure how many other supporters OpenAI has, though I doubt this is what they had in mind when they donated to support the effort.
If it weren't attached to Musk, and if Tesla never hired from them, I'd agree it's good to have a non-profit in the mix. As it is, if it looks and acts like a pipeline for talent, it's a pipeline for talent.
Source: I am a manager who has given offers to top-tier ML experts.
300k+ for a new hire ML/CV/NLP PhD with some relevant experience.
150k+ for a new hire ML/CV/NLP MS with little to no experience
We were working with a very expert ML contractor who is making $800k on his own from pop-up projects.
Agreed though on total comp for top ML experts who have been around for a while - or the highest end.
The obvious reality is that top people rarely talk about their comp packages, as there is no reason to rock the boat.
The AI scientists, those working in computer vision, natural language, and audio, developing novel networks and training methods, make at least $500K/year. I was a data scientist and the pay (and the work content) was a joke. I switched to AI and damn, the work makes you think and you get paid like a mid-range NBA star.
Theoretically, one would think that just reading blogs, watching videos, taking MOOC courses, and spinning up GPUs in the cloud should do it.