Drone Uses AI and 11,500 Crashes to Learn How to Fly (ieee.org)
462 points by kasbah on May 11, 2017 | 118 comments



This is interesting. It really shows how deep learning has become almost "Lego-like".

1. Imagine the problem, add a camera or two (or more).

2. Build/use a pre-trained ImageNet model as a starting point (probably using TensorFlow/Keras).

3. Build a dataset and split it into training, validation, and test sets.

4. Train the model further.

5. Test and validate the model. Lower the error rate (don't overfit though!).

6. Profit?

As far as what language to use, depending on the speed of whatever you're trying to do, Python would likely work fine in the majority of cases. If you need more than that, C/C++ is around the corner.

Oh - and OpenCV or some other vision library will probably be used (but just to grab the images, maybe a little pre-processing).

You wouldn't have to use this exact pipeline (you could substitute other deep learning libs, other vision libs, other languages, etc) - but the basics are to start with a well-known CNN model, preferably "pre-trained", then apply your own dataset(s) to the task to get it to work better. Not much more tweaking is needed; the biggest thing is to get (or be able to synthesize from what you do have) enough data to throw at it (and have a fast enough system to train it in reasonable time).
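To make steps 2-5 concrete, here's roughly what it looks like in Keras (just a sketch: the two-class head, the dataset objects, and the hyperparameters are placeholders I made up, not anything from the article):

    # Sketch of transfer learning from an ImageNet-pretrained CNN.
    # NUM_CLASSES and the train/val datasets are placeholders.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 2  # e.g. "safe to fly" vs "about to crash"

    # Step 2: start from a pre-trained model, minus its classifier head.
    base = tf.keras.applications.VGG16(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # freeze the pretrained features at first

    # Bolt a small head for your own task onto the pretrained features.
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.2),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    # Steps 4-5: train on your own data, watching validation error for overfitting.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)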

We've seen this approach many, many times; it seems to work well for a ton of domains and problems. Again - very "Lego-like"...


For the things they showed in the video, deep learning is probably overkill. The main thing is having a robot capable of surviving its initial failures, and then actually collecting the data.

I can't comment specifically on airborne drones, but at my lab we've demonstrated that a robot[0] with an extremely low resolution camera and a very simple model is capable of learning to avoid running into walls with about a minute or so of training data. We use reinforcement learning with linear function approximation[1], and even though the robot sees the world through a couple hundred pixels, it's sufficient to discover that walls have a certain color and that if too much of your vision is "wall" you should probably move in a different direction.

The advantage of deep learning is that your agent hopefully learns to generalize, and so isn't fooled by changes in brightness/color or room layout. If your task is simpler than that, you could just use some OpenCV filters to extract colors and textures and let a simple linear model figure out which ones correspond to obstacles.
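For example, the "simple" baseline could be as little as this (a rough sketch; the particular filters and the tiny resolution are illustrative choices, not our exact setup):

    # Rough sketch: OpenCV filters for color/texture, flattened into a feature
    # vector for a linear model. Filter choices here are illustrative only.
    import cv2
    import numpy as np

    def extract_features(frame):
        small = cv2.resize(frame, (16, 12))           # a couple hundred pixels
        hsv = cv2.cvtColor(small, cv2.COLOR_BGR2HSV)  # walls have a certain color
        gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)
        edges = cv2.Sobel(gray, cv2.CV_32F, 1, 0)     # crude texture/edge channel
        return np.concatenate([hsv.flatten() / 255.0,
                               edges.flatten() / 255.0,
                               [1.0]])                # bias term

    # A linear model's obstacle estimate is then just a dot product:
    # v = w @ extract_features(frame)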

----

0. An iRobot Create, basically a Roomba without the vacuum.

1. Incidentally, that means that if you don't require deep reinforcement learning, you can get simple obstacle avoidance up and running with like 5000 data points.


No offence, but it's hard to compare learning a colour histogram to learning concepts such as couch, floor, and open door.

RL (Q-learning?) with a linear approximation wouldn't work if you have subtle patterns in the image (poor contrast between walls and floors, a gradient, a border, etc.), and that's exactly the issue with robots: not detecting things that seem obvious to us.


> it's hard to compare learning a colour histogram to learning concepts such as couch, floor, and open door

The output of most machine learning algorithms is just a belief function. Deep learning is nice, because it basically removes the need to manually choose features, which can be the hardest part of applying machine learning to solve a classification task. But the output is still a belief function.

Machine learning as we generally know it today isn't about making a computer understand "concepts" or anything higher order like that. I think it is easy to compare learning a color histogram to learning classifications (e.g. couch, floor) because the two algorithms do exactly the same task in different ways.

The parent is saying that the function the robot needs to learn is linear. It doesn't matter that it's a drone; the deep learning apparatus in the middle is overkill, because learning linear functions is easy (you don't need much data to figure out which way is up on a line).


You could also use deep features (pre-trained for ImageNet classification) and use them in your Q-function approximator in such a way that the Q-function is linear wrt some high-level features. Then, you get the best of both: being able to process complex visual input while being able to do reinforcement learning with very few training trajectories. See [1] for an example (in simulation).

[1] http://rll.berkeley.edu/visual_servoing/


>The output of most machine learning algorithms is just a belief function.

It's not even a belief function, in the sense of a normalized probability distribution that respects conditionalization properly. It's basically just a one-hot vector for classification.


I'm not sure what I'm supposed to be offended by; in any event I was not talking about color histograms, but instead using "some OpenCV filters to extract colors and textures" and testing a linear model first before reaching for the big guns.

Sure, it's different from human-like perception, if that's really what the deep net is learning to do. But the burden of confirming that it's learning "concepts" instead of, say, dedicating a million parameters to implementing Sobel filters or wavelet transformations, or something even more trivial like "if all my pixels are one color, I am probably near an obstacle" is not on me[0].

When I approach a deep learning problem, my default assumption is that the model is out to humiliate me by learning something entirely trivial, and so I go to great lengths to augment my dataset and validate the fact that I got some extra mileage out of spinning up the ol' GPU that wouldn't have been possible (or at least, not as easy) with simpler methods. Because if you can use something simple, why not do it[1]?

For robots, it's maybe a full page of code to try some quick image filters, flatten them, and implement SARSA or Q(λ). Our demo used Pavlovian control (basically, TD methods to estimate the likelihood of running into a wall, and turning if a collision seems too probable). You can run it on a Raspberry Pi in real time, no GPU required, including the robot it costs less than $300, and it doubles on sax. When I'm done with my current project I'd like to try it with a drone, because aerial demolition derbies sound like the next great spectator sport[3].
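If anyone's curious, the core of that "page of code" is roughly this (a hand-wavy sketch, not our actual code; all the constants are made up):

    # TD(lambda) with linear function approximation, predicting "am I about
    # to hit something?", and turning when that prediction gets too high.
    import numpy as np

    NUM_FEATURES = 577            # e.g. from an image-filter feature extractor
    ALPHA, GAMMA, LAMBDA = 0.1, 0.9, 0.8
    THRESHOLD = 0.5

    w = np.zeros(NUM_FEATURES)    # linear weights
    z = np.zeros(NUM_FEATURES)    # eligibility traces

    def td_step(x, x_next, bumped):
        """One TD(lambda) update; bumped is 1.0 on bumper contact, else 0.0."""
        global w, z
        delta = bumped + GAMMA * (w @ x_next) - (w @ x)
        z = GAMMA * LAMBDA * z + x
        w += ALPHA * delta * z

    def control(x):
        """Pavlovian control: turn away if a collision looks too likely."""
        return "turn" if (w @ x) > THRESHOLD else "forward"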

----

0. There are techniques you can employ, for example: examining individual units or clusters of them for their response to different frames, deconvolution, and of course, messing with the inputs. But this is rarely done, because it takes time away from using the magic hammer of DNNs to nail yet another previously difficult problem. This is understandable, but it makes me wish I had the time to develop some tools for performing quick and easy trepanation on deep models so that examining the representation becomes as easy as the training part.

1. My colleague Marlos Machado has written a paper that seems relevant to this sorta thing: https://arxiv.org/abs/1512.01563 . By looking at what DeepMind's Atari DQN was doing (or what it seems to be doing) and developing the analogous linear features, you can get performance that is almost as good with a model that ingests data many times faster. That is, their median score was better than or equivalent to DeepMind's. When it comes to RL, if you can use linear methods it's a huge help -- you know you're going to converge[2], probably quite quickly.

2. Subject to technical conditions, e.g. you're dealing with a stationary ergodic MDP, Robbins-Monro stepsizes, the environment's oblivious, the algorithm's on-policy, no purchase necessary, see in store for details etc.

3. It might not be as good a representation, but maybe a faster reaction time is more important? And also because it could be that quadcopters are an entirely different kind of beast and really do need some of that old time deep learning religion, and I don't want to fall into the trap of thinking that the problem is simple when it's not. With these sorts of questions you either have to do the experiment yourself or pay a guy from Google to tell you what they've done (and then be prepared to litigate).


It concerns me a bit that at some point, there are fundamentals concerning simple yet effective approaches that may be forgotten (or at least not taught) in favor of black box neural nets, like Kalman filtering, PID control, etc.

Institutional memory is something of a weak point in the software industry compared with other fields.


Similar "institutional memory loss" happened in mechanics. On many occasions, it's not so much the theory as the fluency putting it in practice that might be lost...


Part of the solution, I think, is making implementations of existing techniques available (and of course, rewriting them in Rust). Deep learning is popular because it's so widely applicable that it justifies the effort of learning to use it: deep nets work with image processing and robot control, and it's kinda unusual to have mostly the same bag of tricks work for such different tasks.

But a reliable way to improve your performance (or at least write a publishable paper) is to examine why a specific technique worked, rewrite it as a differentiable function or finite automaton, and then implement it as a component in a deep net. So it's a cause for concern, but also an opportunity for academic arbitrage. I think it will work out eventually, but I become uncertain when I meet machine learning "experts" who are only familiar with deep learning and unwilling to consider anything else.


I mean, that's math... not as likely to get lost. Now, designing a NN to perform Kalman filtering seems interesting; maybe it would lead to insight into how the network operates... e.g., more of a grey box.


"...success is simply a continual failure to crash"

My programming methodology has been validated at last!


"There is an art to flying, or rather a knack. The knack lies in learning how to throw yourself at the ground and miss. Clearly, it is this second part, the missing, that presents the difficulties."


That describes my view of wingsuit flying to a T.


Wingsuit instructor here. Can confirm.


Is there a more stressful job than instructor for an extreme sport with such a high risk of death or serious injury? (I literally cannot imagine, though I hope I at least overstate the risk in my mental model.)


Jumping out of a plane wearing a wingsuit is reasonably safe. If something goes wrong, you deploy your parachute. If your parachute fails, you deploy your reserve. You have plenty of time to recover from a spin or stall.

Jumping off a rooftop or a mountain wearing a wingsuit is practically suicidal. If something goes wrong, you die. The margin of error is simply too small.

The latter form of wingsuit flying is relatively new and highly controversial, even within the wingsuit and BASE jumping communities.


Off of a rooftop would be practically suicidal. Wingsuits need time to inflate and start flying, which for the best guys on the planet is around 300 feet. Normal humans require about 400. 'Margin' is a word that has a bunch of different contexts, most of which still put wingsuit base in a reasonably safe range. Terrain flying, which is the sub-discipline of wingsuit base where you're goalposting trees, is indisputably the most dangerous sport on the planet, and I've lost six friends to it in the last year.

I don't know that I'd describe it as 'controversial' but would rather describe it as 'that thing a bunch of people with nowhere near enough experience or currency to be doing it keep doing and fucking killing themselves.'


Indeed, I was thinking of BASE wingsuit jumping.


So as some people further in the thread have brought up, you start flying wingsuits out of airplanes, and you start in suits that are far more forgiving than the ones you'll end up flying. It's similar to how a pilot first learns to land a Cessna before trying to land an F-18 on a carrier at night in a storm.

We do often (as instructors) talk about how nervous we get when we're with a student that we're pretty sure is just going to flatspin uncontrollably for like 8k ft, and it's just like 'Okay, here's everything you need to really have a bad afternoon. Don't? Please?'


Presumably all the good days make up for the very bad ones, else you wouldn't do it. Those must be really really good days (I find it impossible to imagine this as well.)


Every interview with a wingsuit flyer I've seen has them mention a few friends or even a partner who died doing it.


Oddly enough (and probably speaking to our mindsets as a community), it was the death of a good buddy of mine that pushed me to finally start base jumping.

Here's that story: https://vimeo.com/167054481


I love it when those HHGTTG references pop up every now and then on HN.


An expert is someone who has made all of the mistakes that can be made in a narrow field.

(Not the definition of 'Expert System' I was thinking of, however)


... and learnt how to avoid repeating them?

Seems like that's essential to claiming you have expertise.


Similar to the RC rally car that learns to drift by driving and crashing itself [1]. I'm collecting articles/videos where you can see machines teaching themselves things like flying/driving/video games [2].

[1] http://spectrum.ieee.org/cars-that-think/transportation/self...

[2] https://www.reddit.com/r/WatchMachinesLearn/


That's not exactly what the car in [1] does. They use stochastic optimal control methods, which are more domain specific. They perform forward simulation on a lot of trajectories and effectively pick a good one. They also use localization, so control is based more on current position than sensor inputs. The machine learning component is the dynamics model identification - determining how the car reacts to control inputs. The model is basically a complicated function with a few inputs and outputs, which tend to be smoothly varying, so ML techniques work very well. This is fairly standard in model predictive control, since empirical motion models tend to outperform ones that are physics based.

Edit: looking at the paper, they apparently use many physics based models of the car as a basis, but then use ml to mix the models together.


Unfortunately I don't know enough to see the difference, or even understand most of your comment :). I only read about ML for fun, I don't do anything with it! If I understand it, the racecar computes potential paths it could take, while the drone looks at what caused it to fall vs. continue flying?


No worries :) the drone basically generates a big dataset of "crashing" and "not crashing" video clips from the camera. It then feeds all that into a convolutional neural net, which can (after training is complete) give control decisions based on the camera which avoid obstacles. This is very "black box" in the sense that it's hard to say exactly how the system is working.

The car, on the other hand, uses hand written algorithms to forward simulate various controls. Based on the forward simulations, it can pick controls which are predicted to give good results. Forward simulation relies on a model of how the car reacts to any possible control. However, this model is complicated because of the nonlinear dynamics going on (inertia, wheel slip, etc). Therefore, they use ml techniques to identify the model.


In the car case:

We write programs that predict how the car will drive given steering inputs. Because we're not sure, we write several programs that give slightly different answers.

Given a driving input, all the programs predict the future: the parent comment called this "forward simulation."

We pick the program that has worked well in the past and do what it says to do - that program drives the wheel of the car.

We measure what actually happens to the car. We then remember which algorithm actually gave us the right answers (might be different from the one we picked to steer) - next time, we'll trust that one more.

Because it's annoying to keep writing more programs, we figure out what we can tune - like a left / right balance knob on the stereo or a bass / treble knob. In this case, it might be a "ground slipperiness" or friction knob.

So as well as picking the programs, we ask the algorithm to tweak the "friction" knob and try to pick a setting that seems to match reality.
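In toy code, that whole loop might look something like this (a made-up sketch of the idea, not the paper's actual method):

    # Several candidate dynamics models; steer with the best-scoring one,
    # then re-score everyone against what the car actually did.
    import numpy as np

    def pick_and_learn(models, errors, state, control, actual_next_state):
        """models: list of functions (state, control) -> predicted next state."""
        best = int(np.argmin(errors))           # trust the model that's been right
        plan = models[best](state, control)     # the "forward simulation" step
        for i, m in enumerate(models):          # remember who predicted well
            errors[i] += np.linalg.norm(m(state, control) - actual_next_state)
        return plan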

---

In the flying case:

We make a "black box" full of sheets of numbers and put a picture into one side. Each dot in the picture does some maths with the first sheet of numbers which makes a new "picture" for the next sheet.

We run maths based on remembered numbers and the answer (say 0.0 - 1.0) tells us "safe to fly" or not. Let's say 1.0 is safe (0.0 unsafe, in-between unsure).

Once we figure out that a given picture was safe we go backwards through the sheets of numbers to apply "back propagation" and change them - we make the "safe" picture output something closer to 1. Perhaps it output 0.50 before, now that same picture outputs 0.51. If the picture was unsafe, we adjust the other way.

We do that LOTS of times. Eventually safe pictures output 0.91 and unsafe ones 0.12 or something. We show the computer a new picture, and we call the answer "Safe" (say 0.8-1.0) "unsafe" (0.0-0.2) and unsure (0.2-0.8). We fly only towards pictures which are "safe".

Everyone pops champagne. We didn't learn much - only that lots of numbers can solve more wacky problems than before. It's hard to generalise what the computer "learnt" or really understand it.
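For the curious, here's the flying story in miniature, with a single "sheet of numbers" (a toy sketch; real CNNs stack many such sheets with extra machinery, and all the sizes here are invented):

    # One weight "sheet", a squashing function, and nudging the numbers toward
    # 1.0 for safe pictures and 0.0 for unsafe ones.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(1, 16 * 12))  # one tiny sheet of numbers

    def predict(picture):          # picture: flattened grayscale, shape (192,)
        return 1.0 / (1.0 + np.exp(-(W @ picture)))  # 0.0 .. 1.0 "safe-ness"

    def train_step(picture, safe, lr=0.1):
        p = predict(picture)
        W[:] += lr * (safe - p) * picture  # "back propagation", one-sheet edition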


I know there are lots of rules with racecars. Is "can't have AI" one of them?


There's a driverless race. I'm doubtful it'll be as powerful a force as the real-world race to market, which has a bigger prize.

https://electrek.co/2015/11/30/formula-e-will-launch-a-new-r...


Most professional racing will require the driver to be handling the controls without assistance (to a degree; I believe many, maybe even most, allow for things like power steering) so I think that takes care of the AI case.

Now in the future maybe they'll want to add the ability for AI drivers? It would sure make things interesting! Or boring, depending on how good the AI is.


I know people have trained AIs to do more sophisticated things than this, but something about watching from the drone's perspective as it scans its environment and moves around really makes you feel like you're watching a real intelligence at work.


It's certainly cool to watch, and in an area of research which is proving fruitful, but isn't ascribing true intelligence to it just anthropomorphising the point of view of the camera? When I watch an insect essentially using trial and error to find its way towards the light on the other side of an open window, intelligence isn't the first thing that springs to mind.


I find that in many cases the goalpost of what counts as intelligence moves further away each time we get somewhere close to what was previously considered intelligent. If you had asked someone ten years ago whether you could train a computer to fly on its own and learn from its mistakes, they would probably be more inclined to call it intelligent than we are.


I think it depends on how you define "intelligence". And to be honest, I don't really think we have a good definition for the various things that encompass what it means to be "intelligent".


If the insect learns from its errors would that be intelligence?


This is how Uber is going to train its self-driving cars, isn't it?


It's how they're training their business.


Would it not be cheaper and faster to simulate a drone and fly it through virtual 3d environments and still learn?

Or would the physics be too complex to model well for simulation?


It's not that the physics can't be modeled, but figuring out the appropriate models is probably harder than it looks. Every sensor and actuator has quirks. Does it matter how much your airframe flexes? Does the turbulence caused by some little protrusion matter enough to model? You aren't going to model every individual molecule, but how much detail is enough to be right?

The elegant thing about using machine learning is that you don't need to build any models at all. And you can develop the ML technique once and then reuse it to train different hardware configurations, instead of incurring the cost of modeling every one.


One way around that is to make small random variations in the simulation (sensor calibration, vehicle performance etc.) when you generate the training data, so that your system learns to drive a generic vehicle rather than a very specific one.
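For instance (a made-up sketch of that idea, sometimes called domain randomization; all the ranges and the sim API are invented):

    # Jitter the simulator's parameters each episode so the policy can't
    # overfit to one specific vehicle.
    import random

    def randomized_sim_params():
        return {
            "mass_kg":        random.uniform(0.9, 1.3),   # vehicle performance
            "motor_gain":     random.uniform(0.85, 1.15),
            "camera_fov_deg": random.uniform(80, 100),    # sensor calibration
            "sensor_noise":   random.uniform(0.0, 0.05),
        }

    # each episode (hypothetical sim API):
    # sim.reset(**randomized_sim_params()), then collect training data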


But you could use such a model to pre-train your NN before you continue training it in the real world.


There are some fairly realistic flight simulator video games for drones. It wouldn't surprise me if someone's also done a GTA-V mod for something like this too.


That's what Sadeghi and Levine [1] do in their work. They train in simulation in a lot of randomly generated scenes. Since the drone is trained with such a diverse set of scenes, the learned policy generalizes to the real world.

Also, note that the physics of the simulation doesn't even need to be realistic. Unless you are doing high-speed control or aggressive maneuvers, the challenging part is the perception and not the control. In the paper from OP, the controls are even high-level discrete actions: left, forward, right.

[1] https://arxiv.org/abs/1611.04201


I wonder how well the learned policy generalizes to other environments. Places like an art gallery, outside, or a cave. Could the network have learned something fundamental about monocular vision?

It would also be interesting to see if the learned policy corrects for perturbations. If we tilt the drone by hitting it, will the policy stabilize it again?

While this is a really cool result, I suspect that this approach might not be the best way to control UAVs. Dragonflies are ready to fly, avoid obstacles, perch on stuff, and hunt down prey right after warming up their wings for the first time. This implies that a good amount of the flight behavior is 'hard-coded.'

That said, I really can't wait until someone expands upon this approach. So instead of outputting left or right, the network could output 'stick vectors,' which translate to control stick commands. Maybe even have the network take in some sensor data and a 'move in this direction' vector. Add in a pinch of sufficiently fast video processing and we could probably learn how to fly through an FPV course or do aggressive maneuvers to fly through someone's windows [0].

[0] https://www.youtube.com/watch?v=MvRTALJp8DM


> If we tilt the drone by hitting it, will the policy stabilize it again?

My understanding of the way this is being done is that the output from the machine learning model is already a simple "left", "right", "straight on", so it's not really responsible for stabilization anyway.

That side of things is likely being handled by the drone's control software which takes those inputs, translates those into what angle the propellers need to be at to achieve it, and then translates that into the correct rotor speeds. If you hit the drone the gyroscope will pick up that it's at the wrong inclination, feed that information into the control software, and the control software will adjust rotor speeds to correct.
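That inner loop is classically something like a PID controller per axis (a generic sketch, not this particular drone's firmware; the gains are made up):

    # One PID controller per axis, turning gyro error into a correction that
    # gets mixed into the four rotor speeds.
    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, error, dt):
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # e.g., each control tick:
    # roll_pid = PID(kp=1.2, ki=0.05, kd=0.3)
    # correction = roll_pid.update(target_roll - gyro_roll, dt=0.002)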


This reminded me of the Douglas Adams books, in which Arthur Dent eventually learns how to fly by "throwing himself at the ground and missing".

Also, the flight had an almost organic quality to it somehow. Spooky, but cool.


What determines autonomous flying? Couldn't the drone just hover in the middle of the room and not crash? Would that count? Don't you need some sort of increased score for moving around?


In the video it shows that the AI is only able to do one of three things: "go left", "go forward", and "go right". It can't choose to hover in place because that isn't an intent it can express. Although I don't know why it couldn't just keep going around in tight circles?

Funnily enough, if you've ever implemented a Pacman clone, this is also how the ghost AI works.


Fun fact: all of the Pacman ghosts actually behave differently:

http://www.gamasutra.com/view/feature/3938/the_pacman_dossie...


This would be a lower level part of the control system, with some higher level part determining its target. I would assume it would be somewhere close to a PID loop being tuned to avoid flipping.


A paper airplane is flying. A "flight" involves takeoff, climbout, leveling off, descent, and landing.

So the question is, what does autonomous mean and what is it adding?


Imagine if Google taught their self-driving cars that way: 10,000 crashes. I think self-driving was completely procedural back when it started 15 years ago. But now with faster and better understood neural nets, some parts like recognizing objects have been replaced by deep learning.


> There is an art to flying, or rather a knack. The knack lies in learning how to throw yourself at the ground and miss. ... Clearly, it is this second part, the missing, that presents the difficulties.


Why would they not use a general purpose classifier [1] instead?

Sure, the tagging of objects in the field of view in this model may be unnecessary, but you'd leverage an existing model that should allow the drone to 'think' beyond the current limited "obstruction here". It could at least have been used as a base model to build upon.

[1] https://www.youtube.com/watch?v=_wXHR-lad-Q

Personally, I'm also looking forward to neural networks modeled after real brains [2] .... but the tech to accurately scan the complex interconnections in larger brains seems far away.

[2] http://www.smithsonianmag.com/smart-news/weve-put-worms-mind...


Why would an object classifier matter?

If my plane is about to crash, do I really care whether it's a mountain or a building?


Well you could decide the action based on the type of the object.

e.g.: If it's a human "Bob": listen for a command. If no relevant command, detour & continue to goal.

If it's a vehicle in the middle of the road, wait for it to move and then go on with your path if there are no other moving vehicles on a collision trajectory with you.

It's definitely a step ahead of what the OP is doing... but isn't it a more practical approach?

The point I'm trying to make is that training an NN to efficiently detect objects has been a solved problem for quite some time now. We should give more attention to experiments that take it a step ahead.


You would have really cool behaviour.

+ Not going through glass (although humans do not always excel at that themselves)

+ Going through a fly door curtain

+ Going through smoke

+ Not going through a mosquito net.

+ Not going through a fountain.

+ Pushing through soft objects.

+ Pushing a half open door.


The method used would also end up with many of these behaviors. If it didn't crash, that behavior is reinforced. So flying through smoke would not be avoided if it didn't cause a crash.


Would you care if it's a cloud or a reflection off glass?


Does it really predict 'weather' to move forward or not?

I think you mean 'whether'.


Does it really 'more' forward or not?

I think you mean 'move'.


Fixed. Good thing I'm not published in IEEE.


You need a few doses of https://twitter.com/respectfulmemes I think


if (sunlight){ propellor.frontLeft.activate(.9) } else { return 0 }


It's a pity that it was programmed to decide only between left and right images. It could have avoided the chairs by flying higher if there were top and bottom images too. Ideally, the number of decision-point images should be the area of the FOV divided by the drone's forward surface area.


Cool! Do I understand correctly that the splitting into the part where the drone was doing fine and the part where it was crashing (i.e., the annotation of the dataset) was still done manually?

A similar approach using unsupervised learning would be even cooler...


Pretty cool.

Why did they use an input that does NOT provide any information about depth/distance from objects?


It's also probably because it's a lot easier to use an off-the-shelf commercial drone than to build/modify one (from a researcher's perspective), and commercial drones typically already have cameras that can be accessed through their SDKs.

Also, off-the-shelf depth sensors can add a lot of weight to the drone. It might still be possible to fly with the extra weight, but now the drone will be more sluggish and fragile. It would be great if commercial drones had a built-in depth sensor.

Distance sensors such as sonar and proximity sensors are usually very noisy and they are susceptible to interference (if you use more than one).


Those drones are perfectly able to lift a couple of pounds according to their specs. The other reason you mentioned about not wanting to customize it is more likely.


Vision-based navigation is ultimately more interesting for robotics. Structured light depth cameras don't work in bright sunlight and aren't useful for long range perception. They also use more battery power for the light source.


Yes, but what I was trying to get to is that even if the drone can handle the payload, the drone becomes too heavy to crash safely, and it might break more easily (after falling down with all that extra weight).


I can only guess because I am not on their team, but:

They clearly didn't need it. Human pilots clearly don't need it.

Extra custom sensors might produce more noise than they are worth.

Image processing is a hot topic of research in CNNs.


Except humans derive distance/depth information from their sight (depth perception), that's why we have a pair of eyes and not a single eye.

The equivalent on this setting would be adding a second camera.

I'm not criticizing, their experiment is pretty cool, I was just wondering why they chose to use only the camera on board.


> that's why we have a pair of eyes and not a single eye.

This is really overstated. It really only matters for about 3 or 4 meters of distance. We do depth perception well enough at distances to drive cars.

We also do just fine at perceiving depth in video games and through a single-lens camera.

How often do you have trouble determining the depth of something in a movie? Only about as often as the filmmakers want you to.


Humans can fly quadrotors very fast and precisely through monocular FPV cameras.


Human pilots do get depth information by either looking, and/or by radar and other instruments.


Human pilots don't really use vision to see depth when flying - most things in flight are too difficult to judge distance/size, so one learns to fly without.

As for radar, most planes don't have their own radar. About all you're gonna get for anything close to depth is an altimeter.


to be fair, we're also usually not flying around inside a room


Humans don't get depth information past some distance that is probably pretty small compared to how far away objects are in flight. Past that point it's all contextual.


Somebody should do a cost-benefit analysis of all this machine learning business. For instance, how much did this project cost and what did they get in return? I'm not suggesting it's not worth it, just curious to know how the numbers turn out.


Also, IIRC early neural net research covered this approach with Hannibal & Attila decades ago.

Edit: I guess there are some differences -- I think this is what I was remembering:

http://people.csail.mit.edu/brooks/papers/AIM-1091.pdf

...but maybe there was further work that is more closely related.


Like watching a baby learn to walk! We're truly in an exciting age for technology.


Yep! And the best thing is that nothing could possibly go wrong and there is no conceivable universe in which humanity comes to regret the exact way we're building intelligent machines right now. :)


Oh come on. If that were even remotely possible, there would be books about it. Maybe even movies. That would make a good summer blockbuster.

Luckily, that possibility is too remote even for Hollywood.


Regrets about building small autonomous drones that later end up killing people? That was a Black Mirror episode (albeit not the best episode):

https://en.wikipedia.org/wiki/Hated_in_the_Nation_(Black_Mir...


ok, you gotta be a little more subtle with the sarcasm there, lol


That's silly. It took me no more than 1,000 crashes to learn to fly my drone.


But hiring you to fly their drone doesn't scale. You're not nearly as good at replicating your ability as a computer is, or at working for free.


True, but I was kidding.


Pretty cool!

I was curious, in the article it mentions difficulties navigating through glass environments, could they combine visual information with sonar to avoid crashing into glass and other transparent barriers?


Now the question is, do human quadcopter pilots have the same problems running into glass? Although birds tend to be bad at handling 'glass environments'


Yes, this is called sensor fusion, and it would use features from both sensors simultaneously to make decisions.


Drone Uses AI and 11,500 casualties to learn How to kill terrorists ... coming soon /s


After 10,000 civilian casualties, it finally got a terrorist.


Learn-to-fly scenarios vs. how Tesla would build an autonomous drone is my question for something viable.


Can't they use a simulation to do the training and then fine-tune it using the poor drone?


The third sentence of the fine article:

> The gap between simulation and real world remains large especially for perception problems.


They're training on video, so their simulator would have to produce thousands of hours of realistic video, on top of simulating the aerodynamics and performance of the drone itself. All to produce something which is going to be inferior to real-world data.


It doesn't need thousands; the article says they did only 40 hours of flying for this experiment.


Human pilots also train using simulators first.


Well, because if a human crashes a plane, that human doesn't stick around long enough to learn from that experience and do better next time.

Also the cost of a human crashing a plane is a bit more than a drone crashing itself, such that it's probably better to save on planes, and invest in simulators - whereas developing an accurate physics simulator for the purpose of training a drone might take more time/money than just letting it crash, and figure it out itself.


Type rating, yes, but initial training is usually done in a real airplane, at least here in the U.S.


A coding drone uses AI and 11,500 bug reports to learn how to code. Are we there yet?


In other words, they're just learning not to crash/failing to crash.


What happens if, after training, we move the drone to another location? Will those learned "abilities" be reused, making it easier to fly in the new location?


Oh, great. Next step: weaponization and AI target analysis.


Pretty close to 10000 hours to get good at something?


I don't really believe in the 10,000 hours number. Granted, I've never read the book, but it 'feels' faux.

The amount of time needed to master something is more a combination of deep practice, motivation to master, and the individual's learning rate.

'The talent code' talks about deep practice. Deep practice is the 80 in the 80/20 principle.


Some essays have refuted this hypothesis. Some people have natural talent, e.g. prodigies, who may become proficient in far less time.


The article says 40 hours of flying time.


Plus, trivially parallelizable across many drones, and it's not even using normal RL techniques to accelerate learning. Training a drone controller is more or less a solved problem, the interest of this is whether all the crash datapoints are useful.


I think this is how babies learn to walk.


This really is the beginnings of SkyNet.


I don't understand why it didn't learn to fly using a flight simulator.



