Remember the video where a helicopter crew shot reporters because their cameras looked like an RPG? A soldier with a gun, facing an uncertain situation in which he or his friends risk death, will weigh heavily on the "caution" side, shooting at everything that moves. An AI with a gun, however, can accept its own death as an acceptable outcome in a similar situation.
In two words: scale and miniaturisation. A rifleman has inherent limitations -- he cannot move unaided beyond his walking speed, cannot be made to weigh on the order of a kilogramme, and he must sleep and eat and shit. He cannot lie in wait indefinitely, and he cannot fly either. He has, in the godawful vernacular of the defence-contracting industry, SWaP (size, weight, and power) issues. His face is as vulnerable to bullets as yours or mine, and a 12.7mm (.50 BMG) round will walk through his body armour anyway. He is human, and the harm that men with guns can do is thus limited.
Stuart Russell uses the example of micro-UAVs with AI-based targeting software and each armed with a single-use shaped charge (for anti-personnel use or breaching doors) -- 10^6 of them will devastate a city, with extremely little human/logistical support needed. A million riflemen could do a bunch of killing, but they will be slower, easier to stop, easier to detect, and will require a lot more support and infrastructure to remain effective.
What do we call weapons that allow very few men to kill millions without placing themselves in any hazard, again? Russell (rightfully, in my judgement) classes this sort of use of AI as "scalable WMDs". Lethal autonomous weapons shouldn't be compared to a "soldier with a gun"; appropriate comparisons are more along the lines of "flying landmines with face recognition".
All of those things are important, but none of them are a priority for the people who program the "AI with a gun".
Aren't they, really? Why do you think so? Soldiers have exactly the same incentives as the designers and engineers of those devices: accepting an enemy's surrender can be a rational tactical choice (so that more of them surrender instead of fighting to the end), they are just as accountable in the eyes of the law (which may or may not matter to them - exactly as with ordinary soldiers), etc.
The only difference is, an AI will make choices rationally, less influenced by the emotions of the battlefield. Do you really think the net result of the average soldier's emotions brings him closer to "merciful"? As far as I can tell, it's the opposite - the most powerful emotion on the battlefield is usually fear, and it doesn't make people merciful at all.
Humans will always make mistakes, and while two humans can make the same mistake, each mistake is individual. A bunch of bots stamped with the same code and running on the same hardware will all be capable of making the same mistake, due to the same bug.
That's only true for the current, logic-style programming. I don't think it will hold for neural-network-based decision systems.
It happened only last week in Zimbabwe. An army of humans designed to keep the ruling powers safe turned on them and took control, and it's hardly the first time.
Skip to 0:40 for the human-tracking demo.
If it could differentiate between strangers and yourself/friends/family, that would be (additionally) interesting.
We plan to open-source it over time
My focus is on capturing the health of plants and extracting meaning using a Deep Learning Video Camera.
Does anyone have any information on where to start? For instance, is there a database of plant diseases that I can feed into the ML engine?
More ideas: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4600171/
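For a concrete starting point before reaching for deep learning, here's a minimal baseline sketch: a color-histogram feature plus a nearest-centroid classifier, in plain numpy. (The "healthy"/"blighted" images below are synthetic stand-ins, not real data; the class names and thresholds are illustrative assumptions, and a real project would train on an actual labeled plant-disease dataset.)

```python
import numpy as np

def color_histogram(image, bins=8):
    """Flattened per-channel color histogram as a crude leaf-health feature."""
    # image: H x W x 3 uint8 array
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

class NearestCentroid:
    """Baseline classifier: one mean feature vector per disease class."""
    def fit(self, features, labels):
        self.classes_ = sorted(set(labels))
        self.centroids_ = {
            c: np.mean([f for f, l in zip(features, labels) if l == c], axis=0)
            for c in self.classes_
        }
        return self

    def predict(self, feature):
        # pick the class whose centroid is closest in feature space
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(feature - self.centroids_[c]))

# Toy demo with synthetic "leaves": healthy = mostly green, blighted = mostly brown.
rng = np.random.default_rng(0)
healthy = [np.stack([rng.integers(0, 60, (32, 32)),
                     rng.integers(150, 255, (32, 32)),
                     rng.integers(0, 60, (32, 32))], axis=-1).astype(np.uint8)
           for _ in range(5)]
blighted = [np.stack([rng.integers(120, 200, (32, 32)),
                      rng.integers(60, 120, (32, 32)),
                      rng.integers(0, 60, (32, 32))], axis=-1).astype(np.uint8)
            for _ in range(5)]
feats = [color_histogram(im) for im in healthy + blighted]
labels = ["healthy"] * 5 + ["blighted"] * 5
clf = NearestCentroid().fit(feats, labels)
print(clf.predict(color_histogram(healthy[0])))   # -> healthy
```

A baseline like this tells you how much of the problem is already solved by raw color statistics, which is worth knowing before you spend time training a deep model on the same images.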
 - https://twitter.com/search?src=typd&q=metaverse%20app
 - https://www.youtube.com/watch?v=zWuZVM46qa0
I think for a hobby project bundling an accelerator is the right choice so that hobbyists don't have to worry so much about performance.
In the same way that a Raspberry Pi really is overkill for almost everything people do with it -- you could do the same with a microcontroller and no OS -- someone could probably squeeze something onto a Raspberry Pi without the accelerator, but that would be far harder than just getting started with the high-level APIs.
What are the inputs, and outputs?
How do you train it?
I only see a hardware assembly guide, but nothing on the software.
EDIT: found more information here: https://developers.googleblog.com/2017/11/introducing-aiy-vi...
SD image - coming soon
Android app - coming soon
SDK - no links or search hits
It's an unfinished project that's been rushed to the press with little documentation.
"It’s called the AIY Vision Kit, and it’s up for pre-order from Micro Center for $45, with an expected ship date of December 31st."
Not sure that's the contrast you are looking for...
I'm guessing this VisionBonnet accessory is simply another spin on the Intel "Movidius Neural Compute Stick" with the Movidius Chip wired directly to the CSI Camera port and the GPIOs on the Zero used to talk with it. So you probably develop on it using the same Movidius Toolchain you use for the Neural Stick: http://developer.movidius.com
Their SDK recently had a major release with TensorFlow support included, which I bet drives this. (Even with Tensorflow Lite optimizations, the RPi zero is probably just too weak to drive inferences for any non-toy model.)
For one thing, this board in particular apparently has a direct connection to the camera. I'm not sure you can do anything but live video from the directly-connected camera (in my case, I want prerecorded video / video from IP cameras). Maybe it can, but it's not immediately obvious either way.
The $75 "Movidius Neural Compute Stick" uses the same chip and does everything via USB, so that's more promising. But it's a binary-only API that's totally focused on neural networks (and only available for Ubuntu/x86_64 and Raspbian/armv7). In contrast, I believe you can easily send the Hexagon arbitrary code. Its assembly format is documented, and upstream llvm appears to support it. So if I want to do background subtraction via more old-school approaches, the Hexagon is probably useful where the Movidius stuff is not. And I have yet to learn anything about neural networks, so that's a significant factor, for me at least.
Really neat hardware but I wish it were more open.
Security and game cameras are a massively unsolved problem, for instance. I'd like to capture footage of bears, coyotes, and other wildlife as it travels through my back yard, not to mention keeping an eye out for larger bipedal visitors. But it's almost impossible to convince the naive motion detection algorithms in my surveillance cameras not to respond to trees swaying back and forth in the wind, or to the resulting rapid movement of patches of dappled sunlight. Or to spiders crawling back and forth in front of the lens, building a web. Or to moths that seem to be attracted to the IR illuminator at dusk. Or to any number of other things that any human would instantly recognize as a false alert, but that are very difficult for software to reject without frequent mistakes in sensitivity, specificity, or both.
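To make that failure mode concrete, here's a minimal numpy sketch of naive motion detection -- an exponential-running-average background model with a per-pixel difference threshold. (This is an illustrative toy, not what any particular camera actually runs; the parameter values are made up.) Anything that changes more than the threshold counts as motion, which is exactly why swaying branches and shifting sunlight trigger it just as readily as a bear.

```python
import numpy as np

class RunningAverageDetector:
    """Naive motion detector: keep an exponential running average of past
    frames as the 'background' and alert when enough pixels differ from it.
    It has no notion of *what* changed, only that something did."""
    def __init__(self, alpha=0.05, threshold=30, min_fraction=0.01):
        self.alpha = alpha                 # background adaptation rate
        self.threshold = threshold         # per-pixel diff to count as motion
        self.min_fraction = min_fraction   # fraction of moving pixels to alert
        self.background = None

    def step(self, frame):
        frame = frame.astype(float)
        if self.background is None:        # first frame seeds the background
            self.background = frame
            return False
        diff = np.abs(frame - self.background)
        moving = diff > self.threshold
        # slowly fold the new frame into the background model
        self.background = (1 - self.alpha) * self.background + self.alpha * frame
        return bool(moving.mean() > self.min_fraction)

# Static scene: no alerts after the seed frame.
det = RunningAverageDetector()
static = np.full((48, 64), 100, dtype=np.uint8)
alerts_static = [det.step(static) for _ in range(10)]

# Same scene with a large bright patch appearing -- alert fires, but the
# detector can't tell a bear from a sunlit branch doing the same thing.
intruder = static.copy()
intruder[10:30, 20:40] = 250
alert_intruder = det.step(intruder)
print(alerts_static.count(True), alert_intruder)   # -> 0 True
```

Rejecting the false alerts means moving from "did pixels change?" to "what changed?", which is where an ML camera like this one could actually earn its keep.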
It's hard to believe that anyone with an outdoor security camera hasn't had to deal with similar hassles. I'm sure there are other applications for a camera like this, but if I were an investor, I'd be very interested in the intersection of ML and security in general. I'm definitely interested as a homeowner.
Of course, neither would a camera that's made out of cardboard and runs on a Raspberry Pi. But for prototyping, this seems like it could offer a good start.