The only way to stay ahead of the 'evil' corrupting influence of new tech is to prevent its widespread use from being controlled by a single entity. So YOLO is fine as long as you can't deploy it in a cloud at scale.
So, just as nuclear weapons (a massive concentration of energy released at tremendously fast rates) are bad, so is a super AI/AGI (massive computational ability at nanosecond timescales).
No evil was ever perpetrated by institutions of learning; only the business entities and governments that scaled up those discoveries caused evil.
And now for the flame bait:
So, by this argument we should elect Luddites to govern us, especially ones that are not imaginative or creative.
I know you didn't make this argument here, but I still want to point out that that's ethically irrelevant for his decision.
Or the other way around: "Someone else would have done it" is not a defense when you've built something that was clearly gonna be used for Bad Things(TM).
Indeed, there are many nefarious applications of computer vision. But applications to the medical industry are plentiful too.
I see weighing up the net benefit as a tricky and personal matter.
Furthermore, autofocus has already progressed from face detection to eye detection.
Is your point that pjreddie used horses, dogs, and bicycles as training data, not realising that his technology could also be used on human faces?
I'm not sure how you got to this idea, but it's just not plausible.
Initial good results in CV don't mean that you realize all the ways it'll start to be used, or the implications thereof.
There are existing power structures (planned ones and emergent ones) in this world, and the technology we create can either be used to reinforce them or to question them. Sometimes it is both, things cancel each other out, and we move on a sideways trajectory. But in the case of CV, it is quite clear who will benefit: those in power, those who need to quantify, control, and punish the human element but don't have the manpower (= legitimacy?) or funds (= priority?) to do so manually.
I get that working in CV is interesting and cool stuff, but the collective suffering it might help create and sustain is something one should seriously think about as well.
But in this case it's very simple:
"Good guys" using of CV gets them things like good auto sorting in Google photos
"Bad guys" using CV can reduce the complexity of creating a fully Orwellian, big brother like surveillance state from "absurdly complex to implement and impossible to maintain" to "we can already put in place a solid implementation today and it will get better by the day".
Now, I used to work in CV, and as you said, I get how great and exciting the underlying tech is. But I definitely lean more towards fear than excitement these days. I also realize that it's already everywhere, with heavy research efforts, and that you can't ever stop it at this point. But that really goes to show that the YOLO creator was right.
Think about it: what do the Chinese people gain from CV or face recognition right now? Maybe cool filters. The Chinese government? Unimaginable levels of surveillance and control over its entire population, and it's just getting started.
I agree that society needs to come together as a whole and regulate this into law, because that's how bad actors from governments can be stopped. At least in democratic countries.
CV is a vitally necessary component of self-driving car technology. Self-driving cars could save more than 30,000 lives each year in the US alone.
Even the very wise cannot see all ends, so why try? Work on what interests you.
To qualify, it needs:
- To have a negative net effect. Explosives for example don't qualify. They are certainly used as weapons and for all sorts of nasty reasons, but they are invaluable in many areas, including safety systems.
- Not to be an essential stepping stone for other, positive discoveries. For example the V2 missile made space exploration possible.
We could put all sorts of weapons on the list, with nuclear bombs in a top position. But think about it. Hydrogen bombs didn't kill anyone. Although the idea is debatable, they may even have acted as deterrents, preventing conflict. As for Hiroshima and Nagasaki, the bombing essentially ended the war; who knows how long it would have continued otherwise, potentially with more victims than the nuclear bombings caused. More generally, the most technologically advanced countries are now living in an unprecedented time of peace, despite having the most advanced killing machines ever.
In the end I don't see any tech that I would put in that list because of abuse. The ones I would put there would be of the "oops, didn't know it was bad, let's stop using it" kind. Leaded gasoline comes to mind.
As a German, I'm always amazed how casually Americans are willing to forgive themselves and even rationalize their state's war crimes as good.
- To have a negative net effect. Explosives for example don't qualify...
I was more idealistic about this kind of thing at one time, but it really is easy to rationalize the "Work on what you want to, don't worry about the end use" attitude that I have now. All I needed to convince myself was a good example.
Imagine that it's the early 1980s. Reagan is in office, the Cold War with the Soviet bloc is still very much a thing, and Star Wars is ramping up. Every other week it seems that somebody proposes yet another batshit-crazy weapons system. You're an engineer with progressive political views, and your bosses are asking you to work on a vast, global satellite network that will allow the military to locate both targets and assets with pinpoint accuracy anywhere on Earth. As far as you're concerned, ol' Ronny Raygun can fuck right off, and you tell them as much. "I'm not working on anything like that!"
20 years later, it turns out you missed your chance to get in on the ground floor of the most important public utility since the telephone system, all because you could only see the destructive uses for the technology.
For me that's hypothetical since I was nowhere near old enough to be employed at the time, but it's easy to say the same thing about applications like UAVs, autonomous vehicles, and ML/AI in general. CV is nowhere near enough of a defense-centric technology to justify refusing to work on it, IMO. Someone who refuses to work on CV on ethical grounds is walking away from their share of our technological future, just like the hypothetical engineer who refused to work on GPS.
I tend to agree with the general notion that nukes are a net win for world peace, but it's debatable whether the net death toll due to warfare has been that much lower in the post-Hiroshima age. Superpowers just conduct proxy wars nowadays instead of beating up on each other in person. If you added up the civilian toll of those proxy wars, it would probably be right up there with many WWIII scenarios, but since those conflicts are happening somewhere else besides major American or Soviet cities, nobody much cares.
The other concern I have is a relatively new one: people are going to forget what those things are and what they do. Eventually, the last person to see a nuclear explosion in person will die of old age. Long before that happens, morons with microphones will deny that Hiroshima and Nagasaki ever happened, just like they do now for events ranging from the Apollo landings to Sandy Hook. Others will take the position that nukes are just bigger versions of regular bombs, nothing that special.
So long term, who knows... maybe it would be better to put that genie back in the bottle if we could. Dunno, and in any case, it's hardly the same thing as CV. I can understand if someone is reluctant to work on nuclear weapons technology, but that understanding stops well short of refusing to work on CV.
The person I murdered could've gone on to be the next Hitler, so just do whatever interests you.
This logic isn't very helpful, is it?
I think it's more likely that you want to detect a person so you don't hit them with a car than because you're trying to hit them with a drone missile. Detecting cancer with CV and other improvements to diagnostics also save lives. If you could be working on these technologies and stop, your decision could cost lives.
Technology that exists for surveillance can also be turned on the surveillants. The most relevant case probably being police abuse being caught on smartphone cameras. These tools don't just discipline citizens, they also discipline the police. If I'm in a room with someone in a position of authority far above me, I'd rather have the camera on both of us than none of us.
So it's not actually that simple, and I don't see opting out as realistic or helpful, because the other benefits these technologies bring (security, for example) will always convince the population to drive adoption forward.
My attitude: being in the developer group, influencing how the applications behave, interfacing with our management, sales, and clients, and being a voice unafraid to raise ethics questions within these groups is my way of knowing the state of this dangerous technology. I'd rather be there influencing its development and use than on the outside, frankly blind to what its in-deployment capabilities are.
If we focus on the reinforce-vs-question part, it's a kind of prisoner's dilemma. Assume governments (the reinforcers) will work on this technology anyway, and will end up with some lower-quality version (say 50/100). The questioners then have two options:
- don't work on this, and accept 50/0
- work on it and improve both sides: 70/20
Without deeper context it's hard to decide which is better for the questioners.
Would you prefer to have a gun against a rifle, or no weapon against a gun?
It is obvious to anyone that one of the primary uses of computer vision is watching other humans at scale. This cannot be surprising to anyone in the field, yet now the ethical concerns, mixed with politics, are through the roof.
Maybe we can be more precise about this?
The work here is a technical achievement, but there are some weird comments here which I think have something to do with the author being Chinese.
Focusing too much on the negative aspects of a thing will lead you nowhere.
The Manhattan Project produced more than 500 research papers. It gave us iodine-131 and other radionuclides that we use in medicine. And it gave us lasting peace. So was it bad or not?
Is gene editing bad? Was the internet bad? Was the dude who invented the wheel bad?
I find your comment entirely misses the point and is low effort. Asking whether random techniques, inventions, or inventors were "bad" makes as much sense as asking:
Is the sun bad? Were dinosaurs bad? Are the aliens bad? Is life bad?
I think the jury is still out on that. Or rather the trial is still underway.
One computer brain can be copied for free, and deployed to thousands of computer clusters and work 24/7 on facial recognition, at a fraction of the cost.
When looking at these projects, how do I figure out what hardware they're aimed at? This one mentions NVidia/CUDA.
Is there any sort of hardware abstraction layer that YOLO or R-CNNs can operate on? Can I use any of this code (or models) for my R-Pi?
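For what it's worth, one de facto abstraction layer is OpenCV's dnn module: it can load Darknet-format YOLO weights and lets you pick a backend/target at runtime (plain CPU, CUDA, OpenCL). A minimal sketch, assuming you've fetched the yolov3 .cfg/.weights files yourself (the file paths here are placeholders):

    import cv2

    # OpenCV's dnn module acts as a hardware abstraction layer:
    # the same model definition can run on CPU, CUDA, OpenCL, etc.
    net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

    # On a Raspberry Pi you'd fall back to the plain CPU target;
    # with a CUDA-enabled OpenCV build you'd pick DNN_BACKEND_CUDA.
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

    img = cv2.imread("input.jpg")
    blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

A full-size YOLOv3 will be painfully slow on a Pi, though; the Tiny variants are the usual compromise there.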
I remember a professor saying,
"The definition of AI is: something that doesn't work"
I use it in the exact same scenario: a Raspberry Pi (Zero W) with a camera, motion detection, and notifications on movement, though my implementation may be specific to my setup.
Each of my Raspberry Pi cameras runs motion (https://motion-project.github.io/index.html), and recorded files are stored on an NFS share. Each camera has its own directory within this share (or rather, each camera has its own share within a parent directory), and the server runs a Python script that monitors for changed/added files and runs object detection on the newly created/changed files.
If a person is detected in the file, it creates a "screenshot" of the frame with the most/largest bounding box and sends a notification through Pushover.net, including the screenshot with the bounding box.
The implementation is not quite as simple as described here (e.g. I use a "notification service" listening on MQTT to send the Pushover notifications), but the gist of it is described above.
Edit: I should probably clarify that my cameras are based on the Raspberry Pi Zero W. They have enough power to run motion at 720p at around 30 fps. Not great, but good enough for most applications. I've since migrated most of them to Unifi Protect instead. A little higher hardware cost, a lot better quality :)
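A minimal sketch of the watcher half of that pipeline, assuming a simple polling loop over the NFS share; detect_person is a stub standing in for whatever detector you run, and the Pushover credentials are placeholders:

    import os
    import time
    import requests  # pip install requests

    WATCH_DIR = "/mnt/cameras"         # NFS share, one subdirectory per camera
    PUSHOVER_TOKEN = "app-token-here"  # placeholder credentials
    PUSHOVER_USER = "user-key-here"

    def detect_person(clip_path):
        """Stub: run your object detector over the clip; return the path
        of an annotated frame if a person was found, else None."""
        return None

    def notify(frame_path, camera_name):
        # Pushover accepts an image attachment alongside the message.
        with open(frame_path, "rb") as f:
            requests.post(
                "https://api.pushover.net/1/messages.json",
                data={"token": PUSHOVER_TOKEN, "user": PUSHOVER_USER,
                      "message": f"Person detected on {camera_name}"},
                files={"attachment": f},
            )

    seen = {}  # clip path -> last-seen mtime
    while True:
        for camera in os.scandir(WATCH_DIR):
            if not camera.is_dir():
                continue
            for clip in os.scandir(camera.path):
                mtime = clip.stat().st_mtime
                if seen.get(clip.path) == mtime:
                    continue  # unchanged since the last pass
                seen[clip.path] = mtime
                frame = detect_person(clip.path)
                if frame:
                    notify(frame, camera.name)
        time.sleep(5)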
I think the next thing, which is already happening, is embedded “AI” processors on SOCs. Apple is doing it with their A12 processors, and it’s probably only a question of time before you can purchase “general purpose” ARM chips with “AI” capabilities.
Pjreddie is a giant for this. It is a real contribution.
Where can the "Easy Set", "Medium Set", and "Hard Set" evaluations referenced in the "Wider Face Val" be found?
Trying to think of some applications for this. For example, one could create a mechanism that watches people entering and exiting a shop, providing the shop owner with more quantitative data he could use to optimize his sales.
Or you could have it watch a soccer game, generating all sorts of data on how the game went.
All on a relatively cheap piece of hardware.
The plain-text XML for the frontal face detector is 912 KB, or 132 KB gzipped. It would be smaller still in a binary format.
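Assuming that's OpenCV's stock haarcascade_frontalface_default.xml, the comparison is easy to reproduce:

    import gzip
    import cv2

    # OpenCV ships the frontal-face Haar cascade as plain-text XML.
    xml_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    raw = open(xml_path, "rb").read()
    print(f"plain: {len(raw) / 1024:.0f} KB, "
          f"gzipped: {len(gzip.compress(raw)) / 1024:.0f} KB")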
Picasa, on the other hand, has no problems with huge libraries, and the tagging interface is actually fun to use.
The downside to Picasa is that the initial indexing is very slow, as it only uses one core.
Also, the software needs to write metadata into the JPG file itself (both Digikam and Picasa do this); no sidecar files allowed.
Zolo does feature vector similarity analysis on facial feature vectors. Extracting those vectors is something else’s job.
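For anyone unfamiliar with that split: once something upstream has produced the embeddings, the similarity half is simple. A sketch using cosine similarity, where the 128-dim vectors and the threshold are made-up stand-ins:

    import numpy as np

    def cosine_similarity(a, b):
        # Cosine similarity between two face embeddings.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # The embeddings come from some upstream model; producing them is,
    # as noted above, "something else's job".
    emb_a = np.random.rand(128)
    emb_b = np.random.rand(128)

    THRESHOLD = 0.8  # in practice, tuned on a validation set
    print("same person?", cosine_similarity(emb_a, emb_b) > THRESHOLD)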
Oh look, now every authoritarian government has free access to never before seen levels of data harvesting. But nobody has to feel any guilt because they only contributed a third of the machine. Hooray!
What we should be worrying about is the actual use of grand-scale citizen-control and monitoring projects that are enabled by technology. Think China, not the UK or US.
Well, this is damn close. It just needs an executive apparatus, which is where drones surely come in handy.
Watching leadership, supine and carefully uncritical of burning, looting mobs, offers little confidence that they will stand in the way of this. After all, only <outgroup epithet>s have anything to fear.
Please tell me why this is an unlikely scenario.
If researchers getting good at something is sufficient priming to cause you to direct your imagination toward hyperbolically negative outcomes, the problem on your hands is a constitutional resistance to further progress in the research area.
In that case, challenging readers to produce arguments on the finer details of the narrative you've painted in support of the technopessimism is bad faith rhetoric.
There weren't many comments when I replied.
My criticism was tailored to a fairly specific phenomenon: asymmetrically imaginative doomsaying that appeals to a vivid vignette / sketch of an adjacent possible future featuring some hyperbolically elaborated extension of trending tech, like Flash Mob Gone Wrong and Slaughterbots.
HN flagging and points are irrelevant to me.