> Although the cameras are cheap, officers can generate 15 gigabytes of video per shift; storage costs mount. Police unions often oppose body-worn cameras, fearing they imperil their members by giving superior officers licence to search them for punishable behaviour.
> Other officers complain about the amount of time required to review and redact footage in response to public-information requests.
Sure, the Police themselves don't like the cameras turned on them. Suddenly it is "too expensive".
Then they will turn around and argue for ubiquitous CCTV camera installations.
The optimist in me says that the cost of proper police reform, training, body cameras, video storage, and actually punishing bad cops would be much less than that.
I'm sure implementing reform etc isn't cheap, but I seriously doubt it's on the order of $100M.
Uh... what they could do to help officers improve most might be shipping a device that doesn't seem to mysteriously malfunction or get turned off during the most controversial use-of-force incidents.
The way bodycams are supposed to help is by forcing officers to be accountable for their actions. When police have plausible deniability due to poorly designed tech, all these expensive devices become basically worthless.
In my cynical moments I have wondered whether making such bad tech is actually a selling point for Axon and its competitors. They're selling to police departments; the less likely bodycams are to embarrass police, the more likely they are to make a sale. Check the box, move along. I try to have more faith in our institutions, but the abject failure of bodycams from a technology perspective makes me wonder.
I suppose it could also just be due to a few companies having a lock on an enterprise hardware market.
Blame everybody for only looking out for their own interest, from scammers to corrupt politicians to short-sighted investors to oil execs. Otherwise it's just status-quo infinite whack-a-mole and we see how well that's going.
If they don't want accountability as police officers, they can go find another job.
> Don't blame them for looking for their interest.
I can, I will, and I will argue that, as a citizen, it is my moral obligation to ensure that public servants are acting in the best interest of the public. But thanks for your concern.
If "punishable behaviour" includes things like slacking off to stop by dunkin donuts, the problem is with what qualifies as punishable behaviour, not the ability to search for it.
I agree that this is an unreasonably high standard to hold people to, people whose job doesn't have anything to do with being the sole arbiter of violence and defusing dangerous situations at the trust and expense of taxpayers and local citizens.
Edit: unless you meant not supervisors but the policymakers who define "punishable behaviour", in which case getting rid of body cams is about as helpful as rearranging the deck chairs on the Titanic.
You're appropriating sociological behaviors of the individual to maxims at the corporate level, which is ridiculous. Corporations need to be regulated, not coddled in some kind of convoluted, scaled-up, harm-reduction policy.
Yeah, sure, ban it. But without social consensus backing up those laws it will have no teeth in practice.
This kind of technology is fundamentally dangerous to people who do not fit cleanly into the rigid caste-like boxes that are used to categorize people.
Perhaps a small badge with friendly visual metadata would be handy. Not that we need to give them more information, of course, just thinking out loud. The idea of everyone walking around with a little badge on their chest containing their metadata is oddly dystopian haha.
This idea was tried before, though, for just a subset of the population:
Like it or not, we all have piles of metadata that follow us around. A big one is our face. It won't take long for facial recognition to do the same thing I mentioned in the previous post, except you won't have a choice then. Hell, you won't even know.
The metadata badge idea was of course not serious; but it doesn't seem so farfetched to me or absolutely terrible. It (as I proposed it), at least, would be something you controlled.
Unlike state IDs, licenses, or Nazi camp badges, which are not something you would control.
I do not appreciate the extreme association of a free-thought idea with the worst massacre in history. I'll assume you meant no harm, but still, I figured I had to at least offer a clarification.
E.g., someone robs a bank, their face is captured, and you get an alert when they are seen elsewhere.
Which is to say, if you say you're a "trans woman", am I able to infer that your desired gender pronoun is female?
Yes, this is indeed a generally safe assumption. View "trans" like any other self-descriptive adjective. "I'm a tall woman", vs "I'm a trans woman". It's not an absolute given, because nothing is black and white (some might prefer gender neutral pronouns), but generally safe.
Generally, people who prefer neutral pronouns will identify as nonbinary, genderqueer, or perhaps transmasculine or transfeminine (although not all people who so identify prefer neutral pronouns), and you might need to ask them for their preference.
Prohibition works when the barrier to entry is higher. Any mook with some yeast and sugar can make alcohol. Pot grows just about anywhere (which is why they call it weed). But not a lot of people are privately developing nuclear bombs or cloning humans.
At the same time, I am aware of the dangers of people doing it for commercial purposes (on so many levels!).
This is not to say they don't have good reasons for not liking a technology. Rather, that they tend not to know all sides of something.
Does it work? (Any reviews, since it's past its release date?)
I'm pretty sure if you were, and the crime was caught on camera, your position on facial-recognition software would change.
Funny enough (actually the opposite of funny), the systems used by law enforcement are designed to create more false positives, which then leaves it up to law enforcement to decide which "leads" to follow up on. This will certainly be used to validate abusive behavior.
Additionally, facial recognition is used in some countries right now to silence political dissidents. If you think that can't or won't happen in the US, I'd suggest you look at how activists have been treated in the past (beaten, arrested, and sometimes killed).
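The false-positive dynamic described above comes down to a decision threshold on a similarity score: where a vendor sets it determines how many spurious "leads" investigators see. A minimal sketch with made-up 128-dimensional embeddings (the random vectors and threshold values are illustrative, not from any real system):

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical face embeddings: one probe face and a gallery of
# 1,000 unrelated faces (random vectors stand in for a real model).
probe = rng.normal(size=128)
gallery = rng.normal(size=(1000, 128))

scores = [cosine(probe, g) for g in gallery]

# A strict threshold yields few or no spurious matches; a lenient one
# floods investigators with false "leads" to exercise judgment on.
strict_hits = sum(s > 0.5 for s in scores)
lenient_hits = sum(s > 0.1 for s in scores)
```

With random vectors, essentially nothing clears the strict threshold while a sizeable fraction clears the lenient one, which is the whole tradeoff: a system tuned toward lenient matching will always hand officers candidates to "follow up on".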
Yes, but likely in the wrong direction, because I'd be angry and personally insecure. Decisions of liberty, privacy, and law shouldn't be based on personal experiences or trauma.
I find this line of argument weird. Do you really think a personal decision about long term social issues is better than an impersonal one?
If you're talking to someone who's more conservatively inclined I'd talk about how Iceland has managed to basically eliminate Down Syndrome. Prenatal pattern recognition has successfully convinced nearly all Icelandic parents who have a pregnancy where the child may have Down Syndrome to abort.
If your listener doesn't care about gay rights AND thinks any fetus with a higher likelihood of having Down Syndrome should be aborted, then they're probably exactly the sort of people working on or hoping to use these tools.
You're giving (actual) people the opportunity to choose to have healthier families.
- Note that I say 'healthier families' specifically because willfully passing on genetic features that disadvantage your children burdens not just them but your entire family unit.
Depending on where you fall on the fetal personhood debate, preventing someone from existing is much better than killing them. But that doesn't mean the choice can't reveal negative things about society's overall attitudes towards people in those groups. If you're a member of one of these groups, it feels bad to have someone tell you that a fundamental part of your personality is something that ought to be eliminated from society in general.
And of course autistic people and their families often have hard lives, but a large portion of that difficulty comes from social and infrastructure problems. We're not good at building societies that are easy for differently abled people to interact with.
I want to make an analogy that should hopefully make the problem more clear. It's easy to make a case that being gay or transgender socially disadvantages both a person and their overall family unit. Raising a transgender child is going to be harder than raising someone who conforms to strict gender rules -- you're going to have a few difficult conversations, you're going to fight with a few systems, and you're occasionally going to have to deal with jerks telling you you've messed up your kid. None of this is right or fair, and we're getting a little bit better, but it's still a problem.
If there was a way to detect someone's future sexual orientation and identity in the womb, would you be comfortable allowing families to make abortion decisions based on that information? If that technology had existed in the 1980s where being transgender was even harder than it is today, would you have been comfortable with it being used then? Would gay and transgender rights have made the kind of progress we've seen if that technology had been available and unregulated?
Given that in this scenario there are two potential children, the one with undesirable_trait_x and the replacement without it, and one will come into existence and one will not: as a parent, shouldn't you pick the child that will have the happiest life rather than the one that will better serve as cannon fodder in someone's ideological crusade?
However, I'm guessing you're actually talking about morality on an individual level -- that it's immoral for an individual parent not to try and guarantee their future child the happiest possible life. The problem is that even though being autistic is hard, having a hard life doesn't necessarily mean having a bad life. Being autistic is hard because autistic people are different, not because autism is inherently an undesirable trait.
Not everyone who is autistic or who has Downs would, if given the chance, flip a switch and erase that part of their personality, and they bristle at the "happy life" argument because they see it as (intentionally or not) just another way for people to imply, "differently abled people will always lead less satisfying lives." See also the controversy in the deaf community over hearing aids, which are a much less dicey proposition than eugenics but still prompt heated arguments sometimes.
As a side note, it's worth mentioning that no one is talking about artificially increasing the number of differently abled people as some kind of "recruitment strategy". Opponents of this kind of genetic selection are talking about removing that characteristic as a determining factor entirely. It's no different than banning sex-selective abortions, which is already an entirely normal and relatively uncontroversial policy in multiple developed countries.
Many families with Downs children describe how much joy and love their Downs children bring to the family, how the family members learned to cherish what’s important in life, etc.
These families, compared to many families with non-Downs children, are quite healthy and arguably not burdened, though they experience different challenges than other families.
EDIT: fix quote; change last preposition in second sentence.
While I personally am not entirely sure how I feel about this sort of eugenics, it's worth noting that by the gestational stage at which these genetic anomalies are detected, it's far more likely than not that these fetuses would progress to develop legal personhood.
Given our species history of reliance on anomalous individuals I suspect it's unwise to seek further genetic homogenization and better for us to learn to accept those who don't neatly fit societal norms of fitness.
An example being China’s automated jaywalking-shamer.
It’s a living laboratory.
Given that society functions without it and is constantly improving anyway--crime rates, poverty, mortality continue to drop from socioeconomic and technological development without the need for the intervention of pervasive surveillance--I personally literally cannot conceive of a benefit that would actually be worth it.
For example, with facial recognition at a high-priced building, suddenly non-white people can't get in because the scanner doesn't register their face as real. Sorry, it's a bug, not a feature.
The same problem would of course apply to anyone with facial deformities etc.
The problem would essentially take two forms: automatic exclusion from a group by facial feature, or automatic inclusion in a group by facial feature.
Think of the automatic inclusion as advanced phrenology or some other sort of woo. In this scenario, company X sells the idea that they can recognize 'criminal types' by face, based on whatever research they can pull out showing that criminals often correspond in appearance. Then they fill that corpus with criminals, which makes a nice self-selecting system: find the black/Hispanic person and deny them the job, loan, etc. because their likelihood of crime is past the threshold set.
The reason this would happen is the same reason banks used to deny loans to people in certain zip codes: doing so let them racially profile and exclude while pretending not to.
I'm not sure this is a good argument against the technology, because there is a solution to this objection: they'll improve it and ensure every face type is comparably recognized. It should be rejected on more basic principles.
With a pervasive system, unless people are hermits, they’ll get scanned periodically, and with other bits of information the changed face can be correlated with the same person (I go into a salon looking one way, come out different, go into a boxing match, come out different, but I’m following my routine and go to the same subway stop and convenience store and use the same payment method, etc.)
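The correlation step described above is ordinary record linkage on stable auxiliary keys. A toy sketch with invented sightings (the field names and IDs are hypothetical):

```python
from collections import defaultdict

# Hypothetical sightings: the face changes after the salon or the
# boxing match, but the routine (subway stop, payment method) does not.
sightings = [
    {"face_id": "A", "stop": "5th Ave", "payment": "card-1234"},
    {"face_id": "A", "stop": "5th Ave", "payment": "card-1234"},
    {"face_id": "B", "stop": "5th Ave", "payment": "card-1234"},  # new face, same routine
]

# Group observed faces by their routine key; distinct face IDs that
# share a routine collapse into one probable person.
by_routine = defaultdict(set)
for s in sightings:
    by_routine[(s["stop"], s["payment"])].add(s["face_id"])

linked_faces = by_routine[("5th Ave", "card-1234")]
```

Faces A and B land in the same bucket, so the changed face offers no real protection once the system holds a second stable identifier.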
Generally the argument against a technology is not that it does not work for its purpose, but that the way humans will use it is problematic.
I'm thinking something analogous to ASME for mechanical systems. Or is the technology still very much in the wild west phase?
Both my phone and my PS4 use facial recognition for login. I don’t want that banned.
My camera can be set up to take a photo when it detects someone in the frame. Very useful for selfies. Are you going to ban that too?
Are you going to ban TensorFlow? Keras? Scikit-learn?
If I hand code my own neural net, are you going to ban that as well?
Engineers, much like police, are humans. Therefore they hold biases, they can be short-sighted, and they may not understand many of the long-term ramifications of their jobs.
All this does is move the power and bias up the chain from police to engineers while simultaneously making some very, very powerful decisions about what privacy is and what your rights are in society with 0 oversight from the actual society. In fact, police are (in theory) beholden to politicians and the voting public right now. If piss-poor decisions are made based on faulty software from software conglomerate A, who do we hold accountable?
Most cops are college educated?
>They will have biases, but not the same ones police have and far less racist or sexist ones.
I honestly am going to need citations for that statement.
>Your average engineer is far more intelligent and introspective than your average police though.
Again, this is nothing more than a magical belief. There is nothing about the field of engineering that makes the people doing it inherently better than other fields.
I'll say it again, all you do is shift the biases from a publicly accountable (in theory) position to something hidden behind a corporation and unaccountable to the public. This is not a solution, it's just a shell game.
As an "engineer" myself, I don't think my peers are in general much better or wiser people than the rest of the world, and I'm not at all comfortable with implicitly handing them an ever-increasing level of power within our society.
Being educated can help protect some against abuses of power, but do you believe engineers are immune to abuses of power? I don't think so. The mechanisms are the same.
What would lower abuse is de-escalation training; dedicated, well-funded mental health care; decriminalizing drugs; and most importantly breaking the blue shield: holding officers accountable by actually trying officers who have committed crimes, increasing the power of the agencies regulating them, and weakening the power of the police union so officers can actually be held accountable.
What I personally hope for is that people achieve a balance between safety and privacy.
Relinquishing privacy puts a society on a path where all power accrues to the state, to the detriment of all citizens.
States will prefer the former to the latter, and firms will provide it, at which point I don't think that the definition of safety-from even matters.
I do believe that, at some point, people will as well. If other options prove too expensive or too inefficient, it's not just states that will prefer near complete mass surveillance.
Sure if everyone lives in a police state under the thumb of the government you're safer from petty crime but you're a heck of a lot less safe from government abuse.
I don't generally subscribe to conspiracy theories, but I do genuinely believe we're (in the US at least) being groomed for some sort of repressive state apparatus using this type of software to 'keep us safe' in the next decade or two.
Surely it's not difficult for you to imagine fairly common scenarios where your equation is patently untrue. A battered spouse of an LEO or a non-gender-conforming individual in a theocratic society are just two easy examples that come to mind.
If we assume that individuals who control or have access to surveillance mechanisms are universally magnanimous and actually capable of neutralizing threats to safety, then your negative relationship between safety and privacy might hold up to scrutiny. I have more than a bit of trouble seeing how that assumption is anything other than utopian.
What leads you to think that this is always true? Do you not think there are any other solutions to reducing crime other than increasing surveillance?
Perhaps you would have more luck trying to build a foundation on the relationship of risk management and privacy. Mitigation vs acceptance vs avoidance, etc.
Why not? Because it's isolated on the device, and can be turned off whenever you want. I think the real concern is the combination of facial recognition with vast centralized databases. Those big centralized databases are just begging to be exploited and abused, whether for commercial, authoritarian, criminal, or even creepy "LOVE-INT" stalking by the people running the databases.
I think facial recognition (and other forms of recognition) can have a bright future in society, but only if they conform to our expectations of privacy. Those expectations have been created over thousands of years based on how we interact with people. The closer that computer interaction can hew to existing social conventions, the smoother path it will have to adoption.
I don't mind if the waiter in a local restaurant recognizes me when I come in; in fact I like it! But I don't think I would like it very much if I knew he was calling 15 different data brokers to report on me every time he sees me.
Not using it IS "shaping" it. Just not into a shape profitable for this dishonest huckster.
Do we want to use it only for fighting crime, effectively making it almost impossible for someone to commit a violent crime and get away with it, and so making society much safer?
I'm all for it; there are some nasty people out there: serial killers, pedophiles, drug cartels. This technology deployed at scale would make these crimes much harder to get away with.
As long as it's another way to ensure democratic laws are respected, I don't think that completely forbidding it makes sense.
I'm all for regulation, we don't want this technology to be abused.
And when you know you're watched, you start self-censoring to adapt what should be the 'right thing' to avoid blame or judgment.
Facial recognition is used as a catch-all term because it's easily understood, but the deep-learning behind it is much more widely applicable.
I also foresee a surge in wide-brimmed hats.
I chuckled, because it was ridiculous, and now I'm wondering if they make them in professional colors. . .
But then again, how suspicious is it to wear a ski mask? Everyone will notice.
Surveillance will come slowly, then suddenly
I believe we live in a society where it's already possible to track the vast majority of people to a general area 100% of the time. Technology is progressing, soon it will be real-time tracking of targeted individuals 100% of the time. That will slowly expand to all individuals. Between cell phone tracking, license plate scanners, and facial recognition cameras, the system knows where you are.
Street crime (including petty thievery) has dropped immensely.
Any city in China is much safer than Oakland.
This technology opens wide doors for surveillance, for finding, identifying and pursuing political opponents - not just their leaders, but every person who was brave enough to protest on the streets.
It opens wide doors also for cyber bullying, and this issue is a problem for every country, not only for dictatorships.
You take a video stream and compress it down into a timestamped stream of IDs. It's really lossy, but it's the same as OCR or Speech-to-Text -- it is a tool that allows us to better handle large streams of data.
As always, the tool isn't the problem. It's the use of the tool.
(But I think we all know that.)
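The compression described above can be sketched in a few lines; `recognize` here is a stand-in stub for a real face-recognition model, and everything in the example is hypothetical:

```python
from datetime import datetime, timedelta

def recognize(frame):
    # Stand-in for a real recognition model: map a frame to the set
    # of person IDs visible in it. In this toy example, frames
    # already are sets of IDs.
    return frame

def compress(frames, start, fps=1.0):
    # Collapse a video stream into a timestamped stream of IDs.
    # Lossy, like OCR or speech-to-text, but vastly easier to
    # store, index, and search than raw video.
    events = []
    for i, frame in enumerate(frames):
        ts = start + timedelta(seconds=i / fps)
        for person_id in recognize(frame):
            events.append((ts, person_id))
    return events

stream = [{"alice"}, set(), {"alice", "bob"}]
events = compress(stream, datetime(2019, 1, 1))
```

Three sightings survive out of the whole "video", each searchable by time and ID, which is exactly why the format is so attractive to operators of large camera networks.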
Photos are everywhere, the software will only get to be more accessible. You can't put this back in the box.