Because many people are OK with mass surveillance as long as it's used fairly. The problem they see with this tech isn't that it's going to create the kind of world we don't want to live in, but that it unfairly targets minorities and/or the poor more than other people. They don't understand that it can never be fair, because the institutions that run it can never be fair: there will always be out-groups, because that's how human nature works, and they will get disproportionately screwed unless we limit the ability of the majority to screw them.
The big difference is that surveillance was localized and didn’t follow the person.
If someone committed a crime or did something against local mores and got ostracized, they could skip town and settle somewhere else to begin anew. Of course, being the new person in a new place, you were under scrutiny, but as long as you followed local practices your old peccadilloes or even crimes didn't follow you.
That is a difference. But the bigger difference is how one-sided the new surveillance is. In a small town, they knew everything about you, but you also knew everything about them.
Today, parking regulations in my town are enforced by cars that circle around scanning the license plate of every car parked on every street. I could probably find out who has access to that data if it were a priority for me. But what's stopping a private entity from doing the same thing? I'd never know it happened. Cell carriers selling my location data? I could easily have never known about this. Ad tracking companies having a record of most sites I visit? I don't even know who they are, other than a few major ones.
You see this already today with data from the Census Bureau, FRED databases, COVID tracking tools, think tanks, etc.
Although, if one's target were a town or even an individual city, I bet individuals or small groups could build or rent enough infrastructure to sift the data.
It becomes an interesting world when non-profits can afford big data resources.
When they pooled their data, they were the first group to notice the military had started running flights to and from locations they didn't normally fly, and it didn't take much investigative journalism after that to discover those planes were carrying people.
With total surveillance, any revolutionary movement can be efficiently destroyed before it gains any momentum. That removes an important check on any government's (or corporation's) power.
Outsiders were shunned for a reason: one didn't know whether they had left their previous town because they were too antisocial to coexist.
I used to see privacy as empowering, but I've increasingly come to see transparency as more important, across multiple dimensions, not just policing.
Whether it's tax avoidance and financial havens, human trafficking scandals, sexual abuse victims being empowered by social media to communicate and hold people accountable, or watching the police (very relevant), it seems more and more to me that privacy is mostly a tool for the individually powerful, whereas transparency is a tool for collective action and impartiality.
For all I know I'm being tracked that way by satellite already. It's only when things become operative, if someone harasses you or something ... but then they'd be easily traced and prosecuted.
In short, I'm not sure I care.
To be honest I had expected the GOP to come up with ... let's say substandard candidates, but Trump is on a level no one would ever have thought possible. And there are people way, way more authoritarian than him in this party.
Hoods and glasses only work if everyone is doing it. In certain parts of my town you can tell the drug runners by their "uniform", primarily pulled down hoodies and a hand down their trousers.
I'm not sure I follow your logic. Your experience seems to prove the point that pulling a hoodie down helps conceal your identity, I assume that is why the drug dealers do it.
The focus of the article is facial recognition, which is commercially available today. Your examples all require some type of initial identification to know where to check for someone's mass, scent, DNA, etc.
Cop played by Samuel Jackson
the terrorist played by Ryan Gosling,
"these guys don't even look alike"
Samuel Jackson's response: they're both too good looking to be janitors.
Ryan Reynolds: why does everyone keep saying that!
later in the movie - Samuel Jackson: I have had it up to here with all these good looking white guys f*ing up the mf*ing facial recognition!
Idris Elba should be in this too. somehow.
on edit: maybe at the end Idris Elba is brought in as another person the facial recognition identified as the terrorist.
everyone is "how is this possible!"
Ryan Gosling: I don't know, he's pretty good looking, I'm kinda flattered.
Ryan Reynolds: yeah, the computer thinks I look like this guy, wow maybe I am too attractive to be a janitor!
Then they ask Idris Elba what he does for a living -
"Presumed Innocent" https://www.imdb.com/title/tt0100404/
"No way out" https://www.imdb.com/title/tt0093640/
One of the things that stuck with me as different: they take a little time to look at a new technology and decide if they want to use it. I wonder whether our embrace of tech as "neutral" is always correct.
"They're more cautious — more suspicious — wondering is this going to be helpful or is it going to be detrimental? Is it going to bolster our life together, as a community, or is it going to somehow tear it down?""
Of course, predicting where things end up when new tech disrupts is difficult.
It feels more and more like the “neutral” part is a shield we’ve created to exempt ourselves from responsibility.
Basically, with 3D printers and solar power and local grids and machine shops to fabricate for local needs, and a loose mesh network to connect these decentralized nodes/communities together (like Mastodon, instead of the paper-and-print Amish newspapers that exist). A kind of techno-libertarian-Amish blend, to form a society that was more resilient and modular, rather than hyper-centralized, dense, and easily swayed by viral influence/behavior/trends or literal biological viruses.
EDIT: Of course it would never "scale" since such a system probably could not support higher population levels at all, but maybe that's okay in the ultra-long-run or some post-apocalyptic rebuilding future (also dangerously veering into population control and genocide-ish topics).
None of those could exist without the significant science and R&D that is only possible when people leave their personal areas to congregate in cities and colleges where information is shared and built upon, along with a lot of money coming from the government and big companies. And the issue with that is that when people leave for that purpose, many of them tend to stay there forever.
It's also kind of a difference in strategic approach: pool resources and create cutting edge institutions, or adopt a more decentralized approach where, yes, you might not get as quick a pace of innovation, but each community is more self-sufficient. So, you might not ever get a fancy cure for cancer or CRISPR technology in this alternate universe, but most communities might have their own local clinics and more local nurses, and maybe better overall health outcomes by focusing on common treatable problems rather than pushing for cutting edge innovation. Similarly, you might have fewer PhDs, but more high school graduates or bachelor's-level education.
Anyways, nothing about this is all that realistic or anything, just some idle world-building in my head.
From the ban: https://assets.documentcloud.org/documents/6956465/Boston-Ci...
"Nothing in (b)(1) shall prohibit Boston or any Boston official from:
a. using evidence relating to the investigation of a specific crime that may have been generated from a face surveillance system, so long as such evidence was not generated by or at the request of Boston or any Boston official;"
So if a third party, say the FBI or DEA, provides info from face surveillance systems to Boston without Boston specifically requesting it, they could use that.
Realize, people: FR enables the physical world to be overlaid with website-like tracking ability. This is hugely valuable to business, and once they figure that out, they will push FR just like they pushed invasive tracking advertising all over the web.
From the article: The city council unanimously voted on Wednesday to ban the use of the technology and prohibit any city official from obtaining facial surveillance by asking for it through third parties. The measure will now go to Mayor Marty Walsh with a veto-proof majority. Walsh's office said he would review the ban.
Human face-matching accuracy is worse than software's in many scenarios (“is this the man that robbed you?”), but it requires so much effort that the absolute number of false positives is low.
On the other hand, facial recognition hooked up to CCTV can passively generate mountains of matches all day long, for pennies.
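A back-of-the-envelope sketch of that asymmetry (every number here is an illustrative assumption, not a real deployment figure):

```python
# Back-of-the-envelope: match volume from passive CCTV face matching.
# All inputs below are illustrative assumptions.

cameras = 1_000                    # city-wide CCTV cameras
faces_per_camera_per_day = 5_000   # distinct face captures per camera per day
false_positive_rate = 1e-4         # 0.01% chance a stranger matches a watchlist entry

captures = cameras * faces_per_camera_per_day
false_alarms = captures * false_positive_rate

print(f"{captures:,} captures/day -> ~{false_alarms:,.0f} false alarms/day")
# 5,000,000 captures/day -> ~500 false alarms/day
```

Even with a per-comparison error rate a human would envy, the sheer throughput turns a tiny rate into hundreds of flagged innocents every day.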
Interestingly, they also said they don't want to know the specifics of anyone watching.
I wonder if laws like this, that in actuality seem fairly toothless, will result in more of that. "Safety, and ignorance of where/who the watchers are."
In my experience living in one of these places, police mostly don't investigate crime even when there's clear video.
You could make neighborhoods a lot safer with a lot fewer police by using modern methods like facial recognition cameras and unmanned aerial surveillance. Break-ins and robberies suddenly become wildly easy to punish afterwards and a lot of investigative work like tracking gang members to get a sense of their operations morphs into a trivial affair.
A good analogy would be a chef tolerating fecal matter in their food. Yes, there is always some small chance of that happening, but no one accepts food from a chef who explicitly treats toilet water and claims 99% of the fecal matter is gone and only one in 100 people will get sick from it.
If you sample enough, nearly any real-world classifier will have a nontrivial false positive rate.
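For example (made-up numbers): a classifier with a 0.01% false positive rate, run against a million samples a day, still throws off roughly a hundred false hits daily. The absolute count scales with volume no matter how good the rate looks.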
Um, source? Using the link below, that's clearly not the case.
How would the use of a technology make the person acting on the information it provides unaccountable? There is no agency in those systems, so accountability cannot be deferred to them.
"A good analogy would be a chef tolerating fecal matter in their food. Yes, there is always some small chance of that happening, but no one accepts food from s chef that explicitly treats toilet water and claims 99% of the fecal matter is gone and only one in 100 people will get sick from it. "
Actually that is exactly how the food industry operates.
Note that I am not arguing for nor against the specific use of the facial recognition systems by city officials.
Even worse, if you have a shipment that is 20% fecal matter, you can use it as long as you dilute it with other batches to the legal proportion.
Also, there is the idea that such an infrastructure would scare criminals out of doing incriminating things.
Even if criminals were afraid of the repercussions of their actions, were aware of the system built to catch them, and were convinced it would work and lead to their capture (that's already a LOT of ifs), this would only work on premeditated crimes, which are a minority of the ones committed in the street.
A lot of crimes are not thought out in advance, but done in the moment. People fighting because they are angry. Somebody stealing because they saw an opportunity.
We hear a lot about repeat pickpockets or bank heists. But those are not the majority.
Most bad things in the street happen without being planned, and without consideration for the consequences.
Planned crimes are white-collar crimes, and they don't get caught by cameras.
I still think it makes a big difference. I spent time in China and honestly I felt safer in areas where terrorists were a concern than I do in downtown SF.
To put it differently, explicitly accepting injustice is incompatible with my understanding of justice, and I believe with society's. This is the same reason why ML shouldn't be used to replace human judges, regardless of improvements in outcome. The authority of judges, cops, and legislators is legitimate because the people have a contract with them: the people's view of justice is implemented as law, and in return the people subject themselves to those laws. Facial recognition and ML judges violate that contract, and as a result the outcome is illegitimate; 100% of convicts become wrongly convicted, because their society's views on justice were enforced not by a member of that society elected or appointed by the people, but by black-box logic that only looks at the outcome as a metric. A defense attorney can claim eyewitness testimony is inaccurate, for example, and cross-examine witnesses, but that attorney cannot interrogate an ML system presenting as evidence a blurry picture with a 99% accuracy match.
But what you are describing is really a problem with your "justice" system. A system where most people are coerced not to use their right to trial ("accept the plea deal or your sentence will be much worse", with many innocent people taking the plea deal) is simply incapable of delivering justice.
How would this work for the acceptance of self driving cars?
But if we have a reliable system with a low enough false positive rate that can alert police when a suspect is seen on a camera, so that they can start an investigation, what is wrong with that?
I predict a lot of cities and countries will follow suit and start banning facial recognition technology altogether. If not then we're screwed. I already avoid traveling to the UK for the insane privacy invasion of their CCTV system. I get that there are benefits for crime investigation but it's far from worth it, the whole concept feels incredibly surreal (and wrong) to me.
Edit: To further your point, even if fingerprints were 100% accurate, there's still the off chance that someone planted your fingerprints. And someone could be wearing a you-mask, so you cannot rely on fingerprints or facial recognition for watertight evidence. Which is why proper trials require multiple sources of evidence (I hope - IANAL).
The question is whether we can do this without the people in charge abusing it. If we could guarantee that it would only be used for catching people with arrest warrants, then I would have no problem with increasing the efficiency of cops.
> The question is whether we can do this without the people in charge abusing it.
I think this is the main question and goes hand in hand with whether governments should be able to decrypt the internet.
Please let me move around freely, meet the people I want to meet, without having me added to some database of people with suspect contacts. I am fine with granting "criminals" the same privileges (as I posted in a comment yesterday I am definitely a criminal, given the definition of the word).
You start with cameras everywhere for facial recognition, then you add microphones... it's hard to encrypt real-life discussion without inventing a new language, which could be easily decrypted anyways.
No one is presenting the certainty output of a facial recognition algorithm in prosecutions. This characterization is fundamentally incorrect.
There have already been some other comments pointing out that this is an unreasonable standard, but they aren't really thinking big enough: the court-based system of justice isn't even that accurate. "Beyond a reasonable doubt" isn't a standard of evidence that even reaches 99% certainty. Out of every 100 people in jail for murder, some of them just didn't do it. 100 people is a lot.
This isn't the right characterisation.
Facial recognition at 95% can be very useful in helping to find people and initiate investigations. It doesn't have to provide grounds for arrest, let alone evidence for a trial.
Of course, there are problems even there, depending on a variety of issues.
Your license plate is being recorded in a variety of places, and for serious crimes they will definitely use that information to 'look you up'. FYI, face-matching isn't that much different.
As for arrest and prosecution, obviously that's another can of worms, but even then, there is a threshold at which we will effectively call it 'evidence'. 99.999% ID + was in the vicinity, plus no alibi, plus motive and prior record? That's a case.
If you think about it, we consider 'video evidence' to be fairly conclusive ... well, I doubt humans are actually 99.999% accurate in their ability to match faces to video! So it adds up.
" When it comes to justice, mistakes are tolerable so long as adequate compensation exists"
There doesn't need to be any compensation if there wasn't any malfeasance or incompetence in the system. To the extent our public servants are all acting above bar, using efficient, fair and lawful methods, then 'compensation' is not really the issue.
"It's better to let actual guilty criminals get away than explicitly and systemically accept even one innocent person being punished incorrectly,"
"accepting any amount of injustice invalidates that legitimacy and authority."
This is not true - by your very own logic. First, a justice system that 'lets people off who have a 99.999% chance of being guilty' implies quite a lot of injustice. Moreover, that in some instances innocents will go to jail is tragic but absolutely inevitable. Not in 100 years will we have fixed this problem.
Along with some of the racial injustice you mention, you know what is 'unjust'? The amount of crime, particularly murder that goes unpunished in the US due to low case clearances, particularly in violent communities. That means aggressive killers on the block go free, gloat in their crimes, and do it again. Can you imagine having your sibling or child killed, and essentially knowing 'who did it' - but having them go free? This is common.
Facial recognition is not quite ready for prime-time, and it's invasive in ways that license plates are not, so it makes sense to ban it for the time being - however, there are thresholds at which it would make sense. Facial recognition in airports, near high value targets and such systems for alerting police to 'high value, dangerous targets' might make sense. But for everything else it's just not needed.
>We are not willing to give up liberties regardless of the benefits of the technology.
Finally, a town with smart people. I wish I could live there.
> As a people, we do not trust the government or the ruling class to stop at the good uses of this technology. This is why technocracy is evil, it only looks at outcomes, disregarding the will and consent of the governed.
That's the way to go.
Face recognition is even worse than traffic cams. I don't think it's even a matter for a referendum. Freedom from it should be considered a human right. I do not wish to see some majority able to strip anyone of basic human rights.
Second - "We are not willing to give up liberties regardless of the benefits of the technology."
This makes no sense. It's like a line from a Rambo film. Communities trade liberties for benefits all the time.
Suppose police could clear 100% of murders if we all consented to having our fingerprints taken. Would we then do that? Probably. So long as there were no other ill effects.
This arbitrary notion of 'freedom' completely ignores the fact that people live in communities with other people. Everything is a tradeoff. Absolutely everything.
Go ahead and use your chainsaw at 1 am in the city and see how 'free' you are to do such a simple thing as trim a hedge when you please. Or do it naked in the front yard.
Also - both these comments have shades of fear of the unknown.
This is the same kind of uproar people had about 'finger prints', 'DNA' and possibly 'seat belts'.
Both DNA and fingerprinting have been used expansively and widely in law enforcement; we generally accept their use and have come to better understand their parameters.
Using AI on a case-by-case basis is, in fact, considerably less invasive.
Your fingerprint and DNA are semi-private bits of information.
Your face is de facto public.
Using 'AI' is no more 'technological' than anything else; frankly, the term 'AI' is misleading because there is nothing special about it.
We could very well frame it as 'database search'.
---> The police took video footage from the crime scene, matched three possible suspects against the database, and one of the suspects was found to have the gun used to commit the crime.
It's ridiculous to assert there's something wrong here, let alone some kind of fundamentally new way of policing.
As for 'arbitrary surveillance' - that's another thing entirely. We don't have camera speed traps in a lot of places, that's fine, that's one application of technology.
I suggest we don't need broad surveillance either, but that has little to do with AI.
Crime is a real thing, cases unsolved and criminals unpunished definitely represent 'injustice' and so we need to use the tools we have to make the world a better place.
Let's take this simple one as an example of the stupid laws that are supposed "to make the world a better place".
Can you justify rationally and logically why a naked person in the front yard is a problem?
I mean, we were born this way, weren't we?
And yet, holding such biases, you advocate for a technological amplifier to enforce laws?
And you think you are making the world a better place ?
If you are ready to prosecute someone for being naked, which actually means simply being himself and not sharing your views about nakedness, I would prefer that you be prohibited from having any access to surveillance at all.
>We could very well frame it as 'database search'.
And in that case I do not wish you to have any access to any database whatsoever, and the 'database' itself should be destroyed. 'AI' is a different order of magnitude of surveillance: it's one thing when you go out and some people notice you while others don't, and another when something is watching your every step. That by itself is intimidating. People would not feel free and relaxed; they would fake it, or smash the cameras at some point.
We already have so much surveillance that it's annoying to go outside. I do not feel comfortable any more. I do not like somebody watching me with AI precision. I was questioned by police just for taking a walk not at the 'proper' hours, when all I wanted was to get some air and think about deep topics without interruption.
The world is already going crazy with too much control, and some people seem not to realise that at all.
Will it ever be? Should it ever be? You have no perfect laws or rules, and never will, so the idea of following rules with ideal tech-supported precision is wrong at its core. Pursuing laws perfectly would kill any freedom; it's a sure path to a totalitarian state. Searches were limited for a reason, and tracking should thus be prohibited too, even if you have the technical ability.
Mr. Turing himself was a victim of idiotic laws; I just wish he could have gotten away with his "crime" of making love. If you think today is different, it's not.
Too many laws and you have zero progress or evolution, because progress comes from reviewing known and accepted concepts. You can look at history to see how many idiotic laws were in place; add to this the technical precision of your dreams, and the free world is finished. Would you like that? I wouldn't, for sure.
This makes absolutely no sense.
We already use fingerprinting, DNA, photographic evidence - and in some cases worst of all '1st hand witnesses' as evidence.
Pursuing techniques, and possibly technologies, that more effectively help us create just outcomes is exactly what we want to do.
Your example of 'Turing' is another thing entirely: what is deemed legal and not-legal is a completely separate thing from what we construe as evidence and not evidence.
"Too much laws and you have zero progress or evolution because progress comes from reviewing known and accepted concepts"
What does 'too many laws' mean, and what does it have to do with anything?
It means that you have laws you should not have at all.
It means that you can never make absolutely wise, perfect laws.
And it has EVERYTHING to do with this topic.
If you use perfect means to enforce imperfect laws, you end up with stupidity amplified.
If you can't grasp this concept, look at the example of Turing; it is exactly about this pure, dumb stupidity amplified.
We now have all kinds of idiotic laws, and if we insist on them with too much precision, using a technical amplifier, their stupidity will be revealed all too vividly; this would lead to strong opposition, and let us hope it would not be an armed opposition, but I would not bet on it.
Even getting asked to come down to the station to give a statement based on an incorrect facial recognition match is not something I would want to deal with for the sake of "the greater good".
95% seems awfully low accuracy, and even without an arrest this will cause a lot of headaches for many honest citizens.
> No - there doesn't need to be any compensation if there wasn't any malfeasance or incompetence in the system. To the extent our public servants are all acting above bar, using efficient, fair and lawful methods, then 'compensation' is not really the issue.
Sounds like a utopia, definitely not the world we live in.
'95%' isn't meaningful, do you mean a 5% false positive rate, or 5% false negative rate?
A false positive rate as high as 5% would wrongly flag thousands of people.
Think 99.99%, and even then...
99% accuracy in a city setting means (more or less) that you'll get the wrong person 1 in 100 times.
If you're scanning a crowd with this, you can imagine what can happen
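To make it concrete (made-up numbers; the real issue is the base rate, not the headline accuracy):

```python
# Base-rate arithmetic for scanning a crowd (all numbers illustrative).
crowd = 100_000      # faces scanned
targets = 10         # actual persons of interest present
tpr = 0.99           # true positive rate
fpr = 0.01           # false positive rate ("99% accuracy", loosely)

true_hits = targets * tpr                # ~9.9 correct alerts
false_hits = (crowd - targets) * fpr     # ~999.9 wrong alerts

precision = true_hits / (true_hits + false_hits)
print(f"Of ~{true_hits + false_hits:.0f} alerts, only {precision:.1%} point at a real target")
# Of ~1010 alerts, only 1.0% point at a real target
```

At that base rate, roughly 99 out of every 100 alerts flag the wrong person, even though the system is "99% accurate".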
I am not keen on my state knowing my whereabouts either, and I think camera deployment would create more problems than it solves. Countries where it has been deployed don't have impressive advantages to show for it, just all the privacy disadvantages. So why should we even consider it? What problem would it solve?
My opinion: if you are still scared of terrorists, a psychiatrist might be more useful than a camera.
It used to be that law enforcement relied on fingerprints, eye-witnesses or a video tape backed by a judicial and legal system that - more or less - severely limited how and when that information could be collected and used e.g. within the context of discrete on-going criminal investigations.
This balance has shifted entirely over the past few short years.
Now, video footage, fingerprints, and so on are collected pre-emptively, on literally everyone. Undeniably, it's easier to move forward on an investigation if you already have a lot of the data at hand. The argument put forward by proponents is that speeding up investigations saves lives. But that comes with a ton of hard problems.
First, searching accurately through a vast mountain of diffuse information is a hard problem. Law enforcement has outsourced that part to private companies, which opens up a can of worms in terms of confidentiality and privacy.
Second, searching through such a database becomes a black box as far as law enforcement is concerned: just upload a picture or a video fragment and it will readily yield a result. The fallacy is that convenience lulls LE into assuming the results are accurate at face value, with no need for verification or critical thinking (much like accepting the results yielded by Google Search at face value).
Third, it's a convenient way to offload responsibility towards the public. Law enforcement didn't make the wrong assertion if the wrong person ends up getting prosecuted and convicted: the data was just "not good enough". The fallacy here is that data gets treated as a commodity, which it is anything but.
Fourth, there's a difference between what's morally right and what's legal. The latter changes depending on who defines public governance. A database may be a great idea, until control is ceded to someone who uses that data against the public.
There's actually a historic precedent here: the 1943 bombing of the Amsterdam Civil Registry office. Following the 1940 German invasion of the Netherlands, all Dutch Jews had to carry a mandatory identity card, and their whereabouts were recorded centrally at the Civil Registry. This was easy because, prior to the war, people's religious denomination had been recorded. The Dutch Resistance understood the importance, and the downsides, of a centralised record containing private information falling into the wrong hands. And so they ended up attacking and demolishing the Civil Registry office containing that record, albeit only half-successfully.
Obviously, I don't advocate attacking datacenters - that would be an effort in futility anyway - but your assertion that investing in mental health support would be more helpful rings very much true.
That's not great, but people perceive it as "fingerprints are unique".
I’ll take it.
There are narrow use cases where I think facial recognition in law enforcement would be a good thing, but it is ultimately too much power, and too easily abused, to trust legal systems to adhere to right usage. Banning is the correct action when misuse is as bad, and correct use as complex, as they are.
Sure, if your police/justice system is not competent you have a problem, but you had that in the first place.
The city council unanimously voted on Wednesday to ban the use of the technology and prohibit any city official from obtaining facial surveillance by asking for it through third parties.
As someone who has done some work building deep learning models: what is it that makes this unfairly target minorities?
Is it that the people who trained the model did not present enough example images of minorities during training? Is it because darker (presumably Black) skin does not show up as well on poor quality video (presumably because the camera metered for a bright surrounding background)? Or is it that the law enforcement officers using it were poorly trained and assumed the computer was infallible, combined with prejudice they may already have had against minorities?
The first problem, I would think, could be easily solved. The second would be rather difficult. The last would require extensive training, but I am sure we humans would screw that up too.
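For what it's worth, the first question is measurable: those failure modes show up as gaps in per-group error rates, which is how audits like NIST's FRVT quantify demographic differentials. A minimal sketch of that kind of check, assuming you already have match scores labeled with demographics (the records, group names, and threshold below are all hypothetical):

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic_group, is_same_person, match_score).
# In a real audit these would come from a labeled benchmark, not a hand-typed list.
results = [
    ("group_a", False, 0.91), ("group_a", True, 0.97), ("group_a", False, 0.42),
    ("group_b", False, 0.95), ("group_b", True, 0.88), ("group_b", False, 0.97),
]
THRESHOLD = 0.90  # hypothetical decision threshold

trials = defaultdict(int)         # impostor comparisons per group
false_matches = defaultdict(int)  # impostors wrongly declared a match

for group, same_person, score in results:
    if not same_person:               # only impostor (different-person) pairs
        trials[group] += 1
        if score >= THRESHOLD:        # system wrongly says "same person"
            false_matches[group] += 1

for group, n in trials.items():
    print(f"{group}: false match rate = {false_matches[group] / n:.0%}")
# A large gap between groups is exactly the unfair targeting being discussed.
```

If the false match rate is, say, several times higher for one group, that group bears several times the risk of wrongful flagging regardless of how the police use the tool.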
Even if the systems were perfectly fair and not the least bit biased and were operated by a perfect utopian police force I still wouldn’t want facial recognition. I’ve yet to hear a potential benefit of this sort of software that would justify the huge cost to citizen privacy.
Just because we can train computers to recognize faces doesn’t mean that we should.
It makes it easy to find suspects and narrow down suspect lists. Meaning far fewer police are needed to catch a greater proportion of known criminals.
Most people consider that a huge benefit.
Let's say you have a DB of all faces in a country of 60M people. You have a photo/video of a person committing a crime, a robbery. The false positive rate is 1:100,000. Your search returns 600 people; an address match finds 60 with connections to the locality; 5 of those have records, one for robbery. You'd at least sit a person down for an hour to review the matches, consider the records, and list people for interview.
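Spelled out as a quick sketch (the 10% locality rate is an assumption chosen to reproduce the 600-to-60 step; the 5 and 1 are as given above):

```python
# The hypothetical search funnel above, made explicit.
population = 60_000_000
false_positive_rate = 1 / 100_000

raw_matches = population * false_positive_rate   # 600 people returned by the search
local_matches = raw_matches * 0.10               # 60 with connections to the locality (assumed 10%)
with_records = 5                                 # given: 5 of those have criminal records
prior_robbery = 1                                # given: 1 has a prior robbery record

print(f"{raw_matches:.0f} -> {local_matches:.0f} -> {with_records} -> {prior_robbery}")
# 600 -> 60 -> 5 -> 1
```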
According to UK ONS stats, those adults released from prison, in Jan-Mar 2018, had a reoffending rate of 65%.
It seems just tracking known offenders would find the perpetrator in many cases if visual recognition is possible.
I mean, this is _the_ principal benefit.
So, society has totally failed to rehabilitate people, and when their sentence is served, they have no future and re-offend.
I think that's the problem, not mass surveillance that starts with good intentions and then is turned on your doorstep.
> Most people consider that a huge benefit.
Until it's too late (like HK), then what?
For a NYC analogy, imagine its historical consolidation was more limited, and many of its towns and cities remain independent, never having consolidated into boroughs, and the boroughs into one big city. Flushing, Brooklyn Heights, Kingsbridge, are still independent towns. Here, the city council of a city occupying only lower Manhattan, but confusingly named "City of New York", just voted on face recognition.
With their superhuman search capability one person with a bias can discriminate against everyone in the database instead of just whoever is standing in front of them.
As I mentioned in other similar threads, I also wonder whether police are using the recommended configs and not the default ones, because it doesn't help if somewhere in the docs it says in small letters: "we recommend X for police use, but everyone else can use Y, so we default to Y; if you are the police, we recommend changing Y to X".
No evidence, by itself, should be enough to get a conviction. Not your fingerprints all over the scene of a crime, not a video of someone who looks like you, not even your DNA matching a rape kit.
The fact that someone was arrested because of a match shows a failure in basic criminal investigation more than anything else.
It really doesn't matter what the legislation says - no doubt they have already been doing it and calling it something else. It's going to be one of the worst things to come out of the 21st century.
For better or worse (and better only in the sense that things need to get worse before they get better), there are a great many people in Massachusetts who do believe all-seeing government surveillance (though if you phrased it like that they'd probably take issue) is a net plus to society.
The government already knows that, via Google, Facebook, and the telcos.
Edit: Agreed, no conviction was involved, so their statement looks exaggerated.
Yes, so a wrongful arrest was made, and thus the technology isn't perfect, so we shouldn't use it; is that the thinking?
Well actually no, apparently wrongful arrests aren't sufficient cause to ban a technology, only one believed to racially discriminate, lol.
Or maybe the real reason is because it’s become fashionable of late to fuck the police and the whole because reasons is an afterthought.
In any case, the larger point, the one suggested by the OP and met with foolish acquiescence in response, the one suggesting that our justice system convicts people on the basis of facial recognition technology, is wrong.
Structurally—not necessarily individually, though often that too—malevolent, not incompetent, at least as is most relevant to the issue at hand.
Umm... I'd like to see a technology which passes this test.
Relevant obligatory xkcd: https://xkcd.com/2030/