Hacker News
Google contractors reportedly targeted homeless people for facial recognition (theverge.com)
185 points by jamesgagan 10 days ago | 147 comments

I remember the outrage when people discovered that Google's AI wasn't properly trained on black faces. It makes sense that they try hard to avoid that happening again by paying black people to let Google scan their faces. It is not unethical to try to diversify your training data.


Anyway, this part sounds outright illegal. It seems like it was just Randstad being greedy, but if anyone from Google knew about it, then that's bad too. I doubt they couldn't budget enough money to get the scans legally:

> They said Randstad project leaders specifically told the TVCs to (...) conceal the fact that people’s faces were being recorded and even lie to maximize their data collections.


Randstad is dishonest in general.

A friend went through the application and they wanted her to digitally sign 46 contracts, one after the other, without a chance to read the next contract before signing the current one, including one with an arbitration clause. She did see that the first contract offered to send the rest of the contracts printed, by mail, but when she talked to the rep, he acted like he didn't have access to the contracts he wanted her to sign (yeah, right; later he'd probably say "well, you signed Y, so you gave up the right to X" since he likely knows them by heart), and said that she should simply sign them and then go back and print them.

Presumably they have to offer to send them by mail for a contract based on online signatures to be binding, so it's interesting that the rep refused to do so. It was especially sad that they have a deal with unemployment offices that funnel workers to them using state funds.

I feel like they were needlessly dishonest and misleading and that being truthful would have gotten more people on board. Saying something along the lines of "hey, we noticed our facial recognition algorithms don't work so well for African Americans. Could you help us fix that by letting us take a picture of your face? We'll give you a $5 gift card for your troubles."

That seems pretty clear. The Verge's title is kind of clickbait for using `Google Contractors` rather than `Randstad`, though.

The "and/or" in "it sounds like Google and/or its contractor may have been taking some extreme and unsavory shortcuts to cash in." is clearly use of weasel words, the journalist was unable to substantiate the allegation that Google was aware of this unethical practice. Sloppy reporting demonstrating a lack of journalistic integrity.

That's still much better than the original source (the NY Daily News), which literally just says "Google".

Outsourcing work is not a justification for outsourcing responsibility. Google has a choice.

That's why it would have been great if the Verge did some journalism so we actually had some clarity on the situation.

There is a hidden premise in the deduction Randstad made: 1. We need darker faces in our training data. 2. Therefore, gather training data from homeless people.

How do you go from 1 to 2? With the premise "darker-faced people tend to be homeless".

This is not necessarily a false premise -- statistically, it is true, and it is a reflection of systemic injustice -- but the outrage is not about whether it's true or false; the outrage is that Randstad exploited this painful fact.

In the article it said Randstad targeted homeless people because they were less likely to talk to the media.

Also, I'm assuming homeless people would be much happier about a $5 gift card on average.

That's wrong. The assumption would be that homeless people tend to be dark-faced.

It's so unsupported, of course. Far more likely that homeless people tend to be available and amenable to the project.

Kudos to Google's contractor for offering this opportunity to the people who need it most.

I would happily sell anyone a picture or scan of my face for $5. But I would even more happily have that chance go to someone who needs it more than myself.

This article also mentions that the contractor may have lied to or misled the homeless, which is deplorable. But the behavior described by the title itself is nothing objectionable. The fact that many will object is a phenomenon I've seen called "Copenhagen Ethics": https://blog.jaibot.com/the-copenhagen-interpretation-of-eth...

> I would happily sell anyone a picture or scan of my face for $5. But I would even more happily have that chance go to someone who needs it more than myself.

Would you really? My gut feeling tells me that's not the case for most people, for privacy or ethical reasons. Just because those people are poor, we expect them to have lower privacy or ethical standards.

The link you posted has the following example; I think you're referring to that:

> BBH Labs was an exception – they outfitted 13 homeless volunteers with WiFi hotspots and asked them to offer WiFi to SXSW attendees in exchange for donations. In return, they would be paid $20 a day plus whatever attendees gave in donations.

That's completely different. Offering WiFi has zero long-term effects; it's providing a "business opportunity" to people who wouldn't have access to one otherwise. Giving someone 5 bucks for a picture of their face (or other biometrics) is totally different and has long-term negative effects.

If you or anyone:

- Provides a link or method to create the scan that takes just a few minutes (on Ubuntu)

- Sends $5 to kauffj@gmail.com via PayPal or via BTC to 17h2GtaBzivnNtP24qoGg4a3pjgShkw7MD

I will complete the process and post the result in this thread.


Don't wear yourself out moving those goalposts. First you accuse him of lying about what he said; then when he offers proof, suddenly it doesn't matter whether he's willing to do it himself or not.

It wasn't my intention to accuse them of lying. I guess I genuinely don't understand what their point is.

All I (unsuccessfully) tried to point out is that the two scenarios are different.

Is there any right to privacy on a public street? I thought anyone can photograph you in public, you have zero recourse, and most times you won't even get $5.

True, but the $5 is more about paying for data hygiene and organization, because you can't get people to standardize naturally and the fewer confounding factors the better.

I don't know, you can do this to all sorts of scenarios. How many men are vulnerable to attractive women and will do all sorts of ridiculous things out of sheer desperation if a cute woman comes on to them? These men wouldn't do it in other situations like if they were already coming from a place of abundance, or if they had low T.

How many people in need of money do all sorts of desperate things like take a job they hate? How many HNers are working jobs they actually hate? Or they're being taken advantage of because they don't have the backbone to stand up for themselves? etc. etc.

I don’t think most people care about privacy all that much.

Most live publicly with their faces on display for all to see and others taking it a step further, participating in Facebook alongside billions of others.

It doesn’t scream facial identity being a major concern.

I don't think most people care much about carcinogens.

Most people have them in their homes, breathe them, eat them, and others take it a step further, participating in the creation of them.

It doesn't scream fear of cancer being a major concern.

You're applying your fear of carcinogens to others. My parents smoke a pack-a-day; I think that's horrifying. There's looking out for others and then there's overreach. It can be hard to discern the difference sometimes.

No, in fact, that's not what I'm doing here at all. The point I'm making has nothing at all to do with my personal feelings about either cancer or privacy issues.

Your sarcasm fails because it's the truth. Aside from a few direct, intense carcinogens like asbestos, carcinogens are not a big deal.

I think most people actually care quite a lot.

That people live their lives accepting that their faces are on display is not evidence otherwise, since there is literally no other option.

Participating in Facebook is also not evidence otherwise -- at most, it's evidence that people are willing to trade privacy in some circumstances (and I think even that's a bit of a stretch) -- but I'll bet that most Facebook users would object to having their privacy invaded without their consent, which means they care about privacy.

I'd argue that people care much more about consent than they care about privacy. Like, lots of people give away money for free to beggars but they wouldn't be very happy if a beggar robbed them of the same amount of money.

> since there is literally no other option

Living remotely in the mountains/desert/jungle?

I say this with seriousness. When considering this alternative (the option of living alone, without human interaction), public identity shows its positive attributes.

Living in a remote location doesn't take away from the fact that you still must spend at least some time in a public space.

> it's evidence that people are willing to trade privacy in some circumstances

Which is what was being proposed and subsequently doubted: that people were willing to consensually trade their picture for $5.

Then what would you accept as proof/evidence ?

A couple of solid independent studies would go a long way.

Okay ... half of experimental psychology papers are about giving people some small amount of money, or even just some token, or just a chance to tell their story, for filling out a questionnaire. Sometimes extra credit is involved, or incarceration is involved (in psychiatric care), which is where I start having serious ethical problems with it.

Questionnaires where they reveal things, often associated with at least a way to contact them for an interview, but sometimes with name and everything, usually in psychiatric settings (where people are often incarcerated without any proof, trial, or any of that, I might add). Things like whether they stole from their employer. Whether they ever used violence to obtain sex. They often ask children, homeless people, prisoners, patients ... other groups in perceived or real precarious situations. (Things that would never pass an ethical review board in, say, medicine.)

So yes, I would say that a lot of people are willing to give up a LOT more privacy than a face picture for a small reward.

I never claimed people weren't willing to trade privacy for some benefit. I think most people (including myself, if the cost/benefit is favorable to me) are.

But the fact that that's true doesn't mean that people don't care about privacy.

You could even argue that it's indicative that people do care about privacy, as they attach a material value to it. This isn't an argument that I'm really making, but it isn't an unreasonable one.

What is claimed here is not that nobody values privacy. The only claim is that large amounts of people, some from "vulnerable" groups, would trade privacy for a small reward. The article alleges this only happened because they were lied to.

It's not about that. It's about whether we accept corporate arrogance to make decisions on behalf of a vulnerable demographic.

Billions that participate are now considered "vulnerable"?

Sorry, that was referring to the homeless population from the article.

Everyone who has uploaded a picture of themselves to Facebook, Linkedin or similar has already sold their face for free. That group includes almost everyone in the western world. $5 seems like a bargain compared to that.

To be clear: anyone who has uploaded or had their photo uploaded, or potentially even attended any event someone bothered to set up a camera at.

No, people who had their photo uploaded by others didn't "sell" it. Selling implies an act (even if coerced).

>gut feeling tells me that's not the case for most people for privacy or ethical reasons

Your gut is sadly wrong. The majority of people still do not actually care about privacy when there is more than a few cents of value being offered.

They must spend way too much time on HN or reddit, because out there, most people don't care whatsoever about these sort of issues. It's only on these boards that privacy is a hot button issue.

It is constantly empirically shown that most people don't care. Most people have already given their face to Google or Facebook via Photos, either directly or by being friends with Photos users.

And what are the negative effects of giving away biometrics? Is someone with no assets and no stable residence in danger of harm from someone getting a loan in their name? Of being rounded up by the government for their biometrics, rather than for the much more immediate threat of being criminalized just for being homeless?

What if it was one dollar instead of five? It would have saved hundreds of thousands for Randstad. Or if Google is paying for it and this feature motivates 1% more people to buy the phone, they could be making a lot of money. I bet loads of homeless people would be fine with $1 for a snapshot. How does the outcome differ for all these homeless people in this case?

What if they could have bargained for $10 or even more instead? I don’t think either company would even blink at the sum, but many desperate people out there would be a lot better off.

I agree with you that some observers are never going to be satisfied and to them there’s always more an individual or a company can do. There is definitely an observer effect.

Similarly, if we took my line of questioning all the way to an absurd extreme, the best outcome would be if all these people got permanent shelter, jobs, and a stable life. But we can't expect companies with profit targets to do this for them. Nobody would feel bad about that exchange, but it would be pretty unrealistic.

So I guess I need to reframe my original question: why do certain exchanges feel ok while other ones leave a sour taste in everybody's mouth?

To me it seems like the answer is because the exchange felt unfair. Both parties stand to benefit but, instead of doing something genuinely beneficial for both, the party in power offered the (almost) bare minimum. That sense of unfairness is multiplied when you contextualize the exchange as Very Large Business vs. Small Homeless Person.

Similarly, the link to the phenomenon discusses our role as observers, but it doesn't discuss the parties' roles in the exchanges. They're not only observers, they're also actors. The people performing the homeless study could, for example, have offered something to the control group at the completion of the experiment.

The issue is not that the contractor only marginally helped these people. It’s that they exploited a massive power imbalance in order to reap a vastly larger reward than they offered.

“Copenhagen Ethics” really just strikes me as a rhetorical tool to defend exploitation. “What, just because I offered this person a job I have to pay them a minimum wage?”

$5 isn't much. Why not post here a picture and 3d scan of your face as a token of good will?

It could help some start-ups that need such a face for demo purposes or other experiments.

I think most people, including me, trust Google/FB more than random people on HN.

Giving up your HN privacy is different than selling a scan of your face that anyone can already find with a simple Google search on Facebook or Linkedin.

This. If keeping their face and getting $0 had been a superior choice to giving a scan for $5, the homeless would have chosen it, but they didn't.


Please stop posting unsubstantive comments to HN. We've had to warn you about this multiple times already.


Unsubstantive? Most homeless people have mental health issues. You can't think logically or know if anyone is taking advantage of you when you have a mental illness. Google giving these helpless people $5 is NOT helping!

That argument applies equally well to buying kidneys from homeless people for $20. I hope we agree that that's bad.

If you allowed people to sell their kidneys the market price would definitely be a lot higher and the trade in kidneys between homeless people and sick people would leave both groups a lot better off. Today you have one group dying in the street and the other dying waiting for a transplant.

And if a few homeless people die from transplant complications, hey they volunteered! And now we can stop investing in social welfare because a steady supply of homeless bodies improves the health of the middle class!

The world is not linear, it has feedback effects.

Oh come on, surely you must realize that selling limited copyright to a digital likeness of your face is much different than selling a literal body part.

Of course it's different. It's obviously different. That doesn't make "if it was a bad exchange they wouldn't do it" a good argument in support of either of those examples.

> which is deplorable. But

There is always a but

The period is important. Lying to the homeless is deplorable, full stop. But incorporating homeless faces into your training data is not objectionable.

Giving $5 to a homeless person doesn't help them.

You're selling yourself short.

Google, et al, want to use my likeness to facilitate database lookups. They are welcome to a perpetual, exclusive license of that data at a quarter of a trillion USD. They know how to get in touch with me; I'm 100% serious.

It sounds like there were three issues:

1. The contractor targeted homeless people

2. They targeted people with darker skin

3. They may not have been forthright or truthful about what they were doing.

Number 3 is clearly wrong. But so long as the contractors were upfront and truthful about what they were doing, I don't see why 1 or 2 would be problematic.

The only argument I can see for why they shouldn't pay homeless people money for an easy job is that the prospect of money might be so enticing that they're willing to give up personal rights or freedoms (the same argument for why we don't allow the selling of organs). But $5 seems neither high enough, nor the process invasive enough, for this argument to hold water.

As for ensuring that enough of a sample range is in the database as an attempt at avoiding data bias, this should be a no-brainer good thing.

If this report is correct, it seems like they were targeting a vulnerable population based on the logic that their vulnerability made them less likely to insist on being treated better than Google wanted to treat them. That seems really bad to me.

If you're asking folks on the street and happen to get a lot of unhoused folks because they're around, that's fine. Writing memos telling people to target vulnerable populations because they're vulnerable is gross and deeply unethical.

You’re using words like “targeted” and “vulnerable” to imply there’s some kind of harm being done here, which is laughably false. Just because something strikes you as wrong (for whatever ridiculous reason) doesn’t mean it’s actually wrong, let alone “gross and deeply unethical.”

Are osteoporosis researchers unethical for “targeting” women?

The word "targeted" appears in the document quoted in the article. I think describing the homeless as a vulnerable population is a pretty uncontroversial thing to say, but if you'd like I can only use language from the article.

It's gross and deeply unethical to "target" homeless because they "didn’t know what was going on at all." Giving your subjects or customers incomplete information, such as "characterizing the scan as a 'selfie game'," is also clearly unethical.

Feel free to point to literally any harm to these people - there isn’t any. We’re talking about capturing faces in a public place where there’s no expectation of privacy. What does “gross” even mean beyond “it offends my sensibilities?”

Gross doesn't mean anything beyond "it offends my sensibilities." I also don't know if the individuals in this situation feel harmed - nor do you, because that's not how harm works. What I did say was that the actions were unethical because they made discovering and communicating harms less likely.

Ethics[0] is not exactly concerned with material harms to people. You can unethically help people and ethically harm them. That said, what is considered "ethical" is generally based on ideas of what should promote the beneficence of everyone in the situation. Choosing to engage /specifically/ with a group who you think is unlikely to detect deception or tell others about your interaction is unethical.

If you aren't sure why, here are the problems I immediately think of:

- If you are misleading people, it's likely that you're worried they will perceive a harm (real or imagined) if they were fully informed.

- If you are seeking out people who are socially isolated because they are unlikely to speak to others, it suggests you are concerned about the outcome of them telling others about your actions. You can also see this in abusive personal relationships where the abusive party will socially isolate their victim.

- Both of these conditions (knowledge and social capital imbalances) make understanding the impact of a relationship (positive or negative) difficult to determine. It might be that no harm has been done, but the account in the article suggests that the contractor went out of their way to create conditions where, if the participants were harmed, they would not be aware of it or would not be able to communicate it.

Lest you think this is all bleeding heart hand wringing, you can see these same principles encoded in economics. Contract law has the notion of material misrepresentation and there's lots of economic theory around the harms of information asymmetry (you could also look into companies that are convicted for material misrepresentation in advertising).

All of which I find gross.

[0] https://www.merriam-webster.com/dictionary/ethic

Is it really so hard to imagine they lied because people wouldn’t want to participate otherwise, regardless of any actual harm? There are plenty of people out there who think it’s illegal to take their picture or record them in public without their consent. “I didn’t consent to this” isn’t a trump card.

Material misrepresentations are a big part of contract law because they almost always, ya know, cause harm. Not too many people out there cooking the books to make their company cheaper to buy, for example.

> being treated better than google wanted to treat them

They're asking for 5 minutes of their time and a scan of their face, in exchange for $5. It's a simple transaction, and unlike what the HN crowd would like to think, the majority of people on the street would quickly make that deal.

The article is putting a lot of their own feelings and opinions on the situation. Just because you have a fear of Google doesn't mean the whole world does, and no one was forced to do anything they didn't want to.

Also, the article mentions homeless people not going to the media (avoiding leaks), not being vulnerable.

>the article mentions homeless people not going to the media (avoiding leaks), not being vulnerable.

I guess... I didn't think calling the homeless a vulnerable population was controversial? The text in the article specifically mentions that the people working with them described them as ignorant and purposely misinformed them about the nature of the trial. Are you saying you're ok with companies misleading people as long as those people are unlikely to realize they are being misled?

>Just because you have a fear of Google doesn't mean the whole world does

As I said in my post, I don't think there would be anything wrong with collecting that data on the street and happening to get homeless folks because they're around. It's the misleading and targeting that bothered me. Are you sure I'm the one who's letting my feelings get the best of me?

I see, I misunderstood your point. I agree that the misleading, if true, is the most problematic part here. As mentioned elsewhere in the thread, I don't think targeting a certain demographic is necessarily bad, but lying about what the collection is for definitely is.

At least the Verge has the less clickbait headline, mentioning that it was contractors. The original source mentions Google in the headline, but the rest of the article only refers to Randstad.

One part that is a bit confusing to me is that the original source makes no reference whatsoever to any consent form. Usually you can't collect this sort of data without signed consent, and previous reports [0] do mention such a form. I know most people don't read the form, but I'm curious how you can get away with telling someone they're just playing a game and lying so much when the form should clearly state what you're collecting.

Still, there should definitely be better vetting of contractors and stories like this definitely look very bad, even if the intentions were actually to help reduce ML bias.

[0] https://www.engadget.com/2019/07/29/google-paid-for-face-sca...

EDIT: The original article does indeed mention and show a picture of the "agreement".

I'm on the fence regarding whether it's unfair for a news source to attribute this sin directly to Google via its headline. "Actually, it was our contractors who did it" seems like too common and easy an excuse for companies and governments who want to outsource the blame and the fallout for their questionable projects.

I don't really understand what Google's motive would be. They're already paying $5, and I highly doubt they'd have a hard time finding people rushing to give away 5 minutes of their time and their face for $5 (people outside of HN are far less techno-paranoid).

To me, it just sounds like the contractor tried to get it done ASAP and half-assed the work.

I agree. I don't care if a company does something directly or outsources the work to someone else, ultimately the company is still responsible for making sure the work is done correctly and ethically.

"I know most people don't read the form, but I'm curious how you can get away with telling someone you're just playing a game and lie so much when the form should clearly state what you're collecting."

Mental illness, addiction, the constant 24/7 stress of being homeless, potentially systemic issues starting from childhood that get in the way of developing the reading skills and background knowledge needed to accurately analyze the concepts being read, etc. are all factors that would make reading and understanding any consent form with that much technological-legal terminology a very difficult task.

IMHO an entity is ethically responsible for anything its contractor does, unless there is reasonable clarity that the contractor acted in opposition to the client's wishes. Having someone else do your dirty work doesn't make it less dirty.

If I order a pizza and the stoned teenager from the local joint runs over a pedestrian, am I ethically responsible? I didn't tell them to not hit people when getting my meat lovers to me and did say I wanted it ASAP.

The pizza joint hired the stoned teen, he's not contracted by you.

It's a pretty rotten analogy.

The pizza joint hired the teen; I contracted the pizza joint to get me a pizza. The teen did it in a shitty way that was hard to expect.

Randstad hired the people; Google contracted Randstad to do a job. Randstad's people did it in a shitty way that was hard to expect.

It's a pretty fresh analogy.

I think the analogy is a poor one, but let's run with it anyway.

No, you are not responsible because the teen was not operating within the terms of the contract. You did not authorize or ask him to get stoned or run someone over.

Did Google authorize (or even anticipate) that their staff would contract with a company that would hire folks who would scan homeless people's faces in a way that was improper?

I don't know. I do know that in these kinds of situations, the contract typically spells out clearly what, exactly, the contractor is going to do and how, though.

Even though it is usually reviewed, the methods of data gathering are usually not spelled out in the contracts themselves. Instead you have a distribution of liability where the data broker agency assumes the risk of having gathered and/or sold the data incorrectly to the purchaser.

You are not liable. The pizza joint may be, and should either carry insurance or require their contractor to do so.

Yes, if you order a pizza from a local pizza place that you know to go out of their way to hit pedestrians and break laws, you are ethically responsible.

Why would you order pizza from a joint where stoned teenagers make the deliveries in the first place? If you knew that then yes, you're morally responsible.

Because in some places the only pizza employees you can find are stoned. Your local Papa John's or Domino's is probably a great place to find info for a local weed hookup.

That's entirely fair; I'm not absolving Google here. They should vet their contractors and make sure the job is done right. We probably won't know, but I would like to know where these decisions came from, as you mention. I just don't see the incentive for Google to push for such messy work; I highly doubt there's a lack of people who would be happy to give their face data for $5.

Especially when more and more it seems like the main reason companies are using contractors is to launder responsibility for the shady things they do.

> ...I’m curious how you can get away with telling someone you’re just playing a game and lie so much...

Since you’re curious: homeless people aren’t important to Google or society in general because they have so little and everyone has all but stopped caring about them. They’re poor, they’re unfortunate, and so they’re exploited. This has been happening since... leafs through book forever. Serfs used to toil away in fields until they perished, and nobody gave a damn about them either.

They did this because they could get away with it, because they (and probably Google) knew there wouldn't be any consequences. It's the same old song and dance: the poor get exploited for the benefit of the rich, and most people don't seem to care.

I think they may have even been executed under older English common law.

> The original source mentions Google in the headline but the rest of articles only refers to Randstad.

If you hire a contractor, you are responsible for what the contractor is doing unless the contractor is operating outside of the terms of the contract.

If the contractor is within the terms of the contract, saying that "Google is doing this" is not deceptively inaccurate.

Why do you think a form would stop this from happening? It's just words on paper until there is enforcement behind it.

Who is going to enforce the law against Google in the name of homeless people? It's not like the government in the US has been a champion of the downtrodden in recent years.

A while ago Google got bad press because some image-tagging service identified some people as "gorillas", and IIRC it was blamed on not having enough diversity of skin color in the training data. So... it sounds like at least the "instructed them to target people of color" part of this is them trying to correct that. But in isolation that sounds even worse than the first instance. I guess you're damned if you do, damned if you don't.

> I guess you're damned if you do, damned if you don't.

This is not a case of that.

Google (or its contractor) could easily have done this in a way that was not objectionable. They simply decided not to.

What would make it “not objectionable?”

For starters, telling the people what you're actually doing and maybe paying them more than 5 bucks.

Okay, my point was that the article very early calls out "targeting people of color" and only addresses that much later, clearly trying to get attention on that aspect. Telling people what you're doing and how much you pay them is completely unrelated to Google telling them to target people of color, though I agree lying is scummy.

But since you brought it up, how much money could they have paid to make you not feel like they're exploiting the homeless? You could have everyone read and sign complicated legalese consent forms and really just end up not giving a bunch of homeless people some money.

edit: Presumably this is happening on public property, so they have a right to take people's pictures anyway. I don't know what the laws are regarding rights to use people's "likenesses" the way many entertainment venues tell you they can, but I'd expect using it for model training is going to have a pretty low bar. If they weren't lying they're paying someone to look at a camera for 5 seconds. Hell, I'd agree to that and I'm not even concerned about how I'm going to pay for my dinner tonight.

You can argue that lying would be objectionable, but we’re talking about something with essentially no consequences here. As for the fee, how much are their faces worth beyond a fee they were obviously already willing to accept?

> but we’re talking about something with essentially no consequences here.

Just because it's difficult to identify the harms caused by someone stealing your biometric data doesn't mean there are no harms. Gaining access to someone's biometric data clearly opens them up to certain types of risks, ranging from identity theft to surveillance. Fraudulently gaining access to someone's biometric data is wrong even if the data is never abused or exploited.

Obtaining actual informed consent.

There’s not even an expectation of privacy in a public setting, realistically what harm is being prevented by obtaining “informed consent?”

This isn't about being in a public setting, this is about taking photos of people with the intention of including them in a database for later use. It can't be that simply being in public means that you give up all such rights, since it's impossible to avoid being in public.

That informed consent should be obtained seems obvious -- perhaps some of those people wouldn't want their faces to be used in that way. Are their desires without meaning? From the report, it also sounds like the images were being obtained in a plainly deceptive manner.

Whether or not there is "harm" is beside the point. The point is whether or not people are being deceived, and whether or not we as a society value meaningful autonomy.

But it simply can be. There is no reasonable expectation of privacy walking down a public street. I can walk down the street taking high definition photos and video of everyone I encounter and they would have no recourse to stop me, nor should they in a free society.

But, in the US anyway, the notion that you have no privacy at all in public isn't actually true.

People can take your picture, but there are longstanding legal privacy protections in place for how that picture can be used (for instance, commercial use is restricted). What I'm arguing is that those existing protections are no longer sufficient, and restrictions on use should also require permission before the pictures are included in databases.

But none of this is terribly relevant to the issue at hand. In this case, an actual transaction and apparent deception is involved.

> People can take your picture, but there are longstanding legal privacy protections in place for how that picture can be used

Right of publicity isn't a privacy right; it's more closely related to copyright or trademark than privacy rights.

Whatever its legal classification, it's still protective of privacy, and it's still an example of how the idea that you have no legal protection when you're in public isn't accurate.

It's not a matter of classification, but effect: it doesn't stop people from revealing things you don't want revealed, but from trading on your identity as, essentially, a commercial brand.

Depending on the state, even if the state recognizes the right (it's not a federal right, and not all states, IIRC, have any version of it), it also may not protect you at all, since some states only recognize right of publicity for celebrities. (In effect, your identity needs to be a valuable brand before it's protected in some jurisdictions.)

You are aware that just because you think something, doesn't mean it can't be any other way and that your view is obvious to everyone, right?

This comment confuses me. Why would you think I wouldn't be aware of that? If I thought that, I wouldn't have felt the need to express my opinion.

EDIT: I guess you're reacting to my "it can't be" comment. That was an expression of personal outrage at the idea of being placed in a powerless position, not an assertion of fact. All kinds of really terrible things are possible, obviously.

Next news: Google engineer sneezes in subway, Google trying to infect people.

I mean there's no need to have Google's name in there, other than to click-bait-trick people into viewing their subpar journalism with ads.

But it's kinda shitty to cheat people no matter what. Obviously you can't say "hey, the Pixel 4 is gonna have face unlock and I want your face scanned for that," but the contractor should have done a better job.

There's a long, long history of leaving women, people of color, poor people and other groups out of data sets. For example, I've read articles that indicate we can't create good photos of people of color because film standards were normalized to white skin.

So, try to fix that and... there's hell to pay?

File under: "No good deed goes unpunished."

I don't think there is anything wrong with the memo saying "we need more people of color in our data set." I hope everyone agrees with that.

What seems to have been bad is the contractor misinforming people about what data would be collected (and for what use), and it's not clear what Google had in their contract to prevent that kind of unethical behavior. It is also very questionable IMO to target the homeless "because they won't talk to the media," which was allegedly in the instructions the contracting firm Randstad gave to its workers.

Disclaimer: While I work as a low level employee at an unrelated team in Google, my opinions are my own and do not represent those of my employer, and this is the first I am hearing of this.

It also seems bad to target the homeless "because they won't talk to the media" or whatever the quote was.

To me, this just parses as "We have some new Politically Correct excuse to exclude poor people from our dataset."

Being so unimportant that the world wants you to remain invisible isn't generally a good thing.

There is always some excuse. There is no condition under which it is sufficiently respectful, politely handled, blah blah blah to be A Good Idea.

It's not as easy a problem as you imply. It's not so much that you are "leaving out" groups as that, by the definition of the word "minority," you have to go out of your way to include minorities in your data set. This can make your project magnitudes more complex.

No matter what you make, some minority corner case will break your tech and generate outrage. ("How DARE your speech recognition not work on AAVE!", "How DARE your facial recognition not work on burn center victims!" etc.)

But now they have a dataset where a disproportionate share (possibly the vast majority) of the people of color represented were homeless.

That's bound to introduce other kinds of bias into the data.

Such as what? Some bizarre and unfounded idea that poverty and skin color have some correlation?

Such a correlation may or may not exist, but surely it's evident that no conclusion either way could be drawn from a dataset constructed using Google's method.

And yet imbalanced datasets are used all over the place, e.g. to identify "criminals" in China (https://www.newscientist.com/article/2114900-concerns-as-fac...) and the US (https://www.engadget.com/2019/08/14/aclu-facial-recognition-...)

I spent time homeless. I was frequently mistaken for a tourist based on how I looked, because of my casual clothing (usually a t-shirt and sweatpants). People typically figured out I was homeless based on my habits, not my appearance.

I'm off the street. I still look and dress the same, in part because I currently do freelance work from home. I don't have to meet a dress code.

While homeless and in downtown San Diego, I fairly often gave away food I had been given but couldn't eat, either because of dietary restrictions or time limits (in that a large amount of stuff that should be refrigerated would spoil before I could eat it). I tried to offer it to other homeless people mostly.

One woman who panhandled regularly was reluctant to accept too much food from me, explaining "I'm not homeless." She panhandled because she was a retiree in high-priced downtown San Diego living on a fixed income. I told her to take it home, stick it in the fridge and eat some tomorrow. I assured her it was fine, I didn't have a fridge.

Another woman got mad at me for offering and told me to feed it to my dog. She was sitting on a curb in a neighborhood near a lot of homeless services where sitting on the curb outside was often a sign of homelessness.

She was also black and I'm white. She likely lived in the apartment building she was in front of and probably thought I was being a racist bitch. She was insulted at my sincere offer of charity and attempt to give away most of the fresh fruit I had been given so it wouldn't go to waste.

There are a lot of stereotypes about what homeless people look like. The reality is that there are a lot of homeless people with jobs and/or attending college and/or living in their car who successfully manage to pass for "normal" much of the time.

I have no idea what criteria were used by Google to target homeless people, but I'm skeptical that the dataset:

A. Is representative of homeless people generally.

B. Was chosen based on people looking homeless, rather than people behaving homeless.

C. Actually consists entirely of homeless people, rather than people merely believed to be homeless.

The examples you give are blatant misuses of data sets. How you source the data has little bearing on the dumb ideas people come up with for how to use it.

This seems like fake outrage to me. If I were homeless I'd be more than happy for someone to take a photo of me for $5. In fact, I'd find it pretty hypocritical if someone was spending their energy fighting against possible infringement of my rights in such a scenario instead of actually providing me with money or food.

> If I were homeless I'd be more than happy for someone to take a photo of me for $5.

The problem isn't that they were offering money in exchange for photos of homeless people; it's that they were tricking homeless people into giving up their biometric data by telling them they'd pay them $5 just to play with a phone for a few minutes.

If they were honest about what they were taking and why I wouldn't have a problem with it.

> Google and Randstad didn’t immediately reply to requests for comment.

The content of the article is interesting enough, but this line at the end caught my attention.

Is it reasonable to expect someone to "immediately reply" before you publish the article? Because that doesn't sound like ethical journalism to me, unless I'm misunderstanding the meaning of "immediately" in this context.

Doesn't matter, I suspect - It works within the narrative and implies they have something to hide. It's good /tabloid/ journalism, and poor investigative journalism.

Randstad are very much the former.

Historically if you wait multiple days / weeks to give them a chance to reply they just do an end-run and publish puff pieces in major outlets to try and defuse your article. There are multiple cases of this in the past year.

It's either investigative journalism or it's not. How long you wait for comment has nothing to do with that. Do you really think this is equivalent to tabloids posting faked photos of some movie star's belly?

That's a disingenuous comparison.

How long did they wait before publishing? We don't know, and it doesn't matter. Simply stating that they didn't respond had the desired effect, and - as you rightly pointed out - does nothing to defuse the story. The implication of the story being that not only are Google potentially taking advantage of vulnerable people to further their unspoken, morally grey agenda, but it may also have a racially questionable angle.

Alternately, they wanted to train their facial recognition dataset with certain characteristics on the cheap.

That in itself is interesting, but probably wouldn't get as many clicks. It's bottom drawer "I leave you, dear reader, to draw your own conclusions" stuff.

>There are multiple cases of this in the past year.

Would you care to list some examples? I can't think of any.

This is too desperate. It seems AI will help in the criminalizing and biased profiling of people of color; as a person of color, it feels really hard to imagine a future where justice is done with due diligence.

Joy at Media Lab has been looking at this issue for a while and advocating for balance. https://www.technologyreview.com/s/612775/algorithms-crimina...

Also I find it weird that Nvidia was able to simulate realistic-looking people last year while Google is struggling to find humans; can't they use that as ground truth?

The first thing they teach you about research: do not do it on vulnerable populations. It's not like people talking to the public would have mattered; the Pixel 4's features and hardware all leaked well before the announcement.

> “They [Google contractor] said to target homeless people because they’re the least likely to say anything to the media,” the ex-staffer said. “The homeless people didn’t know what was going on at all.”

How dare they give $5 to homeless people instead of college students walking around campus with airpods.

Yeah, usually you have to pay at least minimum wage to exploit college students.

This really counts as “extreme and unsavory” these days?

Same thing Panasonic does in Japan to improve facial recognition - except they just pay foreigners 5000 JPY and tell you what they're doing.

Google seems to be outsourcing their ethics.

Anyone knows why they were giving gift cards instead of cash? I suppose there's a legal reason.

I can think of several reasons that homeless activism groups would suggest as reasons to use gift cards instead of cash. Most of them are pretty evident if you consider what kinds of goods are only available through cash-only purchases.

Selling one's privacy for a $5 Starbucks gift card. They know their gullible audience!

This is blatantly unethical.

Expanding your database? Great

Forgetting situational ethics? Disgusting

Why doesn’t the model trained on one race generalize to different races? That sounds inferior to human vision.

"All X's look the same" where X is any of [Asians, Black people, white people, etc..] is a very common refrain.

Human eyes also need to be trained on diverse data. It's the cross-race effect: https://en.wikipedia.org/wiki/Cross-race_effect

The wiki post mentioned that people are right 45% of the time. How do DL and ML stack up? Also not mentioned: how long does it take for humans to start recognizing new races, vs. ML and DL?

Facial features can vary from ethnicity to ethnicity (see e.g., https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3074358/), and so a machine learning model trained solely on pictures from one ethnic group may not understand how to reliably distinguish different people of another ethnicity.
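As a toy sketch of this effect (purely synthetic 2-D points standing in for faces, with scikit-learn's LogisticRegression standing in for a real face model — not anything Google actually uses), you can see how a classifier trained on samples from only one "population" can do no better than chance on another population whose classes are distinguished by a different feature:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_population(axis, n=500):
    """Synthetic population whose two classes split along one feature axis.

    A stand-in for groups whose distinguishing features differ.
    """
    X = rng.normal(size=(n, 2))
    y = (X[:, axis] > 0).astype(int)
    return X, y

# Population A's classes split on feature 0; population B's on feature 1.
X_a, y_a = make_population(axis=0)
X_b, y_b = make_population(axis=1)

# Train only on population A.
clf = LogisticRegression().fit(X_a, y_a)

# Evaluate on fresh samples from each population.
X_a_test, y_a_test = make_population(axis=0)
acc_a = clf.score(X_a_test, y_a_test)
acc_b = clf.score(X_b, y_b)
print(f"accuracy on population A: {acc_a:.2f}")
print(f"accuracy on population B: {acc_b:.2f}")
```

On this contrived setup, accuracy on population A is near perfect while on population B it hovers around chance, because the model never saw examples of the features that distinguish individuals in B. Real face models fail less starkly, but the mechanism — the decision boundary only reflects variation present in the training data — is the same.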

This strikes me as the ethically best possible way to collect this data. Google is paying people who need the money for something simple and completely harmless.

The main counterargument appears to be that those who sold data "didn't understand what was going on". It's hard to imagine moral convictions in which someone could consistently argue that the homeless don't understand money in exchange for photos, but it's acceptable to leave them to fend for themselves on the street.

Google is, at worst, helping people who need help.

> This strikes me as the ethically best possible way to collect this data

"a contracting agency named Randstad sent teams to Atlanta explicitly to target homeless people and those with dark skin, often without saying they were working for Google, and without letting on that they were actually recording people’s faces"

How can this be the most ethical way to collect data?

The problem isn't in acquiring facial recognition data from homeless people, but in mischaracterising the nature of the experiment when doing so. If the reporting is accurate, they lied to vulnerable people and tricked them into selling their data for cheap.

Companies can't go around hustling people into giving away their private information. It doesn't matter if you think this is "for their own good", a homeless person may want to refuse being catalogued by Google for a variety of reasons.

> This strikes me as the ethically best possible way to collect this data.

I don't see how failing to get informed consent counts as "the ethically best possible way".

OOF. That's not how this works sir. Did you read the article? Some quotes:

“They said to target homeless people because they’re the least likely to say anything to the media,” the ex-staffer said. “The homeless people didn’t know what was going on at all.”

Some were told to gather the face data by characterizing the scan as a “selfie game” similar to Snapchat, they said. One said workers were told to say things like, “Just play with the phone for a couple minutes and get a gift card,” and, “We have a new app, try it and get $5.”

Google (or their contractor if you're going to fight about the semantics here) is, at worst, guilty of misleading people about what they were doing, targeting vulnerable people with the expressed idea that they would be less likely to create problems, and not actually improving anyone's conditions in a real way by doing this.

Here's the moral conviction I have: lying to people about what is happening to them in order to build a functioning business is bad business. It's entirely removed from the fact that small increments of money were given to some homeless people. I don't get to abuse homeless people as long as I give them 5 dollars afterwards. That's not how morality works. These people weren't lifted out of their conditions because of this life-changing sum. They weren't put into treatment centers or given job training. They were purposefully misled and then compensated less than the price of a combo meal at McDonald's.
