Anyway, this part sounds outright illegal. It seems like it was just Randstad being greedy, but if anyone at Google knew about it, that's bad. I doubt it, though; surely they could budget enough money to get the scans legally:
> They said Randstad project leaders specifically told the TVCs to (...) conceal the fact that people’s faces were being recorded and even lie to maximize their data collections.
A friend went through the application and they wanted her to digitally sign 46 contracts, one after the other, without a chance to read the next contract before signing the current one, including one with an arbitration clause. She did see that the first contract offered to send the rest of the contracts printed, by mail, but when she talked to the rep, he acted like he didn't have access to the contracts he wanted her to sign (yeah, right; later he'd be like, "well, you signed Y, so you gave up the right to X" — he probably knows them by heart), and said she should simply sign them and then go back and print them.
Presumably they have to offer to send them by mail for a contract based on online signatures to be binding, so it's interesting that the rep refused to do so. It was especially sad that they have a deal with unemployment offices that funnel workers to them using state funds.
The "and/or" in "it sounds like Google and/or its contractor may have been taking some extreme and unsavory shortcuts to cash in." is a clear use of weasel words: the journalist was unable to substantiate the allegation that Google was aware of this unethical practice. Sloppy reporting demonstrating a lack of journalistic integrity.
How do you go from 1 to 2? With the premise "darker-faced people tend to be homeless".
This is not necessarily a false premise -- statistically, it is true, and it is a reflection of systemic injustice -- but the outrage is not whether it's true or false; the outrage is that Randstad exploited this painful fact.
It's so unsupported, of course. Far more likely that homeless people tend to be available and amenable to the project.
I would happily sell anyone a picture or scan of my face for $5. But I would even more happily have that chance go to someone who needs it more than myself.
This article also mentions that the contractor may have lied to or misled the homeless, which is deplorable. But the behavior described by the title itself is nothing objectionable. The fact that many will object is a phenomenon I've seen called "Copenhagen Ethics": https://blog.jaibot.com/the-copenhagen-interpretation-of-eth...
Would you really? My gut feeling tells me that's not the case for most people, for privacy or ethical reasons. Just because those people are poor, do we expect them to have lower privacy or ethical standards?
The link you posted has the following example; I think you're referring to that:
> BBH Labs was an exception – they outfitted 13 homeless volunteers with WiFi hotspots and asked them to offer WiFi to SXSW attendees in exchange for donations. In return, they would be paid $20 a day plus whatever attendees gave in donations.
That's completely different. Offering WiFi has zero long-term effects. It's providing a "business opportunity" to people who wouldn't have access to it otherwise. Giving someone 5 bucks for their face picture (or other biometrics) is totally different and has long-term negative effects.
- Provides a link or method to create the scan that takes just a few minutes (on Ubuntu)
- Sends $5 to email@example.com via PayPal or via BTC to 17h2GtaBzivnNtP24qoGg4a3pjgShkw7MD
I will complete the process and post the result in this thread.
All I (unsuccessfully) tried to point out is that the two scenarios are different.
How many people in need of money do all sorts of desperate things like take a job they hate? How many HNers are working jobs they actually hate? Or they're being taken advantage of because they don't have the backbone to stand up for themselves? etc. etc.
Most live publicly with their faces on display for all to see, with others taking it a step further by participating in Facebook alongside billions of others.
It doesn’t scream facial identity being a major concern.
Most people have them in their homes, breathe them, eat them, and others take it a step further, participating in the creation of them.
It doesn't scream fear of cancer being a major concern.
That people live their lives accepting that their faces are on display is not evidence otherwise, since there is literally no other option.
Participating in Facebook is also not evidence otherwise -- at most, it's evidence that people are willing to trade privacy in some circumstances (and I think even that's a bit of a stretch), but I'll bet that most Facebook users would object to having their privacy invaded without their consent -- which means they care about privacy.
I say this with seriousness. When you consider the alternative, the option of living alone without human interaction, public identity shows its positive attributes.
Which is what was being proposed and subsequently doubted: that people were willing to consensually trade their picture for $5.
Questionnaires where they reveal things, often associated with at least a way to contact them for an interview, but sometimes with name and everything, usually in psychiatric settings (where, I might add, people are often incarcerated without any proof, trial, or any of that). Things like whether they stole from their employer. Whether they ever used violence to obtain sex. They often ask children, the homeless, prisoners, patients... other groups in perceived or real precarious situations. (Things that would never pass an ethical review board for, say, medicine.)
So yes, I would say that a lot of people are willing to give up a LOT more privacy than a face picture for a small reward.
But the fact that that's true doesn't mean that people don't care about privacy.
You could even argue that it's indicative that people do care about privacy, as they attach a material value to it. This isn't an argument that I'm really making, but it isn't an unreasonable one.
Your gut is sadly wrong. The majority of people still do not actually care about privacy when there is more than a few cents of value being offered.
And what are the negative effects of giving away biometrics? Is someone with no assets and no stable residence in danger of harm from someone getting a loan in their name? Of being rounded up by the government for their biometrics, rather than facing the much more immediate threat of being criminalized just for being homeless?
What if they could have bargained for $10 or even more instead? I don’t think either company would even blink at the sum, but many desperate people out there would be a lot better off.
I agree with you that some observers are never going to be satisfied and to them there’s always more an individual or a company can do. There is definitely an observer effect.
Similarly, if we took my line of questioning all the way to an absurd extreme, the best outcome would be if all these people got permanent shelter, jobs, and a stable life. But we can't expect companies with profit targets to do this for them. Nobody would feel bad about that exchange, but it would be pretty unrealistic.
So I guess I need to reframe my original question. Why do certain exchanges feel OK while others leave a sour taste in everybody's mouth?
To me it seems like the answer is because the exchange felt unfair. Both parties stand to benefit but, instead of doing something genuinely beneficial for both, the party in power offered the (almost) bare minimum. That sense of unfairness is multiplied when you contextualize the exchange as Very Large Business vs. Small Homeless Person.
Similarly, the link to the phenomenon discusses our role as observers, but it doesn't discuss the parties' roles in the exchanges. They're not only observers, they're also actors. The people performing the homeless study could, for example, have offered something to the control group at the completion of the experiment.
“Copenhagen Ethics” really just strikes me as a rhetorical tool to defend exploitation. “What, just because I offered this person a job I have to pay them a minimum wage?”
It could help some start-ups that need such a face for demo purposes or other experiments.
The world is not linear, it has feedback effects.
There is always a but
Google, et al, want to use my likeness to facilitate database lookups. They are welcome to a perpetual, exclusive license of that data at a quarter of a trillion USD. They know how to get in touch with me; I'm 100% serious.
1. The contractor targeted homeless people
2. They targeted people with darker skin
3. They may not have been forthright or truthful about what they were doing.
Number 3 is clearly wrong. But so long as the contractors were upfront and truthful about what they were doing, I don't know that 1 or 2 are problematic.
The only argument I can see for why they shouldn't pay homeless people money for an easy job is that the prospect of money might be so enticing that they're willing to give up personal rights or freedoms (the same argument why we don't allow selling of organs). But $5 neither seems high enough, nor the process invasive enough, that this argument would hold water.
As for ensuring that enough of a sample range is in the database as an attempt at avoiding data bias, this should be a no-brainer good thing.
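As an aside, checking whether a dataset's demographic composition is reasonably balanced is mechanically simple. A minimal sketch (the group labels and the 10% threshold are illustrative assumptions, not anything from the article):

```python
from collections import Counter

def underrepresented_groups(labels, min_share=0.1):
    """Return {group: share} for groups whose share of the samples
    falls below min_share.

    labels: iterable of per-sample group labels (e.g. self-reported
    skin-tone category); min_share is an illustrative cutoff.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < min_share}

# Toy dataset heavily skewed toward one group:
sample = ["A"] * 90 + ["B"] * 10 + ["C"] * 2
print(underrepresented_groups(sample))  # flags "B" and "C"
```

A real audit would of course use held-out evaluation metrics per group rather than raw counts, but even a check like this makes skew visible before training.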
If you're asking folks on the street and happen to get a lot of unhoused folks because they're around, that's fine. Writing memos telling people to target vulnerable populations because they're vulnerable is gross and deeply unethical.
Are osteoporosis researchers unethical for “targeting” women?
It's gross and deeply unethical to "target" homeless because they "didn’t know what was going on at all." Giving your subjects or customers incomplete information, such as "characterizing the scan as a 'selfie game'," is also clearly unethical.
Ethics is not exactly concerned with material harms to people. You can unethically help people and ethically harm them. That said, what is considered "ethical" is generally based on ideas of what should promote the beneficence of everyone in the situation. Choosing to engage /specifically/ with a group who you think is unlikely to detect deception or tell others about your interaction is unethical.
If you aren't sure why, here are the problems I immediately think of:
- If you are misleading people, it's likely that you're worried they will perceive a harm (real or imagined) if they were fully informed.
- If you are seeking out people who are socially isolated because they are unlikely to speak to others, it suggests you are concerned about the outcome of them telling others about your actions. You can also see this in abusive personal relationships where the abusive party will socially isolate their victim.
- Both of these conditions (knowledge and social capital imbalances) make understanding the impact of a relationship (positive or negative) difficult to determine. It might be that no harm has been done, but the account in the article suggests that the contractor went out of their way to create conditions where, if the participants were harmed, they would not be aware of it or would not be able to communicate it.
Lest you think this is all bleeding heart hand wringing, you can see these same principles encoded in economics. Contract law has the notion of material misrepresentation and there's lots of economic theory around the harms of information asymmetry (you could also look into companies that are convicted for material misrepresentation in advertising).
All of which I find gross.
Material misrepresentations are a big part of contract law because they almost always, ya know, cause harm. Not too many people out there cooking the books to make their company cheaper to buy, for example.
They're asking for 5 minutes of their time and a scan of their face, in exchange for $5. It's a simple transaction, and unlike what the HN crowd would like to think, the majority of people in the street would quickly make that deal.
The article is putting a lot of their own feelings and opinions on the situation. Just because you have a fear of Google doesn't mean the whole world does, and no one was forced to do anything they didn't want to.
Also, the article mentions homeless people were targeted for not going to the media (avoiding leaks), not for being vulnerable.
I guess... I didn't think calling the homeless a vulnerable population was controversial? The text in the article specifically mentions that the people working with them described them as ignorant and purposely misinformed them about the nature of the trial. Are you saying you're OK with companies misleading people as long as those people are unlikely to realize they are misled?
>Just because you have a fear of Google doesn't mean the whole world does
As I said in my post, I don't think there would be anything wrong with collecting that data on the street and happening to get homeless folks because they're around. It's the misleading and targeting that bothered me. Are you sure I'm the one who's letting my feelings get the best of me?
One part that is a bit confusing to me is, the original source makes no references whatsoever to any consent form. Usually you can't collect this sort of data without signed consent, and previous reports do mention such a form. I know most people don't read the form, but I'm curious how you can get away with telling someone you're just playing a game and lie so much when the form should clearly state what you're collecting.
Still, there should definitely be better vetting of contractors and stories like this definitely look very bad, even if the intentions were actually to help reduce ML bias.
EDIT: The original article does indeed mention and show a picture of the "agreement".
To me, it just sounds like the contractor tried to get done with it asap and just half-assed the work.
Mental illness, addiction, the constant 24/7 stress of being homeless, potentially systemic issues starting from childhood that get in the way of developing the reading skills and background knowledge needed to accurately analyze the concepts being read, etc. are all factors that would make reading and understanding any consent form full of technological-legal terminology a very difficult task.
It's a pretty rotten analogy.
Randstad hired the people, Google contracted Randstad to do a job. Randstad's people did it in a shitty way that was hard to anticipate.
It's a pretty fresh analogy.
No, you are not responsible because the teen was not operating within the terms of the contract. You did not authorize or ask him to get stoned or run someone over.
Since you’re curious: homeless people aren’t important to Google or society in general because they have so little and everyone has all but stopped caring about them. They’re poor, they’re unfortunate, and so they’re exploited. This has been happening since... *leafs through book* ...forever. Serfs used to toil away in fields until they perished, and nobody gave a damn about them either.
They did this because they could get away with it, because they (and probably Google) knew there wouldn’t be any consequences. It’s the same old song and dance: the poor get exploited for the benefit of the rich, and most people don’t seem to care.
If you hire a contractor, you are responsible for what the contractor is doing unless the contractor is operating outside of the terms of the contract.
If the contractor is within the terms of the contract, saying that "Google is doing this" is not deceptively inaccurate.
Who is going to enforce the law against Google in the name of homeless people? It's not like the government in the US has been a champion of the downtrodden in recent years.
This is not a case of that.
Google (or its contractor) could easily have done this in a way that was not objectionable. They simply decided not to.
But since you brought it up, how much money could they have paid to make you not feel like they're exploiting the homeless? You could have everyone read and sign complicated legalese consent forms and really just end up not giving a bunch of homeless people some money.
edit: Presumably this is happening on public property, so they have a right to take people's pictures anyway. I don't know what the laws are regarding rights to use people's "likenesses" the way many entertainment venues tell you they can, but I'd expect using it for model training is going to have a pretty low bar. If they weren't lying they're paying someone to look at a camera for 5 seconds. Hell, I'd agree to that and I'm not even concerned about how I'm going to pay for my dinner tonight.
Just because it's difficult to identify the harms caused by someone stealing your biometric data, that doesn't mean there are no harms. Gaining access to someone's biometric data clearly opens them up to certain types of risks, ranging from identity theft to surveillance. Fraudulently gaining access to someone's biometric data is wrong even if the data is never abused or exploited.
That informed consent should be obtained seems obvious -- perhaps some of those people wouldn't want their faces to be used in that way. Are their desires without meaning? From the report, it also sounds like the images were being obtained in a plainly deceptive manner.
Whether or not there is "harm" is beside the point. The point is whether or not people are being deceived, and whether or not we as a society value meaningful autonomy.
People can take your picture, but there are longstanding legal privacy protections in place for how that picture can be used (for instance, commercial use is restricted). What I'm arguing is that those existing protections are no longer sufficient, and restrictions for use should include requiring having permission to include the pictures in databases as well.
But none of this is terribly relevant to the issue at hand. In this case, an actual transaction and apparent deception is involved.
Right of publicity isn't a privacy right; it's more closely related to copyright or trademark than privacy rights.
Depending on the state, even if the state recognizes the right (it's not a federal right, and not all states, IIRC, have any version of it), it also may not protect you at all, since some states only recognize right of publicity for celebrities. (In effect, your identity needs to be a valuable brand before it's protected in some jurisdictions.)
EDIT: I guess you're reacting to my "it can't be" comment. That was an expression of personal outrage at the idea of being placed in a powerless position, not an assertion of fact. All kinds of really terribly things are possible, obviously.
I mean, there's no need to have Google's name in there, other than to clickbait people into viewing their subpar, ad-laden journalism.
But it's kinda shitty to cheat people no matter what. Obviously you can't say, "Hey, the Pixel 4 is going to have face unlock and I want your face scanned for that," but the contractor should have done a better job.
So, try to fix that and... there's hell to pay?
File under: "No good deed goes unpunished."
What seems to have been bad is the contractor misinforming people about what data would be collected
(and for what use), and it's not clear what Google had in their contract to prevent that kind of unethical behavior. It is also very questionable IMO to target the homeless "because they won't talk to the media", which was allegedly in the instructions the contracting firm Randstad gave to its workers.
Disclaimer: While I work as a low level employee at an unrelated team in Google, my opinions are my own and do not represent those of my employer, and this is the first I am hearing of this.
To me, this just parses as "We have some new Politically Correct excuse to exclude poor people from our dataset."
Being so unimportant that the world wants you to remain invisible isn't generally a good thing.
There is always some excuse. There is no condition under which it is sufficiently respectful, politely handled, blah blah blah to be A Good Idea.
No matter what you make, some minority corner case will break your tech and generate outrage. ("How DARE your speech recognition not work on AAVE!", "How DARE your facial recognition not work on burn center victims!" etc.)
That's bound to introduce other kinds of bias into the data.
And yet imbalanced datasets are used all over the place, e.g. to identify "criminals" in China (https://www.newscientist.com/article/2114900-concerns-as-fac...) and the US (https://www.engadget.com/2019/08/14/aclu-facial-recognition-...)
I'm off the street. I still look and dress the same, in part because I currently do freelance work from home. I don't have to meet a dress code.
While homeless and in downtown San Diego, I fairly often gave away food I had been given but couldn't eat, either because of dietary restrictions or time limits (in that a large amount of stuff that should be refrigerated would spoil before I could eat it). I tried to offer it to other homeless people mostly.
One woman who panhandled regularly was reluctant to accept too much food from me, explaining "I'm not homeless." She panhandled because she was a retiree in high-priced downtown San Diego living on a fixed income. I told her to take it home, stick it in the fridge and eat some tomorrow. I assured her it was fine, I didn't have a fridge.
Another woman got mad at me for offering and told me to feed it to my dog. She was sitting on a curb in a neighborhood near a lot of homeless services where sitting on the curb outside was often a sign of homelessness.
She was also black and I'm white. She likely lived in the apartment building she was in front of and probably thought I was being a racist bitch. She was insulted at my sincere offer of charity and attempt to give away most of the fresh fruit I had been given so it wouldn't go to waste.
There are a lot of stereotypes about what homeless people look like. The reality is that there are a lot of homeless people with jobs and/or attending college and/or living in their car who successfully manage to pass for "normal" much of the time.
I have no idea what criteria Google used to target homeless people, but I'm skeptical that the dataset:
A. Is representative of homeless people generally.
B. Was chosen based on people looking homeless, rather than people behaving homeless.
C. Actually reflects a 100% correlation between people believed to be homeless and people who were actually homeless.
The examples you give are blatant misuses of data sets. How you source the data has little bearing on the dumb ideas people come up with for how to use it.
The problem isn't that they were offering money in exchange for photos of homeless people; it's that they were tricking homeless people into giving up their biometric data by telling them they'd pay them $5 just to play with a phone for a few minutes.
If they were honest about what they were taking and why I wouldn't have a problem with it.
The content of the article is interesting enough, but this line at the end caught my attention.
Is it reasonable to expect someone to "immediately reply" before you publish the article? Because that doesn't sound like ethical journalism to me, unless I'm misunderstanding the meaning of "immediately" in this context.
Randstad are very much the former.
It's either investigative journalism or it's not. How long you wait for comment has nothing to do with that. Do you really think this is equivalent to tabloids posting faked photos of some movie star's belly?
How long did they wait before publishing? We don't know, it doesn't matter. Simply stating that they didn't respond had the desired effect, and - as you rightly pointed out - does nothing to diffuse the story. The implication of the story being that not only are Google potentially taking advantage of vulnerable people to further their unspoken, morally grey agenda, but it may also have a racially questionable angle.
Alternately, they wanted to train their facial recognition dataset with certain characteristics on the cheap.
That in itself is interesting, but probably wouldn't get as many clicks. It's bottom drawer "I leave you, dear reader, to draw your own conclusions" stuff.
Would you care to list some examples? I can't think of any.
Joy at Media Lab has been looking at this issue for a while and advocating for balance. https://www.technologyreview.com/s/612775/algorithms-crimina...
Also, I find it weird that Nvidia was able to simulate realistic-looking people last year, while Google is struggling to find humans; can't they use that as ground truth?
Expanding your database? Great
Forgetting situational ethics? Disgusting
Human eyes also need to be trained on diverse data. It's the cross-race effect: https://en.wikipedia.org/wiki/Cross-race_effect
The main counterargument appears to be that those who sold data "didn't understand what was going on". It's hard to imagine moral convictions in which someone could consistently argue that the homeless don't understand money in exchange for photos, but it's acceptable to leave them to fend for themselves on the street.
Google is, at worst, helping people who need help.
"a contracting agency named Randstad sent teams to Atlanta explicitly to target homeless people and those with dark skin, often without saying they were working for Google, and without letting on that they were actually recording people’s faces"
How can this be the most ethical way to collect data?
The problem isn't in acquiring facial recognition data from homeless people, but in mischaracterising the nature of the experiment when doing so. If the reporting is accurate, they lied to vulnerable people and tricked them into selling their data for cheap.
Companies can't go around hustling people into giving away their private information. It doesn't matter if you think this is "for their own good", a homeless person may want to refuse being catalogued by Google for a variety of reasons.
I don't see how failing to get informed consent counts as "the ethically best possible way".
“They said to target homeless people because they’re the least likely to say anything to the media,” the ex-staffer said. “The homeless people didn’t know what was going on at all.”
Some were told to gather the face data by characterizing the scan as a “selfie game” similar to Snapchat, they said. One said workers were told to say things like, “Just play with the phone for a couple minutes and get a gift card,” and, “We have a new app, try it and get $5.”
Google (or their contractor if you're going to fight about the semantics here) is, at worst, guilty of misleading people about what they were doing, targeting vulnerable people with the expressed idea that they would be less likely to create problems, and not actually improving anyone's conditions in a real way by doing this.
Here's the moral conviction I have: lying to someone about what is happening to them in order to create a functioning business is bad business. It's entirely removed from the fact that small increments of money were given to some homeless people. I don't get to abuse homeless people as long as I give them 5 dollars afterwards. That's not how morality works. These people weren't lifted out of their conditions because of this life-changing sum. They weren't put into treatment centers or given job training. They were purposefully misled and then compensated less than the price of a combo meal at McDonald's.