Facebook wants to redline your friends list (psmag.com)
44 points by anigbrowl on Sept 4, 2015 | 9 comments



Two-sentence summary from the actual patent:

>When an individual applies for a loan, the lender examines the credit ratings of members of the individual's social network who are connected to the individual through authorized nodes. If the average credit rating of these members is at least a minimum credit score, the lender continues to process the loan application. Otherwise, the loan application is rejected.

[http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Sect2=H...]
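The rule those two sentences describe is simple enough to sketch. This is a minimal illustration only; the function name, the cutoff of 650, and the handling of an empty network are my own assumptions, not details from the patent.

```python
# A minimal sketch of the screening rule quoted above. All names and
# thresholds are illustrative, not taken from the patent itself.

def screen_loan_application(network_scores, minimum_credit_score=650):
    """Return True if the application proceeds, False if it is rejected.

    network_scores: credit ratings of the applicant's connected network
    members ("authorized nodes" in the patent's wording).
    """
    if not network_scores:
        return False  # no network data: the quoted text doesn't cover this case
    average = sum(network_scores) / len(network_scores)
    return average >= minimum_credit_score

# One low-scoring friend can drag the average below the cutoff.
print(screen_loan_application([700, 720, 710]))  # proceeds (average 710)
print(screen_loan_application([700, 720, 400]))  # rejected (average ~607)
```

Note that under this rule the applicant's own score never enters the decision at this stage; only the network average does.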

That could really help divide society even more. Thanks Facebook?

On second thought, shouldn't it be easy to invalidate this patent on prior-art grounds? Surely someone somewhere has filtered on a friends list before.


Technology is superb at remembering everything and at helping us sort, cherry-pick, and focus on whatever segment of data we like. An unintended consequence is things like this, where it becomes even easier for us to segregate ourselves, which is a natural human tendency. What did we do as a society when we had only limited data with which to judge a person? We took more chances and went with the flow. That leads to serendipity and to opportunities for potential that would otherwise sit on the sidelines. I'd argue that a little bit of chance and risk is the better methodology for society as a whole: it promotes patience and normalizes mingling with different kinds of people.

The premise of Facebook's patent (that you can predict someone's credit rating from their friends' — are there even studies showing that?) seems flawed to begin with, but the real point is the consequences of a system like this becoming widespread. Most people's Facebook friends are close to them in age and geographic location. This could bias the score by age and socioeconomic status, exaggerating the negative or positive aspects of the group you associate with. Whatever edge you worked to build over your peers with your credit score could be normalized away just because of who your friends are.


Hence quoting McAfee from his AMA "Be aware that everything that is free is not free! There is nothing free in this world. People don't spend $1M on an application to give it to you for free. They have a reason, and it's not good for you, I promise that."


From a link[0] in the article:

> Facebook bought the patent from Friendster in 2010.

[0] https://venturebeat.com/2015/08/04/facebook-patents-technolo...


When I briefly worked in payday lending, I learned there were certain credit approval methods that were categorically illegal. And we had to be auditable, so it wasn't like we could just bury the crime in algorithms.

Address information, e.g. living in a poor neighborhood or near defaulters, could not be used in credit scoring. Race could not be used in credit scoring. I forget what else, but there were regulations in place. Watch for Facebook to lobby to have those eliminated, or to pursue technology that somehow lets it hide or skirt unlawful scoring methods.


Ugh, this is a terribly click-baity article.

Facebook has really, really good people working on machine learning. What Facebook wants to do is use whatever features it can scrape from your profile to accurately price your odds of defaulting on a loan.

It's very unlikely that removing a few 'poor' friends will improve your credit score. There are likely to be many highly correlated features, all of which will enter a statistical model.

Keep in mind: actuaries already use all sorts of information about you to price a loan. This will just make the pricing more accurate, making loans (on average) cheaper.
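The correlated-features point can be made concrete with a toy logistic scorecard (this is not Facebook's model; the weights and features are invented purely to show the effect). When several inputs carry the same signal, hiding one of them barely moves the predicted risk:

```python
# Toy illustration of correlated features in a logistic risk model.
# Weights, bias, and features are invented for this sketch.
import math

def default_probability(features, weights, bias=-3.0):
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))  # logistic link, as in credit scorecards

# Five highly correlated signals, each weighted weakly.
weights = [0.2, 0.2, 0.2, 0.2, 0.2]
full     = default_probability([1, 1, 1, 1, 1], weights)
one_less = default_probability([1, 1, 1, 1, 0], weights)  # hide one signal

print(round(full, 3), round(one_less, 3))  # the two probabilities stay close
```

Because the other four features still carry the shared signal, unfriending one 'poor' friend changes the output only marginally.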

The downside of more accurate loan pricing is that it might lead to de-facto discrimination against groups that are classified as high risk. I believe the government has a role there to provide high risk pools, because the alternative will be to push those people towards black market loans.


>> actuaries already use all sorts of information about you in order to price a loan

That is totally true, and it is not the problem. The problem is that after they have used all the available data to estimate your probability of default to an accuracy of 0.0001%, and refused you a loan on that basis, you, as a taxpayer, still have to save their and their shareholders' asses once a decade because they lost all of their money, and yours, on the stock market...


> Facebook has really, really good people working on machine learning.

Being good at machine learning does not imply they are also good at evaluating how their ideas will impact society, nor does it imply they don't have other motives (intentional or not).

Unfortunately, too many people only consider the intended results when they create something, and rarely consider the larger impact their creation would have on other aspects of society.

> more accurate

It's questionable whether this is "accuracy". The calculations are very precise, but the result still depends on the assumptions baked into the design (e.g. that you can infer anything about default risk from someone's online "friends") and on the quality of the input data.

However, those technical concerns are not the main problem, which is redlining. Laws are slow to respond to change, which allowed a lot of racists to maintain de facto segregation simply by claiming that someone "didn't qualify" for a loan. This was easy, because the complexity of calculating someone's default risk makes it easy to hide a factor that "just happens" to target a certain group of people. It becomes even easier (and provides more cover) when the system starts to feed back on itself: you're a risk because you're from $bad_area, but you have to live in $bad_area because we denied loans there in the past. I highly recommend Ta-Nehisi Coates's essay, which explains this in far more detail[1].
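That feedback loop can be simulated in a few lines. This is a hedged sketch of the dynamic, not a calibrated model; the update rule and every number in it are invented to show how a starting "risk" ratchets upward once denials themselves raise it:

```python
# Sketch of the $bad_area feedback loop: an area's "risk" drives denials,
# and denials in turn raise the area's "risk". All parameters are invented.

def simulate_redlining(initial_risk, rounds=5, sensitivity=0.5):
    """Return the area's risk score after each round of lending decisions."""
    risk = initial_risk
    history = [risk]
    for _ in range(rounds):
        denial_rate = min(1.0, risk)  # riskier area -> more denials
        # Denials depress the area, nudging its measured risk upward.
        risk = min(1.0, risk + sensitivity * denial_rate * (1 - risk))
        history.append(round(risk, 3))
    return history

print(simulate_redlining(0.2))  # even a modest starting "risk" only grows
```

The point of the sketch is that the score never gets a chance to recover: each round's output becomes the next round's input.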

Using a social-media "friends list" to judge loan risk means working from data with a very poor signal-to-noise ratio. That same "friends list" is also data that can easily be mapped to race or class, just as "current address" was for the lenders who redlined segregated areas to limit "mud people"[2] to subprime "ghetto loans"[2]. Which of these sounds like the more likely motivation? Even if the developers at Facebook believe this is about "more accurate loans", someone has other plans.

The complexity of modern technology has made it even easier to hide the real decision-making process, and machine learning techniques are the worst offenders: "I'm not denying you a loan, the Magic Machine is! (even though I designed it and decided which data it gets to see)." These tools can certainly be useful, but it is incredibly important not to build a proxy (both software and legal) that hides the underlying decision making.
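The proxy problem is easy to demonstrate with made-up data. In this sketch (the zip codes, scores, and group labels are all hypothetical), the rule never looks at the protected attribute, yet its outcomes split exactly along it because the feature it does use is correlated with that attribute:

```python
# Illustrative only: a "blind" rule keyed on a correlated proxy (zip code)
# reproduces the discriminatory outcome it nominally avoids. All data invented.
applicants = [
    {"zip": "00001", "group": "A", "score": 700},
    {"zip": "00001", "group": "A", "score": 720},
    {"zip": "00002", "group": "B", "score": 700},
    {"zip": "00002", "group": "B", "score": 720},
]
RISKY_ZIPS = {"00001"}  # chosen "neutrally", but maps onto one group

def approve(applicant):
    # The protected attribute "group" is never consulted here.
    return applicant["zip"] not in RISKY_ZIPS and applicant["score"] >= 650

approvals_by_group = {}
for a in applicants:
    approvals_by_group.setdefault(a["group"], []).append(approve(a))

print(approvals_by_group)  # outcomes split by group despite the "blind" rule
```

Swap "zip code" for "friends list" and the same mechanism applies: the model can honestly claim it never saw the forbidden attribute while still acting on it.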

As you say, the government absolutely has a role here, but governments are slow to change. In the meantime, it is important for the people implementing these systems to remember that code is law[3].

> actuaries already use all sorts of information

Which is why some of them have had to pay discrimination lawsuit settlements in the $100M-300M range in recent years, with more lawsuits pending.

[1] http://www.theatlantic.com/magazine/archive/2014/06/the-case...

[2] These terms are from the affidavit[1] of Wells Fargo loan officers

[3] https://en.wikipedia.org/wiki/Code_and_Other_Laws_of_Cybersp...


I finally have a logical reason to refuse to reactivate my Facebook account. Thanks, Facebook!



